# generate-reply-api
## Generate
### CHAT_MODELS

```ts
const CHAT_MODELS: object;
```

The default models for the chat providers, plus the list of models available for Groq.
#### Type declaration

Name | Type | Description |
---|---|---|
`defaults` | `object` | The default model for each chat provider |
`groq` | `ChatModel[]` | List of models available for Groq |
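As a rough sketch of how this constant might be read (assuming the package root exports it and that `defaults` maps each provider name to its default model string, which the table above does not fully spell out):

```ts
import { CHAT_MODELS } from "ai-research-agent"; // import path is an assumption

// Assumed shape: defaults maps provider name -> default model name.
const defaultGroqModel = CHAT_MODELS.defaults["groq"];

// groq is a ChatModel[]; each entry has { id, model, name }.
const groqDisplayNames = CHAT_MODELS.groq.map((m) => m.name);
```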
### generateLanguageModelReply()

```ts
function generateLanguageModelReply(query, options): Promise<{
  content: string;
  error: string;
}>
```
Generates a reply using the specified AI provider and model:
- Groq: Llama 3.2 3B, Llama 3.2 11B Vision, Llama 3.2 90B Vision, Llama 3.1 70B, Llama 3.1 8B, Mixtral 8x7B, Gemma2 9B
- OpenAI: GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4 Omni, GPT-4 Omni Mini
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- TogetherAI: Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
- XAI: Grok, Grok Vision
- Google Vertex: Gemini
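Both the provider and a specific model can be chosen through `options`; for example, targeting one of the Anthropic models above might look like this (the exact model identifier string is an assumption, since this page only lists display names):

```ts
const response = await generateLanguageModelReply("Summarize this paper", {
  provider: "anthropic",
  apiKey: "your-api-key",
  model: "claude-3-5-sonnet-latest", // assumed identifier for Claude 3.5 Sonnet
});
```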
This function relies on transformer-based language models, which process text through the following stages (a code sketch of the attention step follows this list):
- Input Embedding: Converts input text into numerical vectors.
- Positional Encoding: Adds position information to maintain word order.
- Multi-Head Attention: Processes relationships between words in parallel.
- Feed-Forward Networks: Further process the attention output.
- Layer Normalization & Residual Connections: Stabilize learning and prevent vanishing gradients.
- Output Layer: Generates probabilities for the next tokens.
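As a rough illustration of the attention stage, here is a single attention head computed over plain number arrays (a simplification for exposition, not how any of the providers above implement it):

```ts
type Matrix = number[][];

function softmax(row: number[]): number[] {
  const max = Math.max(...row);
  const exps = row.map((x) => Math.exp(x - max)); // subtract max for numerical stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Scaled dot-product attention: softmax(Q·Kᵀ / sqrt(d)) · V
function attention(Q: Matrix, K: Matrix, V: Matrix): Matrix {
  const d = Q[0].length; // query/key dimension
  // scores[i][j]: scaled dot product of query i with key j
  const scores = Q.map((q) =>
    K.map((k) => q.reduce((s, qi, idx) => s + qi * k[idx], 0) / Math.sqrt(d))
  );
  const weights = scores.map(softmax); // each row sums to 1
  // output row i is the attention-weighted sum of the value vectors
  return weights.map((w) =>
    V[0].map((_, col) => w.reduce((s, wj, j) => s + wj * V[j][col], 0))
  );
}

// One query attending over two key/value pairs:
console.log(attention([[1, 0]], [[1, 0], [0, 1]], [[1, 2], [3, 4]])); // ≈ [[1.66, 2.66]]
```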
#### Parameters

Parameter | Type | Description |
---|---|---|
`query` | `string` \| `object[]` | User's input query text string or LangChain messages array |
`options` | `object` | Options |
`options.apiKey` | `string` | API key for the specified provider |
`options.model` | `string` | Optional model name. If not provided, uses the provider's default |
`options.provider` | `string` | LLM provider: `groq`, `openai`, `anthropic`, `together`, `xai`, `google` |
`options.temperature` | `number` | Scales the model's confidence in its scores (the logits). Below 1.0, the relative distance between token scores grows, making output more deterministic; above 1.0, it shrinks, making output less deterministic. At 1.0 the scores are unchanged, which is the distribution the model was trained to optimize for. See the worked example after this table. |
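To make the temperature behavior concrete, here is a small self-contained demo with made-up logit values (not from any real model):

```ts
// Convert logits to probabilities at a given temperature.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((x) => Math.exp(x - max)); // subtract max for stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5]; // hypothetical next-token scores

console.log(softmaxWithTemperature(logits, 1.0)); // ≈ [0.63, 0.23, 0.14] (the trained distribution)
console.log(softmaxWithTemperature(logits, 0.5)); // ≈ [0.84, 0.11, 0.04] (sharper, more deterministic)
console.log(softmaxWithTemperature(logits, 2.0)); // ≈ [0.48, 0.29, 0.23] (flatter, less deterministic)
```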
#### Returns

```ts
Promise<{
  content: string;
  error: string;
}>
```
Generated response
#### Example

```ts
const response = await generateLanguageModelReply(
  "Explain neural networks",
  { provider: "groq", apiKey: "your-api-key" }
);
```
## Other
### ChatModel

#### Properties

Name | Type | Description |
---|---|---|
`id` | `string` | The internal ID of the model |
`model` | `string` | The model name |
`name` | `string` | The display name of the model |
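For illustration only, a `ChatModel` entry would look something like this (these values are invented for the example, not taken from the library's actual model list):

```ts
const exampleModel: ChatModel = {
  id: "llama-3-1-8b", // internal ID of the model (hypothetical)
  model: "llama-3.1-8b-instant", // model name sent to the provider (hypothetical)
  name: "Llama 3.1 8B", // display name (hypothetical)
};
```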
### ChatModels

#### Properties

Name | Type | Description |
---|---|---|
`defaults` | `Object` | The default models for the chat providers |
`groq` | `ChatModel[]` | List of models available for Groq |