# generate-reply-api
## Generate

### CHAT_MODELS
```ts
const CHAT_MODELS: object;
```
Defined in: agents/generate-reply-api.js:160
List of default models for the chat providers, plus the list of models available for Groq.
#### Type declaration

| Name | Type | Defined in |
| --- | --- | --- |
|  |  | agents/generate-reply-api.js:270 |
|  |  | agents/generate-reply-api.js:161 |
|  |  | agents/generate-reply-api.js:180 |
|  |  | agents/generate-reply-api.js:243 |
|  |  | agents/generate-reply-api.js:292 |
|  |  | agents/generate-reply-api.js:168 |
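A quick way to see what `CHAT_MODELS` contains is to enumerate its entries. This is a minimal sketch; it assumes `CHAT_MODELS` is a named export of this module, and the import path is an assumption:

```js
// Assumption: CHAT_MODELS is a named export of this module.
import { CHAT_MODELS } from "./agents/generate-reply-api.js";

// Log each entry without relying on specific property names.
for (const [name, value] of Object.entries(CHAT_MODELS)) {
  console.log(name, value);
}
```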
### generateLanguageModelReply()
```ts
function generateLanguageModelReply(query: string | any[], options: object): Promise<{
  content: string;
  error: string;
}>;
```
Defined in: agents/generate-reply-api.js:57
Generates a reply using the specified AI provider and model:
- Groq (Docs, Keys): Llama 3.2 3B, Llama 3.2 11B Vision, Llama 3.2 90B Vision, Llama 3.1 70B, Llama 3.1 8B, Mixtral 8x7B, Gemma2 9B
- OpenAI (Docs, Keys): GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4 Omni, GPT-4 Omni Mini
- Anthropic (Docs, Keys): Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- TogetherAI (Docs, Keys): Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
- XAI (Docs, Keys): Grok, Grok Vision
- Google Vertex (Docs, Keys): Gemini
This function utilizes transformer-based language models, which process input through the following stages (see the attention sketch after this list):
- Input Embedding: Converts input text into numerical vectors.
- Positional Encoding: Adds position information to maintain word order.
- Multi-Head Attention: Processes relationships between words in parallel.
- Feed-Forward Networks: Further processes the attention output.
- Layer Normalization & Residual Connections: Stabilizes learning and prevents vanishing gradients.
- Output Layer: Generates probabilities for next tokens.
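To illustrate the attention step only (this is not code from this module), here is a minimal single-head scaled dot-product attention in plain JavaScript; `Q`, `K`, and `V` are hypothetical matrices of projected token vectors:

```js
// Illustrative sketch: single-head scaled dot-product attention.
// Q, K, V are arrays of row vectors (token embeddings after projection).
function scaledDotProductAttention(Q, K, V) {
  const dk = K[0].length;
  // Similarity scores: Q[i] · K[j] / sqrt(dk)
  const scores = Q.map((q) =>
    K.map((k) => q.reduce((sum, qi, d) => sum + qi * k[d], 0) / Math.sqrt(dk))
  );
  // Row-wise softmax turns scores into attention weights.
  const weights = scores.map((row) => {
    const max = Math.max(...row);
    const exps = row.map((s) => Math.exp(s - max));
    const total = exps.reduce((a, b) => a + b, 0);
    return exps.map((e) => e / total);
  });
  // Each output row is a weighted mix of the value vectors.
  return weights.map((w) =>
    V[0].map((_, d) => w.reduce((sum, wj, j) => sum + wj * V[j][d], 0))
  );
}
```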
#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| query | `string` \| `any[]` | User's input query text string or LangChain messages array |
| options | `object` | Options |
| options.apiKey | `string` | API key for the specified provider |
|  | `boolean` | If `true`, the reply is formatted as HTML; if `false`, as Markdown |
| options.model | `string` | Optional model name. If not provided, uses the provider's default |
| options.provider | `string` | LLM provider: `groq`, `openai`, `anthropic`, `together`, `xai`, `google` |
| options.temperature | `number` | Controls the overall confidence of the model's scores (the logits): values below 1.0 increase the relative distance between tokens (more deterministic), values above 1.0 decrease it (less deterministic), and 1.0 keeps the original distribution the model was trained to optimize for, since the scores remain unchanged. See the sketch after this table. |
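The effect of `temperature` can be seen by applying it to a softmax over example logits. This is an illustrative calculation, not code from this module:

```js
// Illustrative: temperature divides the logits before softmax.
function softmaxWithTemperature(logits, temperature = 1.0) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
}

const logits = [2, 1, 0];
console.log(softmaxWithTemperature(logits, 0.5)); // sharper: more deterministic
console.log(softmaxWithTemperature(logits, 1.0)); // the trained distribution
console.log(softmaxWithTemperature(logits, 2.0)); // flatter: less deterministic
```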
#### Returns

```ts
Promise<{
  content: string;
  error: string;
}>
```
Generated response
#### Author

#### Example
```js
const response = await generateLanguageModelReply(
  "Explain neural networks", { provider: "groq", apiKey: "your-api-key" });
```
## Other

### ChatModel
Defined in: agents/generate-reply-api.js:150
#### Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
|  |  | The internal ID of the model | agents/generate-reply-api.js:152 |
|  |  | The model name | agents/generate-reply-api.js:153 |
|  |  | The display name of the model | agents/generate-reply-api.js:151 |
### ChatModels

Defined in: agents/generate-reply-api.js:155

#### Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
|  |  | The default models for the chat providers | agents/generate-reply-api.js:156 |
|  |  | List of models available for Groq | agents/generate-reply-api.js:157 |