generate-reply-api

Generate

CHAT_MODELS

```ts
const CHAT_MODELS: object;
```

Defined in: agents/generate-reply-api.js:160

Default models for each chat provider, plus the list of models available for each provider.

Type declaration

| Name | Type | Defined in |
| --- | --- | --- |
| `anthropic` | `object[]` | agents/generate-reply-api.js:270 |
| `defaults` | `{ anthropic: string; groq: string; openai: string; together: string; xai: string; }` | agents/generate-reply-api.js:161 |
| `groq` | `object[]` | agents/generate-reply-api.js:180 |
| `openai` | `object[]` | agents/generate-reply-api.js:243 |
| `together` | `object[]` | agents/generate-reply-api.js:292 |
| `xai` | `object[]` | agents/generate-reply-api.js:168 |
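A minimal sketch of how code might use an object of this shape to resolve a model name, falling back to the provider's default. The object literal and model names below are placeholders standing in for the exported `CHAT_MODELS`, not values from the source.

```js
// Placeholder with the shape described in the table above
// ({ defaults: {...}, groq: [...], ... }); names are illustrative only.
const CHAT_MODELS = {
  defaults: { groq: "example-groq-model", openai: "example-openai-model" },
  groq: [
    { id: "example-groq-model", model: "example-groq-model", name: "Example Groq Model" },
  ],
};

// resolveModel is a hypothetical helper, not part of the module's API:
// use the caller's model if given, otherwise the provider's default.
function resolveModel(provider, model) {
  return model ?? CHAT_MODELS.defaults[provider];
}
```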


generateLanguageModelReply()

```ts
function generateLanguageModelReply(query: string | any[], options: object): Promise<{
  content: string;
  error: string;
}>;
```

Defined in: agents/generate-reply-api.js:57

Generates a reply using the specified AI provider and model. For the google provider, see the Google Vertex docs and Google Vertex keys pages (Gemini).

This function uses transformer-based language models, which process input through the following stages:

  1. Input Embedding: Converts input text into numerical vectors.
  2. Positional Encoding: Adds position information to maintain word order.
  3. Multi-Head Attention: Processes relationships between words in parallel.
  4. Feed-Forward Networks: Further processes the attention output.
  5. Layer Normalization & Residual Connections: Stabilizes learning and prevents vanishing gradients.
  6. Output Layer: Generates probabilities for next tokens.
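As an illustration of step 3 above (not part of this API), here is single-query scaled dot-product attention in plain JavaScript: scores are scaled by the square root of the dimension, normalized with a softmax, and used to weight the value vectors.

```js
// Numerically stable softmax over an array of scores.
function softmax(xs) {
  const m = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function dot(a, b) {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// Attention for one query vector over arrays of key and value vectors.
function attention(query, keys, values) {
  const d = query.length;
  const scores = keys.map((k) => dot(query, k) / Math.sqrt(d)); // scaled scores
  const weights = softmax(scores);                              // attention weights
  // Weighted sum of the value vectors.
  return values[0].map((_, j) =>
    weights.reduce((s, w, i) => s + w * values[i][j], 0)
  );
}
```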

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `query` | `string` \| `any[]` | User's input query: a text string or a LangChain messages array |
| `options` | `{ apiKey: string; html: boolean; model: string; provider: string; temperature: number; }` | Options |
| `options.apiKey` | `string` | API key for the specified provider |
| `options.html?` | `boolean` | If true, the reply is formatted as HTML; if false, as Markdown |
| `options.model?` | `string` | Optional model name. If not provided, the provider's default is used |
| `options.provider` | `string` | LLM provider: `groq`, `openai`, `anthropic`, `together`, `xai`, or `google` |
| `options.temperature?` | `number` | Sampling temperature. The model's scores (logits) are divided by this value before the softmax: values below 1.0 pull the scores further apart (sharper, more deterministic output), values above 1.0 push them closer together (flatter, less deterministic), and 1.0 keeps the distribution the model was trained to optimize for |
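The effect of temperature can be sketched in a few lines: dividing the logits by the temperature before the softmax sharpens or flattens the resulting probabilities. This is illustrative only; the actual sampling happens inside the provider's model, not in this API, and the logit values are made up.

```js
// Convert logits to probabilities after dividing by the temperature.
function softmaxWithTemperature(logits, temperature = 1.0) {
  const scaled = logits.map((l) => l / temperature);
  const m = Math.max(...scaled);
  const exps = scaled.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5];            // placeholder scores
const cold = softmaxWithTemperature(logits, 0.5); // sharper: top token dominates
const warm = softmaxWithTemperature(logits, 2.0); // flatter: more even spread
```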

Returns

`Promise<{ content: string; error: string; }>`

Generated response

Author

AI Research Contributors

Example

```js
const response = await generateLanguageModelReply(
  "Explain neural networks", { provider: "groq", apiKey: "your-api-key" });
```

Other

ChatModel

Defined in: agents/generate-reply-api.js:150

Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
| `id` | `string` | The internal ID of the model | agents/generate-reply-api.js:152 |
| `model` | `string` | The model name | agents/generate-reply-api.js:153 |
| `name` | `string` | The display name of the model | agents/generate-reply-api.js:151 |


ChatModels

Defined in: agents/generate-reply-api.js:155

Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
| `defaults` | `any` | The default models for the chat providers | agents/generate-reply-api.js:156 |
| `groq` | `ChatModel[]` | List of models available for Groq | agents/generate-reply-api.js:157 |
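A small sketch of values matching the `ChatModel` and `ChatModels` property tables above, with a hypothetical lookup helper. The model names and the `displayName` function are placeholders for illustration, not part of the module.

```js
// Illustrative ChatModel value; field names follow the table above,
// but the string values are made up.
const exampleModel = {
  name: "Example Groq Model", // display name
  id: "example-model-id",     // internal ID
  model: "example-model-id",  // model name
};

// Illustrative ChatModels value.
const exampleModels = {
  defaults: { groq: exampleModel.id },
  groq: [exampleModel],
};

// Hypothetical helper: look up a model's display name by internal ID,
// falling back to the ID itself when the model is unknown.
function displayName(models, id) {
  const match = models.groq.find((m) => m.id === id);
  return match ? match.name : id;
}
```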