
generate-language


Generate

generateLanguageResponse()

function generateLanguageResponse(options: object): Promise<{
  content: string;
  extract: any;
  error: string;
}>;

Defined in: packages/ai-research-agent/src/agents/generate-language.js:90

Generate Language Response

Generates a language response that reflects human-like understanding of the question and context.

  • Requires: LLM provider, API Key, agent name, and context input variables for agent.
  • Providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare
  • Agent Templates: any template from LangHub or custom: summarize-bullets(article), summarize(article), summary-longtext(summaries), suggest-followups(chat_history, article), answer-cite-sources(context, chat_history, query), query-resolution(chat_history, query), knowledge-graph-nodes(query, article)
  • How it Works: Language models are trained on vast amounts of text to predict the most likely next word or sequence of words given a prompt. They represent words and their contexts as high-dimensional vectors, allowing them to capture complex relationships and nuances in language. Using neural network architectures such as transformers, these models apply attention mechanisms that score each word of the input against every other word, across multiple attention heads, to build context-aware representations, and then generate human-like responses based on learned patterns. See also: How LangChain ReactAgent Tools Works
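
The attention step described above can be sketched in a few lines of JavaScript. This is a minimal single-head illustration of scaled dot-product attention, not the internals of any provider's model, and for brevity it uses the same vectors as queries, keys, and values:

```javascript
// Numerically stable softmax over an array of scores.
function softmax(xs) {
  const max = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Dot product of two equal-length vectors.
function dot(a, b) {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// Each row of `vectors` is one word's embedding. For every word, score it
// against all words (scaled by sqrt of the dimension), softmax the scores,
// and return the weighted sum of all rows: a context-aware representation.
function attention(vectors) {
  const d = vectors[0].length;
  return vectors.map(q => {
    const scores = softmax(vectors.map(k => dot(q, k) / Math.sqrt(d)));
    return vectors[0].map((_, j) =>
      scores.reduce((s, w, i) => s + w * vectors[i][j], 0));
  });
}
```

Each output row is a mixture of every input row, weighted by how strongly the word attends to each other word; a transformer runs many such heads in parallel and stacks the layers.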

Parameters

| Parameter | Type | Description |
| ------ | ------ | ------ |
| `options` | `{ provider: string; apiKey?: string; agent?: string; model?: string; temperature?: number; query?: string; article?: string; chat_history?: string; html?: boolean; applyContextLimit?: boolean; LANGCHAIN_API_KEY?: string; }` | Configuration parameters for language model generation |
| `options.provider` | `string` | Language model provider to use. Supported providers: `groq`, `togetherai`, `openai`, `anthropic`, `xai`, `google`, `perplexity`, `ollama`, `cloudflare` |
| `options.apiKey?` | `string` | API key for the specified provider. Not required for `ollama`. For `cloudflare`, use the format `"key:accountId"` |
| `options.agent?` | `string` | Name of the agent prompt template to use. Can include custom variables |
| `options.model?` | `string` | Specific model name to use. If not provided, the provider's default model is used |
| `options.temperature?` | `number` | Controls response randomness: values below 1.0 yield more deterministic, focused responses; values above 1.0 yield more creative, varied responses. Default: 1.0 (balanced) |
| `options.query?` | `string` | User's input query text (required by some agents) |
| `options.article?` | `string` | Article text to process (required by some agents) |
| `options.chat_history?` | `string` | Previous conversation history (required by some agents) |
| `options.html?` | `boolean` | Set to `true` to return the response as HTML, `false` for markdown |
| `options.applyContextLimit?` | `boolean` | Whether to enforce the model's context length limit |
| `options.LANGCHAIN_API_KEY?` | `string` | API key for LangChain tracing functionality |
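
The temperature option can be pictured with the standard technique of temperature-scaled softmax sampling. This is an illustrative sketch of how providers typically apply temperature to next-token logits, not this library's internals:

```javascript
// Temperature divides the raw logits before softmax: low temperature
// sharpens the distribution (more deterministic), high temperature
// flattens it (more varied sampling).
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map(x => x / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

const logits = [2.0, 1.0, 0.1];
const focused = softmaxWithTemperature(logits, 0.5); // top token dominates
const varied = softmaxWithTemperature(logits, 2.0);  // probabilities even out
```

With temperature 0.5 the highest-scoring token takes most of the probability mass; with temperature 2.0 the distribution spreads out, so sampling produces more varied responses.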

Returns

Promise<{ content: string; extract: any; error: string; }>

Response object containing:

  • content: Generated response, as HTML or markdown depending on options.html
  • extract: JSON object with extracted data (for agents that support extraction)
  • error: Error message if generation fails
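
One plausible way to consume this shape is sketched below; `handleResponse` is a hypothetical helper (not part of the library), and which fields are populated depends on the agent used:

```javascript
// Hypothetical consumer of the resolved object: check error first,
// prefer structured extract data when present, else fall back to text.
function handleResponse(response) {
  if (response.error) {
    return { status: "failed", detail: response.error };
  }
  if (response.extract) {
    // Agents such as knowledge-graph-nodes may return structured data here.
    return { status: "ok", data: response.extract };
  }
  return { status: "ok", data: response.content };
}
```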

Author

Language Model Researchers

Example

const response = await generateLanguageResponse({
  query: "Explain neural networks",
  agent: "question",
  provider: "groq",
  apiKey: "your-api-key"
});