# generate-language
## Generate

### generateLanguageResponse()
```js
function generateLanguageResponse(options: object): Promise<{
  content: string;
  extract: any;
  error: string;
}>;
```
Defined in: packages/ai-research-agent/src/agents/generate-language.js:90
### Generate Language Response
Writes a language response that shows human-like understanding of the question and context.
- Requires: an LLM provider, an API key, an agent name, and the context input variables for the agent.
- Providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare
- Agent Templates: any template from LangHub or custom: summarize-bullets(article), summarize(article), summary-longtext(summaries), suggest-followups(chat_history, article), answer-cite-sources(context, chat_history, query), query-resolution(chat_history, query), knowledge-graph-nodes(query, article)
- How it Works: Language models are trained on vast amounts of text to predict the most likely next word or sequence of words given a prompt. They represent words and their contexts as high-dimensional vectors, which lets them capture complex relationships and nuances in language. Using neural network architectures such as transformers, these models analyze input text, apply attention mechanisms that score each word against every other word across multiple attention heads to understand context, and generate human-like responses based on learned patterns. See also: How LangChain ReactAgent Tools Works.
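The attention step described above can be sketched in a few lines. This is an illustrative toy, not the library's internals: one attention head scores each query vector against every key vector via dot products, softmax-normalizes the scores, and uses the weights to mix the value vectors.

```javascript
// Numerically stable softmax over an array of scores.
function softmax(xs) {
  const max = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Dot product of two equal-length vectors.
function dot(a, b) {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

// One attention head: Q, K, V are arrays of row vectors.
// Each output row is a weighted sum of the value vectors,
// weighted by how strongly the query matches each key.
function attention(Q, K, V) {
  const scale = Math.sqrt(Q[0].length);
  return Q.map((q) => {
    const weights = softmax(K.map((k) => dot(q, k) / scale));
    return V[0].map((_, j) =>
      weights.reduce((sum, w, i) => sum + w * V[i][j], 0)
    );
  });
}

const Q = [[1, 0], [0, 1]];
const K = [[1, 0], [0, 1]];
const V = [[1, 2], [3, 4]];
console.log(attention(Q, K, V));
// → approximately [[1.66, 2.66], [2.34, 3.34]]
```

A real transformer runs many such heads in parallel over learned projections of the input; this sketch only shows the score-and-mix mechanic.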


## Parameters

Some parameter names did not survive extraction; where the name is confirmed elsewhere in this page it is restored, otherwise the cell is left blank.

| Parameter | Type | Description |
| --- | --- | --- |
| `options` | `object` | Configuration parameters for language model generation |
| `options.provider` | `string` | Language model provider to use. Supported providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare |
| `options.apiKey` | `string` | API key for the specified provider. Not required for ollama. For cloudflare, use the format "key:accountId" |
| `options.agent` | `string` | Name of the agent prompt template to use. Can include custom variables |
|  | `string` | Specific model name to use. If not provided, uses the provider's default model |
|  | `number` | Controls response randomness |
| `options.query` | `string` | User's input query text (required for some agents) |
| `options.article` | `string` | Article text to process (required for some agents) |
| `options.chat_history` | `string` | Previous conversation history (required for some agents) |
|  | `boolean` | Set to true to return the response in HTML format, false for markdown |
|  | `boolean` | Whether to enforce the model's context length limits |
|  | `string` | API key for LangChain tracing functionality |
## Returns

`Promise<{ content: string; extract: any; error: string }>`
Response object containing:
- content: Generated response in HTML/markdown format
- extract: JSON object with extracted data (for supported agents)
- error: Error message if generation fails
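A minimal sketch of consuming this return shape. The handler below is ours, not part of the library: it surfaces `error` as an exception and falls back from `extract` to `content` when no structured data was produced.

```javascript
// Illustrative handler for the resolved { content, extract, error } object.
function handleResponse(response) {
  if (response.error) {
    // The promise resolves even on failure; error carries the message.
    throw new Error(`Generation failed: ${response.error}`);
  }
  // extract is only populated by agents that emit structured JSON
  // (e.g. knowledge-graph-nodes); otherwise use the text content.
  return response.extract ?? response.content;
}
```

Usage: `const result = handleResponse(await generateLanguageResponse(options));`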
## See
- Groq Docs Groq Keys: Llama, Mixtral 8x7B, Gemma2 9B
- OpenAI Docs OpenAI Keys: GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4 Omni, GPT-4 Omni Mini
- Anthropic Docs Anthropic Keys: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- TogetherAI Docs TogetherAI Keys: Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
- XAI Docs XAI Keys: Grok, Grok Vision
- Google Vertex Docs Google Vertex Keys: Gemini
- Perplexity Docs Perplexity Keys: Sonar, Sonar Deep Research
## Author

## Example
```js
const response = await generateLanguageResponse({
  query: "Explain neural networks",
  agent: "question",
  provider: "groq",
  apiKey: "your-api-key",
});
```
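For templates that take several context variables, the variables are passed alongside the standard options. The sketch below assembles input for the answer-cite-sources(context, chat_history, query) template from the list above; the field values are invented for illustration, and the call itself is commented out because it needs a live API key.

```javascript
// Hypothetical input for the answer-cite-sources template; the variable
// names context, chat_history, and query come from the template signature.
const options = {
  provider: "openai",
  apiKey: "your-api-key", // or "key:accountId" for cloudflare
  agent: "answer-cite-sources",
  context: "Transformers use attention to weigh every token against every other.",
  chat_history: "user: What architecture do modern LLMs use?",
  query: "Answer with citations from the provided context.",
};
// const { content, extract, error } = await generateLanguageResponse(options);
```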