Generate Language
ai-research-agent / agents/generate-language
Generate
generateLanguageResponse()
```ts
function generateLanguageResponse(options: object): Promise<{
  content: string;
  error: string;
  extract: any;
}>;
```

Defined in: src/agents/generate-language.js:105
Generate Language Response
Writes a language response that reflects human-like understanding of the question and its context.
- Requires: LLM provider, API Key, agent name, and context input variables for agent.
- Providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare
- Agent Templates: any template from LangHub or custom: summarize-bullets(article), summarize(article), summary-longtext(summaries), suggest-followups(chat_history, article), answer-cite-sources(context, chat_history, query), query-resolution(chat_history, query), knowledge-graph-nodes(query, article)
- How it Works: Language models are trained on vast amounts of text to predict the most likely next word or sequence of words given a prompt. They represent words and their contexts as high-dimensional vectors, allowing them to capture complex relationships and nuances in language. Using neural network architectures such as transformers, these models analyze input text, apply attention mechanisms that score each token against every other token (across multiple attention heads) to capture context, and generate human-like responses based on learned patterns.
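The attention step described above can be sketched numerically. This is a minimal toy illustration of scaled dot-product attention on 2-dimensional vectors, not code from this library:

```javascript
// Minimal scaled dot-product attention sketch (toy example, not library code).
// Each token's query vector is scored against every token's key vector,
// the scores are softmax-normalized, and value vectors are mixed by those weights.

function softmax(xs) {
  const max = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function dot(a, b) {
  return a.reduce((acc, x, i) => acc + x * b[i], 0);
}

// One attention head: queries, keys, values are arrays of vectors (one per token).
function attention(queries, keys, values) {
  const scale = Math.sqrt(keys[0].length);
  return queries.map((q) => {
    const weights = softmax(keys.map((k) => dot(q, k) / scale));
    // Weighted sum of the value vectors: a context-aware mix for this token.
    return values[0].map((_, d) =>
      weights.reduce((acc, w, t) => acc + w * values[t][d], 0)
    );
  });
}

// Three toy tokens with 2-dimensional embeddings (self-attention: Q = K = V).
const vecs = [[1, 0], [0, 1], [1, 1]];
const out = attention(vecs, vecs, vecs);
console.log(out); // each output row blends all token vectors by attention weight
```

A real transformer runs many such heads in parallel on learned projections of the embeddings, then combines their outputs.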
Parameters

| Parameter | Type | Description |
|---|---|---|
| `options` | `object` | Configuration parameters for language model generation |
| `options.agent` | `string` | Name of the agent prompt template to use. Can include custom variables |
| `options.apiKey` | `string` | API key for the specified provider. Not required for ollama. For cloudflare, use the format "key:accountId" |
| | `boolean` | Whether to enforce the model's context length limits |
| `options.article` | `string` | Article text to process (required for some agents) |
| `options.chat_history` | `string` | Previous conversation history (required for some agents) |
| `options.html` | `boolean` | Set to true to return the response in HTML format, false for markdown |
| | `string` | API key for LangChain tracing functionality |
| `options.model` | `string` | Specific model name to use. If not provided, uses the provider's default model |
| `options.provider` | `string` | Language model provider to use. Supported providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare |
| `options.query` | `string` | User's input query text (required for some agents) |
| `options.temperature` | `number` | Controls response randomness |
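As a sketch of how these options combine, here is a hypothetical options object (the values are placeholders, and the agent template is one of those listed above):

```javascript
// Hypothetical options object for generateLanguageResponse (placeholder values).
const options = {
  agent: "suggest-followups",   // template taking chat_history and article
  provider: "groq",
  apiKey: "your-api-key",       // omit for ollama; "key:accountId" for cloudflare
  model: "llama",               // optional: falls back to the provider's default
  temperature: 0.3,             // lower values give more deterministic output
  html: false,                  // return markdown instead of HTML
  chat_history: "User: What is RAG?\nAssistant: ...",
  article: "Retrieval-augmented generation combines search with generation...",
};
```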
Returns
```ts
Promise<{
  content: string;
  error: string;
  extract: any;
}>
```
Response object containing:
- content: Generated response in HTML/markdown format
- extract: JSON object with extracted data (for supported agents)
- error: Error message if generation fails
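The response shape above suggests a simple handling pattern. This sketch assumes an already-resolved response object and is not library code:

```javascript
// Sketch: handling a resolved response object of the documented shape.
function handleResponse(response) {
  if (response.error) {
    // Generation failed: surface the error message to the caller.
    return { ok: false, message: response.error };
  }
  return {
    ok: true,
    text: response.content,          // HTML or markdown, depending on the html option
    data: response.extract ?? null,  // structured output for supported agents
  };
}

const result = handleResponse({ content: "<p>Hi</p>", error: "", extract: { nodes: [] } });
console.log(result.ok); // true
```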
See
- LangChain ReactAgent Tools
- Hugging Face Tutorials
- Transformer Overview
- Building Transformer Guide
- PyTorch Overview
- LLM Training Example
IDs: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare
- XAI (Docs, Keys; $80B valuation, $100M 2024 revenue): Grok, Grok Vision
- Groq (Docs, Keys; $2.8B valuation): Llama, DeepSeek, Gemini, Mistral
- Ollama (Docs; $3.2M revenue): llama, mistral, mixtral, vicuna, gemma, qwen, deepseek, openchat, openhermes, codellama, codegemma, llava, minicpm, wizardcoder, wizardmath, meditron, falcon
- OpenAI (Docs, Keys; $300B valuation, $3.7B revenue): o1, o1-mini, o4, o4-mini, gpt-4, gpt-4-turbo, gpt-4-omni
- Anthropic (Docs, Keys; $61.5B valuation, $1B revenue): Claude Sonnet, Claude Opus, Claude Haiku
- TogetherAI (Docs, Keys; $3.3B valuation, $50M revenue): Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
- Perplexity (Docs, Keys; $18B valuation, $20M revenue): Sonar, Sonar Deep Research
- Cloudflare (Docs, Keys; $62.3B valuation, $1.67B revenue): Llama, Gemma, Mistral, Phi, Qwen, DeepSeek, Hermes, SQL Coder, Code Llama
- Google Vertex (Docs, Keys): Gemini
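The per-provider key rules noted earlier (no key for ollama; "key:accountId" for cloudflare) could be checked with a small helper before calling the function. The helper name here is hypothetical, not part of the library:

```javascript
// Hypothetical helper: validate the apiKey option against per-provider rules.
function checkApiKey(provider, apiKey) {
  if (provider === "ollama") return true; // local models need no key
  if (!apiKey) return false;              // every other provider requires a key
  if (provider === "cloudflare") {
    // Cloudflare expects the combined format "key:accountId".
    const parts = apiKey.split(":");
    return parts.length === 2 && parts.every((p) => p.length > 0);
  }
  return true;
}

console.log(checkApiKey("cloudflare", "abc:123")); // true
console.log(checkApiKey("cloudflare", "abc"));     // false
```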
Author
Example
```js
const response = await generateLanguageResponse({
  query: "Explain neural networks",
  agent: "question",
  provider: "groq",
  apiKey: "your-api-key"
});
```