
Generate Language

ai-research-agent / agents/generate-language

Generate

generateLanguageResponse()

function generateLanguageResponse(options: object): Promise<{
  content: string;
  error: string;
  extract: any;
}>;

Defined in: src/agents/generate-language.js:105

Generate Language Response

Writes a language response that shows human-like understanding of the question and context.

  • Requires: an LLM provider, API key, agent name, and the agent's context input variables.
  • Providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare
  • Agent Templates: any template from LangHub or custom: summarize-bullets(article), summarize(article), summary-longtext(summaries), suggest-followups(chat_history, article), answer-cite-sources(context, chat_history, query), query-resolution(chat_history, query), knowledge-graph-nodes(query, article)
  • How it Works: Language models are trained on vast amounts of text to predict the most likely next word or sequence of words given a prompt. They represent words and their contexts as high-dimensional vectors, allowing them to capture complex relationships and nuances in language. Using neural network architectures like transformers, these models analyze input text, apply attention mechanisms that score each word against every other word (across multiple attention heads) to understand context, and generate human-like responses based on learned patterns. A toy sketch of the attention step follows this list.
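
To make that concrete, here is a minimal, illustrative sketch of scaled dot-product attention (not part of ai-research-agent; all names and values below are made up for illustration). Each word's vector is scored against every other word, the scores are normalized with softmax, and the value vectors are mixed by those weights into a context-aware representation.

// Illustrative only: toy scaled dot-product attention, not library code.
const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);

const softmax = (xs) => {
  const max = Math.max(...xs);                      // subtract max for numeric stability
  const exps = xs.map((x) => Math.exp(x - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
};

function attention(Q, K, V) {
  const scale = Math.sqrt(K[0].length);             // scale by sqrt of vector dimension
  return Q.map((q) => {
    // Score this word against every other word, then normalize to weights.
    const weights = softmax(K.map((k) => dot(q, k) / scale));
    // Weighted sum of value vectors = context-aware representation of the word.
    return V[0].map((_, d) =>
      weights.reduce((sum, w, i) => sum + w * V[i][d], 0)
    );
  });
}

// Tiny self-attention example: three "words" as 2-dimensional vectors.
const vecs = [[1, 0], [0, 1], [1, 1]];
console.log(attention(vecs, vecs, vecs));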

Parameters

options: { agent?: string; apiKey?: string; applyContextLimit?: boolean; article?: string; chat_history?: string; html?: boolean; LANGCHAIN_API_KEY?: string; model?: string; provider: string; query?: string; temperature?: number; }
  Configuration parameters for language model generation

options.agent?: string
  Name of the agent prompt template to use. Can include custom variables

options.apiKey?: string
  API key for the specified provider. Not required for ollama. For cloudflare, use format: "key:accountId"

options.applyContextLimit?: boolean
  Whether to enforce the model's context length limits

options.article?: string
  Article text to process (required for some agents)

options.chat_history?: string
  Previous conversation history (required for some agents)

options.html?: boolean
  Set to true to return the response in HTML format, false for markdown

options.LANGCHAIN_API_KEY?: string
  API key for LangChain tracing functionality

options.model?: string
  Specific model name to use. If not provided, uses the provider's default model

options.provider: string
  Language model provider to use. Supported providers: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare

options.query?: string
  User's input query text (required for some agents)

options.temperature?: number
  Controls response randomness:
  • Values < 1.0: More deterministic, focused responses
  • Values > 1.0: More creative, varied responses
  • Default: 1.0 (balanced)
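
As a rough sketch of how these options combine (the article text, model choice, and key below are placeholders; summarize-bullets is one of the templates listed above):

const summary = await generateLanguageResponse({
  article: "Full article text to summarize...",  // placeholder input
  agent: "summarize-bullets",                    // template that takes an article
  provider: "openai",
  model: "gpt-4-turbo",                // optional; omit to use the provider's default
  apiKey: process.env.OPENAI_API_KEY,  // placeholder key source
  temperature: 0.3,                    // < 1.0 for a more focused, deterministic summary
  html: false,                         // return markdown rather than HTML
  applyContextLimit: true              // enforce the model's context length limit
});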

Returns

Promise<{ content: string; error: string; extract: any; }>

Response object containing:

  • content: Generated response in HTML/markdown format
  • extract: JSON object with extracted data (for supported agents)
  • error: Error message if generation fails
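
A hedged sketch of consuming this return shape (the control flow reflects assumed typical usage; query-resolution is a template listed above that takes chat_history and query):

const { content, extract, error } = await generateLanguageResponse({
  agent: "query-resolution",
  query: "What model did we settle on earlier?",
  chat_history: previousTurns,   // placeholder string holding the earlier conversation
  provider: "groq",
  apiKey: "your-api-key"
});
if (error) {
  console.error("Generation failed:", error);
} else {
  console.log(content);                  // HTML or markdown, depending on the html option
  if (extract) console.log(extract);     // structured JSON, only for supported agents
}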

See

IDs: groq, togetherai, openai, anthropic, xai, google, perplexity, ollama, cloudflare

  • XAI šŸ“š Docs šŸ”‘ Keys šŸ’° 80B ($ valuation) šŸ’ø 100M ($ 2024 revenue): Grok, Grok Vision
  • Groq šŸ“š Docs šŸ”‘ Keys šŸ’° 2.8B: Llama, DeepSeek, Gemini, Mistral
  • Ollama šŸ“š Docs šŸ’ø 3.2M: llama, mistral, mixtral, vicuna, gemma, qwen, deepseek, openchat, openhermes, codellama, codegemma, llava, minicpm, wizardcoder, wizardmath, meditron, falcon
  • OpenAI šŸ“š Docs šŸ”‘ Keys šŸ’° 300B šŸ’ø 3.7B: o1, o1-mini, o4, o4-mini, gpt-4, gpt-4-turbo, gpt-4-omni
  • Anthropic šŸ“š Docs šŸ”‘ Keys šŸ’° 61.5B šŸ’ø 1B: Claude Sonnet, Claude Opus, Claude Haiku
  • TogetherAI šŸ“š Docs šŸ”‘ Keys šŸ’° 3.3B šŸ’ø 50M: Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
  • Perplexity šŸ“š Docs šŸ”‘ Keys šŸ’° 18B šŸ’ø 20M: Sonar, Sonar Deep Research
  • Cloudflare šŸ“š Docs šŸ”‘ Keys šŸ’° 62.3B šŸ’ø 1.67B: Llama, Gemma, Mistral, Phi, Qwen, DeepSeek, Hermes, SQL Coder, Code Llama
  • Google Vertex šŸ“š Docs šŸ”‘ Keys: Gemini

Author

Language Model Researchers

Example

const response = await generateLanguageResponse({
  query: "Explain neural networks",
  agent: "question",
  provider: "groq",
  apiKey: "your-api-key"
});
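
A second, hedged sketch using an extraction-style template from the list above (knowledge-graph-nodes takes query and article; the exact shape of extract depends on the agent, and articleText is a placeholder variable):

const graph = await generateLanguageResponse({
  agent: "knowledge-graph-nodes",
  query: "transformer attention",
  article: articleText,          // placeholder variable containing the source article
  provider: "anthropic",
  apiKey: "your-api-key"
});
console.log(graph.content);      // prose response
console.log(graph.extract);      // extracted nodes as JSON (agent-dependent shape)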