## Generate language model reply using agent prompts
POST /agents

- Requires: LLM provider, API key, agent name, and context variables.
- Agent Templates: summarize-bullets(article), summarize(article), suggest-followups(chat_history, article), answer-cite-sources(context, chat_history, query), query-resolution(chat_history, query), knowledge-graph-nodes(query, article), summary-longtext(summaries). Each template's parameters are the context variables it expects (see the sketch after this list).
- How it Works: Language models are machine learning systems trained on vast amounts of text to predict the most likely next word or sequence of words given a prompt. They represent words and their contexts as high-dimensional vectors, allowing them to capture complex relationships and nuances in language. Using neural network architectures such as transformers, these models analyze input text, apply attention mechanisms to understand context, and generate human-like responses based on learned patterns.
- Providers: groq, togetherai, openai, anthropic, xai, google, perplexity
- OpenAI (Docs, API Keys): GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, GPT-4 Omni, GPT-4 Omni Mini
- Anthropic (Docs, API Keys): Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- TogetherAI (Docs, API Keys): Llama, Mistral, Mixtral, Qwen, Gemma, WizardLM, DBRX, DeepSeek, Hermes, SOLAR, StripedHyena
- Google Vertex (Docs, API Keys): Gemini
- Perplexity (Docs, API Keys): Sonar, Sonar Deep Research
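
The agent templates listed above double as a map from template name to the context variables it expects. Below is a minimal sketch restating that mapping in Python; the `missing_context_variables` helper is a hypothetical convenience for pre-flight validation, not part of the API itself.

```python
# Agent templates and the context variables each one expects, as listed above.
AGENT_TEMPLATES = {
    "summarize-bullets": ["article"],
    "summarize": ["article"],
    "suggest-followups": ["chat_history", "article"],
    "answer-cite-sources": ["context", "chat_history", "query"],
    "query-resolution": ["chat_history", "query"],
    "knowledge-graph-nodes": ["query", "article"],
    "summary-longtext": ["summaries"],
}

def missing_context_variables(agent: str, context: dict) -> list[str]:
    """Return the context variables the chosen agent template still needs."""
    return [name for name in AGENT_TEMPLATES.get(agent, []) if name not in context]
```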
### Request
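
The exact request schema is not reproduced in this section, so the sketch below is illustrative only: the field names (`provider`, `api_key`, `agent`) and the base URL are assumptions, and the context variables shown are those of the `answer-cite-sources` template.

```python
# Illustrative call to POST /agents using Python's requests library.
# Field names (provider, api_key, agent) and the base URL are assumptions;
# consult the request schema for the exact payload shape.
import requests

BASE_URL = "https://api.example.com"  # hypothetical host

payload = {
    "provider": "openai",                # one of the providers listed above
    "api_key": "YOUR_PROVIDER_API_KEY",  # key for that provider
    "agent": "answer-cite-sources",      # agent template name
    # Context variables expected by the answer-cite-sources template:
    "context": "Retrieved passages to ground the answer...",
    "chat_history": ["Hi", "Hello! How can I help?"],
    "query": "What does the article say about transformers?",
}

response = requests.post(f"{BASE_URL}/agents", json=payload)
print(response.status_code)
print(response.text)  # generated reply, returned as HTML or Markdown
```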
### Responses

- 200: Generated language model response (in HTML or Markdown)
- 500: Server error or missing prompt parameter
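
A small, self-contained helper that maps the two documented status codes to behavior. The error body format and base URL are assumptions for illustration.

```python
import requests

def generate_reply(payload: dict, base_url: str = "https://api.example.com") -> str:
    """POST to /agents and return the generated reply (HTML or Markdown)."""
    response = requests.post(f"{base_url}/agents", json=payload)
    if response.status_code == 200:
        return response.text  # generated language model response
    if response.status_code == 500:
        # Server error or missing prompt parameter
        raise RuntimeError(f"Agent request failed: {response.text}")
    response.raise_for_status()  # surface any other unexpected status
    return response.text
```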