This page is structured for direct use as integration context for a coding agent, or as a function-calling tool definition. Operational guidance is repeated inline so the page is self-contained.
Documentation Index
Fetch the complete documentation index at: https://docs.linkup.so/llms.txt
Use this file to discover all available pages before exploring further.
Linkup /research integration guide
You are integrating the Linkup /research API: an autonomous agent that
investigates the web to handle questions a single search query cannot
resolve. Use cases include verified answers to precise questions, focused
investigations of a defined subject, and broad multi-angle reports. The agent
gathers evidence from multiple sources in parallel, iterates through
investigation, and returns a sourced response with inline citations.
Three modes ("Answer", "Investigate", "Research"; explicit selection
produces the most predictable behaviour), four reasoning depths ("S", "M",
"L", "XL"), two output types ("sourcedAnswer", "structured"). Latency: 2–20
minutes depending on depth. Async lifecycle.
When to use it
Use /research for multi-source synthesis, comparative analysis, or
audit-trail citations: questions where the value is the agent’s planning and
synthesis across many sources.
Other endpoints in the API:
- Search (/search): synchronous web search, <1 s–~30 s depending on depth. Three modes, three output types.
- Fetch (/fetch): when the URL is already known.
- Tasks (/tasks): asynchronous batch wrapper.
Setup
Example (Python; adapt to the project’s language)
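The original sample is not reproduced here; the sketch below builds a request body in Python. The question parameter name "q", and the exact shape of the body, are assumptions; the camelCase field names mirror the TypeScript notes later on this page. Sending the body requires an Authorization: Bearer header with your Linkup API key.

```python
import json

def build_research_request(question, mode="Answer", reasoning_depth="L",
                           output_type="sourcedAnswer", include_domains=None):
    """Build the JSON body for a /research task.

    Field names mirror the camelCase parameters documented in the
    TypeScript notes; "q" is an assumed name for the question field.
    """
    body = {
        "q": question,                      # assumed parameter name
        "mode": mode,                       # "Answer" | "Investigate" | "Research"
        "reasoningDepth": reasoning_depth,  # "S" | "M" | "L" | "XL"
        "outputType": output_type,          # "sourcedAnswer" | "structured"
    }
    if include_domains:
        body["includeDomains"] = include_domains
    return body

# Example: a focused investigation at routine depth.
payload = build_research_request(
    "Which EU regulations apply to battery recycling?",
    mode="Investigate",
    reasoning_depth="M",
)
print(json.dumps(payload, indent=2))
```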
Tool definition (OpenAI function-calling format)
Remove the "type": "function" envelope and rename parameters to
input_schema for the Anthropic format. Note that this tool is async:
the handler should poll on the model’s behalf and return the completed
result, not the task id.
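A minimal sketch of such a tool definition and the Anthropic conversion described above. The tool name, descriptions, and the "q" parameter name are illustrative assumptions; the enum values come from this page.

```python
# Tool definition in OpenAI function-calling format (illustrative names).
RESEARCH_TOOL_OPENAI = {
    "type": "function",
    "function": {
        "name": "linkup_research",
        "description": (
            "Autonomous web research agent for questions that need "
            "multi-source synthesis. The handler polls until the task "
            "completes and returns the finished result, not a task id."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "q": {"type": "string", "description": "The research question."},
                "mode": {"type": "string",
                         "enum": ["Answer", "Investigate", "Research"]},
                "reasoningDepth": {"type": "string",
                                   "enum": ["S", "M", "L", "XL"]},
                "outputType": {"type": "string",
                               "enum": ["sourcedAnswer", "structured"]},
            },
            "required": ["q"],
        },
    },
}

def to_anthropic_format(openai_tool):
    """Drop the "type": "function" envelope and rename parameters
    to input_schema, as described above."""
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],
    }
```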
Operational guidance (inline)
Mode selection
Setting mode explicitly produces the most predictable latency, cost, and
output shape. Match the value to the workload:
- "Answer" for precise, evidence-backed answers with a definitive solution.
- "Investigate" for a focused report on a single defined subject.
- "Research" for a structured report organized by theme, covering many topics or entities in parallel.
If mode is not provided, the agent classifies the question and selects
one of the three modes for the request; the selection depends on that
classification.
Reasoning-depth selection
Match the depth to budget, latency requirements, and the complexity of the request:
- "S" (2–5 min): short multi-step investigations, latency-sensitive.
- "M" (3–7 min): routine use.
- "L" (5–10 min): high-quality answers under bounded latency (default).
- "XL" (10–20 min): deliverables where completeness takes precedence over latency.
Question phrasing
Research runs as an agentic loop: the agent interprets the question, plans its retrieval, executes searches in parallel, verifies claims, and synthesizes the result. Both terse and detailed inputs are accepted; more precise input produces more predictable, more thorough, and more aligned output. Useful dimensions to specify include the angles to cover, the leads to pursue, the facts to verify, the entities to compare, the constraints any answer must satisfy, and the structure expected from the final response.
Source filtering
A handful of trusted domains in includeDomains improves quality and reduces
latency. This setting is recommended whenever the use case has identifiable
authoritative sources.
"structured" schema design
Flat schemas with primitive fields are fastest and most reliable. Reshape
client-side when downstream code requires nesting.
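A sketch of the pattern, assuming a hypothetical company profile use case: request a flat schema, then rebuild the nesting downstream code expects on the client.

```python
# Flat schema sent as structuredOutputSchema: primitive fields only.
FLAT_SCHEMA = {
    "type": "object",
    "properties": {
        "companyName": {"type": "string"},
        "ceoName": {"type": "string"},
        "hqCity": {"type": "string"},
        "hqCountry": {"type": "string"},
    },
    "required": ["companyName", "ceoName", "hqCity", "hqCountry"],
}

def reshape(flat):
    """Rebuild the nesting downstream code requires, client-side."""
    return {
        "company": {"name": flat["companyName"], "ceo": flat["ceoName"]},
        "hq": {"city": flat["hqCity"], "country": flat["hqCountry"]},
    }
```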
Polling
- Initial interval: 2 seconds.
- Backoff: maximum of 10 seconds.
- Maximum poll rate: 1 request per second.
Constraints
- Date ranges should be set via fromDate/toDate rather than embedded in the question.
- Deeply nested structuredOutputSchema values should be flattened and reshaped client-side.
- Polling without backoff triggers rate limits without reducing time-to-completion.
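To illustrate the first constraint: keep the question free of date phrasing and carry the window in the parameters instead. The ISO 8601 date format and the "q" field name are assumptions.

```python
# Date range via fromDate/toDate, not embedded in the question text.
request = {
    "q": "Summarize regulatory changes affecting EU battery recycling.",
    "mode": "Investigate",
    "fromDate": "2025-01-01",   # ISO 8601 assumed
    "toDate": "2025-03-31",
}
```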
TypeScript notes
- Import: import { LinkupClient } from 'linkup-sdk'.
- Methods: await client.research.create({ ... }), await client.research.get(id).
- Field names are camelCase: outputType, reasoningDepth, includeDomains.