This page covers how to pick a mode and a reasoning depth, how to phrase a research prompt, and how to poll for results and handle failures on the Research endpoint.
Documentation Index
Fetch the complete documentation index at: https://docs.linkup.so/llms.txt
Use this file to discover all available pages before exploring further.
Choosing a mode
"Answer" typically iterates to verify the response. The agent reasons against itself, checks alternative response candidates, and cross-references the evidence to produce a definitive answer with a high level of certainty. Use it when verified answers are required, typically for high-stakes workflows where getting the answer right is crucial (finance, legal, research, etc.).
"Investigate" is optimized to go deep on a single topic or entity. The agent follows threads uncovered during the search, explores new trails as they are discovered, and verifies claims along the way. Use it to build deep-dive reports on single entities or for complex, multi-hop questions.
"Research" is optimized to go wide. The agent searches multiple threads in parallel to produce structured reports that cover one topic broadly, or many topics or entities at once. Use it to build industry reports or lists of entities.
When mode is not provided, the agent classifies the question and selects one of the three modes for the request. Setting mode explicitly is the recommended path because it produces the most predictable latency, cost, and output shape. The canonical mode table lives on the Research overview.
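The mode choice can be pinned down client-side before the request is sent. A minimal sketch follows; the payload field name `q` and the helper itself are illustrative assumptions, not the documented API shape — only `mode` and its three values come from this page.

```python
VALID_MODES = {"answer", "investigate", "research"}


def build_research_request(question: str, mode: str = "answer") -> dict:
    """Build a request body with an explicit mode.

    Setting mode explicitly gives predictable latency, cost, and output
    shape; omitting it would let the agent classify the question itself.
    NOTE: the "q" field name is a placeholder, not the official schema.
    """
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    return {"q": question, "mode": mode}
```

Validating the mode locally turns a typo into an immediate error instead of an unexpected agent-classified run.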
Choosing a reasoning depth
reasoningDepth controls how much effort the agent puts into the research. The search, retrieval, and iteration budget grows with the depth, and the agent is aware of its compute budget: it typically reasons until it is satisfied with the answer, within the limit of that budget. As such, a task run at "XL" does not necessarily produce a much longer answer if a satisfactory response does not require it — but the agent is more demanding and searches more.
Adapt the depth to budget, latency requirements, and the complexity of the
request. The canonical depth table (latency ranges per "S" / "M" / "L" /
"XL") lives on the
Research overview.
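One way to adapt depth to a latency budget is to pick the deepest setting whose typical latency still fits. The sketch below assumes a caller-supplied latency table; the numbers used in any call must come from the canonical depth table on the Research overview, as this page does not state them.

```python
DEPTH_ORDER = ["S", "M", "L", "XL"]


def pick_depth(latency_budget_s: float, depth_latency_s: dict) -> str:
    """Pick the deepest reasoningDepth whose typical latency fits the budget.

    depth_latency_s maps each depth ("S"/"M"/"L"/"XL") to its typical
    upper-bound latency in seconds; fill it from the canonical table.
    Falls back to "S" when nothing fits the budget.
    """
    fitting = [d for d, lat in depth_latency_s.items() if lat <= latency_budget_s]
    if not fitting:
        return "S"
    # Respect the canonical ordering S < M < L < XL, not dict order.
    return max(fitting, key=DEPTH_ORDER.index)
```

Because the agent spends its budget only as needed, over-provisioning depth mainly costs latency headroom rather than guaranteeing a longer answer.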
Question phrasing
Research runs as an agentic loop: the agent interprets the question, plans its retrieval, executes searches in parallel, verifies claims, and synthesizes the result. Both terse and detailed inputs are accepted, and more precise input produces more predictable, more thorough, and more aligned output. Useful dimensions to specify include:
- the angles to cover,
- the leads to pursue,
- the facts to verify,
- the entities to compare,
- the constraints any answer must satisfy, and
- the structure expected from the final response.
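The dimensions above can be assembled into one precise prompt. This helper is purely illustrative (the endpoint takes free-form text, so it only concatenates the pieces); every name in it is an assumption, not part of the API.

```python
def build_research_question(
    topic: str,
    angles: tuple = (),
    facts_to_verify: tuple = (),
    constraints: tuple = (),
    structure: str = "",
) -> str:
    """Concatenate the topic and any specified dimensions into one prompt.

    Each dimension is optional; a terse prompt is just the topic, and the
    agent then chooses the angles and level of detail itself.
    """
    parts = [topic]
    if angles:
        parts.append("Angles to cover: " + "; ".join(angles))
    if facts_to_verify:
        parts.append("Facts to verify: " + "; ".join(facts_to_verify))
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if structure:
        parts.append("Expected structure: " + structure)
    return "\n".join(parts)
```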
Examples
A short prompt; the agent chooses the angles and the level of detail.
Polling
Polling loops are a common source of integration errors. Recommended defaults:
- Initial interval: 2 seconds.
- Backoff: double the interval up to 10 seconds for longer tasks.
- Maximum poll rate: 1 request per second. Higher rates trigger rate limits without reducing time-to-completion.
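The defaults above can be sketched as a small backoff loop. This is a minimal sketch, not the official SDK: it assumes the task record is a dict with a `status` field whose terminal values are `"completed"` and `"failed"` (the actual field names may differ), and takes the fetch call as a plain callable.

```python
import time


def poll_until_done(fetch_status, initial_s=2.0, max_s=10.0, sleep=time.sleep):
    """Poll a task with exponential backoff until it reaches a terminal state.

    The interval starts at 2 seconds and doubles up to 10 seconds, and is
    floored at 1 second so the loop never exceeds 1 request per second.
    ASSUMPTION: fetch_status() returns a dict with a "status" field and
    "completed"/"failed" are the terminal values.
    """
    interval = max(1.0, initial_s)
    while True:
        task = fetch_status()
        if task.get("status") in ("completed", "failed"):
            return task
        sleep(interval)
        interval = min(max_s, interval * 2)
```

Injecting `sleep` keeps the loop testable; in production, leave the default so the backoff actually waits.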
Failure handling
No credit is deducted for failed tasks or tasks that return no result. Retries are unrestricted. This policy is consistent with the other endpoints. See errors.
Resources
- Research overview
- Structured output tutorial
- Filtering tutorial
- Tasks best practices (for long-running batches)
- Errors and pricing