Documentation Index

Fetch the complete documentation index at: https://docs.linkup.so/llms.txt

Use this file to discover all available pages before exploring further.

This page covers how to pick a mode and reasoning depth, how to phrase a research prompt, and how to poll and handle failures on the Research endpoint.

Choosing a mode

"Answer" typically iterates to verify the response. The agent reasons against itself, checks alternative response candidates, and cross-references the evidence to produce a definitive answer with a high level of certainty. Use it when verified answers are required, typically for high-stakes workflows where getting the answer right is crucial (finance, legal, research, etc.). "Investigate" is optimized to go deep on a single topic or entity. The agent follows threads uncovered during the search, explores new trails as they are discovered, and verifies claims along the way. Use it to build deep-dive reports on single entities or for complex, multi-hop questions. "Research" is optimized to go wide. The agent searches multiple threads in parallel to produce structured reports that cover one topic broadly, or many topics or entities at once. Use it to build industry reports or lists of entities.

Example prompts

What is the American company that generated the highest revenues in Europe in 2025?
If mode is not provided, the agent classifies the question and selects one of the three modes for the request. Setting mode explicitly is the recommended path because it produces the most predictable latency, cost, and output shape. The canonical mode table lives on the Research overview.

Choosing a reasoning depth

reasoningDepth controls how much effort the agent puts into the research. The search, retrieval, and iteration budget grows with the depth, and the agent is aware of its compute budget: it typically reasons until it is satisfied with the answer, within the limit of that budget. As such, a task run at "XL" does not necessarily produce a much longer answer if a satisfactory response does not require it — but the agent is more demanding and searches more. Adapt the depth to budget, latency requirements, and the complexity of the request. The canonical depth table (latency ranges per "S" / "M" / "L" / "XL") lives on the Research overview.
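The depth parameter can be validated the same way; a sketch that treats the payload shape as an assumption (the four depth values are from the docs, the field name `reasoningDepth` is as written above):

```python
# "S" / "M" / "L" / "XL" come from the docs; the payload shape around the
# "reasoningDepth" field is an illustrative assumption.
VALID_DEPTHS = ("S", "M", "L", "XL")

def with_reasoning_depth(body: dict, depth: str) -> dict:
    """Return a copy of the request body with the compute budget pinned."""
    if depth not in VALID_DEPTHS:
        raise ValueError(f"reasoningDepth must be one of {VALID_DEPTHS}")
    return {**body, "reasoningDepth": depth}
```

Keeping the original body untouched makes it easy to retry the same question at a larger depth if the first run is unsatisfying.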

Question phrasing

Research runs as an agentic loop: the agent interprets the question, plans its retrieval, executes searches in parallel, verifies claims, and synthesizes the result. Both terse and detailed inputs are accepted, and more precise input produces more predictable, more thorough, and more aligned output. Useful dimensions to specify include:
  • the angles to cover,
  • the leads to pursue,
  • the facts to verify,
  • the entities to compare,
  • the constraints any answer must satisfy, and
  • the structure expected from the final response.

Examples

A short prompt; the agent chooses the angles and the level of detail.
What's going on in the European AI inference market?
A medium prompt; angle and deliverable are constrained, individual claims are not.
Survey the European AI inference market in 2026: identify the main
hyperscalers and independent inference providers operating in the region,
their pricing posture, and the regulatory pressure they face under the EU
AI Act.
A fully specified agent brief; entities, comparison axes, and verification expectations are all named.
Produce a competitive landscape of European AI inference providers
operating in 2026.

Scope:
- Cover at minimum: Mistral, Aleph Alpha, Silo AI, OVHcloud, Scaleway,
  and any independent inference provider with publicly disclosed
  funding above €20M.
- Exclude US- and APAC-headquartered hyperscalers unless they operate a
  sovereign EU inference offering.

For each provider, surface:
- Headquarters and primary inference regions.
- Models served and deployment modes (managed API, dedicated, on-prem).
- Disclosed pricing for one mid-sized open-weight model, normalized per
  million input tokens.
- Sovereignty and EU AI Act posture: data residency claims,
  certifications, and any public stance on GPAI obligations.
- Latest disclosed funding round and lead investors.

Verify all numeric claims against primary sources (provider pricing
pages, regulatory filings, or press releases). Flag any figure that
could only be sourced from secondary aggregators.

Polling

Polling loops are a common source of integration errors. Recommended defaults:
  • Initial interval: 2 seconds.
  • Backoff: double the interval up to 10 seconds for longer tasks.
  • Maximum poll rate: 1 request per second. Higher rates trigger rate limits without reducing time-to-completion.
For long-running research, submit via the Tasks endpoint and check periodically rather than maintaining a blocking polling loop.
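The defaults above translate into a simple doubling-backoff loop. A sketch assuming a `get_status` callable that returns the task's state; the callable and the terminal state names are illustrative, not the documented API:

```python
import time

def poll_until_done(get_status, *, initial=2.0, cap=10.0, timeout=600.0,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll with doubling backoff: 2s, 4s, 8s, then capped at 10s."""
    interval = initial
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status in ("completed", "failed"):  # terminal states are assumptions
            return status
        sleep(interval)
        interval = min(interval * 2, cap)  # never exceed the cap
    raise TimeoutError("research task did not finish within the timeout")
```

Injecting `sleep` and `clock` keeps the loop testable; in production the defaults apply, and the interval never exceeds the 1 request/second limit by construction.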

Failure handling

No credit is deducted for failed tasks or for tasks that return no result, and retries are unrestricted. This policy is consistent with the other endpoints. See the Errors page.
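Because failed runs are not billed, wrapping submission in a plain retry is safe. A sketch with an injectable `submit` callable (the callable and the broad `Exception` catch are illustrative; in practice, narrow the catch to your SDK's error type):

```python
def submit_with_retries(submit, max_attempts: int = 3):
    """Retry failed submissions; failed tasks cost no credits per the docs."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return submit()
        except Exception as err:  # narrow to the SDK's error type in practice
            last_error = err
    raise last_error
```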

Resources