Research

Research is Linkup’s autonomous research agent that investigates the web to handle questions a single search query cannot resolve. Use cases include:
  • verified answers to precise questions,
  • focused investigations of a defined subject, and
  • broad multi-angle reports.
The agent gathers evidence from multiple sources in parallel, iterates on its investigation, and returns a sourced response with inline citations. Research is built around three modes, four reasoning depths, and two output types.
The Research endpoint is currently in beta. Behavior and parameters may change.

Modes

The mode parameter pins the type of investigation performed.
| Mode | Description | Typical use |
| --- | --- | --- |
| "Answer" | Returns a precise, evidence-backed answer to a question with a definitive solution. | Hard questions with a single correct answer that require verification across multiple sources. Example: “Which 12 S&P 500 companies gained more than 50% with market capitalization above $5B in Q3 2025?” |
| "Investigate" | Returns a focused report on a single defined subject, examining each angle and verifying claims. | In-depth reads on a defined entity. Example: “Risk profile and regulatory history of company X.” |
| "Research" | Returns a structured report organized by theme, covering many topics or entities in parallel. | Open-ended questions requiring breadth across multiple subjects. Example: “State of the European generative AI market in 2026.” |
Set mode explicitly to pin latency, cost, and output shape. If the parameter is omitted, the agent classifies the question automatically and selects one of the three modes, which is convenient but less predictable.
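As a minimal sketch, reusing the Python SDK call shown in the Example section below, pinning a mode looks like this:

from linkup import LinkupClient

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# Pinning the mode keeps latency, cost, and output shape predictable;
# omit the mode argument to let the agent classify the question itself.
task = client.research.create(
    q="Risk profile and regulatory history of company X.",
    output_type="sourcedAnswer",
    mode="Investigate",
)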

Reasoning depth

The reasoningDepth parameter controls thoroughness. Higher depths have more compute budget: they consult more sources, perform more iterations and cross-checking, produce longer outputs, and take longer to run.
| Depth | Description | Order-of-magnitude latency | Cost |
| --- | --- | --- | --- |
| "S" | Light coverage. Suitable for short multi-step investigations. | 2–5 minutes | $0.25 per call |
| "M" | Balanced cost-to-quality ratio. Suitable for routine use. | 3–7 minutes | $0.50 per call |
| "L" | Thorough investigation. Suitable for high-quality answers under bounded latency. | 5–10 minutes | $1.50 per call |
| "XL" | Exhaustive coverage. Suitable for deliverables where completeness takes precedence over latency. | 10–20 minutes | $2.50 per call |
If the parameter is omitted, the depth defaults to "L".
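As a sketch, again reusing the SDK call from the Example section, a quick low-cost pass pins "S" (roughly 2–5 minutes at $0.25 per call):

from linkup import LinkupClient

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# "S" trades thoroughness for speed and cost; omitting reasoning_depth
# falls back to the "L" default.
task = client.research.create(
    q="Which 12 S&P 500 companies gained more than 50% with market capitalization above $5B in Q3 2025?",
    output_type="sourcedAnswer",
    mode="Answer",
    reasoning_depth="S",
)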

Output types

| Value | Description |
| --- | --- |
| "sourcedAnswer" | Natural-language answer with inline citations. |
| "structured" | JSON object conforming to the schema provided in structuredOutputSchema. |
For "structured", see the structured output tutorial.

Async lifecycle

POST /v1/research returns immediately with a job identifier and status set to "pending". Subsequent calls to GET /v1/research/:id return the current state until status is "completed" or "failed". Typical completion times range from a couple of minutes for shallow configurations to twenty minutes for exhaustive ones. GET /v1/research is also available to list all research tasks for the account.
POST /research              GET /research/:id             GET /research/:id
        │                            │                            │
        ▼                            ▼                            ▼
   { id, status:           { id, status: "processing" }   { id, status: "completed",
     "pending" }                    (poll)                     output: { ... } }
Poll every 5–10 seconds for long-running tasks. Polling above 1 request per second will be rate-limited.
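A minimal sketch of the raw lifecycle with the requests library, assuming the api.linkup.so/v1 base URL and Bearer authorization (see the API reference for the authoritative details):

import time
import requests

BASE_URL = "https://api.linkup.so/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer <YOUR_LINKUP_API_KEY>"}  # assumed auth scheme

# Create the job; the envelope returns immediately with status "pending".
job = requests.post(
    f"{BASE_URL}/research",
    headers=HEADERS,
    json={
        "q": "State of the European generative AI market in 2026.",
        "outputType": "sourcedAnswer",
    },
).json()

# Poll every 5-10 seconds until the job reaches a terminal status.
while job["status"] not in ("completed", "failed"):
    time.sleep(5)
    job = requests.get(f"{BASE_URL}/research/{job['id']}", headers=HEADERS).json()

print(job["output"])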

Example

Get your API key

Create a Linkup account for free to get your API key.
from linkup import LinkupClient
import time

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# Create the research task; the call returns immediately with status "pending".
task = client.research.create(
    q="Compare the 2024 cloud revenue growth of Microsoft, Amazon, and Google.",
    output_type="sourcedAnswer",
    mode="Investigate",
    reasoning_depth="L",
)

# Poll until the task reaches a terminal status ("completed" or "failed").
while True:
    result = client.research.get(task.id)
    if result.status in ("completed", "failed"):
        break
    time.sleep(5)  # poll every 5-10 seconds, per the rate-limit guidance above

print(result.output)
POST /v1/research returns the task envelope immediately, with status set to "pending" and output set to null. GET /v1/research/{id} returns the same envelope; once status is "completed", output is populated:
{
  "id": "01234-abcd-56789",
  "type": "research",
  "status": "completed",
  "createdAt": "2026-01-01T00:00:00.000Z",
  "updatedAt": "2026-01-01T00:08:42.000Z",
  "error": null,
  "input": {
    "q": "Compare the 2024 cloud revenue growth of Microsoft, Amazon, and Google.",
    "outputType": "sourcedAnswer",
    "mode": "Investigate",
    "reasoningDepth": "L"
  },
  "output": {
    "answer": "Microsoft Cloud revenue rose 23% to $137.4B in FY2024; AWS revenue rose 13% to $107.6B; Google Cloud rose 31% to $39.3B ...",
    "sources": [
      {
        "name": "Microsoft 2024 Annual Report",
        "url": "https://www.microsoft.com/investor/reports/ar24/index.html",
        "snippet": "Microsoft Cloud revenue increased 23% to $137.4 billion."
      }
      // ... additional sources
    ]
  }
}
When outputType is "structured", output is the JSON object described by structuredOutputSchema instead of { answer, sources }.

Next

  • Best practices: Mode and depth selection, question structure, schema design.
  • For AI agents: Tool definition and integration prompt.
  • API reference: Full parameter spec and response schema.