Documentation Index

Fetch the complete documentation index at: https://docs.linkup.so/llms.txt

Use this file to discover all available pages before exploring further.

This page is structured for direct use as integration context for a coding agent, or as a function-calling tool definition. Operational guidance is repeated inline so the page is self-contained.

Linkup /research integration guide

You are integrating the Linkup /research API: an autonomous agent that investigates the web to handle questions a single search query cannot resolve. Use cases include verified answers to precise questions, focused investigations of a defined subject, and broad multi-angle reports. The agent gathers evidence from multiple sources in parallel, iterates through investigation, and returns a sourced response with inline citations. Three modes ("Answer", "Investigate", "Research"; explicit selection produces the most predictable behaviour), four reasoning depths ("S", "M", "L", "XL"), two output types ("sourcedAnswer", "structured"). Latency: 2–20 minutes depending on depth. Async lifecycle.

When to use it

Use /research for multi-source synthesis, comparative analysis, or audit-trail citations: questions where the value is the agent’s planning and synthesis across many sources. Other endpoints in the API:
  • Search (/search): synchronous web search; latency ranges from under 1 s to about 30 s depending on depth. Three modes, three output types.
  • Fetch (/fetch): when the URL is already known.
  • Tasks (/tasks): asynchronous batch wrapper.

Setup

pip install linkup-sdk            # Python
# or
npm install linkup-sdk            # TypeScript
export LINKUP_API_KEY="your-api-key"

Example (Python; adapt to the project’s language)

import time
from linkup import LinkupClient

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# /research is async: POST returns immediately with an id. Poll until completed.
#
# mode: type of investigation. Setting it explicitly produces the most
# predictable latency, cost, and output shape. If omitted, the agent
# classifies the question and selects one of the three modes for the request.
# - Answer: precise, evidence-backed answer with a definitive solution.
# - Investigate: focused report on a single defined subject. Narrower than Research.
# - Research: structured report organized by theme, covering many topics in parallel.
#
# reasoningDepth (default L): trade latency for thoroughness.
# - S: light coverage, 2-5 min. Short multi-step investigations.
# - M: balanced cost-to-quality, 3-7 min. Routine use.
# - L: thorough investigation, 5-10 min. High-quality answers under bounded latency.
# - XL: exhaustive coverage, 10-20 min. Deliverables where completeness takes precedence over latency.
task = client.research.create(
    q="Compare the 2024 cloud revenue growth of Microsoft, Amazon, and Google.",
    output_type="sourcedAnswer",
    mode="Investigate",
    reasoning_depth="L",
)

# Poll: 2-second initial interval, back off to 10s.
interval = 2
while True:
    result = client.research.get(task.id)
    if result.status in ("completed", "failed"):
        break
    time.sleep(interval)
    interval = min(interval * 2, 10)

if result.status == "failed":
    raise RuntimeError(f"Research task {task.id} failed")
print(result.output)

Tool definition (OpenAI function-calling format)

Remove the "type": "function" envelope and rename parameters to input_schema for the Anthropic format. Note that this tool is async: the handler should poll on the model’s behalf and return the completed result, not the task id.
{
  "type": "function",
  "function": {
    "name": "linkup_research",
    "description": "Submits an autonomous research task and returns a synthesized, cited answer. Use for questions a single search query cannot resolve: verified answers to precise questions, focused investigations of a defined subject, or broad multi-angle reports. Latency is 2-20 minutes depending on reasoningDepth; the handler should poll until completion.",
    "parameters": {
      "type": "object",
      "properties": {
        "q": {
          "type": "string",
          "description": "The research question, phrased in natural language. Both terse and detailed inputs are accepted; more precise input produces more predictable, thorough, and aligned output. Useful dimensions to specify include the angles to cover, the leads to pursue, the facts to verify, the entities to compare, the constraints any answer must satisfy, and the structure expected from the final response. The agent plans and executes the retrieval steps."
        },
        "outputType": {
          "type": "string",
          "enum": ["sourcedAnswer", "structured"],
          "description": "sourcedAnswer for natural-language answer with citations. structured for JSON matching structuredOutputSchema (keep the schema shallow)."
        },
        "structuredOutputSchema": {
          "type": "string",
          "description": "Required when outputType=structured. JSON Schema (as a string) for the response. Root must be type 'object'. Keep flat. Deeply nested arrays of objects multiply latency and failure rate."
        },
        "mode": {
          "type": "string",
          "enum": ["Answer", "Investigate", "Research"],
          "description": "Type of investigation. Setting this field explicitly produces the most predictable latency, cost, and output shape, and is the recommended path. Answer for precise, evidence-backed answers with a definitive solution. Investigate for a focused report on a single defined subject. Research for a structured report organized by theme covering many topics or entities in parallel. If this field is omitted, the agent classifies the question and selects one of the three modes for the request."
        },
        "reasoningDepth": {
          "type": "string",
          "enum": ["S", "M", "L", "XL"],
          "description": "Controls thoroughness, trading latency for coverage. S: light coverage, 2-5 min, suitable for short multi-step investigations. M: balanced cost-to-quality, 3-7 min, suitable for routine use. L (default): thorough investigation, 5-10 min, suitable for high-quality answers under bounded latency. XL: exhaustive coverage, 10-20 min, suitable for deliverables where completeness takes precedence over latency. Match the depth to budget, latency requirements, and the complexity of the request.",
          "default": "L"
        },
        "includeDomains": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Restrict the search to up to 100 trusted domains. Strongly recommended. Improves quality and reduces latency."
        },
        "excludeDomains": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Domains to exclude."
        },
        "fromDate": {
          "type": "string",
          "description": "ISO 8601 (YYYY-MM-DD). Restrict to results on or after this date."
        },
        "toDate": {
          "type": "string",
          "description": "ISO 8601 (YYYY-MM-DD). Restrict to results on or before this date."
        }
      },
      "required": ["q", "outputType"]
    }
  }
}
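The async handler behind this tool can be sketched as follows, reusing the research.create / research.get methods from the Python example above. The wrapper itself (run_research, its parameter names, and the backoff cap) is illustrative, not part of the SDK:

```python
import time

def run_research(client, poll_interval=2.0, max_interval=10.0, **params):
    """Submit a /research task and block until it completes or fails.

    `client` is assumed to expose the research.create / research.get
    methods shown in the Python example above. Polls with exponential
    backoff: 2 s -> 4 s -> 8 s -> capped at 10 s.
    """
    task = client.research.create(**params)
    interval = poll_interval
    while True:
        result = client.research.get(task.id)
        if result.status in ("completed", "failed"):
            return result
        time.sleep(interval)
        interval = min(interval * 2, max_interval)
```

The tool handler then returns result.output (or raises on "failed") so the model sees the completed answer rather than a task id.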

Operational guidance (inline)

Mode selection

Setting mode explicitly produces the most predictable latency, cost, and output shape. Match the value to the workload:
  • "Answer" for precise, evidence-backed answers with a definitive solution.
  • "Investigate" for a focused report on a single defined subject.
  • "Research" for a structured report organized by theme covering many topics or entities in parallel.
If mode is not provided, the agent classifies the question and selects one of the three modes for the request; the selection depends on that classification.

Reasoning-depth selection

Match the depth to budget, latency requirements, and the complexity of the request:
  • Short multi-step investigations, latency-sensitive: "S" (2–5 min)
  • Routine use: "M" (3–7 min)
  • High-quality answers under bounded latency (default): "L" (5–10 min)
  • Deliverables where completeness takes precedence over latency: "XL" (10–20 min)

Question phrasing

Research runs as an agentic loop: the agent interprets the question, plans its retrieval, executes searches in parallel, verifies claims, and synthesizes the result. Both terse and detailed inputs are accepted; more precise input produces more predictable, more thorough, and more aligned output. Useful dimensions to specify include the angles to cover, the leads to pursue, the facts to verify, the entities to compare, the constraints any answer must satisfy, and the structure expected from the final response.

Source filtering

A handful of trusted domains in includeDomains improves quality and reduces latency. This setting is recommended whenever the use case has identifiable authoritative sources.

"structured" schema design

Flat schemas with primitive fields are fastest and most reliable. Reshape client-side when downstream code requires nesting.
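A flat schema can be sketched as below; the field names are illustrative, not prescribed. Note that structuredOutputSchema is passed as a JSON string, not as an object:

```python
import json

# Root type "object", primitive fields only, no nested arrays of objects.
schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string"},
        "cloud_revenue_growth_pct": {"type": "number"},
        "fiscal_year": {"type": "string"},
        "summary": {"type": "string"},
    },
    "required": ["company", "cloud_revenue_growth_pct"],
}

# Serialize before sending: the API expects a string.
structured_output_schema = json.dumps(schema)
```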

Polling

  • Initial interval: 2 seconds.
  • Backoff: maximum of 10 seconds.
  • Maximum poll rate: 1 request per second.

Constraints

  • Date ranges should be set via fromDate / toDate rather than embedded in the question.
  • Deeply nested structuredOutputSchema values should be flattened and reshaped client-side.
  • Polling without backoff triggers rate limits without reducing time-to-completion.
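A request that respects these constraints can be sketched as a parameter dict for the Python client from the example above. The snake_case names from_date, to_date, and include_domains are assumed by analogy with output_type and reasoning_depth in that example; the underlying API fields are fromDate, toDate, and includeDomains:

```python
# Date bounds go in parameters, not in the question text.
params = {
    "q": "Summarize major cloud-pricing announcements from AWS, Azure, and Google Cloud.",
    "output_type": "sourcedAnswer",
    "mode": "Investigate",
    "from_date": "2024-01-01",  # ISO 8601 (YYYY-MM-DD), inclusive
    "to_date": "2024-12-31",
    # A handful of trusted domains improves quality and reduces latency:
    "include_domains": ["aws.amazon.com", "cloud.google.com", "azure.microsoft.com"],
}
# task = client.research.create(**params)  # client from the setup above
```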

TypeScript notes

  • Import: import { LinkupClient } from 'linkup-sdk'.
  • Methods: await client.research.create({ ... }), await client.research.get(id).
  • Field names are camelCase: outputType, reasoningDepth, includeDomains.

Beta status

The Research endpoint is currently in beta. Behavior, parameters, and pricing may change. Pin the SDK version when stability is required.