
Documentation Index

Fetch the complete documentation index at: https://docs.linkup.so/llms.txt

Use this file to discover all available pages before exploring further.

This page is structured for direct use as integration context for a coding agent, or as a function-calling tool definition. It is self-contained: operational guidance is repeated inline.
For coding agents reading this: the page below is the integration prompt. Adapt the example to the project’s language and pass it through as needed.

Linkup /search integration guide

You are integrating the Linkup /search API: synchronous web search that returns ranked sources, sourced answers, or structured JSON. Three modes ("fast", "standard", "deep"), three output types ("searchResults", "sourcedAnswer", "structured"). Latency: <1s–~30s depending on depth.

When to use it

Use /search when the model requires current facts, specific entities, or web content to ground a response. The endpoint performs a single synchronous round-trip: a natural-language query returns ranked results, optionally synthesized into an answer or a JSON object matching a caller-provided schema. Other endpoints in the API:
  • Fetch (/fetch): when the URL is already known.
  • Research (/research): autonomous research agent. Async, 2–20 minutes depending on depth.
  • Tasks (/tasks): asynchronous batch wrapper around Search, Fetch, and Research.

Setup

pip install linkup-sdk            # Python
# or
npm install linkup-sdk            # TypeScript
export LINKUP_API_KEY="your-api-key"   # treat like a password

Example (Python; adapt to the project’s language)

from linkup import LinkupClient

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# depth: "fast" | "standard" | "deep"
# - fast: sub-second, keyword-only, no LLM, no scraping, no chaining (beta)
# - standard: single-iteration agentic, ~1-3s, can scrape ONE URL provided in the query, parallel sub-searches OK
# - deep: up to 10 iterations, ~5-30s, can scrape multiple URLs, supports sequential search->scrape chains
# Use `deep` when the next step depends on the previous step's output.
#
# outputType:
# - searchResults: array of {name, url, content} — best for LLM grounding
# - sourcedAnswer: natural language + sources — best for end-user display
# - structured: JSON matching structuredOutputSchema — best for pipelines
response = client.search(
    query="What is Microsoft's 2024 revenue?",
    depth="standard",
    output_type="sourcedAnswer",
)
print(response)

Tool definition (OpenAI function-calling format)

Register this as a tool the model can call. For the Anthropic format, remove the "type": "function" envelope and rename the "parameters" key to "input_schema".
{
  "type": "function",
  "function": {
    "name": "linkup_search",
    "description": "Searches the live web with an agentic search engine. Returns ranked sources, a sourced natural-language answer, or structured JSON. Use whenever the model needs current facts, specific named entities, recent events, or information that likely isn't in training data.",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Natural-language question or instruction. Be specific. Include the key entity, date range, or domain when known. For chained work (search then scrape), specify the order: 'First find X. Then scrape X. Then extract Y.'"
        },
        "depth": {
          "type": "string",
          "enum": ["fast", "standard", "deep"],
          "description": "fast: sub-second keyword search, no LLM, no scraping. standard: single-iteration agentic, ~1-3s, can scrape one URL given in the query. deep: up to 10 iterations, ~5-30s, can scrape multiple URLs, supports sequential search-then-scrape chains. Use deep when the next step depends on the previous step's output."
        },
        "outputType": {
          "type": "string",
          "enum": ["searchResults", "sourcedAnswer", "structured"],
          "description": "searchResults for LLM grounding. sourcedAnswer for end-user display. structured for pipelines (requires structuredOutputSchema)."
        },
        "structuredOutputSchema": {
          "type": "string",
          "description": "Required when outputType=structured. A JSON Schema (as a string) the response must match. Root must be type 'object'. Keep schemas shallow."
        },
        "includeDomains": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Restrict the search to up to 100 domains. Recommended when trusted sources for the use case are known."
        },
        "excludeDomains": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Domains to exclude from results."
        },
        "fromDate": {
          "type": "string",
          "description": "ISO 8601 (YYYY-MM-DD). Restrict to results on or after this date. Prefer this over embedding dates in the query."
        },
        "toDate": {
          "type": "string",
          "description": "ISO 8601 (YYYY-MM-DD). Restrict to results on or before this date."
        }
      },
      "required": ["query", "depth", "outputType"]
    }
  }
}
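The reshaping described above can be sketched in a few lines of Python; to_anthropic_tool is an illustrative helper name, not part of the SDK.

def to_anthropic_tool(openai_tool: dict) -> dict:
    # openai_tool is the OpenAI-format definition shown above.
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        "description": fn["description"],
        "input_schema": fn["parameters"],  # Anthropic expects "input_schema"
    }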

Operational guidance (inline, in case other context is unavailable)

Depth selection

Use case                                                     Recommended setting
Single keyword lookup, latency-critical                      "fast"
Single-pass retrieval comparable to one Google search        "standard"
Breadth across adjacent keywords                             "standard" with "run several searches with adjacent keywords"
Scrape one URL provided by the user                          "standard"
Scrape several known URLs (no additional search per URL)     concurrent "standard" calls, each with one URL in the query
Scrape several known URLs and run several searches           "deep"
Locate a URL, then scrape it                                 "deep"
Iterative search-then-scrape-then-search chains              "deep"
Workload requirements undetermined                           "deep"
"standard" accepts a URL in the query and will scrape it. Issuing several "standard" calls in parallel is an efficient way to combine search and scrape across multiple known pages. Use "deep" when the next step’s behavior depends on what the previous step found.

Query patterns

  • Keyword-style for "fast" and simple "standard": "NVIDIA Q4 2024 revenue".
  • Instruction-style for complex "standard" and all "deep": "Find Datadog's pricing page. Scrape it. Extract plan names, per-host prices, and included features."
  • Queries should describe what the search must find, not what the model should conclude. Reasoning happens on the calling side.
  • Include context: dates, locations, industries, domains.
  • For chained work, specify the order: “First find X. Then scrape X. Then extract Y.”
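Applied to the chained pattern, a single "deep" call might look like this (reusing the client from the example above; the query wording is only a suggestion):

response = client.search(
    query=(
        "First find Datadog's pricing page. "
        "Then scrape it. "
        "Then extract plan names, per-host prices, and included features."
    ),
    depth="deep",
    output_type="searchResults",
)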

Output type

  • "searchResults" when the model will reason over the sources.
  • "sourcedAnswer" when a user sees the answer directly.
  • "structured" when code parses the output. Keep schemas shallow.

Constraints

  • Use "standard" or "deep" for instruction-style prompts; "fast" is keyword-only.
  • A single "standard" call scrapes at most one URL. For several known URLs, issue concurrent "standard" calls; for sequential chains, use "deep".
  • Set date ranges via fromDate / toDate rather than embedding dates in the query (see the sketch after this list).
  • Always provide a structuredOutputSchema when outputType is "structured".
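For the date-range constraint, a hedged sketch: from_date / to_date are assumed to be the Python SDK's snake_case equivalents of fromDate / toDate and are assumed to accept date objects; the REST API itself takes ISO 8601 strings (YYYY-MM-DD) as described above.

from datetime import date

response = client.search(
    query="NVIDIA data center revenue announcements",
    depth="standard",
    output_type="sourcedAnswer",
    from_date=date(2024, 1, 1),    # assumed snake_case name; REST parameter is fromDate
    to_date=date(2024, 12, 31),    # assumed snake_case name; REST parameter is toDate
)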

TypeScript notes

  • Import: import { LinkupClient } from 'linkup-sdk'.
  • Method: await client.search({ query, depth, outputType }). Single object argument.
  • Field names are camelCase in the SDK (outputType, includeDomains, fromDate).
  • Wrap in an async function and await the call.
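Putting the notes above together, a minimal TypeScript sketch; the constructor option name apiKey is assumed, and the query is a placeholder.

import { LinkupClient } from 'linkup-sdk';

const client = new LinkupClient({ apiKey: '<YOUR_LINKUP_API_KEY>' });

async function main() {
  // Single object argument; field names are camelCase.
  const response = await client.search({
    query: "What is Microsoft's 2024 revenue?",
    depth: 'standard',
    outputType: 'sourcedAnswer',
  });
  console.log(response);
}

main();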