This page is structured for direct use as integration context for a coding agent, or as a function-calling tool definition. It is self-contained: operational guidance is repeated inline.

Documentation Index
Fetch the complete documentation index at: https://docs.linkup.so/llms.txt
Use this file to discover all available pages before exploring further.
For coding agents reading this: the page below is the integration prompt.
Adapt the example to the project’s language and pass it through as needed.
Linkup /search integration guide
You are integrating the Linkup /search API: synchronous web search that
returns ranked sources, sourced answers, or structured JSON.
Three modes ("fast", "standard", "deep"), three output types ("searchResults",
"sourcedAnswer", "structured"). Latency: <1s–~30s depending on depth.
When to use it
Use /search when the model requires current facts, specific entities, or web
content to ground a response. The endpoint performs a single synchronous
round-trip: a natural-language query returns ranked results, optionally
synthesized into an answer or a JSON object matching a caller-provided schema.
Other endpoints in the API:
- Fetch (/fetch): when the URL is already known.
- Research (/research): autonomous research agent. Async, 2–20 minutes depending on depth.
- Tasks (/tasks): asynchronous batch wrapper around Search, Fetch, and Research.
Setup
Example (Python; adapt to the project’s language)
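The example itself is not included inline, so here is a minimal sketch of a single synchronous /search round-trip. It assumes the REST endpoint lives at https://api.linkup.so/v1/search, takes a Bearer token, and accepts the body fields `q`, `depth`, and `outputType`; verify the exact path and field names against the API reference before relying on this.

```python
# Minimal /search sketch. The endpoint URL and JSON field names ("q",
# "depth", "outputType") are assumptions based on this guide; check the
# API reference for the authoritative shapes.
import json
import os
import urllib.request

API_URL = "https://api.linkup.so/v1/search"  # assumed endpoint path


def build_payload(query: str, depth: str = "standard",
                  output_type: str = "searchResults") -> dict:
    """Assemble the JSON body for one synchronous /search call."""
    return {"q": query, "depth": depth, "outputType": output_type}


def search(query: str, depth: str = "standard",
           output_type: str = "searchResults") -> dict:
    """POST the query and return the decoded JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(query, depth, output_type)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LINKUP_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires LINKUP_API_KEY and network access):
# result = search("NVIDIA Q4 2024 revenue", depth="fast")
```

If the project already uses an official Linkup SDK, prefer it over raw HTTP; the payload shape above carries over.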
Tool definition (OpenAI function-calling format)
Register this as a tool the model can call. Remove the "type": "function"
envelope and rename parameters to input_schema for the Anthropic format.
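The definition itself is not shown inline, so here is a sketch in the OpenAI function-calling format. The tool name is hypothetical, and the parameter names are taken from this guide (query, depth, outputType); adjust them against the API reference.

```python
# Sketch of a /search tool definition in the OpenAI function-calling
# format. The name "linkup_search" is illustrative; parameter names and
# enums follow this guide.
LINKUP_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "linkup_search",
        "description": (
            "Synchronous web search returning ranked sources, a sourced "
            "answer, or structured JSON."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Natural-language search query.",
                },
                "depth": {
                    "type": "string",
                    "enum": ["fast", "standard", "deep"],
                },
                "outputType": {
                    "type": "string",
                    "enum": ["searchResults", "sourcedAnswer", "structured"],
                },
            },
            "required": ["query", "depth", "outputType"],
        },
    },
}
```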
Operational guidance (inline, in case other context is unavailable)
Depth selection
| Use case | Recommended setting |
|---|---|
| Single keyword lookup, latency-critical | "fast" |
| Single-pass retrieval comparable to one Google search | "standard" |
| Breadth across adjacent keywords | "standard" with “run several searches with adjacent keywords” |
| Scrape one URL provided by the user | "standard" |
| Scrape several known URLs (no additional search per URL) | concurrent "standard" calls, each with one URL in the query |
| Scrape several known URLs and run several searches | "deep" |
| Locate a URL, then scrape it | "deep" |
| Iterative search-then-scrape-then-search chains | "deep" |
| Workload requirements undetermined | "deep" |
"standard" accepts a URL in the query and will scrape it. Issuing several
"standard" calls in parallel is an efficient way to combine search and scrape
across multiple known pages. Use "deep" when the next step’s behavior depends
on what the previous step found.
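The parallel-"standard" pattern above can be sketched with a thread pool. `search` here stands in for any function that performs one /search round-trip (such as an SDK call); it is not part of this snippet.

```python
# Fan out concurrent "standard" calls, one known URL per query. The
# `search` argument is whatever single-call function the project uses.
from concurrent.futures import ThreadPoolExecutor


def search_many(search, queries):
    """Run one "standard" /search call per query, in parallel."""
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        return list(pool.map(search, queries))


# Example: scrape several known pages, each via its own "standard" call.
# results = search_many(
#     search,
#     [f"Scrape {url} and summarize it." for url in known_urls],
# )
```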
Query patterns
- Keyword-style for "fast" and simple "standard": "NVIDIA Q4 2024 revenue".
- Instruction-style for complex "standard" and all "deep": "Find Datadog's pricing page. Scrape it. Extract plan names, per-host prices, and included features."
- Queries should describe what the search must find, not what the model should conclude. Reasoning happens on the calling side.
- Include context: dates, locations, industries, domains.
- For chained work, specify the order: “First find X. Then scrape X. Then extract Y.”
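For chained work, the ordering advice above can be captured in a small helper. This helper is hypothetical, purely a string template for composing instruction-style "deep" queries.

```python
# Hypothetical helper: compose an instruction-style query for a "deep"
# call, spelling out the order of steps explicitly.
def chained_query(find: str, extract: str) -> str:
    return (
        f"First find {find}. "
        f"Then scrape that page. "
        f"Then extract {extract}."
    )


q = chained_query("Datadog's pricing page",
                  "plan names and per-host prices")
```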
Output type
"searchResults"when the model will reason over the sources."sourcedAnswer"when a user sees the answer directly."structured"when code parses the output. Keep schemas shallow.
Constraints
- Use "standard" or "deep" for instruction-style prompts; "fast" is keyword-only.
- A single "standard" call scrapes at most one URL. For several known URLs, issue concurrent "standard" calls; for sequential chains, use "deep".
- Set date ranges via fromDate/toDate rather than embedding dates in the query.
- Always provide a structuredOutputSchema when outputType is "structured".
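To illustrate the last constraint and the "keep schemas shallow" advice: here is a sketch of a flat structuredOutputSchema for the pricing-page example. The field names are illustrative, and whether the API expects the schema as an object or a JSON-encoded string should be checked against the API reference.

```python
# A shallow schema for outputType "structured": one flat object, no
# nesting. Field names are illustrative.
PRICING_SCHEMA = {
    "type": "object",
    "properties": {
        "plan_name": {"type": "string"},
        "price_per_host_usd": {"type": "number"},
        "features": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["plan_name", "price_per_host_usd"],
}

# Passed alongside the query (field names per this guide; the schema may
# need to be JSON-encoded as a string, check the API reference):
# payload = {"q": "...", "outputType": "structured",
#            "structuredOutputSchema": PRICING_SCHEMA}
```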
TypeScript notes
- Import: import { LinkupClient } from 'linkup-sdk'.
- Method: await client.search({ query, depth, outputType }). Single object argument.
- Field names are camelCase in the SDK (outputType, includeDomains, fromDate).
- Wrap in an async function and await the call.