This page covers how to pick a depth, how to phrase queries for each of
"fast", "standard", and "deep", how to choose an output type, and how to
apply source and date filtering on the Search endpoint.
Choosing depth
The depth parameter controls whether and how agentic search is used within a search.
| Use case | Recommended setting |
|---|---|
| Keyword style lookup at sub-second latency | "fast" |
| Instruction-based retrieval comparable to one or a few Google searches | "standard" |
| Breadth across adjacent keywords (news, research) | "standard" with explicit “run several searches with adjacent keywords” |
| Scrape one URL provided in the query and run a search | "standard" |
| Scrape several known URLs and run several searches | "deep" |
| Find a URL, then scrape it | "deep" |
"fast"; if one or a few parallel
Google searches would answer the question → "standard"; if a human would
open multiple tabs or follow leads → "deep".
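The rule of thumb above can be sketched as a small request builder. This is a minimal sketch, not the official SDK: the body field names ("q", "depth", "outputType") follow the parameter names used on this page, and the helper name is ours.

```python
# Sketch: build a Search request body for a chosen depth.
# Field names ("q", "depth", "outputType") follow this page's
# parameter list; adjust if your API version differs.

VALID_DEPTHS = {"fast", "standard", "deep"}

def build_search_payload(query: str, depth: str = "standard",
                         output_type: str = "searchResults") -> dict:
    """Return the JSON body for a POST to the Search endpoint."""
    if depth not in VALID_DEPTHS:
        raise ValueError(f"depth must be one of {sorted(VALID_DEPTHS)}")
    return {"q": query, "depth": depth, "outputType": output_type}

# Keyword-shaped lookup where latency matters -> "fast"
payload = build_search_payload("linkup pricing", depth="fast")
```

Validating `depth` client-side catches typos like `"Deep"` before the request is sent.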
Per-depth guidance
"fast" is keyword-only and bypasses the LLM entirely.It returns
sub-second results for keyword-shaped queries where latency is the
binding constraint. Currently in beta.Behaviour
- Single-pass, keyword-like search with no LLM involvement.
- No query interpretation, no reformulation, no evaluation.
- The query string is passed to the index as-is.
- No in-query URL scraping, no sub-searches, no iteration.
When to use it
- Conversational AI use cases where low latency is critical.
- High-volume, low-latency pipelines.
- Lookups for one specific piece of information.
When not to use it
- Anything that requires reading a page (use "standard" for one URL, "deep" for several).
- Any query whose intent depends on instruction parsing, ordering, or multi-step retrieval: "fast" will treat the instructions as keywords.
Query shape
Keep queries short and keyword-shaped.
Chain with Fetch
An alternative to in-query URL scraping: use Search to find candidate URLs, then call Fetch on the most relevant ones. This gives the caller direct control over which pages get scraped and how their content is processed downstream.
Prompting best practice
"standard" and "deep" mode use agentic search and can follow instruction-style queries. "fast" ignores natural language instructions.
Queries should be split between:
- What the search must retrieve: agentic search will optimize searches to find those elements.
- How the results should be reasoned over: for "sourcedAnswer" and "searchResults", how to use the data to answer a question.
| Original phrasing | Recommended phrasing | Why |
|---|---|---|
| "How to estimate the annual IT costs of Total SA?" | "Find data sources that quantify Total SA's IT spend (annual reports, tech-vendor case studies, IT services contracts mentioning Total SA). For each, extract the figure and the year." | The first phrasing requests an answer; the second specifies retrievable evidence the agent can locate. |
| "Tell me about the company linkup.so" | "Find the homepage, product pages, and about page for linkup.so. Extract: what the company does, target customers, pricing model, and known investors." | The first phrasing is unscoped; the second names targets and extraction fields. |
How to construct a query
A retrieval query has four components. The same shape applies across
"standard" and "deep", with longer instructions and explicit ordering on
"deep".Scope
Where the agent should look.Example: “On the company domain
{company_domain}, analyze homepage, about, and blog”Method
What to extract.Example: “Include products, business model, target market, value proposition”
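The scope-plus-method shape can be composed programmatically. A minimal sketch, assuming two free-text parts joined into one instruction-style query; the helper name is ours, not part of the API.

```python
def compose_query(scope: str, method: str) -> str:
    """Join the scope (where to look) and the method (what to extract)
    into one instruction-style query for "standard" or "deep"."""
    return f"{scope.rstrip('.')}. {method.rstrip('.')}."

q = compose_query(
    "On the company domain linkup.so, analyze homepage, about, and blog",
    "Include products, business model, target market, value proposition",
)
```

Keeping scope and method as separate arguments makes it easy to template the scope (e.g. a `{company_domain}` placeholder) while reusing the extraction instructions.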
Output type selection
| Downstream consumer | Recommended setting |
|---|---|
| LLM that will reason over the sources | "searchResults" |
| End user, displayed directly | "sourcedAnswer" |
| Code that parses fields | "structured" (with structuredOutputSchema) |
"structured", see the
structured output tutorial
for JSON-schema mechanics.
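As a sketch, a "structured" request might pair outputType with a JSON schema. The structuredOutputSchema field name comes from the table above; the example schema and passing it as a serialized JSON string are assumptions, so check the structured output tutorial for the exact mechanics.

```python
import json

# Hypothetical schema for extracting company facts.
schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string"},
        "pricing_model": {"type": "string"},
        "investors": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["company"],
}

payload = {
    "q": "Find the homepage and about page for linkup.so. Extract what "
         "the company does, pricing model, and known investors.",
    "depth": "standard",
    "outputType": "structured",
    "structuredOutputSchema": json.dumps(schema),
}
```

Note the query still follows the prompting guidance above: it names targets (homepage, about page) and extraction fields that mirror the schema.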
Source filtering
The full filtering parameter list lives on the Search overview.
- Use includeDomains (up to 100) and excludeDomains (unlimited) for control over sources. See the filtering tutorial for more.
- Use fromDate and toDate (ISO 8601, YYYY-MM-DD) to restrict the index window. Note that some webpages (product pages, news) may carry a metadata publish date that differs from their latest update date, which makes date filtering unstable.
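The constraints above (domain caps, ISO 8601 dates) can be enforced client-side before sending a request. A sketch under those assumptions; the helper name is ours, and the field names follow the bullets above.

```python
from datetime import date

def add_filters(payload: dict, include_domains=None, exclude_domains=None,
                from_date=None, to_date=None) -> dict:
    """Attach source and date filters to a Search payload.
    Dates must be ISO 8601 (YYYY-MM-DD); includeDomains is capped at 100."""
    if include_domains:
        if len(include_domains) > 100:
            raise ValueError("includeDomains accepts at most 100 domains")
        payload["includeDomains"] = list(include_domains)
    if exclude_domains:
        payload["excludeDomains"] = list(exclude_domains)
    for key, value in (("fromDate", from_date), ("toDate", to_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on bad format
            payload[key] = value
    return payload

p = add_filters({"q": "quarterly report", "depth": "standard"},
                include_domains=["sec.gov"], from_date="2024-01-01")
```

Rejecting malformed dates locally avoids silent misfiltering on the server side.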
LinkedIn data extraction
LinkedIn extraction is only available on the Search endpoint. The
Fetch endpoint does
not retrieve LinkedIn content.
| Target | Query formulation |
|---|---|
| Person or company profile | {linkedin_url} + Return the profile details. |
| Person or company posts | {linkedin_url} + Return the recent posts. |
| Person or company comments | {linkedin_url} + Return the comments. |
| Topic search | Search for LinkedIn posts on {keyword}. |
{linkedin_url} is a person URL (linkedin.com/in/{slug}) or a company
URL (linkedin.com/company/{slug}).
LinkedIn data extraction only works with the exact LinkedIn profile
or company URL. Shortened links, search-result fragments, or partial
slugs will not return reliable data.
"deep" to find the profile and scrape it in the same call.
Common pitfalls
Bad → Fix pairs grounded in observed integration failures. Each pair targets a single decision: rewrite the prompt, not the surrounding code.
- Reasoning instead of retrieving
- Unscoped 'tell me about' prompts
- Dates in the query string
- Instruction prompts on fast
- Using Fetch for LinkedIn
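One of the pitfalls above, dates in the query string, can be illustrated as a Bad → Fix pair. A sketch: both payloads are ours, with field names taken from the filtering section.

```python
# Bad: the date window is buried in the query text, where "fast"
# treats it as keywords and agentic modes must infer it.
bad = {"q": "linkup funding news from 2024-01-01 to 2024-06-30",
       "depth": "standard"}

# Fix: keep the query about the content and move the window into
# the fromDate/toDate parameters.
fixed = {"q": "Find news articles about linkup funding rounds",
         "depth": "standard",
         "fromDate": "2024-01-01",
         "toDate": "2024-06-30"}
```

The fix changes only the prompt and parameters, matching the guidance that each pair targets a single decision.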
Resources
Search overview
Parameters, modes, output types, pricing, and the minimal first call.
Filtering tutorial
Domain and date filtering: how to combine includeDomains, excludeDomains, fromDate, and toDate.
Structured output
JSON-schema mechanics for outputType set to "structured".
Prompt Optimizer
Rewrite a draft prompt into the retrieval-shaped form the Search endpoint expects.
Templates available.