Documentation Index

Fetch the complete documentation index at: https://docs.linkup.so/llms.txt

Use this file to discover all available pages before exploring further.

This page covers how to pick a depth, how to phrase queries for each of "fast", "standard", and "deep", how to choose an output type, and how to apply source and date filtering on the Search endpoint.

Choosing depth

The depth parameter controls whether, and how, agentic search is used within the search.
Use case → Recommended setting:
  • Keyword-style lookup at sub-second latency → "fast"
  • Instruction-based retrieval comparable to one or a few Google searches → "standard"
  • Breadth across adjacent keywords (news, research) → "standard" with an explicit "run several searches with adjacent keywords"
  • Scrape one URL provided in the query and run a search → "standard"
  • Scrape several known URLs and run several searches → "deep"
  • Find a URL, then scrape it → "deep"
Rule of thumb: chat or keyword lookup → "fast"; if one or a few parallel Google searches would answer the question → "standard"; if a human would open multiple tabs or follow leads → "deep".
"standard" and "deep" can scrape URLs provided in the query. "standard" accepts one URL; "deep" accepts several and scrapes them with JavaScript rendering.
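The rule of thumb above can be sketched as a small routing helper. This is an illustration only: the function name and its two boolean inputs are hypothetical, while the three depth strings are the API's own values.

```python
def pick_depth(opens_multiple_tabs: bool, needs_instructions: bool) -> str:
    """Map the rule of thumb to a depth value (helper is illustrative).

    - chat or keyword lookup                -> "fast"
    - one or a few parallel Google searches -> "standard"
    - multiple tabs / following leads       -> "deep"
    """
    if opens_multiple_tabs:
        return "deep"
    if needs_instructions:
        return "standard"
    return "fast"
```

A caller that knows its query is keyword-shaped and latency-bound would pass both flags as False and get "fast" back.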

Per-depth guidance

"fast" is keyword-only and bypasses the LLM entirely. It returns sub-second results for keyword-shaped queries where latency is the binding constraint. Currently in beta.

Behaviour

  • Single-pass, keyword-like search with no LLM involvement.
  • No query interpretation, no reformulation, no evaluation.
  • The query string is passed to the index as-is.
  • No in-query URL scraping, no sub-searches, no iteration.

When to use it

  • Conversational AI use cases where low latency is critical.
  • High-volume, low-latency pipelines.
  • Lookups for one specific piece of information.

When not to use it

  • Anything that requires reading a page (use "standard" for one URL, "deep" for several).
  • Any query whose intent depends on instruction parsing, ordering, or multi-step retrieval — "fast" will treat the instructions as keywords.

Query shape

Keep queries short and keyword-shaped:
NVIDIA Q4 2024 revenue
Current EUR/USD exchange rate

Chain with Fetch

An alternative to in-query URL scraping: use Search to find candidate URLs, then call Fetch on the most relevant ones. This gives the caller direct control over which pages get scraped and how their content is processed downstream.
# Assumes the linkup-sdk Python client: pip install linkup-sdk
from linkup import LinkupClient

client = LinkupClient(api_key="YOUR_API_KEY")

search = client.search(query="Datadog pricing tiers", depth="standard", output_type="searchResults")
for r in search.results[:3]:
    page = client.fetch(url=r.url, render_js=True)
    # process page yourself

Prompting best practice

"standard" and "deep" modes use agentic search and can follow instruction-style queries. "fast" ignores natural language instructions. Queries should be split between:
  1. What the search must retrieve: agentic search will optimize searches to find those elements.
  2. How the results should be reasoned over: for "sourcedAnswer" and "searchResults", how to use the data to answer a question.
  • Original: "How to estimate the annual IT costs of Total SA?"
    Recommended: "Find data sources that quantify Total SA's IT spend (annual reports, tech-vendor case studies, IT services contracts mentioning Total SA). For each, extract the figure and the year."
    Why: the first phrasing requests an answer; the second specifies retrievable evidence the agent can locate.
  • Original: "Tell me about the company linkup.so"
    Recommended: "Find the homepage, product pages, and about page for linkup.so. Extract: what the company does, target customers, pricing model, and known investors."
    Why: the first phrasing is unscoped; the second names targets and extraction fields.
A retrieval query has four components. The same shape applies across "standard" and "deep", with longer instructions and explicit ordering on "deep".
  1. Role: from which perspective the agent should think. Example: "You are an expert GTM consultant"
  2. Scope: where the agent should look. Example: "On the company domain {company_domain}, analyze homepage, about, and blog"
  3. Method: what to extract. Example: "Include products, business model, target market, value proposition"
  4. Format: shape of the answer ("sourcedAnswer" or "structured"). Example: "Concise, business-oriented prose"
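The four components can be assembled into a single query string. A minimal sketch, reusing the examples above; company_domain is a placeholder the caller would fill in:

```python
company_domain = "linkup.so"  # placeholder target domain

query = (
    "You are an expert GTM consultant. "                      # 1. Role
    f"On the company domain {company_domain}, analyze the "   # 2. Scope
    "homepage, about page, and blog. "
    "Include products, business model, target market, and "   # 3. Method
    "value proposition. "
    "Answer in concise, business-oriented prose."             # 4. Format
)
```

On "deep", the same shape holds but the Method section can carry longer instructions and explicit ordering ("first ..., then ...").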

Output type selection

Downstream consumer → Recommended setting:
  • LLM that will reason over the sources → "searchResults"
  • End user, displayed directly → "sourcedAnswer"
  • Code that parses fields → "structured" (with structuredOutputSchema)
For "structured", see the structured output tutorial for JSON-schema mechanics.

Source filtering

The full filtering parameter list lives on the Search overview.
  • Use includeDomains (up to 100) and excludeDomains (unlimited) for control over sources. See the filtering tutorial for more.
  • Use fromDate and toDate (ISO 8601, YYYY-MM-DD) to restrict the index window. Note that some webpages (product pages, news) may carry a metadata publish date that differs from their latest update date, which can make date filtering unreliable.
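A sketch of the filtering parameters combined in one request. The field names mirror the camelCase parameters named above; the query text, domains, and dates are illustrative, and the authoritative parameter list lives on the Search overview.

```python
from datetime import date

# Illustrative request parameters combining source and date filters.
params = {
    "q": "Datadog pricing changes",
    "depth": "standard",
    "outputType": "searchResults",
    "includeDomains": ["datadoghq.com"],       # up to 100 domains
    "excludeDomains": ["reddit.com"],          # unlimited
    "fromDate": date(2024, 1, 1).isoformat(),  # ISO 8601, YYYY-MM-DD
    "toDate": date(2024, 12, 31).isoformat(),
}
```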

LinkedIn data extraction

LinkedIn extraction is only available on the Search endpoint. The Fetch endpoint does not retrieve LinkedIn content.
The Search endpoint can extract structured data from LinkedIn profile and company pages, and can surface posts by keyword.
Target → Query formulation:
  • Person or company profile: {linkedin_url} + "Return the profile details."
  • Person or company posts: {linkedin_url} + "Return the recent posts."
  • Person or company comments: {linkedin_url} + "Return the comments."
  • Topic search: "Search for LinkedIn posts on {keyword}."
{linkedin_url} is a person URL (linkedin.com/in/{slug}) or a company URL (linkedin.com/company/{slug}).
LinkedIn data extraction only works with the exact LinkedIn profile or company URL. Shortened links, search-result fragments, or partial slugs will not return reliable data.
When the URL isn’t known up front, use "deep" to find the profile and scrape it in the same call.
First find the LinkedIn profile for {person_name} at {company}.
Then scrape that URL and return the profile details.
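The two-step query above can be templated before sending with depth="deep". The person and company names here are placeholders:

```python
person_name = "Jane Doe"  # placeholder
company = "Acme Corp"     # placeholder

# Sent with depth="deep" so the agent can find the URL, then scrape it
# in the same call.
query = (
    f"First find the LinkedIn profile for {person_name} at {company}. "
    "Then scrape that URL and return the profile details."
)
```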

Common pitfalls

The Bad → Fix pairs below come from observed integration failures. Each pair targets a single decision: rewrite the prompt, not the surrounding code.
Bad
How to estimate Total SA's annual IT spend?
Fix
Find Total SA's annual reports and IT-services contracts that mention IT
spend. For each source, extract the disclosed IT-spend figure and the year,
with the citation URL.

Resources

Search overview

Parameters, modes, output types, pricing, and the minimal first call.

Filtering tutorial

Domain and date filtering: how to combine includeDomains, excludeDomains, fromDate, and toDate.

Structured output

JSON-schema mechanics for outputType set to "structured".

Prompt Optimizer

Rewrite a draft prompt into the retrieval-shaped form the Search endpoint expects. Templates available.
Errors and rate limits: Errors · Rate limits