Documentation Index

Fetch the complete documentation index at: https://docs.linkup.so/llms.txt

Use this file to discover all available pages before exploring further.

This page is structured for direct use as integration context for a coding agent. Operational guidance is repeated inline so the page is self-contained.

Linkup /tasks integration guide

You are integrating the Linkup /tasks API: an asynchronous batch wrapper around Search, Fetch, and Research. A single submission accepts up to 100 tasks, in any mix of the three endpoints, with the same parameters and pricing as direct calls. There is no batching surcharge.

When to use it

Use Tasks for bulk and asynchronous workloads: CRM enrichment, backfills, scheduled pipelines, mixed-endpoint batches.
  • Hundreds of queries to process: Tasks.
  • Mixed Search + Fetch + Research workflow: Tasks (one submission).
  • Long-running research that should not block: Tasks.
  • Interactive single-shot calls: call the synchronous endpoint directly.
Each task is billed exactly as a direct synchronous call to the same endpoint. No batching surcharge, no batching discount.

Setup

pip install linkup-sdk            # Python
# or
npm install linkup-sdk            # TypeScript
export LINKUP_API_KEY="your-api-key"

Example (Python; adapt to the project’s language)

import time
from linkup import LinkupClient

client = LinkupClient(api_key="<YOUR_LINKUP_API_KEY>")

# Submit up to 100 tasks per call. Mix types freely.
# Each task carries the same parameters as the corresponding synchronous endpoint.
tasks = client.tasks.create([
    {
        "type": "search",
        "input": {
            "q": "What is Microsoft's 2024 revenue?",
            "depth": "standard",
            "outputType": "sourcedAnswer",
        },
    },
    {
        "type": "fetch",
        "input": {
            "url": "https://docs.linkup.so",
            "renderJs": True,
        },
    },
    {
        "type": "research",
        "input": {
            "q": "Compare 2024 cloud revenue growth of MSFT, AMZN, and GOOG.",
            "outputType": "sourcedAnswer",
            "mode": "Investigate",
            "reasoningDepth": "L",
        },
    },
])

# Poll. Errors are per-task; a failure does not fail the batch.
remaining = {t.id for t in tasks}
while remaining:
    for t in client.tasks.list():
        if t.id in remaining and t.status in ("completed", "failed"):
            if t.status == "completed":
                print(t.id, t.output)
            else:
                print(t.id, "FAILED:", t.error)
            remaining.discard(t.id)
    if remaining:
        time.sleep(2)

Tool definition (OpenAI function-calling format)

For the Anthropic tool format, remove the "type": "function" envelope and rename "parameters" to "input_schema". Note that this tool is asynchronous: the handler should poll on the model's behalf and return completed results, not task ids.

{
  "type": "function",
  "function": {
    "name": "linkup_tasks",
    "description": "Submits a batch of /search, /fetch, and /research tasks asynchronously. Up to 100 tasks per call. Use for bulk workloads, scheduled pipelines, or mixed-endpoint workflows. Same parameters and pricing as the synchronous endpoints. For interactive single-shot calls, call linkup_search / linkup_fetch / linkup_research directly.",
    "parameters": {
      "type": "object",
      "properties": {
        "tasks": {
          "type": "array",
          "maxItems": 100,
          "description": "Array of task objects to submit. Each task is { type, input }.",
          "items": {
            "type": "object",
            "properties": {
              "type": {
                "type": "string",
                "enum": ["search", "fetch", "research"],
                "description": "Which endpoint to invoke for this task."
              },
              "input": {
                "type": "object",
                "description": "Parameters for the chosen endpoint. For 'search', same as POST /search. For 'fetch', same as POST /fetch. For 'research', same as POST /research."
              }
            },
            "required": ["type", "input"]
          }
        }
      },
      "required": ["tasks"]
    }
  }
}

Operational guidance (inline)

Appropriate workloads

  • Bulk workloads (hundreds of queries).
  • Asynchronous workflows that would otherwise hold an HTTP connection open.
  • Mixed-endpoint batches in one submission.

Inappropriate workloads

  • Single-shot interactive calls. Use the synchronous endpoint directly.
  • Cost reduction. Pricing is per-task identical to direct calls.

Submission

  • Up to 100 tasks per POST /tasks call. For more, submit parallel batches.
  • Tasks run in parallel; submission order does not constrain execution order.
  • For dependent work (search results feeding fetch URLs), submit the second batch after the first completes.
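Workloads larger than the 100-task cap can be split into parallel submissions. A one-line helper sketch (the function name is illustrative):

```python
def chunk_tasks(task_specs, size=100):
    """Split a task list into sublists of at most `size`, the per-call cap
    on POST /tasks. Each sublist is one client.tasks.create(...) call."""
    return [task_specs[i:i + size] for i in range(0, len(task_specs), size)]
```

For dependent work, apply this only to the first phase; build the second phase's specs from the first phase's completed outputs before chunking and submitting it.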

Polling

  • Batches that are mostly "search" (depth "fast" or "standard") or "fetch": poll at 1–2 second intervals.
  • Batches that are mostly "research": poll at 5 second intervals, backing off to 30 seconds for long-running batches.
  • Mixed batches: start at 2 seconds, back off to 10.
  • Maximum poll rate: 1 request per second.
  • Use GET /tasks (list) for bulk polling; it takes far fewer API calls than polling each task individually.

Error handling

  • Each task succeeds or fails independently.
  • A failure in one task does not fail the batch.
  • Inspect error on failed tasks; retry only the failures.
  • No credit is deducted for failed tasks.

Result storage

  • Completed task results are retrievable for a bounded period.
  • Persist to durable storage as soon as a task completes.

Constraints

  • Tasks is not cheaper than direct calls. Pricing is per-task identical.
  • Polling above 1 request per second triggers rate limits.
  • Tasks is not durable result storage.

TypeScript notes

  • Import: import { LinkupClient } from 'linkup-sdk'.
  • Methods: await client.tasks.create([...]), await client.tasks.list(), await client.tasks.get(id).
  • tasks.create takes the array directly (not wrapped in { tasks: [...] }).