Linkup with LangChain

How to use the Linkup API with LangChain.

👀 Overview

Linkup can be used with LangChain as a Retriever. This tutorial shows how to set up a basic LangChain pipeline that leverages Linkup as a Retriever to fetch contextual information from the internet.

📦 Installation

Install the LangChain integration using pip:

pip install langchain-linkup

🛠️ Usage

Setting Up Your Environment

  1. 🔑 Obtain an API Key:

    Sign up on Linkup to get your API key.

  2. ⚙️ Set up the API Key:

    Option 1: Export the LINKUP_API_KEY environment variable in your shell before using the Linkup
    LangChain component.

    export LINKUP_API_KEY='YOUR_LINKUP_API_KEY'
    

    Option 2: Set the LINKUP_API_KEY environment variable directly within Python before creating
    the Linkup LangChain component, for instance with os.environ or with python-dotenv and a
    .env file (python-dotenv must be installed separately in that case).

    import os
    from langchain_linkup import LinkupRetriever
    
    os.environ["LINKUP_API_KEY"] = "YOUR_LINKUP_API_KEY"
    # or dotenv.load_dotenv()
    retriever = LinkupRetriever(...)
    ...
    

    Option 3: Pass the Linkup API key to the Linkup LangChain component when creating it.

    from langchain_linkup import LinkupRetriever
    
    retriever = LinkupRetriever(linkup_api_key="YOUR_LINKUP_API_KEY", ...)
    ...
    

Use the Linkup Retriever

You can then use the Linkup Retriever to fetch web information relevant to your question.

documents = retriever.invoke(input="<YOUR_QUESTION_HERE>")
print(documents)
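Each result comes back as a LangChain Document, with the page text in page_content and source details in metadata. Below is a minimal post-processing sketch; the Document stand-in replaces langchain_core.documents.Document so the snippet runs offline, and the "name"/"url" metadata keys match those used in the RAG example further down this page:

```python
from dataclasses import dataclass, field


# Stand-in for langchain_core.documents.Document, so this sketch runs offline.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)


def summarize_documents(docs: list) -> list:
    """Build one source-attributed line per retrieved document.

    Assumes each document's metadata carries "name" and "url" keys,
    as in the RAG example further down this page.
    """
    return [
        f"{d.metadata.get('name', 'unknown')} ({d.metadata.get('url', '')}): {d.page_content[:80]}"
        for d in docs
    ]


docs = [Document("Linkup is a French AI startup.", {"name": "Example", "url": "https://example.com"})]
print(summarize_documents(docs))
```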

📋 Example

Linkup search queries can run in one of two very different modes:

  • with depth="standard", the search is straightforward and fast, suited to relatively simple
    queries (e.g. "What's the weather in Paris today?")
  • with depth="deep", the search uses an agentic workflow, which generally makes it slower, but
    able to solve more complex queries (e.g. "What is the company profile of LangChain across the
    last few years, and how does it compare to its competitors?")
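Since depth is fixed when the retriever is created, one option is to pick it per query before building the retriever. The routing heuristic below is purely illustrative (an assumption on our part, not part of the Linkup API):

```python
def choose_depth(query: str) -> str:
    """Toy heuristic (illustrative only, not part of the Linkup API):
    route long or multi-part questions to the slower agentic "deep" mode."""
    complex_markers = ("compare", "across", "over the last", "and how")
    if len(query.split()) > 15 or any(m in query.lower() for m in complex_markers):
        return "deep"
    return "standard"


print(choose_depth("What's the weather in Paris today?"))  # standard
print(choose_depth("How has LangChain evolved across the last few years?"))  # deep
```

The chosen value can then be passed when creating the retriever, e.g. LinkupRetriever(depth=choose_depth(query)).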

🔍 Retriever (Easy)

import os

from langchain_linkup import LinkupRetriever

os.environ["LINKUP_API_KEY"] = "YOUR_LINKUP_API_KEY"

retriever = LinkupRetriever(
    depth="deep",  # "standard" or "deep"
)

# Perform a search query
documents = retriever.invoke(input="What is Linkup, the new French AI startup?")
print(documents)

🤖 RAG with the OpenAI LLM (Intermediate)

This is a RAG example using the Linkup API and LangChain's LCEL (LangChain Expression Language), with OpenAI used for the final generation.

You need an API key for Linkup and another one for OpenAI (for the final generation). You can set
them manually as the LINKUP_API_KEY and OPENAI_API_KEY environment variables, duplicate the
.env.example file into a .env file and fill in the missing values (they will then be loaded
automatically), or replace the corresponding variables below.
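For instance, if you go the .env route, the file (loaded by load_dotenv() below; the values shown are placeholders) would look like:

```shell
# .env file: replace the placeholders with your real keys
LINKUP_API_KEY=YOUR_LINKUP_API_KEY
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
```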

# pip install langchain-openai

import os
from typing import Any, Literal
from dotenv import load_dotenv
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

from langchain_linkup import LinkupRetriever

os.environ["LINKUP_API_KEY"] = "YOUR_LINKUP_API_KEY"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# You can change the RAG query and parameters here. If you prefer not to use environment variables
# you can fill them here.
query: str = "What is Linkup, the new French AI startup?"
linkup_depth: Literal["standard", "deep"] = "standard"
linkup_api_key = None
open_ai_model: str = "gpt-4o-mini"
openai_api_key = None

load_dotenv()  # Load environment variables from .env file if there is one


retriever = LinkupRetriever(linkup_api_key=linkup_api_key, depth=linkup_depth)


def format_retrieved_documents(docs: list[Document]) -> str:
    """Format the documents retrieved by the Linkup API as a text."""

    return "\n\n".join(
        [
            f"{document.metadata['name']} ({document.metadata['url']}):\n{document.page_content}"
            for document in docs
        ]
    )


def inspect_context(state: dict[str, Any]) -> dict[str, Any]:
    """Print the context retrieved by the retriever."""
    print(f"Context: {state['context']}\n\n")
    return state


generation_prompt_template = """Answer the question based only on the following context:

{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(generation_prompt_template)
model = ChatOpenAI(model=open_ai_model, api_key=openai_api_key)


chain: Runnable[Any, str] = (
    {"context": retriever | format_retrieved_documents, "question": RunnablePassthrough()}
    | RunnableLambda(inspect_context)
    | prompt
    | model
    | StrOutputParser()
)
response = chain.invoke(input=query)
print(f"Response: {response}")