Keywords AI is an LLM engineering platform that provides monitoring, prompt management, and LLM evals.
This tutorial shows you how to add Linkup to the Keywords AI API payload so you can monitor LLM performance and usage.
Build the Keywords AI request
```python
import requests

def demo_call(
    company,
    model="gpt-4o-mini",
    token="KEYWORDSAI_API_KEY",  # your Keywords AI API key
):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    data = {
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                # The Jinja tags below are filled in by Keywords AI with the
                # Linkup search response before the prompt reaches the model.
                "content": "What is " + company + "'s 2024 revenue? Base your answer on the following trusted data: \n\n{% if linkup_search_response %}{{ linkup_search_response.results }}{% else %}I don't have that information.{% endif %}",
            },
        ],
        "linkup_params": {
            "apiKey": "LINKUP_API_KEY",  # your Linkup API key
            "q": "What is " + company + "'s 2024 revenue?",
            "depth": "deep",
            "outputType": "searchResults",
            "includeImages": False,
        },
        "model": model,
    }
    response = requests.post(
        "https://api.keywordsai.co/api/chat/completions",
        headers=headers,
        json=data,
    )
    return response

input_text = "Microsoft"
response = demo_call(input_text)
print(response.json())
```
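Since the endpoint follows the familiar `/chat/completions` shape, the assistant's reply can usually be pulled out of the response JSON as shown below. This is a minimal sketch that assumes an OpenAI-style `choices` array; check the actual JSON Keywords AI returns for your account before relying on it:

```python
# Sketch: extract the assistant's reply from an OpenAI-style
# chat-completions response body (schema assumed, not guaranteed).
def extract_reply(response_json):
    """Return the first choice's message content, or None if absent."""
    choices = response_json.get("choices", [])
    if not choices:
        return None
    return choices[0].get("message", {}).get("content")

# Example with a mocked response body:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Example answer."}}
    ]
}
print(extract_reply(sample))  # → Example answer.
```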
You can also use prompt templates as follows:
```jinja
Please provide information about {{ company_name }}'s 2024 revenue and cite your sources.
{% if linkup_search_response %}
Here's what I found:
{{ linkup_search_response.answer }}
Sources:
{% for source in linkup_search_response.sources %}
- {{ source.name }}: {{ source.url }}
{% endfor %}
{% endif %}
```
Monitor LLM performance and usage
After you set up the environment and run the request, the resulting LLM logs appear in the Keywords AI dashboard.