LangGraph example connecting to SAP Generative AI Hub via LiteLLM
How LangGraph works
This example uses LangGraph's functional API: plain Python functions decorated with @task and @entrypoint define the agent's steps and control flow, while LiteLLM routes the model calls to SAP Generative AI Hub.
Installation
%pip install langchain langgraph langchain-litellm litellm
Credentials for SAP Gen AI Hub
Get the service key from your SAP BTP tenant with an AI subscription.
Add the following variables from the service key to a file called ".env" and place it in the same folder where you run the notebook:
AICORE_AUTH_URL="https://***.authentication.sap.hana.ondemand.com/oauth/token"
AICORE_CLIENT_ID="***"
AICORE_CLIENT_SECRET="***"
AICORE_RESOURCE_GROUP="***"
AICORE_BASE_URL="https://api.ai.***.cfapps.sap.hana.ondemand.com/"
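To sanity-check the credentials before wiring up the graph, you can call the model once directly through LiteLLM (a minimal sketch; it assumes the .env file above and the sap/gpt-4o route used later in this notebook):
[ ]:
import litellm
from dotenv import load_dotenv

load_dotenv()  # read the AICORE_* variables from .env

# One-off completion against SAP Generative AI Hub through LiteLLM
response = litellm.completion(
    model="sap/gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)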
Run LangGraph with LiteLLM and the SAP provider
[ ]:
from dotenv import load_dotenv
from langchain.tools import tool
from langchain_core.messages import (
    BaseMessage,
    HumanMessage,
    SystemMessage,
    ToolCall,
)
from langchain_litellm import ChatLiteLLM
from langgraph.func import entrypoint, task
from langgraph.graph import add_messages
Load your credentials as environment variables that LiteLLM can pick up automatically.
[ ]:
load_dotenv()
Define the model with the SAP LLM.
[ ]:
model = ChatLiteLLM(model="sap/gpt-4o", temperature=0)
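Optionally, verify the connection by invoking the wrapped model directly (a quick smoke test; any prompt will do):
[ ]:
# Quick smoke test: one round trip through LiteLLM to SAP Gen AI Hub
print(model.invoke([HumanMessage(content="Hello! Who are you?")]).content)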
Define the agent tool
[ ]:
@tool
def get_weather(city: str) -> str:
    """
    Return weather information for a given city.

    :param city: name of the city to look up
    :return: a short weather description
    """
    # Normalize the city name so "New York" and "new york" both match
    city_normalized = city.lower().replace(" ", "")
    mock_weather_db = {
        "newyork": "The weather in New York is sunny with a temperature of 25°C.",
        "london": "It's cloudy in London with a temperature of 15°C.",
        "tokyo": "Tokyo is experiencing light rain and a temperature of 18°C.",
    }
    if city_normalized in mock_weather_db:
        return mock_weather_db[city_normalized]
    # Fall back to a default answer for unknown cities
    return f"The weather in {city} is sunny with a temperature of 20°C."
Augment the LLM with tools
[ ]:
tools = [get_weather]
# Map tool names to tool objects for lookup in the tool node below
tools_by_name = {t.name: t for t in tools}
# Let the model emit structured tool calls for these tools
model_with_tools = model.bind_tools(tools)
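With the tools bound, the model answers a weather question with a structured tool call rather than plain text; you can inspect this directly (a small illustrative check):
[ ]:
# The AIMessage carries the requested tool calls in .tool_calls
ai_msg = model_with_tools.invoke([HumanMessage(content="What's the weather in Tokyo?")])
print(ai_msg.tool_calls)  # e.g. [{'name': 'get_weather', 'args': {'city': 'Tokyo'}, ...}]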
Define model node
[ ]:
@task
def call_llm(messages: list[BaseMessage]):
    """LLM decides whether to call a tool or not."""
    # Prepend the system prompt to the conversation on every call
    return model_with_tools.invoke(
        [
            SystemMessage(
                content="You are a helpful weather assistant. "
                "When the user asks you about a specific city, "
                "use the 'get_weather' tool to find the information about the weather. "
                "Answer with a TV weather report in two sentences including a small joke."
            )
        ]
        + messages
    )
Define tool node
[ ]:
@task
def call_tool(tool_call: ToolCall):
    """Performs the tool call."""
    requested_tool = tools_by_name[tool_call["name"]]
    # Invoking a tool with a ToolCall returns a ToolMessage for the history
    return requested_tool.invoke(tool_call)
Define agent
[ ]:
@entrypoint()
def agent(messages: list[BaseMessage]):
    # @task calls return futures; .result() blocks until the LLM responds
    model_response = call_llm(messages).result()
    while True:
        if not model_response.tool_calls:
            break
        # Execute all requested tools (each call returns a future)
        tool_result_futures = [
            call_tool(tool_call) for tool_call in model_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]
        # Append the tool round-trip to the history and ask the LLM again
        messages = add_messages(messages, [model_response, *tool_results])
        model_response = call_llm(messages).result()
    messages = add_messages(messages, model_response)
    return messages
The user can select a city.
[ ]:
city = 'London'
Run the agent and stream the updates
[ ]:
input_message = [HumanMessage(content=f"What's the weather in {city}?")]
# Stream each step of the agent run (LLM responses and tool results)
for chunk in agent.stream(input_message, stream_mode="updates"):
    print(chunk)
    print("\n")