AgentScope example connecting to SAP Generative AI Hub via LiteLLM

How AgentScope works

LLM access via LiteLLM Proxy

AgentScope can connect to the LiteLLM Proxy through its OpenAI-compatible API.

For setting up the proxy itself, follow the LiteLLM Proxy setup guide for SAP Generative AI Hub.
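As a reminder, the proxy needs a model entry matching the model name the notebook requests below. A minimal config.yaml sketch; the actual litellm_params for SAP Generative AI Hub depend on your deployment and are covered in the setup guide, so the values here are placeholders:

```yaml
model_list:
  - model_name: sap/gpt-4o                # the name the notebook requests
    litellm_params:
      model: <provider/your-deployment>   # placeholder, see the setup guide
      api_base: <your-gen-ai-hub-endpoint> # placeholder
      api_key: os.environ/SAP_API_KEY     # placeholder env var reference
```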

Installation

%pip install agentscope

Set environment variables

Add the following variables from the service key to a file called ".env" in the same folder where you run the notebook:

LITELLM_PROXY_API_KEY=sk-1234
PROXY_BASE_URL=http://localhost:4000

Run AgentScope with LiteLLM and SAP LLMs

[ ]:
import asyncio
import os

import litellm
from agentscope.agent import ReActAgent
from agentscope.formatter import DashScopeChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg, TextBlock
from agentscope.model import OpenAIChatModel
from agentscope.tool import Toolkit, ToolResponse
from dotenv import load_dotenv

Load your credentials as environment variables.

[ ]:
litellm.use_litellm_proxy = True
load_dotenv()
api_base = os.getenv("PROXY_BASE_URL")
api_key = os.getenv("LITELLM_PROXY_API_KEY")
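Optionally, fail fast if the variables are missing. This is a small helper sketch of our own, not part of AgentScope; the name require_env is hypothetical:

```python
import os

def require_env(name: str) -> str:
    # Return the variable's value, or raise with a hint at the .env file.
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"Missing environment variable {name!r}. "
            "Check that your .env file is in the notebook folder."
        )
    return value

# api_base = require_env("PROXY_BASE_URL")
# api_key = require_env("LITELLM_PROXY_API_KEY")
```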

Set up the model with your proxy params

[ ]:
sap_model = OpenAIChatModel(model_name='sap/gpt-4o',
                            api_key=api_key,
                            client_args={"base_url": api_base},
                            stream=False)

Define the agent tools with a properly formatted docstring. The Args section is required for the tool schema to be generated correctly, and the function must return its result as a ToolResponse object.

[ ]:
def get_weather(city: str) -> ToolResponse:
    """Retrieves the current weather report for a specified city.
    Args:
        city (str): The name of the city to retrieve weather information for.
            Examples: "New York", "London", "Tokyo".
    """
    city_normalized = city.lower().replace(" ", "")

    mock_weather_db = {
        "newyork": "The weather in New York is sunny with a temperature of 25°C.",
        "london": "It's cloudy in London with a temperature of 15°C.",
        "tokyo": "Tokyo is experiencing light rain and a temperature of 18°C.",
    }

    if city_normalized in mock_weather_db:
        return ToolResponse(content=[
            TextBlock(type="text",
                      text=mock_weather_db[city_normalized])
        ])
    else:
        return ToolResponse(content=[
            TextBlock(type="text",
                      text=f"The weather in {city} is sunny with a temperature of 20°C.")
        ])
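The normalization step matters: the mock database is keyed by lowercase, space-free names, so "New York", "new york", and "NEW YORK" all resolve to the same entry. A stripped-down, stdlib-only sketch of the lookup logic above (plain strings instead of ToolResponse, just for illustration):

```python
def normalize_city(city: str) -> str:
    # Same transformation get_weather applies before its dictionary lookup.
    return city.lower().replace(" ", "")

mock_weather_db = {
    "newyork": "sunny, 25°C",
    "london": "cloudy, 15°C",
}

def lookup(city: str) -> str:
    # Known cities hit the mock DB; unknown ones get the canned fallback.
    fallback = f"The weather in {city} is sunny with a temperature of 20°C."
    return mock_weather_db.get(normalize_city(city), fallback)

print(lookup("New York"))  # -> sunny, 25°C
print(lookup("Tbilisi"))   # -> the fallback text
```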

Register the tools in a toolkit

[ ]:
toolkit = Toolkit()
toolkit.register_tool_function(get_weather)

Define the agent with the SAP LLM and the toolkit, memory, and formatter objects.

[ ]:
agent = ReActAgent(
        name="weather agent",
        sys_prompt="You are a helpful weather assistant. "
                "When the user asks for the weather in a specific city, "
                "use the 'get_weather' tool to find the information. "
                "If the tool returns an error, inform the user politely. "
                "If the tool is successful, write a couple of sentences for a "
                "TV weather report in the city including a small joke.",
        model=sap_model,
        formatter=DashScopeChatFormatter(),
        toolkit=toolkit,
        memory=InMemoryMemory(),
    )

Create a message

[ ]:
msg = Msg(
        name="user",
        content="What is the weather like in Tbilisi?",
        role="user",
    )

Run the agent inside an async function

[ ]:
async def run_conversation():
    result = await agent(msg)
    print(result)

Run the conversation function. Jupyter notebooks allow top-level await, so the coroutine can be awaited directly.

[ ]:
await run_conversation()
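Top-level await works here because Jupyter already runs an event loop. In a plain Python script there is no running loop, so wrap the call in asyncio.run instead. A minimal sketch, with a placeholder coroutine standing in for run_conversation:

```python
import asyncio

async def run_conversation_demo() -> str:
    # Placeholder coroutine; in a real script you would await agent(msg) here.
    return "done"

# In a script, run the real function the same way:
# asyncio.run(run_conversation())
result = asyncio.run(run_conversation_demo())
print(result)  # -> done
```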