```mermaid
flowchart TD
    A[User prompt] --> B[Add to session]
    B --> C[Get conversation history]
    C --> D[Call LLM with tools]
    D --> E{Tool calls?}
    E -->|Yes| F[Execute tools]
    F --> G[Add results to session]
    G --> C
    E -->|No| H[Return final output]
    C --> I{Loop count >= 10?}
    I -->|Yes| J[Return error]
```
3 Agent and Runner
With tools ready, let’s build the core: the Agent definition and the Runner that executes it.
Note
Code Reference: code/v0.1/src/agentsilex/
agent.py, runner.py, run_result.py
3.1 ToolsSet: Managing Tools
Before we define Agent, we need a way to manage tools. The ToolsSet class (agent.py) handles tool registration and execution:
```python
import json
from typing import List


class ToolsSet:
    def __init__(self, tools: List[FunctionTool]):
        self.tools = tools
        self.registry = {tool.name: tool for tool in tools}

    def get_specification(self):
        """Generate tool specs for the LLM API."""
        spec = []
        for tool in self.tools:
            spec.append({
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.parameters_specification,
                },
            })
        return spec

    def execute_function_call(self, call_spec):
        """Execute a tool call from the LLM."""
        tool = self.registry.get(call_spec.function.name)
        if not tool:
            raise ValueError(f"Tool {call_spec.function.name} not found")
        args = json.loads(call_spec.function.arguments)
        result = tool(**args)
        return {
            "role": "tool",
            "tool_call_id": call_spec.id,
            "content": str(result),
        }
```

Two key methods:

- get_specification() — returns tool specs in the JSON format LLM APIs expect
- execute_function_call() — parses the LLM’s tool call, runs the function, and wraps the result as a "tool" message
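Neither method needs a live LLM to try out. Here is a self-contained sketch: EchoTool is a hypothetical stand-in for FunctionTool, the ToolsSet class from above is inlined so the snippet runs on its own, and SimpleNamespace mimics the shape of the tool-call object the LLM response carries.

```python
import json
from types import SimpleNamespace

# EchoTool: a hypothetical stand-in for FunctionTool, just enough to
# exercise ToolsSet's contract (name, description, spec, callable).
class EchoTool:
    name = "echo"
    description = "Upper-case the input text."
    parameters_specification = {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    }

    def __call__(self, text: str) -> str:
        return text.upper()

# ToolsSet as defined above, reproduced so this sketch runs standalone.
class ToolsSet:
    def __init__(self, tools):
        self.tools = tools
        self.registry = {tool.name: tool for tool in tools}

    def get_specification(self):
        return [{
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": t.parameters_specification,
            },
        } for t in self.tools]

    def execute_function_call(self, call_spec):
        tool = self.registry[call_spec.function.name]
        args = json.loads(call_spec.function.arguments)
        return {
            "role": "tool",
            "tool_call_id": call_spec.id,
            "content": str(tool(**args)),
        }

tools = ToolsSet([EchoTool()])
print(tools.get_specification()[0]["function"]["name"])  # echo

# A fake LLM tool call, shaped like the object the response carries.
call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="echo", arguments='{"text": "hi"}'),
)
print(tools.execute_function_call(call)["content"])  # HI
```

Note that arguments arrive as a JSON string, not a dict — that is why execute_function_call must json.loads them before calling the tool.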
3.2 The Agent Class
The Agent itself is remarkably simple:
```python
class Agent:
    def __init__(
        self,
        name: str,
        model: Any,
        instructions: str,
        tools: List[FunctionTool],
    ):
        self.name = name
        self.model = model
        self.instructions = instructions
        self.tools = tools
        self.tools_set = ToolsSet(tools)
```

That’s it! Just:

- name: identifier for logging/debugging
- model: LLM model string (e.g., "gpt-4o")
- instructions: system prompt
- tools: list of FunctionTool objects
- tools_set: convenience wrapper around the tools
3.3 RunResult: The Output
What does running an agent return? (run_result.py):
```python
from dataclasses import dataclass


@dataclass
class RunResult:
    final_output: str
```

Currently it holds just the final text output. We’ll extend this later.
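As a sketch of where the extension might go, a dataclass makes adding fields cheap. The new_items and turns_used names below are hypothetical, invented for illustration — they are not part of v0.1:

```python
from dataclasses import dataclass, field

# Hypothetical extension of RunResult: new_items and turns_used are
# invented names to illustrate the direction, not part of v0.1.
@dataclass
class RunResult:
    final_output: str
    new_items: list = field(default_factory=list)  # messages produced this run
    turns_used: int = 0                            # agent-loop iterations consumed

r = RunResult(final_output="done")
print(r.final_output, r.turns_used)  # done 0
```

Existing callers keep working because the new fields have defaults.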
3.4 The Runner: Execution Engine
The Runner class (runner.py) is where the magic happens:
```python
from litellm import completion


class Runner:
    def __init__(self, session: Session):
        self.session = session

    def run(self, agent: Agent, prompt: str) -> RunResult:
        should_stop = False
        msg = user_msg(prompt)
        self.session.add_new_messages([msg])

        loop_count = 0
        while loop_count < 10 and not should_stop:
            # Get conversation history
            dialogs = self.session.get_dialogs()

            # Get tool specifications
            tools_spec = agent.tools_set.get_specification()

            # Call LLM
            response = completion(
                model=agent.model,
                messages=dialogs,
                tools=tools_spec if tools_spec else None,
            )
            response_message = response.choices[0].message

            # Add response to session
            self.session.add_new_messages([response_message])

            # Check if LLM wants to call tools
            if not response_message.tool_calls:
                should_stop = True
                return RunResult(
                    final_output=response_message.content,
                )

            # Execute all tool calls
            tools_response = [
                agent.tools_set.execute_function_call(call_spec)
                for call_spec in response_message.tool_calls
            ]
            self.session.add_new_messages(tools_response)
            loop_count += 1

        return RunResult(
            final_output="Error: Exceeded max iterations",
        )
```

3.5 The Agent Loop Explained
Key points:
- Max 10 iterations — Prevents infinite loops
- Session tracks history — LLM sees full conversation
- Tool results fed back — LLM can reason about results
- Loop until no tool calls — Natural stopping condition
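To see the control flow in isolation, here is a stripped-down sketch of the same loop driven by a scripted stand-in for the model — no session, no LLM. The first scripted reply requests a tool; the second has no tool calls, so the loop exits naturally:

```python
from types import SimpleNamespace

# Scripted "model": first reply asks for a tool, second gives the answer.
replies = [
    SimpleNamespace(content=None, tool_calls=[SimpleNamespace(
        id="c1", function=SimpleNamespace(name="ping", arguments="{}"))]),
    SimpleNamespace(content="pong", tool_calls=None),
]

history = [{"role": "user", "content": "ping the server"}]
loop_count = 0
final = "Error: Exceeded max iterations"

while loop_count < 10:
    msg = replies.pop(0)               # stands in for completion(...)
    history.append(msg)
    if not msg.tool_calls:             # natural stopping condition
        final = msg.content
        break
    for call in msg.tool_calls:        # execute tools, feed results back
        history.append({"role": "tool", "tool_call_id": call.id,
                        "content": "pong"})
    loop_count += 1

print(final)        # pong
print(loop_count)   # 1
```

One tool round-trip, then a plain answer: the loop stops after a single iteration, well under the 10-iteration safety cap.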
3.6 Helper Functions
```python
def user_msg(content: str) -> dict:
    return {"role": "user", "content": content}


def bot_msg(content: str) -> dict:
    return {"role": "assistant", "content": content}
```

Simple utilities for creating message dicts.
3.7 Using LiteLLM
We use LiteLLM for model abstraction:
```python
from litellm import completion

response = completion(
    model="gpt-4o",  # OpenAI
    # model="claude-sonnet-4-20250514",  # Anthropic
    # model="deepseek/deepseek-chat",    # DeepSeek
    messages=[...],
    tools=[...],
)
```

One API, 100+ models. Switch providers by changing one string.
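Switching providers also means supplying the matching credentials: LiteLLM reads provider API keys from environment variables. A minimal sketch (the key values are placeholders):

```python
import os

# LiteLLM reads provider credentials from environment variables; set the
# one that matches the model string you pass to completion().
os.environ["OPENAI_API_KEY"] = "sk-..."            # for model="gpt-4o"
# os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."   # for Anthropic models
# os.environ["DEEPSEEK_API_KEY"] = "..."           # for deepseek/... models
```

In practice you would export these in your shell or a .env file rather than hard-coding them.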
3.8 Complete Example
```python
from agentsilex import Agent, Runner, Session, tool


@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"Weather in {city}: 72°F, sunny"


@tool
def get_time(timezone: str = "UTC") -> str:
    """Get current time in a timezone."""
    from datetime import datetime
    return f"Current time ({timezone}): {datetime.now()}"


# Create agent
agent = Agent(
    name="weather_bot",
    model="gpt-4o",
    instructions="You are a helpful assistant. Use tools when needed.",
    tools=[get_weather, get_time],
)

# Create session and runner
session = Session()
runner = Runner(session)

# Run!
result = runner.run(agent, "What's the weather in Tokyo?")
print(result.final_output)
# → "The weather in Tokyo is 72°F and sunny!"
```

3.9 What Happens Under the Hood
- User says: “What’s the weather in Tokyo?”
- Runner adds the message to the session
- LLM receives: system prompt + user message + tool specs
- LLM decides to call get_weather(city="Tokyo")
- Runner executes get_weather("Tokyo") → “Weather in Tokyo: 72°F, sunny”
- Result is added to the session
- LLM receives the updated history
- LLM generates the final response (no more tool calls)
- Runner returns RunResult
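After those steps, the session transcript has a characteristic shape in the wire format the LLM sees. A sketch, assuming the session prepends the agent’s instructions as a system message (the tool-call id here is illustrative):

```python
# The conversation after the run above, as a list of message dicts.
transcript = [
    {"role": "system",
     "content": "You are a helpful assistant. Use tools when needed."},
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_abc", "type": "function",
        "function": {"name": "get_weather",
                     "arguments": '{"city": "Tokyo"}'}}]},
    {"role": "tool", "tool_call_id": "call_abc",
     "content": "Weather in Tokyo: 72°F, sunny"},
    {"role": "assistant",
     "content": "The weather in Tokyo is 72°F and sunny!"},
]
roles = [m["role"] for m in transcript]
print(roles)  # ['system', 'user', 'assistant', 'tool', 'assistant']
```

The tool_call_id on the "tool" message must match the id in the assistant’s tool_calls entry — that is what execute_function_call copies from call_spec.id.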
3.10 Key Design Decisions
| Decision | Why |
|---|---|
| Runner takes Agent + Session | Clear separation of concerns |
| Max 10 iterations | Safety against infinite loops |
| LiteLLM for completion | Model flexibility |
| Tool results as strings | Simple, works with any return type |
Tip
Checkpoint

```
cd code/v0.1
```

You now have a working agent! Try it with your own tools.