Tools and Function Calling in AI Agents
An AI Agent without tools is like a brilliant expert locked in a room with no phone, no internet, and no books. The thinking ability is there, but there is no way to act on the world. Tools are what give agents hands — the ability to search the web, run code, access databases, send emails, and much more.
Function Calling is the technical mechanism that allows an LLM to request the execution of a specific tool by outputting a structured call, which the agent framework then executes and feeds back to the LLM.
What is a Tool?
In the context of AI Agents, a tool is any function or service that an agent can call to interact with the outside world or perform a computation.
Common Examples of Agent Tools
| Tool Name | What It Does | Example Use |
|---|---|---|
| web_search | Searches the internet for information | "Find the latest news about AI in India" |
| calculator | Performs mathematical calculations | "What is 18% GST on ₹12,500?" |
| read_file | Reads the contents of a local file | "Summarise the contents of report.pdf" |
| send_email | Sends an email to a specified address | "Email the summary to the manager" |
| get_weather | Fetches current weather data | "What is the weather in Hyderabad now?" |
| run_sql | Executes a database query | "How many orders were placed this week?" |
| generate_image | Creates an image from a text description | "Generate a banner image for a sale event" |
| execute_python | Runs a Python code snippet | "Run this script and show the output" |
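Each of these tools is ultimately just a function that takes structured input and returns structured output. As an illustrative sketch (the function name and the character whitelist are our own, not from any framework), a minimal calculator tool could look like this:

```python
def calculator(expression: str) -> dict:
    """Evaluate a basic arithmetic expression and return a structured result."""
    # Allow only digits, arithmetic operators, parentheses, spaces, and decimal points
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return {"error": "Expression contains unsupported characters"}
    try:
        # Empty builtins so the expression cannot call arbitrary functions
        result = eval(expression, {"__builtins__": {}}, {})
    except Exception as exc:
        return {"error": str(exc)}
    return {"expression": expression, "result": result}

# 18% GST on ₹12,500
print(calculator("12500 * 0.18"))  # {'expression': '12500 * 0.18', 'result': 2250.0}
```

Returning a dictionary (including for errors) keeps the output easy for the LLM to parse, a point the best practices section below comes back to.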
What is Function Calling?
Modern LLMs (such as GPT-4o and Claude 3) support a feature called function calling (also called tool use). Instead of only generating text, the model can also output a structured request to call a specific function with specific parameters.
How It Works — Step by Step
Step 1: Developer defines tools available to the agent
Step 2: User sends a request
User: "What is the weather in Chennai right now?"
Step 3: LLM decides it needs the weather tool
LLM Output (not shown to user):
{
  "tool": "get_weather",
  "parameters": {
    "city": "Chennai"
  }
}
Step 4: Agent framework calls the actual function
result = get_weather(city="Chennai")
→ Returns: {"temp": 34, "condition": "Sunny", "humidity": "72%"}
Step 5: Result is sent back to LLM as context
Step 6: LLM generates the final human-readable answer
"The current weather in Chennai is 34°C with sunny skies
and 72% humidity."
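The six steps above can be sketched as a single round trip. In this sketch the LLM is replaced by a stub (fake_llm) that always requests the weather tool, and get_weather returns canned data rather than calling a real API:

```python
import json

def get_weather(city: str) -> dict:
    # Placeholder tool: a real implementation would call a weather API
    return {"temp": 34, "condition": "Sunny", "humidity": "72%"}

def fake_llm(messages: list) -> dict:
    # Step 3: the (stubbed) LLM decides it needs the weather tool
    if messages[-1]["role"] == "user":
        return {"tool": "get_weather", "parameters": {"city": "Chennai"}}
    # Step 6: with the tool result in context, produce the final answer
    result = json.loads(messages[-1]["content"])
    return {"text": (f"The current weather in Chennai is {result['temp']}°C "
                     f"with {result['condition'].lower()} skies "
                     f"and {result['humidity']} humidity.")}

messages = [{"role": "user", "content": "What is the weather in Chennai right now?"}]  # Step 2
call = fake_llm(messages)                                         # Step 3: structured tool call
result = get_weather(**call["parameters"])                        # Step 4: framework runs the function
messages.append({"role": "tool", "content": json.dumps(result)})  # Step 5: result fed back as context
print(fake_llm(messages)["text"])                                 # Step 6: final human-readable answer
```

The full example later in this section replaces fake_llm with real calls to the OpenAI API; the shape of the loop stays the same.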
Defining a Tool for an LLM
Each tool must be defined with a clear name, description, and parameter schema so the LLM knows when and how to call it.
Tool Definition Format (OpenAI Style)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city, e.g., Mumbai"
                    }
                },
                "required": ["city"]
            }
        }
    }
]
The description field is very important — it tells the LLM when to use this tool. A vague or missing description leads to the tool being called at the wrong time or not called at all.
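As a contrast, compare a vague description with a specific one (both definitions below are illustrative, not from any official documentation):

```python
# Too vague: the LLM cannot tell when this tool applies
bad = {"name": "get_weather", "description": "Gets data"}

# Specific: states what it returns, for what input, and when NOT to use it
good = {
    "name": "get_weather",
    "description": (
        "Get the current temperature, condition, and humidity for a "
        "single city, e.g. 'Mumbai'. Use only for present-day weather, "
        "not forecasts."
    ),
}
```

Spelling out the negative case ("not forecasts") is often as useful as the positive one, because it stops the model from stretching a tool beyond what it can do.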
Writing the Actual Tool Function
The tool definition tells the LLM about the tool. The actual Python function does the real work:
import requests

def get_weather(city: str) -> dict:
    """Fetch current weather for a given city using a weather API."""
    api_url = "https://api.weatherapi.com/v1/current.json"
    params = {
        "key": "your-weather-api-key",
        "q": city
    }
    response = requests.get(api_url, params=params, timeout=10)
    data = response.json()
    return {
        "city": city,
        "temperature_c": data["current"]["temp_c"],
        "condition": data["current"]["condition"]["text"],
        "humidity": data["current"]["humidity"]
    }
Full Example: Agent With a Tool
import openai
import json

client = openai.OpenAI(api_key="your-api-key")

# Tool definition
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }
    }
}]

# Actual function
def get_weather(city):
    # In a real app, call a weather API here
    return {"city": city, "temp": "34°C", "condition": "Sunny"}

# Messages
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What is the weather in Mumbai?"}
]

# First LLM call
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools
)

# Check if LLM wants to call a tool
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)

    # Call the actual function
    result = get_weather(**tool_args)

    # Add the assistant's tool call and the tool result to the conversation
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result)
    })

    # Second LLM call — generate final answer
    final_response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools
    )
    print(final_response.choices[0].message.content)
Output: "The current weather in Mumbai is 34°C with sunny conditions."
Parallel Tool Calling
Modern LLMs can call multiple tools at the same time when needed — this is called parallel tool calling. This speeds up multi-step tasks significantly.
Example
User: "What is the weather in Delhi and Bangalore right now?"
LLM calls two tools simultaneously:
- get_weather("Delhi") → 28°C, Cloudy
- get_weather("Bangalore") → 22°C, Partly cloudy
Agent combines results:
"Delhi is 28°C with cloudy skies. Bangalore is 22°C and partly cloudy."
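In the OpenAI-style response shown earlier, parallel calls simply arrive as a list in message.tool_calls, and the agent loops over them, appending one "tool" message per call. A sketch of that loop, with the tool results stubbed and the tool_calls list written out as plain dictionaries for readability:

```python
import json

def get_weather(city: str) -> dict:
    # Stubbed results for illustration only
    data = {"Delhi": {"temp": 28, "condition": "Cloudy"},
            "Bangalore": {"temp": 22, "condition": "Partly cloudy"}}
    return {"city": city, **data.get(city, {})}

# Two tool calls from a single LLM response (shape mirrors the API's tool_calls list)
tool_calls = [
    {"id": "call_1", "name": "get_weather", "arguments": '{"city": "Delhi"}'},
    {"id": "call_2", "name": "get_weather", "arguments": '{"city": "Bangalore"}'},
]

tool_messages = []
for call in tool_calls:
    args = json.loads(call["arguments"])
    result = get_weather(**args)
    # One "tool" message per call, matched back to its request by tool_call_id
    tool_messages.append({"role": "tool", "tool_call_id": call["id"],
                          "content": json.dumps(result)})

for m in tool_messages:
    print(m["content"])
```

All the tool messages go back to the LLM in a single follow-up call, which then combines them into one answer, as in the example above.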
Best Practices for Writing Tools
| Best Practice | Why It Matters |
|---|---|
| Write a clear, specific tool description | LLM uses the description to decide when to call it |
| Return structured data (dictionary/JSON) | LLM can parse and reason about structured output easily |
| Handle errors gracefully | Return an error message instead of crashing the agent |
| Keep tools focused on one task | Tools that do one thing well are easier to use correctly |
| Validate input parameters | Prevent incorrect calls from causing failures |
| Log tool calls and results | Makes debugging and auditing much easier |
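The validation and error-handling rows can be combined in one pattern: validate the input first, catch exceptions around the external call, and always return a dictionary the LLM can read. A sketch using only the standard library (the URL and API key are placeholders, as in the earlier example):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def get_weather_safe(city: str) -> dict:
    """Weather tool that never raises: errors come back as structured data."""
    # Validate input before doing any work
    if not isinstance(city, str) or not city.strip():
        return {"error": "city must be a non-empty string"}
    url = ("https://api.weatherapi.com/v1/current.json?"
           + urllib.parse.urlencode({"key": "your-weather-api-key", "q": city}))
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, TimeoutError, ValueError) as exc:
        # Graceful failure: the LLM can explain this to the user instead of the agent crashing
        return {"error": f"Weather lookup failed: {exc}"}
    return {"city": city, "temperature_c": data["current"]["temp_c"]}
```

Because the error is returned rather than raised, the agent loop never has to special-case a failing tool: the LLM sees the error message as ordinary tool output and can retry or apologise.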
Tool Router Pattern
When an agent has many tools, it is useful to have a tool router — a single function that receives the tool name from the LLM and dispatches to the correct function:
def run_tool(tool_name: str, tool_args: dict):
    tool_map = {
        "get_weather": get_weather,
        "web_search": web_search,
        "calculate": calculate,
        "send_email": send_email,
        "lookup_order": lookup_order
    }
    if tool_name not in tool_map:
        return {"error": f"Unknown tool: {tool_name}"}
    return tool_map[tool_name](**tool_args)
This pattern keeps the agent's main loop clean and makes adding new tools as simple as adding one line to the dictionary.
Summary
Tools transform an LLM from a text generator into an active agent that can interact with the real world. Function calling is the bridge between the LLM's reasoning and the tools' execution — the LLM outputs a structured call, the framework runs the real function, and the result is fed back for the next round of reasoning. Designing clear tool definitions and reliable tool functions is one of the most important skills in AI Agent development.
