Prompt Engineering for AI Agents
A prompt is the text sent to an LLM to get a response. Prompt engineering is the art and science of crafting those prompts so that the LLM produces the best possible output.
For AI Agents, prompt engineering is not just about asking questions well — it is about designing the agent's instructions, defining its role, telling it how to reason, and specifying how it should use tools. A poorly written prompt leads to a poorly performing agent. A well-designed prompt can make an agent extremely powerful.
Why Prompt Engineering Matters for Agents
Consider these two prompts asking the same thing:
Weak Prompt
"Search the web and tell me about Python."
Strong Agent Prompt
"You are a technical research assistant. When asked about a programming language, use the web_search tool to find the most recent information, then provide a structured summary with: (1) Overview, (2) Key Features, (3) Common Use Cases, and (4) Getting Started. Keep responses concise and beginner-friendly."
The second prompt produces a focused, structured, and actionable response. The first leaves the structure, depth, and tone of the answer entirely to the model's defaults.
The Anatomy of an Agent Prompt
A well-structured agent prompt has four main components:
| Component | Purpose | Example |
|---|---|---|
| Role / Persona | Tells the LLM what kind of agent it is | "You are a financial research assistant" |
| Instructions | Defines how the agent should behave | "Always verify information before responding" |
| Tools Available | Lists what tools the agent can use | "You have access to: web_search, calculator" |
| Output Format | Specifies how the response should be structured | "Reply in JSON with fields: answer, source, confidence" |
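The four components above can be assembled programmatically so they stay consistent across agents. The sketch below is illustrative; the helper name and field wording are not from any specific framework.

```python
# Assemble a system prompt from the four components in the table above.
# The role, instructions, tool names, and format text are the table's
# own examples; the function itself is only a sketch.

def build_system_prompt(role, instructions, tools, output_format):
    """Combine the four prompt components into one system prompt string."""
    tool_list = ", ".join(tools)
    return (
        f"{role}\n"
        f"Instructions: {instructions}\n"
        f"Tools available: {tool_list}\n"
        f"Output format: {output_format}"
    )

prompt = build_system_prompt(
    role="You are a financial research assistant.",
    instructions="Always verify information before responding.",
    tools=["web_search", "calculator"],
    output_format="Reply in JSON with fields: answer, source, confidence.",
)
print(prompt)
```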
Types of Messages in an Agent Prompt
When calling an LLM, messages are sent in a specific structure. There are three types of message roles:
1. System Message
This sets the overall behaviour and identity of the agent. It is written by the developer and is not visible to the end user.
{
  "role": "system",
  "content": "You are a helpful customer support agent for an e-commerce store. Answer questions politely. If a customer asks about their order, use the lookup_order tool. Never make up order information."
}
2. User Message
This is what the user sends to the agent.
{
  "role": "user",
  "content": "Where is my order? Order ID is #45821"
}
3. Assistant Message
This is what the agent/LLM responded with in a previous turn. It is included in the prompt for multi-turn conversations.
{
  "role": "assistant",
  "content": "Let me look that up for you right now."
}
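In practice, the three message roles above are sent together as an ordered list, and the whole list is re-sent on every turn. A minimal sketch, reusing the examples from this section:

```python
# A multi-turn conversation is an ordered list of role-tagged messages.
# The content strings below are the examples from this section; the
# final user turn is an illustrative addition.

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful customer support agent for an e-commerce "
            "store. Answer questions politely. If a customer asks about "
            "their order, use the lookup_order tool. Never make up "
            "order information."
        ),
    },
    {"role": "user", "content": "Where is my order? Order ID is #45821"},
    {"role": "assistant", "content": "Let me look that up for you right now."},
    {"role": "user", "content": "It has been a week already."},
]
```

Because the LLM is stateless, appending each new turn to this list and re-sending it is what makes the agent "remember" earlier parts of the conversation.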
Key Prompt Engineering Techniques
Technique 1 — Zero-Shot Prompting
Directly ask the LLM to do something without giving any examples. Works well for straightforward tasks.
"Translate the following sentence to Hindi: 'Good morning, how are you?'"
Technique 2 — Few-Shot Prompting
Provide 2–3 examples of input-output pairs before the actual task. This helps the LLM understand exactly what format and style are expected.
"Convert product names to slugs. Here are some examples:
'Blue Running Shoes' → 'blue-running-shoes'
'Winter Jacket XL' → 'winter-jacket-xl'
Now convert: 'Organic Green Tea 500g'"
The LLM learns the pattern from examples and applies it accurately.
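Few-shot examples can be given inline in one prompt, as above, or as alternating user/assistant message pairs. The slug examples below are from the text; the helper function is only a sketch.

```python
# Build a few-shot message list: each (input, output) example becomes a
# user/assistant pair that demonstrates the pattern before the real query.

def few_shot_messages(examples, query):
    """Turn input→output example pairs into demonstration messages."""
    messages = [
        {"role": "system", "content": "Convert product names to URL slugs."}
    ]
    for name, slug in examples:
        messages.append({"role": "user", "content": name})
        messages.append({"role": "assistant", "content": slug})
    messages.append({"role": "user", "content": query})
    return messages

msgs = few_shot_messages(
    [("Blue Running Shoes", "blue-running-shoes"),
     ("Winter Jacket XL", "winter-jacket-xl")],
    "Organic Green Tea 500g",
)
# The expected completion would follow the pattern: "organic-green-tea-500g".
```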
Technique 3 — Chain-of-Thought Prompting
Ask the LLM to think step by step before giving an answer. This dramatically improves accuracy on complex reasoning tasks.
"A customer bought 3 items at ₹250 each and got a 10% discount. What is the final bill? Think step by step."

LLM Response:
Step 1: 3 items × ₹250 = ₹750
Step 2: 10% discount = ₹75
Step 3: Final bill = ₹750 - ₹75 = ₹675
The final bill is ₹675.
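The arithmetic in the worked example can be checked directly; this mirrors the step-by-step reasoning the prompt asks the LLM to produce.

```python
# Verify the chain-of-thought example: 3 items at ₹250 with a 10% discount.

items, price, discount_rate = 3, 250, 0.10
subtotal = items * price                  # Step 1: 3 × ₹250 = ₹750
discount_amount = subtotal * discount_rate  # Step 2: 10% of ₹750 = ₹75
final_bill = subtotal - discount_amount   # Step 3: ₹750 - ₹75 = ₹675
print(final_bill)  # 675.0
```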
Technique 4 — Role Prompting
Assign a specific persona or expertise to the LLM. This shapes the tone, depth, and style of the response.
"You are a senior Python developer with 10 years of experience. Review the following code and suggest improvements:"
Technique 5 — Instruction + Constraint Prompting
Give clear instructions and set boundaries on what the agent should or should not do.
"You are a diet planning assistant. Rules:
- Only suggest foods that are vegetarian
- Never recommend fewer than 1500 calories per day
- Always include a note to consult a doctor before major diet changes"
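Prompt constraints are instructions, not guarantees, so agent code often re-checks the LLM's output against the same rules. A minimal sketch, assuming a hypothetical plan structure with a calorie total, a food list, and a note field:

```python
# Re-check the three diet rules from the prompt against a generated plan.
# The plan's dictionary shape is a hypothetical assumption for this sketch.

def validate_plan(plan):
    """Return a list of rule violations for a generated diet plan."""
    violations = []
    if plan.get("calories", 0) < 1500:
        violations.append("Plan is below the 1500-calorie minimum.")
    non_veg = [f for f in plan.get("foods", []) if not f.get("vegetarian")]
    if non_veg:
        violations.append("Plan contains non-vegetarian items.")
    if "consult a doctor" not in plan.get("note", "").lower():
        violations.append("Missing doctor-consultation note.")
    return violations

plan = {
    "calories": 1600,
    "foods": [{"name": "dal", "vegetarian": True}],
    "note": "Please consult a doctor before major diet changes.",
}
print(validate_plan(plan))  # [] (all rules satisfied)
```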
Technique 6 — Output Format Specification
Tell the LLM exactly what format the response should be in. This is critical when the output will be parsed by code.
"Extract the name, email, and phone number from the following text.
Respond ONLY in this JSON format:
{
  "name": "...",
  "email": "...",
  "phone": "..."
}
Text: 'Hi, I am Rahul Sharma. Reach me at rahul@email.com or 9876543210'"
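Even with a strict format instruction, models sometimes wrap the JSON in extra text, so the parsing side should be defensive. A minimal sketch using the example above; the surrounding "Sure!" chatter is an illustrative failure mode, not a real response.

```python
import json

# Defensively extract and validate the JSON object from an LLM response.

def parse_contact(raw):
    """Extract the first {...} JSON object and check the expected fields."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in response")
    data = json.loads(raw[start:end + 1])
    missing = {"name", "email", "phone"} - data.keys()
    if missing:
        raise ValueError(f"Missing fields: {missing}")
    return data

response = ('Sure! {"name": "Rahul Sharma", '
            '"email": "rahul@email.com", "phone": "9876543210"}')
print(parse_contact(response)["email"])  # rahul@email.com
```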
Writing System Prompts for AI Agents
The system prompt is the most important prompt when building an agent. It defines the agent's entire personality, capabilities, and limitations. Here is a complete example:
SYSTEM PROMPT for a Research Agent:

"You are ResearchBot, an expert research assistant that helps users find accurate and up-to-date information.

CAPABILITIES:
- You have access to the web_search tool
- You have access to the summarise_text tool

BEHAVIOUR:
- Always search the web before answering factual questions
- Summarise long content before presenting it to the user
- If unsure, say so — never fabricate information
- Keep answers clear, structured, and concise

OUTPUT STYLE:
- Use bullet points for lists
- Include sources at the end of each response
- Keep responses under 300 words unless asked for detail"
Common Prompt Engineering Mistakes
| Mistake | Problem | Fix |
|---|---|---|
| Vague instructions | Agent gives inconsistent answers | Be specific and explicit about behaviour |
| No output format specified | Response cannot be parsed by code | Always specify JSON, markdown, or plain text |
| No constraints on tool use | Agent calls tools unnecessarily | Tell the agent when and when not to use tools |
| Overloading the prompt | Agent gets confused by too many rules | Keep instructions clear and prioritised |
| No fallback instruction | Agent breaks when it cannot do something | Add "If you cannot complete the task, explain why" |
Prompt Template for a General-Purpose Agent
SYSTEM:

"You are [Agent Name], a [role description].

TOOLS AVAILABLE:
[List each tool with a brief description of when to use it]

INSTRUCTIONS:
1. Read the user's request carefully
2. Decide if a tool is needed — if yes, use it first
3. Think step by step before responding
4. Keep responses [short/detailed] and [formal/friendly]

CONSTRAINTS:
- [List anything the agent should NOT do]
- Never guess or fabricate data — use tools to verify

OUTPUT FORMAT:
[Specify JSON, markdown, bullet points, plain text, etc.]"
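Filling the template programmatically keeps every agent in a codebase on one consistent system-prompt structure. The renderer below is a sketch; its parameter names are illustrative, and the sample values reuse the ResearchBot example from earlier.

```python
# Render the general-purpose agent template from typed fields.
# Field names and the example values are illustrative.

def render_agent_prompt(name, role, tools, constraints, output_format,
                        tone="short and friendly"):
    """Fill the general-purpose system prompt template."""
    tool_lines = "\n".join(f"- {t}: {desc}" for t, desc in tools.items())
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {name}, {role}.\n\n"
        f"TOOLS AVAILABLE:\n{tool_lines}\n\n"
        "INSTRUCTIONS:\n"
        "1. Read the user's request carefully\n"
        "2. Decide if a tool is needed; if yes, use it first\n"
        "3. Think step by step before responding\n"
        f"4. Keep responses {tone}\n\n"
        f"CONSTRAINTS:\n{constraint_lines}\n"
        "- Never guess or fabricate data; use tools to verify\n\n"
        f"OUTPUT FORMAT:\n{output_format}\n"
    )

agent_prompt = render_agent_prompt(
    name="ResearchBot",
    role="an expert research assistant",
    tools={"web_search": "look up current information",
           "summarise_text": "condense long passages"},
    constraints=["Never fabricate information"],
    output_format="Bullet points with sources at the end",
)
print(agent_prompt)
```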
Iterative Prompt Improvement
Great prompts are not written in one shot. The process of improving them is iterative:
- Write an initial prompt and test it
- Observe where the agent goes wrong
- Add a specific instruction to fix that issue
- Test again with different inputs
- Repeat until the agent behaves reliably
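The loop above can be semi-automated with a small regression suite: fixed test inputs with expected behaviours, re-run after every prompt change. A sketch; `fake_agent` is a stand-in for a real LLM call, and both test cases are illustrative.

```python
# A tiny regression harness for iterative prompt improvement: re-run the
# same inputs after each prompt edit and list which expectations fail.

test_cases = [
    {"input": "Where is order #45821?", "must_contain": "45821"},
    {"input": "Make up an order status", "must_contain": "cannot"},
]

def evaluate(run_agent, cases):
    """Return the inputs whose outputs miss the expected substring."""
    failures = []
    for case in cases:
        output = run_agent(case["input"])
        if case["must_contain"].lower() not in output.lower():
            failures.append(case["input"])
    return failures

# Stub agent for illustration only; a real harness would call the LLM.
def fake_agent(text):
    return f"I cannot verify that, but regarding '{text}' ..."

print(evaluate(fake_agent, test_cases))  # []
```

An empty failure list after a prompt change means the earlier fixes still hold; a non-empty list points at exactly which behaviour regressed.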
Summary
Prompt engineering is the backbone of every AI Agent. By carefully crafting system messages, using techniques like few-shot examples and chain-of-thought reasoning, specifying output formats, and setting clear constraints, the agent's behaviour can be precisely shaped and controlled. Great prompt engineering is what separates a useful agent from an unreliable one — it is a skill that improves with practice and experimentation.
