# Building Agents with LangChain
LangChain is an open-source Python framework that simplifies the process of building AI Agents and LLM-powered applications. Instead of writing all the agent loop logic, tool management, and memory handling from scratch, LangChain provides ready-made components that can be assembled quickly.
LangChain is not a replacement for understanding how agents work — it is a productivity layer on top of that understanding. Everything covered in previous topics (tools, memory, planning, ReAct) is what LangChain implements under the hood.
## Why Use LangChain?
| Without LangChain | With LangChain |
|---|---|
| Write the full agent loop manually | Use AgentExecutor with one function call |
| Manage conversation history manually | Use ChatMessageHistory and RunnableWithMessageHistory out of the box |
| Write all tool definitions and router logic | Use the @tool decorator and pre-built tools |
| Build RAG pipelines step by step | Use the RetrievalQA chain in a few lines |
| Implement prompt templates manually | Use ChatPromptTemplate |
## LangChain's Core Building Blocks
| Component | What It Does |
|---|---|
| ChatOpenAI | Connects to the OpenAI API (or other LLM providers) |
| ChatPromptTemplate | Manages reusable prompt templates with variables |
| @tool decorator | Turns any Python function into an agent tool |
| AgentExecutor | Runs the full ReAct loop with tools and memory |
| Memory classes | Manage short-term and long-term memory |
| Document Loaders | Loads PDFs, web pages, CSV files, etc. |
| Text Splitters | Splits documents into chunks for RAG |
| Vector Stores | Stores and searches document embeddings |
| Chains | Sequences of LLM calls and transformations |
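The last four rows (Document Loaders, Text Splitters, Vector Stores, Chains) are the RAG building blocks. Conceptually, a text splitter is just chunking and a vector store is just similarity search over embeddings. The toy sketch below illustrates the idea in plain Python — it is not LangChain's implementation, and it uses bag-of-words counts where a real vector store uses learned dense embeddings:

```python
# Conceptual sketch of what Text Splitters and Vector Stores do under the hood.
# Plain Python, NOT LangChain's code: real stores use learned embeddings,
# not the toy bag-of-words vectors used here.
import math
from collections import Counter

def split_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping character chunks (what a text splitter does)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def embed(text: str) -> Counter:
    """Toy 'embedding': bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToyVectorStore:
    """Stores (chunk, embedding) pairs and returns the most similar chunks."""
    def __init__(self):
        self.entries = []

    def add(self, chunks):
        for c in chunks:
            self.entries.append((c, embed(c)))

    def search(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

store = ToyVectorStore()
store.add(["LangChain provides agent tooling.", "Paris is the capital of France."])
print(store.search("What is the capital of France?"))
# Prints: ['Paris is the capital of France.']
```

LangChain's Document Loaders, Text Splitters, and Vector Stores wrap exactly this pipeline — load, chunk, embed, search — behind production-grade implementations.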
## Installation

```bash
pip install langchain langchain-openai langchain-community chromadb
```
## Part 1 — Basic LangChain Chat

```python
# basic_chat.py
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

load_dotenv()

# Initialise the LLM
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.3,
    api_key=os.getenv("OPENAI_API_KEY")
)

# Create messages
messages = [
    SystemMessage(content="You are a helpful Python tutor. Keep answers brief."),
    HumanMessage(content="What is a Python dictionary?")
]

# Call the LLM
response = llm.invoke(messages)
print(response.content)
```
## Part 2 — Using Prompt Templates

```python
# prompt_templates.py
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0.3)

# Define a reusable template
template = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in {subject}. Answer concisely."),
    ("human", "Explain {concept} in simple terms.")
])

# Create a chain (template + LLM)
chain = template | llm

# Run with different inputs
response1 = chain.invoke({"subject": "Python", "concept": "list comprehension"})
print(response1.content)

response2 = chain.invoke({"subject": "finance", "concept": "compound interest"})
print(response2.content)
```
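The `|` in `template | llm` comes from LangChain's Runnable protocol: `|` composes two runnables so the output of the first feeds the input of the second. The core idea can be sketched in a few lines of plain Python — an illustration only, not LangChain's actual code, which also handles batching, streaming, and async:

```python
# Plain-Python sketch of an LCEL-style pipe operator. Illustration only;
# LangChain's Runnable protocol is much richer than this.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` returns a new Step that runs a, then feeds its output to b
        return Step(lambda value: other.invoke(self.invoke(value)))

# A stand-in template and a stand-in LLM (both hypothetical)
fill_template = Step(lambda vars: f"Explain {vars['concept']} in simple terms.")
fake_llm = Step(lambda prompt: f"LLM saw: {prompt}")

chain = fill_template | fake_llm
print(chain.invoke({"concept": "recursion"}))
# Prints: LLM saw: Explain recursion in simple terms.
```

This is why `chain.invoke({...})` in Part 2 works: the dict flows through the template, and the rendered prompt flows into the LLM.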
## Part 3 — Creating Tools with @tool

```python
# tools_langchain.py
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city. Use when asked about weather."""
    # In production: replace with a real API call
    mock_data = {
        "mumbai": "28°C, Humid",
        "delhi": "35°C, Sunny",
        "bangalore": "22°C, Cloudy"
    }
    city_lower = city.lower()
    if city_lower in mock_data:
        return f"Weather in {city}: {mock_data[city_lower]}"
    return f"Weather data for {city} is not available."

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression like '25 * 4' or '(100 + 50) / 2'."""
    try:
        # Restricting builtins limits, but does not eliminate, the risks of eval;
        # prefer a proper expression parser in production.
        result = eval(expression, {"__builtins__": {}}, {})
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

@tool
def get_stock_tip(company: str) -> str:
    """Get a mock stock market tip for a company."""
    tips = {
        "tcs": "TCS is trading at ₹3,820. Up 1.2% today.",
        "infosys": "Infosys is trading at ₹1,540. Down 0.5% today."
    }
    return tips.get(company.lower(), f"No data available for {company}.")

# Inspect tool metadata
print(get_weather.name)         # "get_weather"
print(get_weather.description)  # "Get the current weather..."
```
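The @tool decorator works because Python functions already carry their own metadata: `__name__`, `__doc__`, and type hints. The decorator packages these into a schema the LLM can read when deciding which tool to call. A minimal sketch of that idea — not LangChain's implementation, just an illustration of where the name, description, and argument schema come from:

```python
# Minimal sketch of a @tool-style decorator harvesting function metadata.
# NOT LangChain's code; it only shows where name/description/args originate.
import inspect

class SimpleTool:
    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__                        # becomes the tool name
        self.description = (fn.__doc__ or "").strip()  # becomes the tool description
        # Map each annotated parameter to its type name (a toy argument schema)
        self.args = {
            p.name: p.annotation.__name__
            for p in inspect.signature(fn).parameters.values()
            if p.annotation is not inspect.Parameter.empty
        }

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

def simple_tool(fn):
    return SimpleTool(fn)

@simple_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {city}: sunny"

print(get_weather.name)         # get_weather
print(get_weather.description)  # Get the current weather for a city.
print(get_weather.args)         # {'city': 'str'}
print(get_weather("Delhi"))     # Weather in Delhi: sunny
```

This is also why the docstring matters so much in real LangChain tools: it is the only description the LLM sees when choosing a tool.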
## Part 4 — Building a Full LangChain Agent

```python
# langchain_agent.py
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

from tools_langchain import get_weather, calculate, get_stock_tip

load_dotenv()

# 1. Initialise LLM
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.2,
    api_key=os.getenv("OPENAI_API_KEY")
)

# 2. Define tools
tools = [get_weather, calculate, get_stock_tip]

# 3. Create prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a smart financial and weather assistant.
Use the available tools to answer user questions accurately.
Always use a tool when factual information is needed.
Be concise and clear in your responses."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")  # Required for tool-call history
])

# 4. Create agent
agent = create_tool_calling_agent(llm, tools, prompt)

# 5. Create agent executor (handles the loop)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,      # Shows reasoning steps
    max_iterations=5   # Safety limit on tool-calling rounds
)

# 6. Run the agent
result = agent_executor.invoke({
    "input": "What is the weather in Delhi and also calculate 15% of 80000?"
})
print("\nFinal Answer:", result["output"])
```
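AgentExecutor hides a loop very close to the from-scratch ReAct loop from earlier topics: call the LLM, run any tool it requests, append the observation to the scratchpad, and stop when the model produces a final answer or `max_iterations` is reached. A simplified plain-Python version of that loop — not AgentExecutor's actual code, and using a scripted stand-in for the model:

```python
# Simplified sketch of the loop AgentExecutor runs; NOT LangChain's actual code.
# `fake_llm` stands in for a real model: it returns either a tool request
# ("TOOL <name> <arg>") or a final answer ("FINAL <text>").
def run_agent(fake_llm, tools: dict, user_input: str, max_iterations: int = 5) -> str:
    scratchpad = []  # accumulated tool calls + observations (the "agent_scratchpad")
    for _ in range(max_iterations):
        decision = fake_llm(user_input, scratchpad)
        if decision.startswith("FINAL "):
            return decision[len("FINAL "):]
        _, tool_name, arg = decision.split(" ", 2)
        observation = tools[tool_name](arg)         # execute the requested tool
        scratchpad.append((decision, observation))  # feed the result back
    return "Stopped: max_iterations reached."

# Scripted stand-in model: first asks for a tool, then answers using the observation.
def fake_llm(user_input, scratchpad):
    if not scratchpad:
        return "TOOL get_weather Delhi"
    return f"FINAL Based on the tool: {scratchpad[-1][1]}"

tools = {"get_weather": lambda city: f"Weather in {city}: 35°C, Sunny"}
print(run_agent(fake_llm, tools, "What is the weather in Delhi?"))
# Prints: Based on the tool: Weather in Delhi: 35°C, Sunny
```

The `max_iterations=5` parameter in the real AgentExecutor plays exactly the role of the loop bound here: it prevents a model that never produces a final answer from looping forever.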
## Part 5 — Adding Memory to a LangChain Agent

```python
# agent_with_memory.py
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

from tools_langchain import get_weather, calculate

llm = ChatOpenAI(model="gpt-4o", temperature=0.2)
tools = [get_weather, calculate]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    MessagesPlaceholder(variable_name="chat_history"),  # Injects memory here
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)

# Session memory store: one ChatMessageHistory per session_id
session_store = {}

def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in session_store:
        session_store[session_id] = ChatMessageHistory()
    return session_store[session_id]

# Wrap the executor so history is read and written automatically
agent_with_memory = RunnableWithMessageHistory(
    agent_executor,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history"
)

# Simulate a multi-turn conversation
session = {"configurable": {"session_id": "user_001"}}

response1 = agent_with_memory.invoke({"input": "My name is Rohit."}, config=session)
print("Agent:", response1["output"])

response2 = agent_with_memory.invoke({"input": "What is my name?"}, config=session)
print("Agent:", response2["output"])
# Output: "Your name is Rohit."
```
## LangChain vs From-Scratch Agent Comparison
| Feature | From Scratch | LangChain |
|---|---|---|
| Full control over logic | Yes | Limited — framework handles internals |
| Speed of development | Slower | Much faster |
| Learning curve | Understand agent fundamentals first | Moderate — many classes to learn |
| Production-readiness | Requires extra work | Better tooling and integrations built-in |
| LLM provider flexibility | Manually integrate each provider | Supports 50+ providers out of the box |
## Summary

LangChain is one of the most widely used frameworks for building AI Agents in Python. It provides ready-made components for LLM integration, prompt templates, the @tool decorator, AgentExecutor (which handles the agent loop), and conversation memory. Learning LangChain on top of a solid understanding of agent fundamentals is the fastest path to building production-ready AI applications.
