Blog Post #30: Assembling the Pieces: Building Your First, Simple ReAct Agent in LangChain

For 29 posts, we’ve gathered our tools and learned the theory. We’ve meticulously explored the components of Agentic AI, mastering them one by one. Today, we assemble them. Today, we build our first intelligent agent.

This capstone post will guide you through combining everything we’ve learned into a single, functioning application. We will build a simple “Local Assistant” agent for our location, Sodepur, West Bengal. This agent will be able to reason about a request and act on the world using custom tools.

We will be using:

  • Our Professional Dev Environment with venv and .env files (Posts #12 & #13)
  • The declarative power of LCEL for chaining (Post #25)
  • ChatPromptTemplates with placeholders for dynamic content (Post #26)
  • A powerful Chat Model as the agent’s brain (Post #27)
  • Our own Custom Tools to give the agent new capabilities (Post #29)

Let’s begin.


Step 1: Setting the Stage – Project Setup

As always, we start with a clean, professional workspace.

  1. Create your project: mkdir my-first-langchain-agent && cd my-first-langchain-agent
  2. Set up the environment: git init && python -m venv venv && source venv/bin/activate (on Windows: .\venv\Scripts\activate)
  3. Install dependencies: pip install langchain langchain-openai python-dotenv pytz
  4. Create your .env and .gitignore files:
    • .env: OPENAI_API_KEY="sk-..."
    • .gitignore: Add venv/, __pycache__/, and .env
  5. Save dependencies: pip freeze > requirements.txt

Step 2: Defining the Agent’s Capabilities (The Tools)

An agent is only as good as its tools. We will create two simple, custom tools that give our agent knowledge about its local context. Create a new file, tools.py.

# tools.py
from langchain_core.tools import tool
from datetime import datetime
import pytz

@tool
def get_current_time(timezone: str) -> str:
    """
    Retrieves the current time for a specified IANA timezone.

    Args:
        timezone (str): The IANA timezone name, e.g., 'America/New_York' or 'Asia/Kolkata'.
    """
    try:
        tz = pytz.timezone(timezone)
        current_time = datetime.now(tz)
        return current_time.strftime('%I:%M:%S %p %Z')
    except pytz.UnknownTimeZoneError:
        return f"Error: Invalid timezone '{timezone}'."

@tool
def search_local_restaurants(cuisine: str) -> str:
    """
    Searches for restaurants in Sodepur, West Bengal with a specific cuisine.

    Args:
        cuisine (str): The type of food to search for, such as 'Bengali' or 'Chinese'.
    """
    # This is a mock function. A real version would query a database or API.
    if "bengali" in cuisine.lower():
        return "Recommendations for Bengali cuisine in Sodepur include Dada Boudi Hotel and Bhojohori Manna."
    elif "chinese" in cuisine.lower():
        return "Wow! Momo is a popular spot for Chinese food in the Sodepur area."
    else:
        return f"Sorry, I couldn't find any recommendations for {cuisine} cuisine in Sodepur."

These two well-documented functions, decorated with @tool, are now ready to be used by our agent.

Step 3: Assembling the Agent in main.py

This is where all the pieces come together. We’ll build the agent using the modern tool-calling approach, which is the most reliable way to implement the ReAct (Reason + Act) loop with today’s chat models.

Create a main.py file:

# main.py
from dotenv import load_dotenv

# Import our custom tools
from tools import get_current_time, search_local_restaurants

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

# 1. SETUP: Load environment variables and define tools
load_dotenv()
tools = [get_current_time, search_local_restaurants]
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# 2. PROMPT: Craft the brain of the agent
# The MessagesPlaceholder is crucial for the ReAct loop
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful local assistant for Sodepur, West Bengal. Today is Monday, September 29, 2025."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 3. AGENT: Bind the tools and create the core agent logic
llm_with_tools = llm.bind_tools(tools)

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)

# 4. EXECUTOR: The runtime that powers the agent
# verbose=True allows us to see the agent's thought process
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# 5. RUN: Invoke the agent with a user query
if __name__ == "__main__":
    print("\n--- Agent Run 1: Asking for the time ---")
    response1 = agent_executor.invoke({
        "input": "What time is it in Kolkata?"
    })
    print("\n--- Final Answer ---")
    print(response1["output"])

    print("\n\n--- Agent Run 2: Asking for food ---")
    response2 = agent_executor.invoke({
        "input": "I'm hungry for some authentic Bengali food. Any suggestions?"
    })
    print("\n--- Final Answer ---")
    print(response2["output"])

Understanding the Assembly

  • Prompt (part 2 of main.py): This is the agent’s core directive. The MessagesPlaceholder for "agent_scratchpad" is the key to the ReAct loop. The AgentExecutor automatically populates it with the history of tool calls and their results, allowing the agent to “observe” its own actions.
  • Agent (part 3 of main.py): This LCEL chain defines the agent’s logic. It takes the user’s input and the scratchpad, formats them with our prompt, sends them to the LLM (which knows about our tools), and parses the LLM’s decision with the OpenAIToolsAgentOutputParser.
  • Executor (part 4 of main.py): This is the runtime. It takes the agent’s decision (e.g., “call get_current_time with timezone Asia/Kolkata”), executes the actual Python function, gets the result (“02:00:03 PM IST”), and appends it to the agent_scratchpad for the next loop.
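To make the executor’s role concrete, here is a dependency-free sketch of the loop it runs. The decide function is a stand-in for the LLM call, and the time tool is stubbed to a fixed observation; both are illustrative, not LangChain APIs:

```python
# A minimal sketch of what AgentExecutor does internally.

def get_current_time(timezone: str) -> str:
    return "02:00:03 PM IST"  # stubbed observation for illustration

TOOLS = {"get_current_time": get_current_time}

def decide(user_input, scratchpad):
    # Stand-in for llm_with_tools: with an empty scratchpad it requests
    # a tool call; after observing the result, it produces a final answer.
    if not scratchpad:
        return ("tool_call", "get_current_time", {"timezone": "Asia/Kolkata"})
    observation = scratchpad[-1][1]
    return ("final", None, f"The current time in Kolkata is {observation}.")

def run_agent(user_input):
    scratchpad = []  # history of (tool_name, observation) pairs
    while True:
        kind, tool_name, payload = decide(user_input, scratchpad)
        if kind == "final":
            return payload
        # Execute the requested tool and feed the result back in.
        observation = TOOLS[tool_name](**payload)
        scratchpad.append((tool_name, observation))

print(run_agent("What time is it in Kolkata?"))
# → The current time in Kolkata is 02:00:03 PM IST.
```

The real executor does the same dance, only the “decision” comes from the LLM and the scratchpad is a list of chat messages rather than tuples.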

When you run python main.py, the verbose=True flag will let you see this entire process in your terminal—the LLM’s reasoning, the tool calls it makes, and the observations it receives.

Conclusion: You’ve Built an Agent

You did it. You’ve taken all the individual concepts—secure environments, LCEL, prompt engineering, and custom tools—and assembled them into a functioning AI agent.

This simple assistant is the culmination of our foundational journey. The patterns you used here are the same ones that power sophisticated, production-grade agentic systems. You have successfully bridged the vast gap between theory and a tangible, working application.

This is, of course, just the beginning. From this solid foundation, you can now explore more advanced agents, connect to real-world APIs, add complex memory, and begin solving truly unique problems. You haven’t just learned about Agentic AI; you’ve become an agent builder.

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, and Shell, and theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com.
