Blog Post #29: Creating and Integrating Custom Tools in LangChain

An agent’s true power isn’t just in its ability to reason; it’s in its ability to act. So far, our chains have primarily reasoned about text. Now, we are going to give our agent hands, eyes, and ears by connecting it to our own custom Python functions.

While LangChain offers many pre-built tools for things like web search or calculators, the real magic happens when you give an agent access to your unique data, business logic, or private APIs.

This tutorial will show you the standard, modern way to take any Python function and wrap it so that a LangChain agent can understand it, choose it, and use it to solve problems in the real world.


Part 1: Designing an Agent-Friendly Function

As we learned in Post #19, when you write a function for an agent, you are writing for an LLM to read. The LLM does not see your code’s logic; it only sees the function’s signature and its documentation. Clarity is everything.

Remember the golden rules:

  1. Descriptive Name: calculate_shipping_cost is better than calc_ship.
  2. Precise Type Hints: Use Python’s type hints for all arguments and the return value (e.g., item_weight: float -> float).
  3. Detailed Docstring: This is the most important part. The docstring is the tool’s instruction manual for the LLM. It must clearly explain what the function does, what each parameter means, and what it returns.

Here’s a well-documented function we’ll use as our first example. Based on our context, we know today is Monday, September 29, 2025.

# Our plain Python function
def get_local_day_of_week() -> str:
    """
    Returns the current day of the week (e.g., 'Monday', 'Tuesday')
    for the user's current location, which is Khardaha, West Bengal.
    This tool does not require any parameters.
    """
    # In a real app, you would use the datetime library.
    # For this tutorial, we are using our known context.
    return "Monday"
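For reference, a real (non-mocked) version could compute the day with nothing but the standard library, using the `zoneinfo` timezone database (Python 3.9+) and the IANA zone `Asia/Kolkata`, which covers West Bengal:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib IANA timezone database, Python 3.9+

def get_local_day_of_week() -> str:
    """
    Returns the current day of the week (e.g., 'Monday', 'Tuesday')
    for the user's current location, which is Khardaha, West Bengal.
    This tool does not require any parameters.
    """
    # %A formats the full weekday name, e.g. 'Monday'
    return datetime.now(ZoneInfo("Asia/Kolkata")).strftime("%A")
```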

Part 2: The @tool Decorator – The Magic Wrapper

How do we convert this standard Python function into a format LangChain can understand? The easiest and most recommended way is with the @tool decorator.

A Python decorator is a special function that wraps another function to add new functionality. The @tool decorator automatically inspects your function’s name, type hints, and docstring and converts them into a structured JSON schema that the LLM can reliably interpret.
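To make the mechanics concrete, here is a simplified, standard-library-only sketch of the kind of introspection the decorator performs. (The real decorator builds a full Pydantic/JSON schema, but the raw material it harvests is the same.)

```python
import inspect

def search_local_restaurants(cuisine: str) -> list:
    """Searches for restaurants in Khardaha with a specific cuisine."""
    return []

# Roughly what @tool harvests from the function:
signature = inspect.signature(search_local_restaurants)
tool_spec = {
    "name": search_local_restaurants.__name__,
    "description": inspect.getdoc(search_local_restaurants),
    "parameters": {
        name: param.annotation.__name__
        for name, param in signature.parameters.items()
    },
}
# tool_spec -> {'name': 'search_local_restaurants',
#               'description': 'Searches for restaurants in Khardaha with a specific cuisine.',
#               'parameters': {'cuisine': 'str'}}
```

This is why the three golden rules matter: the name, type hints, and docstring are the only things the introspection step can see.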

It’s as simple as adding one line of code:

from langchain_core.tools import tool

@tool
def get_local_day_of_week() -> str:
    """
    Returns the current day of the week (e.g., 'Monday', 'Tuesday')
    for the user's current location, which is Khardaha, West Bengal.
    This tool does not require any parameters.
    """
    return "Monday"

That’s it! The function is now a fully fledged LangChain Tool object.

Let’s create another, more complex tool that takes arguments:

from typing import List

@tool
def search_local_restaurants(cuisine: str) -> List[str]:
    """
    Searches for restaurants in Khardaha, West Bengal with a specific cuisine.

    Args:
        cuisine (str): The type of food to search for, such as 'Bengali' or 'Chinese'.
    """
    # This is a mock function. A real-world version would query a database or a Google Maps API.
    print(f"--- Searching for {cuisine} restaurants in Khardaha ---")
    if "bengali" in cuisine.lower():
        return ["Dada Boudi Hotel", "Bhojohori Manna", "Kasturi Restaurant"]
    elif "chinese" in cuisine.lower():
        return ["Wow! Momo", "Mainland China"]
    else:
        return [f"Sorry, I couldn't find any {cuisine} restaurants in Khardaha."]

The @tool decorator automatically infers that this function requires a single string argument, cuisine.

Part 3: Integrating Your Tools into an Agent

Now that we have our tools, we need to give them to an agent and create a runtime that executes the agent's decisions. This runtime is called an AgentExecutor.

Here is a full script showing how to build an agent that can use our two new custom tools.

# main.py
import os
from dotenv import load_dotenv
from typing import List

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

# --- 1. Load Environment Variables ---
load_dotenv()

# --- 2. Define Custom Tools ---
@tool
def get_local_day_of_week() -> str:
    """
    Returns the current day of the week (e.g., 'Monday', 'Tuesday')
    for the user's current location, which is Khardaha, West Bengal.
    This tool does not require any parameters.
    """
    print("--- Getting day of the week ---")
    return "Monday"

@tool
def search_local_restaurants(cuisine: str) -> List[str]:
    """
    Searches for restaurants in Khardaha, West Bengal with a specific cuisine.

    Args:
        cuisine (str): The type of food to search for, such as 'Bengali' or 'Chinese'.
    """
    print(f"--- Searching for {cuisine} restaurants in Khardaha ---")
    if "bengali" in cuisine.lower():
        return ["Dada Boudi Hotel", "Bhojohori Manna", "Kasturi Restaurant"]
    elif "chinese" in cuisine.lower():
        return ["Wow! Momo", "Mainland China"]
    else:
        return [f"Sorry, I couldn't find any {cuisine} restaurants in Khardaha."]

# --- 3. Setup Model, Prompt, and Agent ---
tools = [get_local_day_of_week, search_local_restaurants]
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# The prompt needs a placeholder for the agent's intermediate steps
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful local assistant for Khardaha."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Bind the tools to the LLM
# This converts the tools into a format the LLM can understand
llm_with_tools = llm.bind_tools(tools)

# The core agent logic
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)

# --- 4. Create and Run the Agent Executor ---
# The AgentExecutor is the runtime for the agent.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

print("\n--- Running Agent ---")
agent_executor.invoke({"input": "What day of the week is it?"})

print("\n--- Running Agent Again ---")
agent_executor.invoke({"input": "Are there any good places to eat Chinese food around here?"})

Understanding the Output

When you run this script, pay close attention to the output from verbose=True. You will see the agent’s ReAct loop in action!

  1. Thought: The LLM will reason about the user’s request and decide a tool is needed.
  2. Action: It will output a Tool Call with the exact function name (search_local_restaurants) and parameters ({'cuisine': 'Chinese'}) it decided on.
  3. Observation: The AgentExecutor will run your actual Python function. The return value of your function becomes the observation that is fed back to the LLM.
  4. Final Answer: The LLM uses the observation to formulate the final response to the user.
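That loop can be sketched in a few lines of plain Python. This is a deliberately simplified stand-in for AgentExecutor (the real class adds output parsing, error handling, and streaming), with a hard-coded `fake_decide` function playing the role of the LLM:

```python
def run_agent_loop(decide, tool_map, user_input, max_steps=5):
    """Toy version of the AgentExecutor loop: decide -> act -> observe, repeat."""
    steps = []  # the 'agent_scratchpad': (tool call, observation) pairs
    for _ in range(max_steps):
        decision = decide(user_input, steps)  # stands in for the LLM call
        if decision["type"] == "final":
            return decision["answer"]          # Final Answer: stop the loop
        # Action: run the chosen Python function with the chosen arguments
        observation = tool_map[decision["tool"]](**decision["args"])
        steps.append((decision, observation))  # Observation: fed back next turn

# A hard-coded 'LLM' that asks for Chinese restaurants once, then answers:
def fake_decide(user_input, steps):
    if not steps:
        return {"type": "tool", "tool": "search_local_restaurants",
                "args": {"cuisine": "Chinese"}}
    return {"type": "final", "answer": f"Try: {', '.join(steps[-1][1])}"}

tools = {"search_local_restaurants": lambda cuisine: ["Wow! Momo", "Mainland China"]}
answer = run_agent_loop(fake_decide, tools, "Chinese food near me?")
# answer == "Try: Wow! Momo, Mainland China"
```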

You will see the print() statements from inside your Python functions appear in the console, proving that the agent is executing your custom code.

Conclusion

You now hold the key to building truly useful and unique agents. You are no longer limited to the LLM’s built-in knowledge. By wrapping your custom functions with the @tool decorator, you can connect your agent to any data source, proprietary API, or specialized logic you can imagine.

An agent’s capability is a direct reflection of the quality and utility of its tools. By mastering this skill, you can create agents that solve real, specific problems in your unique domain, bridging the gap between artificial intelligence and practical application.

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com
