Blog Post #33: When an Agent Picks the Wrong Tool: A Deep Dive into Tool Routing

Your agent’s toolkit is growing. You’ve equipped it with a super-fast database_search for internal product specifications and a general-purpose web_search for everything else. A user asks, “What are the specs for our new product, the T-800?” The agent, instead of using the cheap, precise database tool, initiates a full web search. It finds the answer, but it was the scenic route—slower, more expensive, and less reliable.

This is the tool ambiguity problem. As an agent’s capabilities expand, the chances that two or more tools overlap in functionality increase. Relying on a single LLM call to always pick the best tool from a dozen options is a gamble.

To solve this, we introduce a more robust architectural pattern: the Tool Router. A router is a dedicated chain whose specific job is to analyze the user’s input and decide which tool or sub-chain is the most appropriate to use, before handing off the work.


Why Go Beyond Basic Tool Calling?

In the simple ReAct agent we’ve built, a single LLM call is responsible for a lot of “cognitive load”:

  1. Understanding the user’s intent.
  2. Remembering the conversation history.
  3. Scanning the descriptions of all available tools.
  4. Selecting the best tool.
  5. Formulating the correct parameters for that tool.
  6. Generating a final answer.

A Router Chain isolates the decision-making process. It’s a specialized LLM call that is laser-focused on one task: “Given this query and these options, which option is the best fit?” This separation of concerns makes your agent’s decision-making process more explicit, debuggable, and reliable.

Building a Router with LangChain Expression Language (LCEL)

Let’s build a simple agent that has to choose between two specialized areas: physics and math. If the query doesn’t fit either, it will use a general-purpose chain.

Step 1: Define the Specialist Chains

First, we’ll create our three “tools,” which will actually be self-contained chains. Each is primed with a system prompt that makes it an expert in its domain.

# (Assuming .env setup and basic imports)
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Chain 1: Physics expert
physics_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in physics. Answer the user's question clearly and concisely."),
    ("user", "{question}")
])
physics_chain = {"question": lambda x: x["question"]} | physics_prompt | llm | StrOutputParser()

# Chain 2: Math expert
math_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in mathematics. Answer the user's question with precision."),
    ("user", "{question}")
])
math_chain = {"question": lambda x: x["question"]} | math_prompt | llm | StrOutputParser()

# Chain 3: General purpose fallback
general_prompt = ChatPromptTemplate.from_template("{question}")
general_chain = general_prompt | llm | StrOutputParser()
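
Before adding any routing, it’s worth sanity-checking a specialist chain on its own. A quick example (the exact wording of the model’s answer will vary):

# Invoke the physics chain directly; it only needs a "question" key.
print(physics_chain.invoke({"question": "Why is the sky blue?"}))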

Step 2: Create the Router Logic

Now, we’ll create the router itself. This involves a special prompt that describes our available chains and asks the LLM to choose one.

from langchain_core.runnables import RunnableBranch

# This prompt instructs the LLM to output ONLY the name of the best chain
router_prompt_template = """Given the user question below, classify it as being about `math`, `physics`, or `general`.
Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""

router_prompt = ChatPromptTemplate.from_template(router_prompt_template)
router_chain = router_prompt | llm | StrOutputParser()
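
You can test the router in isolation before wiring anything to it. Because the model’s one-word answer may differ slightly in casing or surrounding whitespace, the branch conditions in the next step use a lowercase substring check rather than an exact string comparison:

# Each call should print a single word; with temperature=0 these examples
# are expected to come back as "math" and "physics" respectively.
print(router_chain.invoke({"question": "What is the derivative of x^2?"}))
print(router_chain.invoke({"question": "Why do objects fall at the same rate in a vacuum?"}))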

Step 3: Combine with RunnableBranch

RunnableBranch is the LCEL component for creating conditional logic—an “if/elif/else” statement for your chains. It takes a series of (condition, runnable) pairs and a default runnable. It will execute the first runnable whose condition evaluates to True.
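
To see how RunnableBranch behaves independent of any LLM, here is a minimal, self-contained sketch in which the conditions inspect a plain number:

from langchain_core.runnables import RunnableBranch, RunnableLambda

# Each condition receives the full input; the first condition that returns True wins.
sign_branch = RunnableBranch(
    (lambda x: x > 0, RunnableLambda(lambda x: f"{x} is positive")),
    (lambda x: x < 0, RunnableLambda(lambda x: f"{x} is negative")),
    RunnableLambda(lambda x: "zero"),  # default case
)

print(sign_branch.invoke(7))   # "7 is positive"
print(sign_branch.invoke(0))   # "zero"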

We will use this to execute the decision made by our router_chain.

# The branch will use the output of the router_chain to decide which specialist chain to run.
branch = RunnableBranch(
    # The condition is a lambda function that checks the output of the router
    (lambda x: "physics" in x["topic"].lower(), physics_chain),
    (lambda x: "math" in x["topic"].lower(), math_chain),
    # The final runnable is the default case
    general_chain,
)

# --- The Full Chain ---
# 1. The user input is passed to the router_chain to get the topic ("math", "physics", etc.)
# 2. The input AND the router's topic choice are passed to the RunnableBranch.
# 3. The RunnableBranch selects the correct specialist chain to run.
full_chain = {
    "topic": router_chain,
    "question": lambda x: x["question"] # Pass the original question through
} | branch
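
The dict literal at the start of full_chain is coerced by LCEL into a RunnableParallel: both entries run against the same user input, and their results are combined into a single payload for the branch. For a physics question, the branch receives something like this (the exact topic string depends on the model’s one-word reply):

# Illustrative intermediate payload seen by the RunnableBranch:
# {
#     "topic": "physics",                                       # output of router_chain
#     "question": "What is the theory of general relativity?"   # passed through unchanged
# }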

The Full Code and Demonstration

Here is the complete, runnable main.py script.

# main.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch
from dotenv import load_dotenv

load_dotenv()

# --- Define Chains ---
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Physics Chain
physics_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in physics. You provide clear, concise answers."),
    ("user", "{question}")
])
physics_chain = {"question": lambda x: x["question"]} | physics_prompt | llm | StrOutputParser()

# Math Chain
math_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in mathematics. You provide precise answers."),
    ("user", "{question}")
])
math_chain = {"question": lambda x: x["question"]} | math_prompt | llm | StrOutputParser()

# General Chain
general_chain = ChatPromptTemplate.from_template("{question}") | llm | StrOutputParser()

# --- Define Router ---
router_prompt_template = """Given the user question below, classify it as being about `math`, `physics`, or `general`.
Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
router_prompt = ChatPromptTemplate.from_template(router_prompt_template)
router_chain = router_prompt | llm | StrOutputParser()

# --- Define RunnableBranch ---
branch = RunnableBranch(
    (lambda x: "physics" in x["topic"].lower(), physics_chain),
    (lambda x: "math" in x["topic"].lower(), math_chain),
    general_chain,
)

# --- Full Chain ---
full_chain = {
    "topic": router_chain,
    "question": lambda x: x["question"]
} | branch

# --- Run examples ---
if __name__ == "__main__":
    physics_question = "What is the theory of general relativity?"
    print(f"Human: {physics_question}")
    print("AI:", full_chain.invoke({"question": physics_question}))

    print("-" * 30)

    math_question = "Explain Fermat's Last Theorem in simple terms."
    print(f"Human: {math_question}")
    print("AI:", full_chain.invoke({"question": math_question}))

    print("-" * 30)

    general_question = "What is the historical significance of Khardaha, West Bengal?"
    print(f"Human: {general_question}")
    print("AI:", full_chain.invoke({"question": general_question}))

When you run this, the first query will be classified as “physics” and passed to the physics expert. The second will be routed to the math expert, and the third, failing the first two conditions, will go to the default general chain.
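
The same pattern maps straight back onto the tool-routing problem from the introduction. Below is a rough sketch that reuses the llm and imports from main.py; database_search and web_search are hypothetical placeholders for your real internal-database and web-search tools, not existing APIs:

from langchain_core.runnables import RunnableBranch, RunnableLambda

tool_router_prompt = ChatPromptTemplate.from_template(
    """Classify the user question as `internal` (answerable from our internal product database) or `web` (anything else).
Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
)
tool_router = tool_router_prompt | llm | StrOutputParser()

def database_search(query: str) -> str:
    # Placeholder: swap in your real product-spec database lookup.
    return f"[database_search] specs for: {query}"

def web_search(query: str) -> str:
    # Placeholder: swap in your real web search tool.
    return f"[web_search] results for: {query}"

tool_branch = RunnableBranch(
    (lambda x: "internal" in x["route"].lower(),
     RunnableLambda(lambda x: database_search(x["question"]))),
    RunnableLambda(lambda x: web_search(x["question"])),  # default: fall back to the web
)

tool_routing_chain = {
    "route": tool_router,
    "question": lambda x: x["question"],
} | tool_branch

# A question like "What are the specs for our new product, the T-800?" should now be
# routed to the cheap, precise database lookup instead of a full web search.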

Conclusion

As your agent’s toolkit grows, its ability to make the right choice becomes just as important as the tools themselves. A router chain introduces a crucial layer of intelligent decision-making, separating the act of choosing a tool from the act of using it.

This pattern leads to more robust, efficient, and predictable agents. By thinking in terms of routers and conditional execution with RunnableBranch, you move from simply hoping your agent makes the right choice to designing a system that ensures it does.

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com.
