Our agents have learned to think, plan, and use mock tools that exist only in our code. To be truly useful, however, they need to connect to the real, live, ever-changing world. They need to access data that isn’t in their training set—data that was generated just a second ago.
Today, we break our agent out of its sandbox. We are going to build an agent that can provide the current, real-time weather for any city in the world by calling a live web API.
In this hands-on project, you will learn the complete workflow for giving your agent a new sense: the ability to perceive live data from the internet. We will cover:
- Getting credentials for a live API.
- Reading API documentation—a developer’s most crucial skill.
- Writing a custom tool that makes the API call.
- Integrating this live tool into our LangChain agent.
Part 1: The Mission Briefing – OpenWeatherMap API
For this project, we’ll use the popular and free OpenWeatherMap API.
Step 1: Get Your API Key
First, you need to get your credentials.
- Go to OpenWeatherMap’s website and sign up for a free account.
- Navigate to the ‘API keys’ tab in your user dashboard.
- A default API key will be generated for you. Copy this key.
- In your project folder, open your .env file and add the key, reinforcing our security best practices from Post #13.
# .env
OPENAI_API_KEY="sk-..."
OPENWEATHERMAP_API_KEY="your_actual_key_from_the_website"
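As a quick sanity check, you can confirm the key is visible to Python once it's in your environment (the value below is a placeholder for illustration, not a real credential):

```python
import os

# Placeholder standing in for the real key; in the project, load_dotenv()
# reads the actual value from your .env file instead.
os.environ.setdefault("OPENWEATHERMAP_API_KEY", "placeholder-key")

key = os.getenv("OPENWEATHERMAP_API_KEY")
print("Key loaded:", key is not None)  # → Key loaded: True
```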
Step 2: Read the Documentation
This is the heart of working with any new service. You don’t guess; you read the manual. We will use the “Current Weather Data” endpoint.
Let’s extract the key information an agent (or a developer) needs:
- Base URL: https://api.openweathermap.org/data/2.5/weather
- Authentication: The API key is passed as a query parameter named appid.
- Required Parameters:
  - q: The city name (e.g., q=London).
  - appid: Your API key.
- Optional Parameters:
  - units: We can specify metric for Celsius or imperial for Fahrenheit. This is a great feature for our tool!
- Success Response: The API returns a JSON object. We are interested in the weather[0].description (e.g., "clear sky") and the main.temp (e.g., 25.5).
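To make that response shape concrete, here is a short sketch that pulls those two fields out of a sample payload (the values are invented, shaped like the documented response):

```python
# Invented sample shaped like the "Current Weather Data" JSON response
sample = {
    "weather": [{"description": "clear sky"}],
    "main": {"temp": 25.5},
}

# The two fields our tool will care about
description = sample["weather"][0]["description"]
temperature = sample["main"]["temp"]
print(f"{temperature}°C with {description}")  # → 25.5°C with clear sky
```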
Now we have our mission plan. Let’s build the tool.
Part 2: Building the Weather Tool
First, make sure you have the requests library installed to make HTTP calls:
pip install requests
Now, let's write a well-documented Python function in a tools.py file that encapsulates all the API logic.
# tools.py
import os
import requests
from dotenv import load_dotenv
from langchain_core.tools import tool
from typing import Literal

# Load .env here as well, so the key is available even if this module
# is imported before main.py calls load_dotenv()
load_dotenv()
WEATHER_API_KEY = os.getenv("OPENWEATHERMAP_API_KEY")

@tool
def get_live_weather(city: str, units: Literal["metric", "imperial"] = "metric") -> str:
    """
    Retrieves the current, real-time weather for a specified city from OpenWeatherMap.

    Args:
        city (str): The name of the city to get the weather for, e.g., 'Sodepur' or 'London'.
        units (str): The unit system for the temperature. Can be 'metric' (for Celsius)
            or 'imperial' (for Fahrenheit). Defaults to 'metric'.
    """
    if not WEATHER_API_KEY:
        return "Error: OpenWeatherMap API key not found. Please set it in your .env file."

    base_url = "https://api.openweathermap.org/data/2.5/weather"
    params = {
        "q": city,
        "appid": WEATHER_API_KEY,
        "units": units,
    }

    try:
        response = requests.get(base_url, params=params, timeout=10)
        response.raise_for_status()  # Raises an HTTPError for bad responses (4xx or 5xx)
        data = response.json()

        # Extract only the necessary information. Don't return the whole JSON blob.
        description = data["weather"][0]["description"]
        temperature = data["main"]["temp"]
        unit_symbol = "°C" if units == "metric" else "°F"

        return f"The current weather in {city} is {temperature}{unit_symbol} with {description}."
    except requests.exceptions.HTTPError as http_err:
        if response.status_code == 404:
            return f"Error: The city '{city}' was not found."
        return f"Error: An HTTP error occurred: {http_err}"
    except Exception as e:
        return f"An unexpected error occurred: {e}"
This function is a perfect agent tool: it's well-documented, uses type hints (Literal ensures the agent can only choose valid units), handles errors gracefully, and returns a clean, simple string for the agent to observe.
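One design choice worth calling out: the tool returns error strings instead of raising exceptions, so the agent can observe the failure and respond sensibly rather than crashing the run. Stripped of the HTTP machinery, that mapping looks roughly like this (a standalone sketch; the function name and status codes are illustrative, mirroring the except-block logic above):

```python
def error_message(status_code: int, city: str) -> str:
    # Turn HTTP failures into short, agent-readable observations
    # (mirrors the except-block logic in get_live_weather)
    if status_code == 404:
        return f"Error: The city '{city}' was not found."
    return f"Error: An HTTP error occurred: {status_code}"

print(error_message(404, "Atlantis"))
# → Error: The city 'Atlantis' was not found.
```

Because the agent sees these strings as tool observations, it can apologize, ask the user to respell the city, or try again, instead of the whole run failing on an exception.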
Part 3: Integrating the Live Tool into Our Agent
Now we simply need to give this new tool to the AgentExecutor we built in our previous posts.
# main.py
import os
from dotenv import load_dotenv
from tools import get_live_weather # Import our new tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor, create_tool_calling_agent
# --- SETUP ---
load_dotenv()
# We only need one tool for this agent
tools = [get_live_weather]
llm = ChatOpenAI(model="gpt-4o", temperature=0)
# --- PROMPT ---
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful weather assistant."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
# --- AGENT ---
agent = create_tool_calling_agent(llm, tools, prompt)
# --- EXECUTOR ---
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# --- RUN ---
if __name__ == "__main__":
    print("\n--- Running Weather Agent ---")
    query = "What's the weather like in my current city, Sodepur, in Celsius?"
    response = agent_executor.invoke({"input": query})
    print("\n--- Final Answer ---")
    print(response["output"])

    print("\n--- Running Weather Agent Again ---")
    query = "How about in New York in Fahrenheit?"
    response = agent_executor.invoke({"input": query})
    print("\n--- Final Answer ---")
    print(response["output"])
The “Aha!” Moment
When you run this script, watch the verbose=True output. You will see the agent's complete ReAct loop based on live data:
- Thought: The LLM will reason that it needs to find the weather for "New York".
- Action: It will formulate a tool call: get_live_weather(city='New York', units='imperial'). Notice how it correctly parsed the city and the unit from your natural-language query based on our tool's docstring!
- Observation: The AgentExecutor runs our Python function, which makes a real API call to OpenWeatherMap. The live weather data is returned as a string.
- Final Answer: The LLM takes this observation and presents it to you in a friendly format.
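To demystify what AgentExecutor is doing under the hood, here is a deliberately simplified, hypothetical sketch of that loop with both the LLM and the weather API stubbed out. No network and no real model are involved; every name here is invented for illustration:

```python
def stub_llm(messages):
    # Fake "model": request a tool call first, answer once it sees an observation
    observations = [m for m in messages if m["role"] == "tool"]
    if not observations:
        return {"tool_call": {"name": "get_live_weather",
                              "args": {"city": "New York", "units": "imperial"}}}
    return {"answer": f"Right now: {observations[-1]['content']}"}

def stub_get_live_weather(city, units):
    # Fake tool: a canned observation instead of a real API call
    return f"The current weather in {city} is 77.0°F with clear sky."

def run_agent(query):
    messages = [{"role": "user", "content": query}]
    while True:
        decision = stub_llm(messages)   # Thought + Action
        if "answer" in decision:        # Final Answer
            return decision["answer"]
        observation = stub_get_live_weather(**decision["tool_call"]["args"])
        messages.append({"role": "tool", "content": observation})  # Observation

print(run_agent("How about in New York in Fahrenheit?"))
# → Right now: The current weather in New York is 77.0°F with clear sky.
```

The real executor does much more (parsing model output, retries, scratchpad formatting), but the Thought → Action → Observation → Final Answer cycle is the same.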
Conclusion
You have successfully built an agent that can perceive and report on the current state of the real world. It is no longer confined to its static training data.
The workflow you’ve mastered today is fundamental to building practical AI applications:
- Get Credentials for a service.
- Read the Documentation to understand how to use it.
- Build a Python Function to handle the API logic and error handling.
- Wrap it with @tool and a high-quality docstring.
- Give it to your agent.
This exact pattern can be used to connect your agent to almost any service on the internet, from Spotify to Slack to your company’s internal databases. You’ve just unlocked a new universe of possibilities for your AI creations.
Author
Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com.