If the LLM is the engine of your AI application, then the prompt is the fuel. The quality, structure, and clarity of your prompt directly determine the performance of your entire system. While you could just use Python’s f-strings to create prompts, this quickly becomes messy, hard to reuse, and doesn’t integrate with the powerful LangChain ecosystem.
LangChain provides a suite of PromptTemplate classes that solve this problem. These are not just simple string formatters; they are structured, composable, and reusable objects that form the robust starting point of any chain.
This tutorial is a deep dive into creating dynamic and reusable prompts, giving you precise control over the instructions you send to your LLM.
The Foundation: PromptTemplate for Simple Inputs
The most basic class is PromptTemplate. It’s designed for LLMs that take a simple string as input. You define a template string and specify the input variables it expects.
# (Assuming you have your .env and basic setup)
from langchain_core.prompts import PromptTemplate
# 1. Define the template with placeholders in curly braces
template_string = "You are a branding expert. Create a single, catchy tagline for a company that makes {product} targeted at {audience}."
# 2. Create the PromptTemplate instance
prompt_template = PromptTemplate(
    input_variables=["product", "audience"],
    template=template_string,
)
# 3. Format the prompt with concrete values to see the result
formatted_prompt = prompt_template.format(
    product="artisanal coffee beans",
    audience="busy professionals"
)
print(formatted_prompt)
# Expected Output:
# You are a branding expert. Create a single, catchy tagline for a company that makes artisanal coffee beans targeted at busy professionals.
This object can now be the first link in an LCEL chain, ready to receive a dictionary with product and audience keys.
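For instance, here is a minimal sketch of such a chain, assuming the langchain-openai package is installed and your API key is configured; the model name is only an illustrative choice, not a requirement.
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
# Example model choice; any LangChain chat model would work here
llm = ChatOpenAI(model="gpt-4o-mini")
# The prompt template is the first link; the parser extracts the text reply
tagline_chain = prompt_template | llm | StrOutputParser()
print(tagline_chain.invoke({"product": "artisanal coffee beans", "audience": "busy professionals"}))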
The Standard: ChatPromptTemplate for Chat Models
Modern LLMs are primarily chat models. They don’t expect a single string; they expect a list of messages, each with a specific role (like system, human, or ai). ChatPromptTemplate is the essential tool for building this list.
The most common way to create one is the from_messages class method, which takes a list of (role, template) tuples.
from langchain_core.prompts import ChatPromptTemplate
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant that translates English to {language}."),
    ("human", "Translate the following sentence: {sentence}")
])
# Let's format the prompt to see the structured output
formatted_messages = chat_template.format_messages(
    language="French",
    sentence="I love programming."
)
print(formatted_messages)
Output:
[
    SystemMessage(content='You are a helpful AI assistant that translates English to French.'),
    HumanMessage(content='Translate the following sentence: I love programming.')
]
This structured list of SystemMessage and HumanMessage objects is the exact format that ChatOpenAI and other chat models require.
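As a quick sketch of how this plugs into a model (again assuming langchain-openai and an API key; the model name is just an example):
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
translation_chain = chat_template | llm
result = translation_chain.invoke({"language": "French", "sentence": "I love programming."})
print(result.content)  # result is an AIMessage; .content holds the translation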
Handling Dynamic Content: MessagesPlaceholder for History
How do you build a chatbot that remembers the conversation? You can’t hardcode the history. You need a way to dynamically insert a variable number of messages into your prompt.
This is the job of MessagesPlaceholder. It tells the template, “A list of message objects will be provided here at runtime.”
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
# A template with a placeholder for the conversation history
chat_template_with_history = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly chatbot having a conversation with a human."),
    # The 'history' variable will be a list of Message objects
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}")
])
# Let's simulate a past conversation
conversation_history = [
    HumanMessage(content="Hello! I'm looking for book recommendations."),
    AIMessage(content="Of course! What genre are you in the mood for?")
]
# Now, format the prompt with the history and the new user input
final_prompt_messages = chat_template_with_history.format_messages(
    history=conversation_history,
    input="Something in science fiction."
)
print(final_prompt_messages)
Output:
[
    SystemMessage(content='You are a friendly chatbot having a conversation with a human.'),
    HumanMessage(content="Hello! I'm looking for book recommendations."),
    AIMessage(content='Of course! What genre are you in the mood for?'),
    HumanMessage(content='Something in science fiction.')
]
As you can see, the MessagesPlaceholder seamlessly injected our list of past messages into the correct spot. This is the cornerstone of building agent memory.
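To sketch how this becomes working memory, here is one minimal pattern: keep a running list and append both sides of each exchange after the model responds. It assumes llm is an already-initialized chat model, such as the ChatOpenAI instance from earlier.
# One pattern for manual memory: grow the history list after every turn
history = []
def chat(user_input):
    messages = chat_template_with_history.format_messages(
        history=history,
        input=user_input,
    )
    response = llm.invoke(messages)  # llm: an already-initialized chat model
    # Record both sides of the exchange so the next call sees the full conversation
    history.append(HumanMessage(content=user_input))
    history.append(AIMessage(content=response.content))
    return response.content
Production systems typically delegate this bookkeeping to a memory or message-history abstraction, but the template mechanics are exactly what you see here.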
Reusability and Composition
The real power of prompt templates comes from their composability. You can create generic templates and then specialize them for different tasks.
Partial Formatting
The .partial() method lets you pre-fill some variables in a template, creating a new, more specific template.
# A generic template for generating questions
generic_prompt = PromptTemplate.from_template(
    "Generate a {difficulty} trivia question about the topic of {topic}."
)
# Create two specialized templates from the generic one
easy_question_prompt = generic_prompt.partial(difficulty="easy")
hard_question_prompt = generic_prompt.partial(difficulty="expert-level")
# Now you can use them with just the 'topic' variable
print("EASY:", easy_question_prompt.format(topic="the solar system"))
print("HARD:", hard_question_prompt.format(topic="the solar system"))
This is a fantastic way to reduce repetition and build a library of reusable prompt components.
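.partial() also accepts a function for a variable, which LangChain calls each time the prompt is formatted. This is handy for values that change between calls, such as the current date. A small sketch (the helper name and template wording are just illustrative):
from datetime import datetime
def _today():
    # Called each time the prompt is formatted, so the date stays current
    return datetime.now().strftime("%B %d, %Y")
dated_prompt = PromptTemplate.from_template(
    "Tell me an interesting fact about {topic} as of {date}."
).partial(date=_today)
print(dated_prompt.format(topic="space exploration"))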
Conclusion
LangChain’s prompt templates are far more than simple f-strings. They are structured, composable objects that validate their input variables, providing the reliability needed to build complex applications. By mastering them, you gain precise control over your LLM’s inputs.
- Use PromptTemplate for simple string-in/string-out models.
- Use ChatPromptTemplate for modern chat models.
- Use MessagesPlaceholder to dynamically handle conversation history.
- Use .partial() to create specialized, reusable prompts.
Prompt engineering is a fundamental skill in AI development. These primitives give you a professional framework to manage, version, and deploy your most valuable asset: the instructions that guide your AI.