Blog Post #25: The Heart of LangChain: A Deep Dive into LangChain Expression Language (LCEL)

In our last post, we built our first application using a magical | symbol to connect our components. That pipe operator isn’t just a convenient shortcut; it’s the core of the modern LangChain framework, known as the LangChain Expression Language (LCEL).

LCEL is a declarative way to compose chains. Instead of writing complex, step-by-step procedural code, you simply declare the logical flow of your components, and LangChain handles the execution. Think of it as snapping LEGOs together versus wiring individual components on a circuit board.

Mastering this simple but powerful syntax is the key to building everything from simple chatbots to complex agents. This post is a deep dive into the LCEL syntax, using simple examples to build a strong foundation.


The Basic Pipe |: Sequential Execution

The most fundamental part of LCEL is the pipe | operator. It connects components in a sequence, where the output of the component on the left becomes the input for the component on the right.

Let’s revisit our first chain, but with a new prompt, and trace the data flow.

# (Assuming you have your .env and basic setup from the last post)
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()  # loads OPENAI_API_KEY from your .env file

llm = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()

prompt = ChatPromptTemplate.from_template(
    "Tell me a short, funny story about a programmer and their {animal}."
)

# A simple sequential chain
chain = prompt | llm | parser

response = chain.invoke({"animal": "cat"})
print(response)

What is happening here?

  1. The dictionary {"animal": "cat"} is passed to the prompt component.
  2. The prompt component formats this into a PromptValue object, filling in the template.
  3. This PromptValue is “piped” as the input to the llm component.
  4. The llm calls the OpenAI API and returns a complex AIMessage object.
  5. This AIMessage is piped to the parser, which extracts just the string content of the message and returns it.

The pipe creates a simple, readable, and powerful linear sequence.
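
Because each of these components is a Runnable with its own .invoke() method, you can also perform the hand-offs manually to inspect the intermediate values. Here is a quick sketch reusing the objects defined above (the printed type names are what current versions of langchain_core return):

# Run each step of the chain by hand to inspect the intermediate values
prompt_value = prompt.invoke({"animal": "cat"})
print(type(prompt_value).__name__)  # ChatPromptValue

ai_message = llm.invoke(prompt_value)
print(type(ai_message).__name__)    # AIMessage

final_text = parser.invoke(ai_message)
print(type(final_text).__name__)    # str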


The Dictionary {}: Parallel Execution

This is where LCEL’s power truly begins to show. What if you want to run multiple chains at the same time and combine their results? You can define a dictionary where each value is a runnable chain and wrap it in a RunnableParallel, and LangChain will execute those branches in parallel.

This is incredibly efficient, especially when making multiple API calls.

# (Continuing from the previous example)
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# Chain 1: Generates a story
story_prompt = ChatPromptTemplate.from_template("Tell me a short story about a {topic}.")
story_chain = story_prompt | llm | parser

# Chain 2: Writes a single-sentence summary for that story
summary_prompt = ChatPromptTemplate.from_template("Summarize this story in one sentence: {story}")
summary_chain = {"story": story_chain} | summary_prompt | llm | parser

# Now, let's run them in parallel for the same topic.
# A bare dict has no .invoke() of its own, so we wrap it in RunnableParallel.
parallel_chains = RunnableParallel(
    story=story_chain,
    summary=summary_chain,
)

response = parallel_chains.invoke({"topic": "a brave knight"})
print(response)

When you call invoke, the RunnableParallel triggers both story_chain and summary_chain concurrently, and the result is a dictionary containing the outputs of both. (Note that summary_chain generates its own story internally, so the two branches make independent LLM calls.)

Output:

{
  'story': 'Sir Reginald, the bravest knight in the kingdom, was known for his courage and his magnificent mustache... (rest of story)',
  'summary': "Despite his fearsome reputation, Sir Reginald's greatest battle was against a particularly stubborn pickle jar."
}
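
One detail worth calling out: the explicit RunnableParallel wrapper is only needed when the dictionary is the outermost piece of your chain. As soon as a plain dict is piped into another component, LangChain coerces it into a RunnableParallel automatically, which is the pattern used throughout the rest of this post. A quick sketch (report_prompt here is just an illustrative prompt, not something defined earlier):

# A plain dict is coerced into a RunnableParallel when it is part of a larger composition
report_prompt = ChatPromptTemplate.from_template(
    "Combine this story and summary into a two-line report:\n\n{story}\n\n{summary}"
)

auto_parallel_chain = {
    "story": story_chain,
    "summary": summary_chain,
} | report_prompt | llm | parser

print(auto_parallel_chain.invoke({"topic": "a brave knight"}))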

RunnablePassthrough: Passing Data Through Your Chains

Look closely at the summary_chain in the example above. We used {"story": story_chain}. This structure is a common LCEL pattern. It means:

  1. Take the input to this dictionary (which is {"topic": "a brave knight"}).
  2. Pass that input to the story_chain.
  3. The result of story_chain will be placed under the key "story".
  4. This new dictionary {"story": "..."} becomes the input for the next component in the chain, summary_prompt.

But what if you need to access the original input later in the chain? For this, we use RunnablePassthrough. It’s a simple component that “passes through” its input.

# Goal: Get a story and then return a final dictionary with both the original topic and the story.

# The story_chain is the same as before
story_chain = ChatPromptTemplate.from_template("Tell me a story about {topic}.") | llm | parser

# The final chain uses RunnablePassthrough, wrapped in RunnableParallel so it can be invoked directly
final_chain = RunnableParallel(
    topic=RunnablePassthrough(),  # passes the original input straight through
    story=story_chain,
)

response = final_chain.invoke("a lonely robot")
print(response)

Output:

{
  'topic': 'a lonely robot',
  'story': 'Unit 734 stood on the red sands of Mars, its metallic chassis gleaming under the weak sun... (rest of story)'
}

RunnablePassthrough is the key to managing the flow of data and structuring the inputs and outputs of your chains.
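
A closely related helper is RunnablePassthrough.assign(), which keeps the original input dictionary intact and adds new keys to it. A minimal sketch, reusing the story_chain defined above:

# .assign() keeps the original input keys and adds the new ones you compute
augmented_chain = RunnablePassthrough.assign(story=story_chain)

response = augmented_chain.invoke({"topic": "a lonely robot"})
print(response)
# {'topic': 'a lonely robot', 'story': '... (generated story) ...'}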


Putting it all Together: A More Complex Flow

Let’s combine these ideas. We’ll run two chains in parallel and then pipe their combined output into a final formatting step.

# Goal: Given a country, find its capital and a fun fact, then format them into a single string.

capital_prompt = ChatPromptTemplate.from_template("What is the capital of {country}?")
fact_prompt = ChatPromptTemplate.from_template("Tell me a fun fact about {country}.")

# The two branches we want to run in parallel
capital_chain = capital_prompt | llm | parser
fact_chain = fact_prompt | llm | parser

# The final formatting prompt will receive a dictionary with 'country', 'capital' and 'fact' keys
formatting_prompt = ChatPromptTemplate.from_template(
    "Report for {country}:\nCapital: {capital}\nFun Fact: {fact}"
)

# Here, we need to pass the original 'country' along with the parallel results.
# One way is to pull it out of the input dict in every branch with a small lambda:
final_chain = {
    "country": lambda x: x["country"],  # keep the original country string
    "capital": (lambda x: x["country"]) | capital_chain,
    "fact": (lambda x: x["country"]) | fact_chain
} | formatting_prompt
# (This version expects a dict input, e.g. {"country": "Japan"})

# The more elegant way: invoke the chain with the bare country string; each
# single-variable prompt accepts it as its {country} value, and
# RunnablePassthrough carries it straight through to the formatting prompt
final_chain_passthrough = {
    "capital": capital_chain,
    "fact": fact_chain,
    "country": RunnablePassthrough()  # passes the original input straight through
} | formatting_prompt

response = final_chain_passthrough.invoke("Japan")
# This final output is a PromptValue object, so we convert it to a string to see it
print(response.to_string())

Output:

Report for Japan:
Capital: Tokyo
Fun Fact: Japan has more than 6,800 islands.

This example shows the true power of LCEL: you can build a complex graph of operations (parallel generation followed by sequential formatting) in just a few lines of highly readable code.
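
If you ever want to check what structure LangChain actually built from a composition, every runnable can describe itself as a graph. This is optional and needs the grandalf package installed, but it makes a handy sanity check:

# Print an ASCII diagram of the computation graph (requires `pip install grandalf`)
final_chain_passthrough.get_graph().print_ascii()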

Conclusion

LCEL is the fluent, Pythonic heart of the modern LangChain library. It allows you to declaratively build complex computational graphs that are automatically optimized for parallel, asynchronous, and streaming execution. By mastering the core primitives—the | for sequence, {} for parallelism, and RunnablePassthrough for data flow—you unlock the ability to construct powerful and efficient LLM applications with remarkable clarity and ease.
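
Because every LCEL chain is itself a Runnable, those same compositions also give you streaming, batching, and async execution without extra code. A small sketch using the story_chain from earlier (assuming the same setup):

# Stream the story token by token as it is generated
for chunk in story_chain.stream({"topic": "a brave knight"}):
    print(chunk, end="", flush=True)

# Run the chain over several inputs, with the requests handled concurrently
stories = story_chain.batch([{"topic": "a dragon"}, {"topic": "a wizard"}])

# Async variant, for use inside an async application:
# result = await story_chain.ainvoke({"topic": "a brave knight"})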

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com.
