Blog Post #27: LangChain Primitives: Interacting with Models (LLMs and Chat Models)

In the world of communication, sometimes you need to send a formal, one-way memo, and other times you need a dynamic, back-and-forth conversation. LLM providers offer models optimized for both scenarios, and LangChain gives us a clean, standardized way to interact with each.

After mastering Prompt Templates, the next logical step in our chain is the model itself. LangChain abstracts the dozens of available models into two primary interfaces:

  1. LLM: For older, text-completion style models.
  2. ChatModel: For modern, conversational, message-based models.

Understanding this distinction is crucial for building effective applications, as it dictates how you structure your prompts and handle the model’s output.


The Classic: The LLM Interface (Text-In, Text-Out)

This is the original way of interacting with large language models. The LLM interface is designed for models that perform a simple, powerful task: given a string of text, predict the most likely text to come next.

  • Input: A single string.
  • Output: A single string.

Think of it like the world’s most powerful autocomplete. It’s a “text-in, text-out” machine. This interface is best paired with the basic PromptTemplate we learned about in the last post.

Example

We’ll use OpenAI’s gpt-3.5-turbo-instruct model, which is specifically designed for this kind of text completion.

# (Assuming .env setup from previous posts)
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate
from dotenv import load_dotenv

load_dotenv()

# Note: We are using OpenAI, NOT ChatOpenAI for this interface.
# This model is specifically for text-completion.
llm = OpenAI(model="gpt-3.5-turbo-instruct")

prompt = PromptTemplate.from_template(
    "You are a tour guide. Describe the Dakshineswar Kali Temple near {location} in one sentence."
)

# A simple LCEL chain: PromptTemplate -> LLM
chain = prompt | llm

print("Invoking the LLM (completion) chain...")
response = chain.invoke({"location": "Khardaha, West Bengal"})
print(response)

Expected Output:

Invoking the LLM (completion) chain...

The Dakshineswar Kali Temple, a revered Hindu shrine on the eastern bank of the Hooghly River, is famous for its association with the 19th-century mystic Ramakrishna Paramahamsa.
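A quick aside on the `chain = prompt | llm` syntax: LCEL's pipe operator composes runnables so that each stage's output feeds the next stage's input. The toy sketch below illustrates the composition idea in plain Python; it is NOT LangChain's actual Runnable implementation, just a minimal model of how `|` chains stages together.

```python
# Toy illustration of LCEL-style piping: each stage is a callable, and
# `|` composes them left to right. This is NOT LangChain's real Runnable
# class -- just a sketch of the composition idea.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (self | other): run self first, then feed its result to other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A fake "prompt" stage and a fake "llm" stage, mirroring prompt | llm
prompt_step = Step(lambda d: f"Describe the temple near {d['location']} in one sentence.")
fake_llm = Step(lambda text: f"[completion for: {text}]")

chain = prompt_step | fake_llm
print(chain.invoke({"location": "Khardaha"}))
# -> [completion for: Describe the temple near Khardaha in one sentence.]
```

The real LCEL objects work the same way at a high level: `prompt.invoke()` produces a formatted prompt, which the pipe hands to the model's `invoke()`.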

This is straightforward, but it lacks the ability to understand different roles (like system instructions vs. human queries), which is a major limitation for building sophisticated agents. For this reason, the LLM interface is now considered a legacy approach for most applications.


The Modern Standard: The ChatModel Interface (Message-In, Message-Out)

This is the interface you will use for almost all modern development. ChatModel is designed for powerful models like GPT-4o, Google’s Gemini, and Anthropic’s Claude 3. These models are optimized for multi-turn dialogue and can differentiate between instructions, user queries, and their own previous responses.

  • Input: A list of Message objects (e.g., SystemMessage, HumanMessage, AIMessage).
  • Output: A single AIMessage object.

This is not autocomplete; this is a conversational partner. It understands who said what and can maintain the context of a dialogue. This interface is designed to be used with ChatPromptTemplate.
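To make the roles concrete: under the hood, these Message objects are serialized into the role-tagged dictionaries that chat APIs expect (SystemMessage maps to the "system" role, HumanMessage to "user", AIMessage to "assistant"). The sketch below is a rough, LangChain-free illustration of that mapping, not the library's actual serialization code.

```python
# Illustrative sketch: how LangChain message types correspond to the
# role/content dictionaries a chat API receives. Real providers and
# LangChain handle this serialization for you.
ROLE_MAP = {"system": "system", "human": "user", "ai": "assistant"}

def to_provider_payload(messages):
    """Convert (langchain_role, content) pairs into chat-API format."""
    return [{"role": ROLE_MAP[role], "content": content} for role, content in messages]

conversation = [
    ("system", "You are an expert on local Bengali cuisine."),
    ("human", "What is a famous sweet dish I can find near Khardaha?"),
]

print(to_provider_payload(conversation))
# -> [{'role': 'system', 'content': '...'}, {'role': 'user', 'content': '...'}]
```

Because every turn carries a role, the model can tell its standing instructions apart from the user's question, which a single flat string cannot express.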

Example

Here, we use ChatOpenAI and the ChatPromptTemplate to create a more structured interaction.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from dotenv import load_dotenv

load_dotenv()

# Note: We are using the standard ChatOpenAI interface
chat_model = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert on local Bengali cuisine. Respond in a concise and friendly manner."),
    ("human", "What is a famous sweet dish I can find near {location}?")
])

# The standard chain: ChatPromptTemplate -> ChatModel -> OutputParser
chain = prompt | chat_model | StrOutputParser()

print("\nInvoking the ChatModel chain...")
response = chain.invoke({"location": "Khardaha"})
print(response)

Expected Output:

Invoking the ChatModel chain...

Being so close to Kolkata, you're in luck! A very famous and delicious sweet you should try is the "Rosogolla." While available everywhere, some of the most authentic and famous shops are in the northern parts of the city and its suburbs like Khardaha. Enjoy!

The ability to provide a system message separately from the human message gives us far more control over the model’s behavior, which is essential for agent development.


Why the Distinction Matters: A Summary

Feature              LLM (Legacy)              ChatModel (Modern)
Primary Use          Text Completion           Conversation, Instruction Following, Reasoning
Input Type           string                    List[BaseMessage] (System, Human, AI messages)
Output Type          string                    AIMessage object
Understands Roles?   No                        Yes
Best Prompt          PromptTemplate            ChatPromptTemplate (with placeholders)
Typical Models       gpt-3.5-turbo-instruct    gpt-4o, gemini-1.5-pro, claude-3-opus

The Golden Rule: If you are starting a new project today, you should almost always use a ChatModel. The underlying models are more capable, and the message-based interface is more flexible and powerful, providing the structure needed to build memory, give clear instructions, and create reliable agents.

The Power of Abstraction

One of the greatest benefits of LangChain is that it standardizes these interfaces. You can build your entire application logic using the ChatModel primitive. If you later decide to switch from OpenAI to Google’s Gemini, you only need to change one line of code:

# from langchain_openai import ChatOpenAI
from langchain_google_genai import ChatGoogleGenerativeAI

# model = ChatOpenAI(model="gpt-4o")
model = ChatGoogleGenerativeAI(model="gemini-1.5-pro-latest")

Your entire chain of prompts, tools, and parsers will work exactly the same. This is the power of building on a robust, abstracted foundation.

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, Shell, and theoretical knowledge on Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com
