Blog Post #24: Getting Started with LangChain: Installation, Setup, and Core Principles

The theory is over. It’s time to write code.

For twenty-three posts, we’ve journeyed through the concepts that power Agentic AI, from the ReAct loop to memory and planning. We’ve built a professional workshop with Python, VS Code, and Git. Now, we finally get to turn on the machines.

We’ll begin our practical journey with LangChain, the most popular and comprehensive AI framework. Think of it as the Swiss Army Knife we discussed in Post #16; it has a tool for almost everything.

By the end of this tutorial, you will have a working Python script that uses LangChain to connect to an LLM and get a response. This is the “Hello, World!” of Agentic AI, and it’s the foundation for everything that comes next.


Part 1: The Workshop Setup (Prerequisites)

Let’s start with a clean workspace, applying the best practices we’ve learned.

  1. Create a Project Directory: Open your terminal and run:

mkdir langchain-start && cd langchain-start

  2. Initialize a Virtual Environment: We’ll create an isolated sandbox for our project’s dependencies:

python -m venv venv

Now, activate it.
    • On Windows: .\venv\Scripts\activate
    • On macOS / Linux: source venv/bin/activate

Your terminal prompt should now start with (venv). (A quick Python check at the end of this part confirms the environment is really active.)
  3. Set up Git: Initialize version control and create our crucial .gitignore file:

git init

Create a file named .gitignore and add the following lines to it. This prevents our environment and secrets from ever being committed.
# Virtual Environment
venv/

# Python cache
__pycache__/

# Secrets file
.env
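
Before moving on, here’s a quick, optional way to confirm the virtual environment is actually the interpreter in use. This is a minimal sketch, not part of the project itself, and the filename venv_check.py is just a suggestion:

# venv_check.py (optional) - the printed path should point inside your venv/ folder
import sys
print("Interpreter:", sys.executable)

If the path ends in venv/bin/python (or venv\Scripts\python.exe on Windows), your sandbox is active.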

Part 2: Installation – Getting the Tools

LangChain is highly modular. We install the core library, then add specific packages for the services we want to connect to.

  1. Install Core LangChain:

pip install langchain

  2. Install the OpenAI Integration: We’ll use OpenAI’s models for this example, which requires its own integration package:

pip install langchain-openai

  3. Install the .env Helper: We need this library to load our API key:

pip install python-dotenv

  4. Save Your Dependencies: Finally, let’s lock in our setup by creating a requirements.txt file, then make our first commit to save our work (a quick sanity check of the installs follows this list):

pip freeze > requirements.txt
git add .
git commit -m "Setup: Initialized project and installed dependencies"
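
To confirm the packages landed inside the virtual environment, here’s a short, throwaway sanity check (sanity_check.py is just an illustrative name):

# sanity_check.py (optional) - verifies the three packages import cleanly
import langchain
import langchain_openai
import dotenv

print("langchain version:", langchain.__version__)
print("All imports succeeded - ready for Part 3.")

If any import fails, double-check that (venv) is active and re-run the pip commands.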

Part 3: API Key Configuration – The Secure Handshake

As we learned in Post #13, we never hardcode secrets.

  1. Create a file named .env in the root of your langchain-start folder.
  2. Inside the .env file, add your OpenAI API key (get this from the OpenAI Platform):

OPENAI_API_KEY="sk-YourSecretKeyGoesHere..."

  3. Triple-check that .env is listed in your .gitignore file. This is your most important safety check. (The short snippet below verifies the key actually loads.)
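
If you’d like to verify the key loads before writing the full script, here’s a minimal check (check_key.py is just an illustrative name; main.py below performs the same verification):

# check_key.py (optional) - confirm python-dotenv can read your .env file
import os
from dotenv import load_dotenv

load_dotenv()  # loads variables from .env in the current directory

# Print only whether the key exists - never print the key itself!
print("OPENAI_API_KEY found:", os.getenv("OPENAI_API_KEY") is not None)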

Part 4: “Hello, LangChain!” – Writing Your First Chain

Now for the main event. Let’s write the code.

At its core, LangChain helps you chain together components. The most basic chain follows this pattern: Prompt -> LLM -> Output Parser.

Create a file named main.py and add the following code:

# main.py
import os
from dotenv import load_dotenv
import logging

# --- 1. Setup: Load environment variables and configure logging ---
load_dotenv()
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Verify that the API key is loaded
if not os.getenv("OPENAI_API_KEY"):
    logger.error("OPENAI_API_KEY not found in .env file. Please set it.")
    raise SystemExit(1)  # exit with a non-zero code to signal failure
logger.info("OpenAI API Key loaded successfully.")

# --- 2. Import the necessary LangChain components ---
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# --- 3. Initialize the components of our chain ---
logger.info("Initializing LangChain components...")

# The LLM model we'll be using
llm = ChatOpenAI(model="gpt-3.5-turbo")

# The prompt template tells the LLM how to behave
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant who is an expert on Indian geography."),
    ("user", "{input}")
])

# The output parser will convert the LLM's chat message into a simple string
output_parser = StrOutputParser()

# --- 4. Create the chain using LCEL (LangChain Expression Language) ---
# The pipe symbol '|' is used to connect the components together.
chain = prompt | llm | output_parser
logger.info("LangChain chain created successfully.")

# --- 5. Invoke the chain with a query ---
logger.info("Invoking the chain...")
query = "What is the capital of West Bengal, and is it far from my current location, Sodepur?"

try:
    response = chain.invoke({"input": query})

    # --- 6. Print the response ---
    logger.info(f"Query: {query}")
    logger.info(f"Response: {response}")

except Exception as e:
    logger.error(f"An error occurred while invoking the chain: {e}")

Running the Code

Open your terminal (with (venv) still active) and run the script:

python main.py

You should see your log messages and then a response from the LLM, something like this (timestamp trimmed):

... - INFO - Response: The capital of West Bengal is Kolkata. Sodepur is located very close to Kolkata, as it is part of the Kolkata Metropolitan Area, just to the north of the city.

Understanding the Code

  • Components: We initialized three standard components: ChatOpenAI (the model), ChatPromptTemplate (the instructions), and StrOutputParser (to format the output).
  • LCEL |: The pipe symbol | comes from the LangChain Expression Language (LCEL), the standard, modern way to compose components. It elegantly defines the flow of data: the output of the prompt is “piped” as the input to the llm, and its output is piped to the output_parser (see the short example after this list).
  • .invoke(): This is the standard method to run a chain. We pass our input variables in a dictionary (in this case, {"input": query}).
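
Because an LCEL chain is itself a Runnable, it supports more than .invoke(). As a quick illustration, here’s a sketch you could append to the bottom of main.py (it reuses the chain object defined there); batch and streaming calls come for free:

# Append to main.py - LCEL chains are Runnables, so batching and
# streaming work without any extra wiring.

# Run several inputs in one call; returns a list of parsed strings
responses = chain.batch([
    {"input": "What is the capital of West Bengal?"},
    {"input": "Which river flows through Kolkata?"},
])
for r in responses:
    logger.info(f"Batch response: {r}")

# Stream the answer chunk-by-chunk as the LLM generates it
for chunk in chain.stream({"input": "Name three famous landmarks in Kolkata."}):
    print(chunk, end="", flush=True)

Every chain composed with | gets these methods automatically, which is a big part of why LCEL became the standard way to build in LangChain.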

Conclusion

Congratulations! You have officially bridged the gap from theory to practice. You’ve installed a major AI framework, securely connected it to a powerful LLM, and executed your first chain.

While this script is simple, the Prompt -> LLM -> Parser pattern is the fundamental atom of all agentic systems. Every complex agent we’ve discussed, from ReAct loops to planners, is built by composing these simple chains in more sophisticated and powerful ways. You’ve laid the cornerstone for everything to come.

Author

Debjeet Bhowmik

Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, and Shell, plus theoretical knowledge of Azure, Kubernetes & Jenkins. In my free time, I write blogs on ckdbtech.com.
