Lang Chain - Agents
CHAPTER 1
INTRODUCTION
Designed primarily as a Python library, Lang Chain provides developers with ready-to-use components for
building structured chains or dynamic agents. Structured chains allow for a fixed sequence of interactions,
while agents add flexibility by making decisions during runtime based on user input and available tools.
This dual approach enhances the flexibility, intelligence, and real-world usability of AI applications.
Lang Chain’s architecture promotes rapid development, scalability, and best practices in building LLM-
powered solutions. By bridging the gap between standalone language models and practical, interactive
systems, Lang Chain plays a crucial role in enabling the next generation of AI-driven applications.
Figure 1.1 (Lang Chain Framework) provides a conceptual overview of how the various modules collaborate within the Lang Chain framework:
Document Loaders: These are used to import and preprocess documents from various formats (PDFs,
Word, Notion, etc.).
Vector stores: Store numerical representations (embeddings) of documents for efficient similarity
search and retrieval.
Prompts: Serve as templates to guide the behavior of LLMs, ensuring accurate and context-aware
responses.
Agents: Dynamic decision-makers that select tools or actions based on user input and available
resources.
Chains: Sequences of calls (to LLMs or tools) that execute a workflow—either simple or complex.
LLMs (Large Language Models): The core engines (like GPT-4) that generate responses, interpret
inputs, and perform tasks.
Together, these modules enable Lang Chain to build powerful, modular applications like chatbots, search tools,
and more. Lang Chain helps manage complex workflows, making it easier to integrate LLMs into various
applications like chatbots and document analysis. Key benefits include:
Modular Workflow: Simplifies chaining LLMs together for reusable and efficient workflows.
Prompt Management: Offers tools for effective prompt engineering and memory handling.
For example, when asked a complex research question, an agent can break the task into subparts, use tools like search APIs and data analysis, and create visualizations to provide a comprehensive answer.
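The chaining idea described above can be sketched in plain Python. The model call is stubbed out (`fake_llm` is a placeholder of our own, not a real Lang Chain API), so the example only shows how a prompt template and a model call compose into one reusable chain:

```python
def fake_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. a request to GPT-4).
    return f"[LLM answer to: {prompt}]"

def make_chain(template: str):
    """Return a reusable 'chain': fill the template, then call the model."""
    def chain(**kwargs) -> str:
        prompt = template.format(**kwargs)   # prompt-management step
        return fake_llm(prompt)              # model-call step
    return chain

qa_chain = make_chain("Answer briefly: {question}")
print(qa_chain(question="What is Lang Chain?"))
```

A real chain would swap `fake_llm` for an actual model client; the composition pattern stays the same.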
Key components of an LLM agent include the LLM itself, planning, memory, tool usage, and access to
relevant data sources. By integrating these elements, Lang Chain agents can tackle sophisticated queries and
tasks that would be challenging for standalone LLMs. This makes them incredibly valuable for applications
requiring nuanced understanding, multi-step reasoning, and interaction with various data sources or APIs. In
the following sections, we will delve deeper into how to set up and utilize Lang Chain agents, explore their
functionalities, and demonstrate their practical applications in real-world scenarios. Whether you are
developing a conversational agent, an automated research assistant, or a complex data analysis tool, Lang
Chain agents offer a robust solution to enhance your project’s capabilities.
Figure 1.2 illustrates how an Agent in Lang Chain operates by coordinating inputs, memory, tools, and
planning components to make dynamic decisions and execute tasks based on user requests.
This diagram represents the core working mechanism of an Agent in the Lang Chain framework:
User Request: The starting point of interaction, where the user poses a query or task.
Department of CSE-Data Science, ATMECE, Mysuru
Agent: The central controller that interprets the request and decides what actions to take.
Tools: External utilities (like search APIs, calculators, or language functions) the agent can call on to
solve a problem.
Memory: Stores previous interactions, results, or important contextual information to maintain
continuity or reference past queries.
Planning: Enables the agent to break down complex tasks into steps and determine the most
effective execution strategy.
Together, these components enable the agent to dynamically respond to different types of tasks with
flexibility and intelligence, unlike a rigid, pre-defined system.
In Lang Chain, "chains" refer to the structured ways of organizing the flow of data and processing it
through a large language model (LLM) to generate meaningful responses. Depending on the type and
amount of data being handled, Lang Chain provides different types of chains to make the process
efficient and flexible.
1.5.1 Stuff
Figure 1.3 illustrates the first method of Lang Chain, known as the Stuff Method. This is the simplest
type of chain, where all the information—such as documents or chunks of data—is gathered and
combined into a single prompt. This large prompt is then sent to the language model to generate a
response. The Stuff Method is fast, cost-effective, and works well with small amounts of information.
However, it is not suitable for large datasets, as too much input can overwhelm or confuse the model.
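The Stuff Method can be sketched in a few lines of plain Python. The model call is a stub of our own (`fake_llm` is not a real Lang Chain function); the point is only that every chunk is concatenated into a single prompt:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; echoes a trimmed prompt.
    return "Summary of: " + prompt[:60]

def stuff_chain(docs: list[str], question: str) -> str:
    """Stuff method: concatenate every chunk into one big prompt."""
    context = "\n".join(docs)                      # all docs at once
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return fake_llm(prompt)                        # single LLM call

docs = ["Lang Chain links LLMs to tools.", "Agents pick tools at runtime."]
print(stuff_chain(docs, "What does Lang Chain do?"))
```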
1.5.2 Map_Reduce
Figure 1.4 illustrates the second method of chains in Lang Chain, known as the Map-Reduce Method.
This method is designed for handling large amounts of information. In the "map" step, each document
or chunk is processed independently. In the "reduce" step, the individual outputs are combined to form
a single final response. This approach enables parallel processing, making it more scalable and
efficient. However, since documents are processed independently, it may overlook relationships
between different pieces of information.
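The map and reduce steps can be sketched with a stub summarizer (`fake_summarize` is an illustrative placeholder, not a Lang Chain API): each chunk is summarized independently, then the partial outputs are combined in a second pass:

```python
def fake_summarize(text: str) -> str:
    # Stand-in for one LLM call; keeps only the first few words.
    return " ".join(text.split()[:4])

def map_reduce_chain(docs: list[str]) -> str:
    """Map: summarize each chunk independently (parallelizable).
    Reduce: combine the partial summaries into one final answer."""
    partial = [fake_summarize(d) for d in docs]    # map step
    return fake_summarize(" ".join(partial))       # reduce step

docs = ["Chunk one talks about agents.", "Chunk two talks about tools."]
print(map_reduce_chain(docs))
```

Because the map step has no dependency between chunks, the per-document calls could run in parallel, which is exactly the scalability advantage described above.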
1.5.3 Refine
Figure 1.5 illustrates the third type of chain in Lang Chain, known as the Refine Method. This approach
takes a more careful and sequential path. The language model first processes an initial document and
generates a base response. Then, for each subsequent document, the model refines or updates the
existing response by building upon it. This method is effective for generating richer, more detailed
answers, but it is slower since each step depends on the output of the previous one.
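The sequential nature of the Refine Method is visible in a minimal sketch (`fake_refine` stands in for an LLM call that updates an existing answer; it is not a real Lang Chain function):

```python
def fake_refine(current: str, doc: str) -> str:
    # Stand-in for an LLM call that updates an existing answer.
    return current + " + " + doc

def refine_chain(docs: list[str]) -> str:
    """Refine method: sequential; each step builds on the last output."""
    answer = docs[0]                       # base response from first doc
    for doc in docs[1:]:
        answer = fake_refine(answer, doc)  # refine with the next doc
    return answer

print(refine_chain(["draft", "detail A", "detail B"]))
```

Each iteration needs the previous answer as input, so unlike Map-Reduce this loop cannot be parallelized.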
1.5.4 Map_Rerank
Figure 1.6 illustrates the final method of chains in Lang Chain, known as the Map_Rerank Method. In
this approach, each document is processed separately by the language model, which is instructed to
assign a relevance score to each one. After scoring, the document with the highest relevance is selected
as the final answer. This method is efficient due to its ability to process documents in parallel.
However, it depends heavily on well-designed scoring instructions to ensure accurate ranking by the
model.
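The score-then-select flow can be sketched as follows. Here the "LLM scoring call" is faked with a simple word-overlap count (an assumption for illustration only; a real chain asks the model itself to emit a relevance score with its answer):

```python
def fake_answer_and_score(doc: str, question: str) -> tuple[str, int]:
    # Stand-in for an LLM scoring call: here, score = word overlap.
    overlap = len(set(doc.lower().split()) & set(question.lower().split()))
    return f"Answer from: {doc}", overlap

def map_rerank_chain(docs: list[str], question: str) -> str:
    """Score every doc independently, keep the highest-scoring answer."""
    scored = [fake_answer_and_score(d, question) for d in docs]  # map
    answer, _ = max(scored, key=lambda pair: pair[1])            # rerank
    return answer

docs = ["Agents choose tools.", "Chains are fixed pipelines."]
print(map_rerank_chain(docs, "How do agents choose tools?"))
```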
Lang Chain Agents are used in various real-world applications, such as:
1. Customer Support Chatbots: Agents that can answer user queries by searching databases or
using APIs.
2. Search Assistants: Agents that fetch and summarize information from the web.
3. AI Coding Assistants: Helping users to generate, debug, and explain code using external
resources.
4. Business Automation: Managing repetitive tasks by dynamically interacting with tools like
email, databases, and CRMs.
CHAPTER 2
Working of Lang Chain
Figure 2.1 below shows the working of Lang Chain: it follows a structured pipeline that integrates user queries, data retrieval, and response generation into a seamless workflow.
The process begins when a user asks a question; this question becomes the input for the Lang Chain system.
The user's question is converted into a vector (a set of numbers) using a technique called embeddings.
This vector helps the system understand the real meaning of the question, not just the words.
Lang Chain then searches a vector database to find information that is similar to the user's query.
(It looks for the most related or matching information.)
Based on the similarity search, Lang Chain retrieves the most relevant data or context from the database.
This step makes sure that the system uses correct and related information to prepare the answer.
The collected information is then given to a Language Model (LLM) like OpenAI’s GPT.
The LLM thinks based on the input and generates a complete response.
Example: If the question was about the weather, the model might reply,
“Today’s weather is sunny with a high of 75°F.”
Finally, the response is sent back to the user, providing a clear and useful answer.
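The retrieval steps above (embed the query, search the store by similarity, return the best match) can be sketched with a toy embedding. Real systems use dense vectors from an embedding model; the bag-of-words vector here is an assumption purely for illustration:

```python
import math

def embed(text: str) -> dict:
    """Toy embedding: bag-of-words counts (real systems use dense vectors)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[str]) -> str:
    """Return the stored passage most similar to the query."""
    q = embed(query)
    return max(store, key=lambda doc: cosine(q, embed(doc)))

store = ["Today the weather is sunny.", "Python is a language."]
print(retrieve("what is the weather today", store))
```

The retrieved passage would then be placed into the LLM prompt as context, completing the pipeline in Figure 2.1.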
Figure 2.2 illustrates the workflow of chains in Lang Chain, demonstrating how user input passes through
various stages to produce the final output.
1. User Input:
The process starts when the user submits a query or a request.
Example: “Find the nearest coffee shop.”
2. Step 1: Understanding Intent:
The system analyses the query to understand the user’s actual need or intent.
It figures out what task needs to be performed.
3. Step 2: Searching / Fetching Data:
Based on the identified intent, the system searches for relevant information.
It may interact with databases, APIs, or external tools to collect the required data.
4. Step 3: Formatting Response:
After collecting the information, the system organizes and formats the data into a proper response.
The response is made easy to understand for the user.
5. Final Output:
The final, structured response is delivered to the user.
Example output: “The nearest coffee shop is Starbucks, located 1 km away.”
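The four stages above can be wired together as a small pipeline. The intent detector and data source are toy stand-ins (keyword matching and a hard-coded dictionary, both assumptions for illustration; a real system would use an LLM and live APIs):

```python
def understand_intent(query: str) -> str:
    # Toy intent detection via keywords (a real system would use an LLM).
    return "find_place" if "nearest" in query.lower() else "unknown"

def fetch_data(intent: str) -> dict:
    # Stand-in for a database/API lookup keyed by intent.
    fake_db = {"find_place": {"name": "Starbucks", "distance_km": 1}}
    return fake_db.get(intent, {})

def format_response(data: dict) -> str:
    # Turn raw data into a user-friendly sentence.
    if not data:
        return "Sorry, I could not find anything."
    return (f"The nearest coffee shop is {data['name']}, "
            f"located {data['distance_km']} km away.")

def pipeline(query: str) -> str:
    return format_response(fetch_data(understand_intent(query)))

print(pipeline("Find the nearest coffee shop."))
```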
Agents in Lang Chain act as intelligent decision-makers. They interact with tools, process inputs, and
generate meaningful outputs. Instead of merely providing static responses, agents follow a looped
structure to perform reasoning and iterative steps until the desired output is reached.
Agents are needed because they make language models (LLMs) smarter and more useful. Normally, a language model can only answer what it is asked. But in real life, many tasks are more complicated and need the system to think, choose, use tools, and act by itself. This is where Agents help.
Make Decisions: Agents can think and decide what to do next based on the situation.
Use Tools: Agents can use calculators, search engines, databases, and other external tools to get the
correct answer.
Solve Complex Problems: If a task has many steps, Agents can handle it step-by-step.
Remember Information: Agents can remember previous chats or steps, making them better for long
conversations.
Work in the Real World: Agents help AI not just answer questions but also take real actions like
booking a ticket or finding live weather.
Be Flexible: Agents can adjust and solve new problems without needing new instructions every time.
Agents play a very important role in Lang Chain. They help the system not just answer questions, but also think,
decide, use tools, and perform actions to solve real-world problems.
Decision-Makers: Agents decide which action or tool to use depending on the user’s query.
Tool Users: Agents can connect with external tools (like a search engine, calculator, or database) to
find information or perform tasks.
Multi-Step Problem Solvers: Agents can break down a big problem into smaller steps and solve them
one by one.
Memory Handlers: Agents can remember previous steps or conversations to keep the task on track.
Real-World Interaction: Agents help the AI interact with real-world systems, making it capable of
doing more than just chatting.
Dynamic Workflow Management: Agents can handle tasks where the next step depends on the
previous result — not just fixed flows.
An Agent in Lang Chain is built from the following components:
1. Language Model (LLM): Used for making decisions and interpreting results.
2. Tools: Predefined functions or APIs the Agent can use.
3. Memory (Optional): Stores past interactions or important information during a session.
4. Agent Executor: Controls the execution flow of the Agent, managing how it plans and acts.
Together, these components allow the Agent to perform complex tasks intelligently and efficiently.
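How the Agent Executor ties these components together can be sketched as a decide-act-observe loop. The planner here (`fake_llm_decide`) is a trivial stand-in that just picks the next unused tool; a real agent would have the LLM reason over the task text before choosing:

```python
def fake_llm_decide(task: str, tools: dict, done: list) -> str:
    # Stand-in for the LLM planner: pick the first tool not yet used,
    # then signal "finish". A real agent reasons over the task itself.
    for name in tools:
        if name not in done:
            return name
    return "finish"

def run_agent(task: str, tools: dict, max_steps: int = 5) -> list:
    """Minimal agent executor: decide -> act -> observe, in a loop."""
    memory = []                                       # observations so far
    for _ in range(max_steps):
        choice = fake_llm_decide(task, tools, [m[0] for m in memory])
        if choice == "finish":
            break
        memory.append((choice, tools[choice](task)))  # call the tool
    return memory

tools = {
    "search": lambda q: f"results for '{q}'",
    "calculator": lambda q: "42",
}
print(run_agent("answer the question", tools))
```

The `max_steps` cap mirrors a real executor's iteration limit, which prevents a confused agent from looping forever.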
Figure 2.3 illustrates the working of an agent in Lang Chain. The steps below explain how an agent
operates through the following fundamental stages:
1. User asks a question → The user asks something (e.g., “How do I integrate an API?”).
2. Agent searches for information → The agent looks for relevant details in company resources.
3. Agent retrieves the information → It gathers data from code, documentation, or blog posts.
4. Agent gives the answer → The agent processes the information and provides a response to the
user.
5. Example: If a developer asks, “How do I set up authentication?”, the agent will search company
documents, find instructions, and return the correct setup steps.
In Lang Chain, Tools and Memory are two important features that make applications more powerful, useful,
and real-world ready.
1. Importance of Tools:
Extend Capabilities: Tools allow the language model to go beyond just answering questions:
they can search the internet, access databases, use calculators, and interact with external
systems.
Real-Time Information: By using tools, Lang Chain can fetch live data (like weather updates,
stock prices, or news) instead of relying only on old information.
Action-Oriented: Tools allow agents to not just think but also act — like booking a ticket,
finding a location, or sending an email.
Problem-Solving: Complex problems often need tools (e.g., solving a math problem with a
calculator or retrieving specific files), making the agent much smarter and more helpful.
2. Importance of Memory:
Maintain Conversation Context: Memory helps Lang Chain remember what the user said
earlier, so conversations feel more natural and connected.
Better User Experience: When the system remembers user preferences, past queries, or
important points, it can give more personalized and accurate answers.
Long-Term Interactions: In tasks that need multiple steps over time, memory ensures the AI
stays on track without asking the user to repeat information.
Complex Task Handling: For large workflows or projects, memory helps in managing and
recalling previous steps to complete the full task properly.
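The memory behavior described above can be sketched as a minimal buffer class (a conceptual sketch of our own, not Lang Chain's memory API): every turn is stored, and the history can be replayed as context for the next prompt:

```python
class ConversationMemory:
    """Minimal buffer memory: store turns, replay them as context."""
    def __init__(self):
        self.turns = []

    def save(self, user: str, ai: str) -> None:
        self.turns.append((user, ai))

    def as_context(self) -> str:
        # Flatten the history into text prepended to the next prompt,
        # so the model can "remember" earlier turns.
        return "\n".join(f"User: {u}\nAI: {a}" for u, a in self.turns)

memory = ConversationMemory()
memory.save("My name is Asha.", "Nice to meet you, Asha!")
memory.save("Where do I live?", "You have not told me yet.")
print(memory.as_context())
```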
CHAPTER 3
In addition to installing libraries, the text file "chatbot.txt" was uploaded into the Colab environment.
The chatbot uses the contents of chatbot.txt to find answers and interact with the user.
Google Colab provided an easy, flexible environment for coding, running, and testing the chatbot program.
nltk (Natural Language Toolkit) is imported for natural language processing like tokenizing text and
lemmatizing words.
The contents are converted into lowercase letters using .lower() to maintain uniformity while processing
text (since "Weather" and "weather" should be treated the same).
This step organizes the input text, making it easier for the chatbot to match and respond accurately.
If it detects a greeting, it randomly selects a friendly reply from a predefined list (["hi", "hey", "hello",
"greetings"]) and returns it.
Purpose:
To make the chatbot seem polite, human-like, and welcoming at the start of a conversation.
The following section illustrates how the chatbot functions during runtime, from handling different types of
user inputs to generating appropriate responses, followed by examples of actual output.
Key interactions:
If the user says "bye" ➔ The chatbot says "Bye! take care.." and exits.
If the user says "thanks" or "thank you" ➔ The chatbot replies with "You are welcome.."
If the input is a greeting ➔ It sends a random greeting back.
Otherwise ➔ It calls the response() function to generate a meaningful answer.
Purpose:
To make the chatbot handle greetings, polite conversations, questions, and conversation endings
smoothly.
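The routing logic described above can be sketched as follows. The final fallback is a placeholder for the chatbot's `response()` similarity matcher, and the greeting lists mirror the predefined ones mentioned earlier:

```python
import random

GREETING_INPUTS = ("hi", "hey", "hello", "greetings")
GREETING_REPLIES = ("hi", "hey", "hello there")

def greeting(sentence: str):
    # Return a random greeting if any word is a known greeting, else None.
    for word in sentence.lower().split():
        if word in GREETING_INPUTS:
            return random.choice(GREETING_REPLIES)
    return None

def respond(user_input: str) -> str:
    """Route an input to exit, thanks, greeting, or the main matcher."""
    text = user_input.lower().strip()
    if text == "bye":
        return "Bye! take care.."
    if text in ("thanks", "thank you"):
        return "You are welcome.."
    g = greeting(text)
    if g is not None:
        return g
    return f"[lookup answer for: {user_input}]"  # placeholder for response()

print(respond("hello"))
print(respond("bye"))
```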
CHAPTER 4
A case study provides a real-world example of how theoretical concepts are applied to solve practical problems.
It helps in understanding the actual working, challenges, and outcomes of using a specific technology or
framework.
In this chapter, three case studies are presented based on Lang Chain and its applications in chatbot
development.
These case studies demonstrate how Lang Chain Chains and Agents were used to create intelligent systems that
can interact dynamically with users, select tools intelligently, and solve real-world business problems
efficiently.
Through these examples, the practical capabilities and advantages of using Lang Chain in AI development are
clearly highlighted.
4.2 Case Study 1: Building a Simple FAQ Chatbot Using Lang Chain
Problem Statement:
Organizations often face the need to automate responses to Frequently Asked Questions (FAQs) without hiring
multiple human agents.
The challenge was to create a simple chatbot that could answer a wide range of common queries efficiently,
without depending on live internet access.
Challenges:
The chatbot needed to handle multiple types of questions with accurate responses.
It had to understand user queries even when phrased differently from stored answers.
Solution:
Using the Lang Chain framework, a basic chatbot was developed that reads information from a local file (chatbot.txt).
The solution involved:
Measuring cosine similarity to find the sentence most similar to the user’s query.
Greeting the user intelligently and handling polite conversation exits like "bye" and "thank you".
Thus, the Lang Chain-powered chatbot could respond to queries by matching them to the closest known
sentence, even without live internet connectivity.
Result/Outcome:
The chatbot successfully answered common questions using its preloaded knowledge.
This case study demonstrates how Lang Chain techniques can be used to develop simple yet powerful FAQ
chatbots with minimal setup.
4.3 Case Study 2: Using Lang Chain Agents for Dynamic Tool Selection
Problem Statement:
Modern AI applications often need to perform different types of tasks based on user queries.
For example, a user might ask to perform a calculation, search for information, or summarize a document — in
the same conversation.
The challenge was to build a system that could decide intelligently which tool to use based on the user's
input, instead of following a fixed sequence.
Challenges:
Managing multiple tools and ensuring the correct tool is selected based on the input was complex.
Making decisions without manually hardcoding every possible situation was difficult.
Solution:
The Agent acts as a smart controller between the user and multiple tools.
Based on the user's query, the Agent decides which tool (like a calculator, web search API, or document
summarizer) is appropriate to use.
Agents are built using large language models (LLMs) that can reason about the task, think step-by-step,
and make dynamic choices.
Lang Chain’s Agent framework allows easy integration of multiple tools and uses "reasoning steps" to
plan and act.
Thus, the system became flexible and intelligent — able to handle different tasks without predefined static
flows.
Result/Outcome:
It automatically picked the correct tool at the right time without needing manual instructions.
It provided a better user experience, as users did not have to worry about what tools were available —
the agent handled it internally.
This case study proves how Lang Chain Agents enhance decision-making capabilities and allow the
development of intelligent, multi-functional AI applications.
Problem Statement:
Businesses receive thousands of customer queries every day, ranging from basic FAQs to complex problem
reports.
Handling all these manually leads to high costs, slow response times, and customer dissatisfaction.
The challenge was to develop an intelligent customer support chatbot that could automatically answer queries,
raise support tickets, and escalate complex issues to human agents when needed.
Challenges:
It had to select appropriate actions dynamically — answer FAQs, search databases, or escalate.
Solution:
Lang Chain Agents were used to build a dynamic, intelligent customer support chatbot.
The Agent was trained to think step-by-step about the user's query.
Memory components were added to allow the chatbot to remember conversation history, improving
user experience.
Lang Chain made it easy to integrate external APIs, search systems, and ticketing tools into the chatbot's
workflow.
Thus, the chatbot could handle routine queries automatically and escalate complex issues intelligently —
exactly like a human support executive.
Result/Outcome:
Improved customer satisfaction and reduced support costs for the business.
This case study shows that Lang Chain Agents can be effectively used to build real-world, business-grade AI
solutions in the customer service sector.
4.5 Conclusion
The case studies demonstrate Lang Chain’s versatility in building intelligent AI systems—from simple FAQ
bots to complex, real-world automation. Its modular design, agent-based logic, and tool integration make it
ideal for scalable, high-performing applications. Future improvements like advanced memory and real-time
data access can enhance its capabilities even further.
Overall, Lang Chain enables developers to move beyond static responses and create dynamic, context-aware
systems that can think, act, and adapt—bringing AI closer to real human-like interaction.
CHAPTER 5
CONCLUSION
Lang Chain provides a robust and modular framework for building powerful applications using large language
models (LLMs). It simplifies the integration of LLMs with structured workflows, tools, memory, and dynamic
decision-making. The core idea behind Lang Chain is not just to generate text but to create intelligent systems
that can reason, access external resources, and interact with the environment effectively.
Throughout this report, we explored the fundamentals of Lang Chain, its architecture, key components, and the
critical role agents play. Agents, in particular, enable LLMs to perform complex tasks by dynamically
choosing actions, using tools, and maintaining context across interactions. With capabilities like planning,
memory handling, and external tool usage, agents transform simple AI into adaptive, real-world problem
solvers.
As AI continues to evolve, frameworks like Lang Chain will be vital in bridging the gap between raw language
generation and practical, context-aware applications. The combination of modular design, agent-based
reasoning, and integration with external tools positions Lang Chain as a cornerstone for the next generation of
AI-powered systems.
REFERENCES
[1] Lang Chain Official Website, "Lang Chain Documentation", [Online]. Available: https://www.langchain.dev
[2] Harrison Chase, "Lang Chain: Connecting Language Models to External Tools", [Online]. Available: https://blog.langchain.dev/
[5] J. Brownlee, "Deep Learning for Natural Language Processing", Machine Learning Mastery, 2017.
[6] S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", 4th Edition, Pearson, 2020.
[7] R. Ruder, "An Overview of Deep Learning for Natural Language Processing", arXiv preprint,
arXiv:1708.02709, 2017.
[8] L. Tunstall, L. von Werra, and T. Wolf, "Natural Language Processing with Transformers", O'Reilly
Media, 2022.
[9] A. Vaswani et al., "Attention is All You Need", Proceedings of the 31st International Conference on Neural
Information Processing Systems (NIPS), 2017.
[10] Scikit-learn Developers, "Scikit-learn: Machine Learning in Python", [Online]. Available: https://scikit-
learn.org/stable/