Blog — Advancing Analytics

Getting to Grips with the Agentic Framework, LangGraph

Written by Luke Menzies | Jan 9, 2025 3:57:10 PM

Introduction

In the past year, the concept of agents for Large Language Models (LLMs) has emerged as a fascinating focal point within the field of Artificial Intelligence (AI). These advanced agents, which possess the ability to engage in intricate interactions, solve complex problems, and even undertake creative tasks, are increasingly gaining traction across a variety of sectors, from education to entertainment. This growing interest can largely be attributed to their remarkable capability to outperform many standalone LLMs in handling complex tasks, coupled with the rapid advancements in user-friendly, open-source agentic frameworks. Consequently, agents have captured the attention of both businesses and individuals alike, all of whom are eager to innovate the ways in which they utilise Large Language Models.

One of the leading frameworks available in this domain is known as LangGraph. This framework has been developed on the foundation of LangChain, which stands as one of the dominant forces in the LLM market and has been progressively evolving towards agentic capabilities. This guide aims to introduce readers to a method of creating sophisticated solutions that transcend simple question-and-answer interactions, delving into the potential of these agents to address more complex problems.

By utilising LangGraph, users can harness the power of LLMs in a manner that is both intuitive and effective, enabling them to tackle a wide range of challenges. This guide will explore the fundamental principles of LangGraph, provide practical examples, and offer insights into best practices for leveraging these agents in various applications. Whether you are a developer looking to build innovative solutions or a business professional aiming to enhance operational efficiency, LangGraph presents an exciting opportunity to explore the future of AI.

How does LangGraph Work?

LangGraph is a library built upon the robust foundation of LangChain, specifically designed to enhance applications that utilise LLMs. While LangChain facilitates the creation of linear workflows, in which calls are chained together in sequence, LangGraph takes this concept a step further by incorporating customisable, LLM-driven cyclical graph relations. This enhancement is vital for developing more complex, agent-like behaviours that can respond dynamically to various inputs and scenarios.

At the heart of LangGraph are three key components:

Stateful graph - In this structure, each node represents a discrete computation step, and the graph maintains a dynamic state that evolves throughout the entire process. This allows for a more nuanced understanding of the context and progression of tasks, enabling the system to respond intelligently as conditions change.

Nodes - These serve as the fundamental building blocks of the structure. Each node is responsible for executing specific functions, such as processing inputs, making decisions, or performing calculations. By defining the roles and capabilities of each node, developers can create intricate workflows that align with their application's requirements.

Edges - These define the connections between nodes, dictating how the state flows through the graph and the order in which computations occur. Simple edges pass the state directly from one node to the next, while conditional edges apply custom logic to route the state down different paths depending on the situation.

By customising the graph structure, state, and coordination mechanisms, LangGraph enables developers to create sophisticated, multi-actor applications that can intelligently adapt to changing inputs. This flexibility not only enhances the functionality of LLM applications but also opens up new avenues for innovation in various fields, from customer service automation to advanced data analysis. As you explore LangGraph, you will discover the potential to transform simple interactions into dynamic, engaging experiences that truly harness the power of Large Language Models.

Getting Started with LangGraph

The first step in getting started with LangGraph is to clearly define the problem you aim to solve. Unlike traditional techniques in AI, such as regression analysis, the approach to solving a problem can vary significantly depending on the specific context and requirements. Consequently, you will need to construct a network of tools and logic to effectively retrieve the final answer you seek.

To begin, consider the operations you may need to implement within your solution. For instance, you might require querying a SQL database, loading data into a Pandas DataFrame, calling an external API, or performing Retrieval-Augmented Generation (RAG) implementations, among others. Each of these operations offers unique capabilities and can be tailored to fit the needs of your project.

Once you have identified the components that are most suitable for the solution you wish to build, you can proceed to design your LangGraph application. This iterative process allows you to refine your approach as you gain a deeper understanding of the problem at hand and the tools available to you. By carefully selecting and integrating these components, you will be well on your way to developing a robust and effective application that leverages the full potential of LangGraph.

Setting up a Graph

The next step is to determine which variables are required for the state object, which will need to evolve throughout the graph. The state object can be set up as a Python class that adheres to the Pydantic convention, which is a popular choice for data validation and settings management in Python. This approach not only ensures that the data is well-structured but also enhances the clarity and maintainability of your code.

When defining the state, you can specify the various attributes that will hold the necessary information as the computation progresses. This allows for a dynamic representation of the state, which can change in response to the operations and interactions occurring within the graph. Here’s a basic example of how you might define the state using a Pydantic model:


The variable in this example is 'messages', which represents one aspect of the state that may be relevant to your application. You can easily expand this model by adding more variables as necessary to accommodate the requirements of your specific use case.

By establishing a well-defined state object, you enable your LangGraph application to track and adapt to changes in data over time, thereby enhancing its ability to respond intelligently to varying inputs and conditions. This foundational step is crucial for creating a responsive and effective solution. The StateGraph is then initialised with the statement below:

Now it is time to begin considering the necessary nodes for your graph. A node can encompass a wide range of functionalities, including Python logic, API calls, interactions with LLMs, and much more. Each node serves as a fundamental building block within your graph, executing specific tasks and contributing to the overall functionality of your application.

When designing your nodes, think about the operations you need to perform and how these nodes will interact with one another. For instance, you may have a node that retrieves data from a database, another that processes this data using Python logic, and yet another that communicates with an external API to fetch additional information. By strategically organising these nodes, you can create a comprehensive workflow that efficiently addresses your problem.

You can set one up using the following code:

This setup will execute whatever code is placed within the function once it is integrated into the graph. In order to establish connections within the graph, edges need to be created between the nodes. These edges define the relationships and flow of information, allowing data to move from one node to another.

There are several types of edges that can be created, each serving different purposes depending on the complexity of your application. For example, simple edges can connect multiple nodes, facilitating straightforward data transmission. You might also encounter conditional edges, which allow for more intricate decision-making processes.

To create a simple edge that connects two nodes, you can use the following expression:

This functionality allows the developer to chain together tasks, thereby effectively creating workflows that can handle a variety of operations. While this capability is invaluable for establishing connections between each task (or node), there will almost certainly be occasions where the user needs to fork the decision-making process. This means that there could be two or more distinct routes that can be taken from a particular node, depending on the conditions at play.

To achieve this, you will require a conditional edge, which enables the application of custom logic to determine the appropriate route based on specific criteria. Conditional edges are particularly useful for implementing decision trees, where the outcome of a node influences subsequent actions within the graph.

A conditional edge can be set up in the following way:

This capability provides the flexibility to direct the state along whichever route is required. A pertinent example of this might arise when a user requests information from an API based on a text input. In such cases, the decision regarding which route to take can be determined by the input being processed by a Large Language Model (LLM). The LLM can analyse the input and decide whether it should be directed to a function that calls the relevant API endpoint, allowing for a dynamic and responsive interaction with the user.

Moreover, it is important to highlight that LangGraph necessitates the establishment of an edge at the START of the graph. This foundational edge serves as an anchor point, informing the graph of where to initiate execution. Without this starting edge, the graph would lack direction, rendering it unable to commence the workflow effectively.

To set up a starting edge, you can use the following code:

Utilising LLM Tools for Chats

In addition to making decisions at a conditional edge, tools can also be provided to the Large Language Model (LLM), which it can utilise whenever it deems it appropriate. For instance, if the input text requests the current weather forecast for Bristol, a tool could be made available to the LLM that enables it to call an endpoint specifically designed to retrieve the latest weather information for that location.

This integration can be achieved by creating a function that interfaces with the weather API and then passing it to LangChain using the bind_tools command. This allows the LLM to assess whether using the tool is necessary to adequately address the question it has received. The LLM's ability to leverage external tools not only enhances its functionality but also enables it to provide more accurate and timely responses to user queries.

In order to set this up, there are a few steps required:

Let us unpack the code presented above. The tool function is defined and can be passed to LangChain in the form of a list. This list can subsequently be incorporated as a runnable within the assistant node. From this point, the assistant is empowered to determine whether to execute the tool in response to the incoming request via the conditional edge that has been established.

Furthermore, this approach includes an error handling fallback mechanism, which ensures that if there is an issue with the tool, a graceful response is returned to the user. This is particularly important in maintaining a smooth user experience, as it prevents the application from failing abruptly and instead provides informative feedback.

Although this method may initially appear complex, it offers a significant advantage by allowing the LLM a degree of autonomy in deciding whether to utilise the tool or not. This flexibility can be extremely beneficial when developing applications such as chatbots, as it enables the LLM to intelligently navigate the questions posed by the user.

For instance, if a user asks for the weather, the LLM can assess whether it has sufficient information to answer directly or whether it should invoke the tool to retrieve the most accurate and up-to-date data. By leveraging this autonomy, the LLM can provide more nuanced and contextually relevant responses, enhancing the overall interaction quality.

Executing Graphs

There are several methods available for executing a graph within LangGraph. In the context of a chatbot, one particularly effective approach is to stream responses dynamically. Streaming allows the chatbot to deliver information in real-time, enhancing the user experience by providing immediate feedback and fostering a more conversational interaction. This can be achieved in the following way:

While streaming is a suitable method for executing a chatbot, it may not always align with the user's needs or expectations. In situations where a developer desires a more defined or structured end, it would be more appropriate to utilise the invoke command:

In this example, starting parameters can be passed to the initial state, providing the developer the flexibility required. Additional config options can also be passed to the invoke command, such as recursion_limit. This controls the maximum number of iterations the graph will complete, ensuring a graph doesn't run indefinitely if stuck in a loop. 

It is important to note that when using the invoke command, it is essential to define an END for your graph. Much like the START node, the END node serves as a critical marker that signifies the conclusion of the graph's execution. This ensures that the flow of operations is clearly delineated, allowing the system to understand when to cease processing and return a final output to the user.

To add an END node to your graph, you can follow this expression:

Displaying Graphs

LangGraph offers an excellent means of displaying the graph, allowing users to save it in PNG format. This feature is incredibly useful for illustrating the functionality of the graph while also helping to visualise all the possible routes and connections within it. To display your graph, use this expression:

This will display the graph, which can then be saved as a PNG.

Conclusion

In conclusion, LangGraph emerges as a powerful and versatile framework for harnessing the capabilities of LLMs in a structured and dynamic manner. Through its innovative approach to graph-based design, LangGraph enables developers to create sophisticated applications that can intelligently adapt to user inputs and complex workflows. By understanding the fundamental concepts of stateful graphs, nodes, and edges, users can construct intricate systems that not only respond to queries but also engage in meaningful interactions.

As the field of AI continues to evolve, the demand for Agent Frameworks that can dramatically enhance interactions with LLMs will only grow. LangGraph stands at the forefront of libraries available, offering an exciting opportunity for developers and businesses alike to explore the future of intelligent applications.

By embracing the principles and practices outlined in this guide, you can unlock the full potential of LangGraph, paving the way for innovative solutions that truly leverage the power of Agents.

At Advancing Analytics, we’re here to help you navigate the complexities of AI and bring your data-driven vision to life. If you're looking to develop impactful, intelligent solutions that align with your business goals, we’re ready to partner with you.

Explore how our consultancy can guide your next project - visit our website or get in touch today!