Exploring LangChain's Expression Language (LCEL)
LangChain's Expression Language is an innovative feature designed to enhance the flexibility and functionality of workflows within the LangChain ecosystem. By enabling developers to build complex data pipelines, this language introduces a new level of control over how data is processed, transformed, and routed through different components of a chain. This article will explore the key aspects of the Expression Language, its core components, and practical examples to illustrate its capabilities.
What is LangChain's Expression Language?
The Expression Language in LangChain provides a declarative approach to defining how data should flow through a sequence of operations. It abstracts the complexity of chaining multiple tasks together, allowing developers to focus on defining the logic rather than worrying about the underlying mechanics of execution. The language leverages a series of "runnables" that represent individual units of work, which can be combined, parallelized, or sequenced to create powerful data processing pipelines.
The LangChain Expression Language (LCEL) is a more recent addition to the LangChain framework, introduced to enhance the flexibility and composability of chains within LangChain. LCEL allows developers to use a more expressive syntax when defining and connecting different components, such as templates, language models, and other operations within a LangChain pipeline.
When LCEL Was Introduced:
LangChain Expression Language (LCEL) was introduced in 2023 as part of LangChain's evolution toward a more intuitive and powerful way to define chains and workflows. Rather than arriving in a single headline release, it was rolled out incrementally by the LangChain team, but its introduction marked a significant shift in how chains could be created and manipulated within the framework.
Practical Examples
Before we start, make sure you set up Ollama as explained in the following article:
Creating a Phi LLM object:
from langchain_community.llms import Ollama

# Create an LLM object backed by a locally running Ollama server
llm = Ollama(model='phi3:3.8b')
llm.invoke("tell me a joke")
'Why don\'t scientists trust atoms? Because they make up everything!\n\n\nThis light-hearted, play on words joke relates to the fact that atoms are fundamental components of matter and literally "make up" everything in the sense that they constitute all physical objects. The humor comes from anthropomorphizing atoms as if they were capable of deceit like humans do when saying "they make up everything."'
Prompt Templates
Let's use a prompt template to customize the topic of the joke, as follows:
from langchain.prompts import PromptTemplate

template = """Tell me a joke about {subject}"""
prompt = PromptTemplate(
    input_variables=["subject"],
    template=template,
)
chain = prompt | llm
response = chain.invoke({"subject": "cats"})
print(response)  # Ollama is an LLM, so invoke returns a plain string, not a message object
"Why did the cat join the band? Because it had rhythm! Just kidding, but in all seriousness, here's a light-hearted feline pun for you: Why was the cat sitting on the computer? It needed to 'meow' some space! Remember, cats are full of surprises – just like their sense of humor.\n\n**Note: Always ensure jokes about animals or any other subject maintain respect and positivity."
LangChain templates are a powerful feature in the LangChain framework, designed to streamline the process of crafting prompts for language models. These templates allow developers to define dynamic prompts with placeholders that can be populated with specific inputs at runtime, enabling a consistent structure across different queries while accommodating various contexts. By separating the prompt structure from the data, LangChain templates promote reusability and maintainability in natural language processing tasks. They are particularly useful in applications where the same underlying logic needs to be applied across different datasets or scenarios, such as generating responses, summarizing text, or interacting with structured data. The flexibility of LangChain templates makes them a valuable tool for developers looking to harness the full potential of language models in a scalable and efficient manner. For more information, refer to my previous article:
LangChain provides two key templates: PromptTemplate and ChatPromptTemplate.
Both templates simplify prompt creation and offer adaptability across tasks. For this simple example, we will use the PromptTemplate since we only need a placeholder.
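For comparison, here is a minimal sketch of ChatPromptTemplate, which builds a list of role-tagged messages rather than a single string (the system message here is just an illustrative choice):

from langchain.prompts import ChatPromptTemplate

# ChatPromptTemplate produces role-tagged messages instead of one plain string
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a comedian."),
    ("human", "Tell me a joke about {subject}"),
])
print(chat_prompt.invoke({"subject": "cats"}))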
What is a Chain:
In LangChain, creating a chain can be as simple as piping basic operations together, without the need to define any custom functions. Let's run the previous example using a chain:
# Pipe the PromptTemplate object (not the raw template string) into the LLM
chain = prompt | llm
chain.invoke({"subject": "cats"})
'Why don't cats like to go on dates?
Because they never hear the end of "Paws and Relax!"
(Note: This joke is light-hearted and intended for entertainment. Respect towards animals should always be maintained.)'
The code snippet above uses LangChain's LCEL syntax, demonstrating how to create and use a chain of operations involving a prompt template and a language model (LLM). Let me break it down:
1. chain = prompt | llm: the pipe operator composes the prompt template and the LLM into a single runnable chain.
2. chain.invoke({"subject": "cats"}): the input dictionary fills the {subject} placeholder, and the formatted prompt is passed to the LLM, whose output becomes the chain's result.
Example Flow:
If you look at this chain's execution in a monitoring tool such as Langfuse, LangSmith, or Galileo, you will find it ordered as a chain:
This code demonstrates a simple but powerful pipeline in LangChain, where a template is filled with specific input data and then processed by a language model to generate a response. It shows how you can modularly build and execute operations in a structured and reusable manner using LangChain.
The chaining process works with LangChain prompt templates because they are specifically designed to be composable with other components in the LangChain framework, such as language models (LLMs). This composability is achieved through a few key features that make LangChain prompt templates special:
1. Modularity and Composition: prompt templates are themselves Runnable objects, so they can be freely composed with models, output parsers, and other runnables using the pipe operator.
2. Built-in Integration: templates plug directly into LLMs, chat models, and parsers without any glue code to convert between formats.
3. Data Binding and Placeholder Management: the dictionary passed to invoke is automatically bound to the template's placeholders at runtime (see the sketch after this list).
4. Pipeline Execution: when the chain runs, the template formats the prompt first, and its output flows downstream as the input to the next component.
5. Error Handling and Validation: templates validate that all required input variables are supplied, surfacing errors before the model is ever called.
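To see the data-binding step in isolation, you can invoke the template on its own. A minimal sketch, reusing the prompt object defined earlier:

print(prompt.invoke({"subject": "cats"}))
# Prints the formatted prompt value, roughly: text='Tell me a joke about cats'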
The Old Way of LangChain Chains:
Before LCEL was introduced, creating chains in LangChain involved a more manual and less expressive process. Here’s how the old method worked:
Chains were created by explicitly defining each step in the process. Developers had to manually connect the output of one component to the input of another. This often involved writing more boilerplate code to manage the flow of data between components.
The old method typically supported more linear and straightforward chains. While it was possible to create complex chains, doing so required more effort and less intuitive coding practices.
Without LCEL, there was less flexibility in combining different components. Developers had to be more careful in managing data types, inputs, and outputs between different steps in the chain. Error handling and branching logic were also more cumbersome.
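For illustration, here is roughly what the same joke chain looked like with the legacy LLMChain class (since deprecated in favor of LCEL), reusing the llm and prompt objects from earlier:

from langchain.chains import LLMChain

# The pre-LCEL way: an explicit chain class wiring the prompt to the model
legacy_chain = LLMChain(llm=llm, prompt=prompt)
response = legacy_chain.run(subject="cats")
print(response)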
Do We Have to Have a Placeholder?
No, you don't always need a placeholder in a PromptTemplate. A prompt template can simply be a static string if no variables are needed. However, the template's true advantage comes from using placeholders, allowing dynamic content to be inserted at runtime. You can use placeholders to customize the output by inserting variable data into specific parts of the prompt. If no dynamic input is required, you can omit the placeholders and use a fixed prompt.
Make sure to pass an empty dictionary {} to the invoke method because the PromptTemplate expects a dictionary input, even if no variables are used.
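As a minimal sketch, a static template still invokes the same way:

# A static prompt: no placeholders, so no input variables
static_prompt = PromptTemplate(template="Tell me a joke", input_variables=[])
chain = static_prompt | llm
response = chain.invoke({})  # still a dictionary, just an empty one
print(response)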
RunnableSequence
If you check the type of the chain, you will find that it is a RunnableSequence.
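For example (the exact module path may vary across langchain_core versions):

print(type(chain))
# <class 'langchain_core.runnables.base.RunnableSequence'>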
The RunnableSequence class in LangChain is a powerful tool designed to facilitate the sequential execution of multiple Runnable objects. It allows you to chain together different operations where the output of one step is automatically passed as the input to the next. This chaining mechanism is a cornerstone of creating complex workflows in LangChain, enabling seamless integration of different components like prompts, models, and parsers.
Key Features of RunnableSequence:
Sequential Execution: RunnableSequence ensures that each Runnable in the sequence is executed one after the other, with the output from one serving as the input to the next. This is especially useful in scenarios where you need to process data through multiple stages.
Chaining with Pipe Operator: You can easily construct a RunnableSequence using the pipe (|) operator. For example, if you have a prompt template, a model, and an output parser, you can chain them together like this:
chain = prompt | model | output_parser
result = chain.invoke({"input": "your question"})
Handling Asynchronous Operations: RunnableSequence supports both synchronous and asynchronous execution (see the sketch after this list). This is particularly useful for managing operations that require waiting for external resources or handling large data streams efficiently.
Debugging and Tracing: LangChain provides tools to help you debug and trace your RunnableSequence. You can enable global debugging or use custom callbacks (such as Galileo or Langfuse) to monitor intermediate outputs, making it easier to troubleshoot and optimize your workflows.
Integration with Other Runnables: RunnableSequence can be composed with other Runnable objects, including custom functions or more complex sequences. This modularity allows for the creation of highly flexible and reusable components within your application.
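As referenced above, here is a minimal sketch of asynchronous execution, assuming the joke chain built earlier; every runnable exposes ainvoke as the async counterpart of invoke:

import asyncio

async def main():
    # ainvoke runs the same chain without blocking the event loop
    result = await chain.ainvoke({"subject": "cats"})
    print(result)

asyncio.run(main())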
In LangChain, a RunnableSequence is created by chaining together multiple Runnable objects. This chaining can be done using either the pipe (|) operator or the .pipe() method. When you link these Runnable objects, they automatically form a RunnableSequence, which itself is treated as a single Runnable.
How It Becomes a RunnableSequence:
Chaining with the Pipe Operator:
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()
# Chaining runnables into a RunnableSequence
chain = prompt | model | output_parser
In this example, prompt, model, and output_parser are all Runnable objects. The chain created by prompt | model | output_parser automatically becomes a RunnableSequence, where each component runs sequentially.
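The same sequence can also be built with the .pipe() method mentioned earlier, which behaves identically to the | operator:

# Equivalent construction using .pipe() instead of |
chain = prompt.pipe(model).pipe(output_parser)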
By chaining Runnable objects, LangChain simplifies the process of building complex data processing pipelines, turning individual operations into a seamless, automated workflow.
Runnables
A Runnable is the basic unit of work in LCEL: any component that implements the standard invoke, batch, and stream interface. Prompts, models, and output parsers are all runnables, and LangChain also ships utility runnables such as RunnablePassthrough, which forwards its input unchanged, and RunnableParallel, which runs several runnables against the same input and returns their outputs as a dictionary.
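A minimal sketch of these two utility runnables (the plain lambda is automatically wrapped as a RunnableLambda):

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# Run two branches in parallel over the same input:
# - "original" forwards the input unchanged
# - "upper" applies a plain function to it
parallel = RunnableParallel(
    original=RunnablePassthrough(),
    upper=lambda x: x["subject"].upper(),
)
print(parallel.invoke({"subject": "cats"}))
# {'original': {'subject': 'cats'}, 'upper': 'CATS'}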
Advantages of Using the Expression Language
The primary benefit of using the Expression Language in LangChain is its ability to simplify complex workflows. By providing a set of tools that can be combined in various ways, developers can create sophisticated data processing pipelines without getting bogged down in the intricacies of each component's implementation. This makes it easier to focus on the overall logic and ensures that the system remains maintainable and scalable.
Conclusion
LangChain's Expression Language is a powerful tool for developers looking to build complex workflows with ease. By leveraging the core components like Runnable, RunnablePassthrough, and RunnableParallel, you can create flexible and efficient data processing pipelines. Whether you're working on simple transformations or complex data retrieval tasks, the Expression Language provides the tools you need to get the job done efficiently.
Additional Resources: