November 21, 2024
Conversational agents in LangChain facilitate interactive and dynamic conversations with users.
Conversational agents are optimized for conversation. Other agents are often optimized for using tools to figure out the best single response, which is less than ideal in a conversational setting where you may want the agent to be able to chat with the user as well.
Conversational agents can engage in back-and-forth conversations, remember previous interactions, and make contextually informed decisions.
Non-conversational agents, on the other hand, have different focuses and capabilities. While they can still generate text based on input, they typically lack the interactivity, memory, and context handling of conversational agents. They're better suited to tasks such as text generation, language translation, or sentiment analysis than to interactive conversations.
Conversational agents in LangChain offer distinct features and functionalities that make them unique and tailored for interactive and dynamic conversations with users.
They’re different from other agents in LangChain in a few ways:
1) Focus on Conversation: Conversational agents are designed to facilitate interactive and dynamic conversations with users. They are optimized for conversation and can engage in back-and-forth interactions, remember previous interactions, and make contextually informed decisions.
2) Multi-turn Interactions: Conversational agents excel in handling multi-turn conversations, where users can ask follow-up questions or provide additional information. They can maintain the context of the conversation and provide coherent and meaningful responses based on the entire conversation history.
3) Dynamic Decision-making: Conversational agents can make dynamic decisions based on the current conversation context and available information. They can retrieve and integrate real-time data from external systems through APIs, enabling them to provide up-to-date and accurate responses.
ConversationBufferMemory
Ensure you take care of imports, setting your OpenAI key, etc.
%%capture
!pip install langchain openai duckduckgo-search youtube_search wikipedia langchainhub
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter Your OpenAI API Key:")
from langchain.agents import Tool, AgentType, initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.utilities import DuckDuckGoSearchAPIWrapper
from langchain.agents import AgentExecutor
from langchain import hub
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.tools.render import render_text_description
Now, let’s get started.
I like using DuckDuckGo for search. It doesn't require an API key, which is great: one less token to track and worry about.
You can set up the toolkit as follows:
search = DuckDuckGoSearchAPIWrapper()
search_tool = Tool(name="Current Search",
func=search.run,
description="Useful when you need to answer questions about nouns, current events or the current state of the world."
)
tools = [search_tool]
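Before handing the tool to an agent, it's worth a quick sanity check on its own. A minimal example (the query here is arbitrary):
# Try the tool directly; this prints raw DuckDuckGo results as a string
print(search_tool.run("LangChain conversational agents"))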
The initial step involves creating a chat_history component within the prompt. This prevents “memory loss”, enabling the agent to retain context from previous interactions and enhancing its effectiveness.
memory = ConversationBufferMemory(memory_key="chat_history")
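If you're curious what this buffer actually holds, you can poke at a throwaway instance. A minimal sketch (the example messages are made up, and we use a separate instance so the agent's real memory stays clean):
# Throwaway memory just to see how turns accumulate
demo_memory = ConversationBufferMemory(memory_key="chat_history")
demo_memory.save_context({"input": "hi"}, {"output": "hello!"})
print(demo_memory.load_memory_variables({}))
# {'chat_history': 'Human: hi\nAI: hello!'}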
Let’s use a chat model, in this case, GPT-4-Turbo!
You want to make sure that the temperature is set to zero. The agent needs to adhere to the ReAct framework and output its responses exactly as we specify. If you set the temperature higher, the LLM will start taking liberties with the prompt and won't conform its outputs to the expected format.
I suspect that’s less of an issue with GPT-4, since it’s already pretty smart, and more so something to worry about when building agents with smaller LLMs.
llm = ChatOpenAI(model = "gpt-4-1106-preview", temperature=0)
You’ll also equip the agent with the LLM, tools, and memory:
agent_chain = initialize_agent(tools,
llm,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
memory=memory,
verbose=True)
agent_chain.run(input="What it do, nephew!")
> Entering new AgentExecutor chain...
```
Thought: Do I need to use a tool? No
AI: "What it do, nephew!" is another informal and friendly greeting, similar in use to "What's up?" or "How's it going?" It's all good here! How can I help you today?
```
> Finished chain.
"What it do, nephew!" is another informal and friendly greeting, similar in use to "What\'s up?" or "How\'s it going?" It\'s all good here! How can I help you today?\n```
agent_chain.run(input="I'm Harpreet Sahota, the Data Scientist, search me up bruv.")
> Entering new AgentExecutor chain...
```
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Harpreet Sahota Data Scientist
```
Observation: Harpreet Sahota, a data science expert and deep learning developer at Deci AI, joins Jon Krohn to explore the fascinating realm of object detection and the revolutionary YOLO-NAS model architecture. Discover how machine vision models have evolved and the techniques driving compute-efficient edge device applications. A new generation of data scientists is emerging—those who understand and masterfully leverage Generative AI. Harpreet Sahota is the host of The Artists of Data Science podcast; the only personal growth and development podcast for Data Scientists. A proud data science generalist with strong business acumen, Harpreet works by day to define and execute strategies that demonstrate the value of the data. Recommended Content Security Harpreet Sahota joins us from Deci today to detail YOLO-NAS as well as where Computer Vision is going next. Harpreet: ... • Through prolific data science content creation, including The Artists of Data Science podcast and his LinkedIn live streams, Harpreet has amassed a social-media following in excess of 70,000 followers. ... Plus upcoming panel discussion, text-guided image-to-image generation with Stable Diffusion, and a framework for generating synthetic data for LLMs The Generative Generation Subscribe
Thought: Do I need to use a tool? No
AI: Harpreet Sahota is recognized as a data science expert and deep learning developer at Deci AI. He has appeared on a podcast with Jon Krohn discussing object detection and the YOLO-NAS model architecture, which is relevant to machine vision models and their applications on edge devices. Harpreet is also the host of The Artists of Data Science podcast, which focuses on personal growth and development for data scientists. He is known for his strong business acumen and strategy execution in demonstrating the value of data. Additionally, Harpreet has a significant social media presence, with over 70,000 followers, and is involved in content creation related to data science, including LinkedIn live streams. He has also been involved in discussions about text-guided image-to-image generation with Stable Diffusion and generating synthetic data for large language models (LLMs).
> Finished chain.
Harpreet Sahota is recognized as a data science expert and deep learning developer at Deci AI. He has appeared on a podcast with Jon Krohn discussing object detection and the YOLO-NAS model architecture, which is relevant to machine vision models and their applications on edge devices. Harpreet is also the host of The Artists of Data Science podcast, which focuses on personal growth and development for data scientists. He is known for his strong business acumen and strategy execution in demonstrating the value of data. Additionally, Harpreet has a significant social media presence, with over 70,000 followers, and is involved in content creation related to data science, including LinkedIn live streams. He has also been involved in discussions about text-guided image-to-image generation with Stable Diffusion and generating synthetic data for large language models (LLMs).
You can confirm that the agent has memory, like so:
agent_chain.run(input="Who were we just talking about?")
> Entering new AgentExecutor chain...
```
Thought: Do I need to use a tool? No
AI: We were just talking about Harpreet Sahota, the Data Scientist.
```
> Finished chain.
We were just talking about Harpreet Sahota, the Data Scientist.
agent_chain.run(input="Seems like a pretty cool dude to me.")
> Entering new AgentExecutor chain...
```
Thought: Do I need to use a tool? No
AI: Harpreet Sahota certainly has made a name for himself in the data science community and has contributed to various discussions and educational content in the field. It's great to hear that you think he's a cool dude!
```
> Finished chain.
Harpreet Sahota certainly has made a name for himself in the data science community and has contributed to various discussions and educational content in the field. It's great to hear that you think he's a cool dude!
Awesome! LangChain wants us to start moving to LCEL (the LangChain Expression Language), so let me show you how to create an agent that way.
This follows the same flow as above, just using the expression language.
You can inspect the ReAct prompt template below and observe that it is, in fact, a partial prompt template.
A what? Partial. Prompt. Template.
Allow me to explain…
Partial prompt templates in LangChain offer a flexible way to work with prompt templates by allowing users to predefine a subset of required values. This is especially beneficial when some values are known beforehand, enabling a more streamlined approach to formatting the remaining values later.
LangChain provides two primary methods for creating partial prompt templates:
1) Partial with Strings: preset some variables with literal string values you already know.
2) Partial with Functions: preset variables with functions that are called at format time, which is handy for values like the current date or time.
Consider a complex prompt template that requires multiple variables, say name, location, and time for a personalized story. If name and location are known beforehand, you can craft a partial template with those values preset; the resulting simpler template only needs the remaining variable, time.
Partial prompt templates in LangChain enhance the reusability of prompt templates and diminish complexity. By allowing users to preset specific values, they maintain the original template structure while simplifying the formatting process.
Using partial prompt templates with strings is particularly useful when you receive some variables earlier than others: you can lock those values in as they arrive rather than carrying them around until format time.
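Here's a minimal sketch of both methods on a toy template (the variable names and helper function are just for illustration):
from datetime import datetime
from langchain.prompts import PromptTemplate

# Partial with strings: preset the values you already know
story = PromptTemplate.from_template("Tell a story about {name} in {location} at {time}.")
story_partial = story.partial(name="Harpreet", location="Winnipeg")
print(story_partial.format(time="midnight"))

# Partial with functions: the value is computed when the prompt is formatted
def _current_time() -> str:
    return datetime.now().strftime("%H:%M")

stamped = PromptTemplate.from_template("It is {time}. {question}")
stamped_partial = stamped.partial(time=_current_time)
print(stamped_partial.format(question="What should I wear outside?"))
With that in mind, pull the ReAct chat prompt from the LangChain Hub and take a look: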
agent_prompt = hub.pull("hwchase17/react-chat")
print(agent_prompt.template)
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
TOOLS:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
{agent_scratchpad}
Now, let’s go ahead and fill in the tool-related variables, turning this into a partial prompt template like so:
prompt = agent_prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
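You can confirm the partialling worked by checking which variables remain unfilled; only the runtime ones should be left:
print(prompt.input_variables)
# should list only input, chat_history, and agent_scratchpad (order may vary)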
# Stop generation before the model starts hallucinating its own "Observation"
llm_with_stop = llm.bind(stop=["\nObservation"])
You may notice that with an off-the-shelf agent you set up the chain with initialize_agent, but below you're using AgentExecutor directly.
The key differences between AgentExecutor and initialize_agent are:
• AgentExecutor is a class that executes actions from tools sequentially in a chain. It is part of the lower-level agent infrastructure.
• initialize_agent is a convenience function that creates an agent from tools and an LLM. It constructs an AgentExecutor under the hood and provides a simple interface for creating different agent types.
In summary: initialize_agent is higher-level and more convenient, while AgentExecutor is lower-level and is used directly when you need more control over the agent execution chain.
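You can verify that relationship on the chain we built earlier:
# initialize_agent gave us back an AgentExecutor all along
print(type(agent_chain).__name__)  # AgentExecutor
Now, let's assemble the agent as an LCEL pipeline: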
agent = (
    {
        # The user's latest message
        "input": lambda x: x["input"],
        # Render intermediate (action, observation) pairs into the ReAct scratchpad
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
        # Prior turns, supplied by the memory object
        "chat_history": lambda x: x["chat_history"],
    }
    | prompt
    | llm_with_stop
    | ReActSingleInputOutputParser()
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
agent_executor.invoke({"input": "What's the forecase for snow looking like in Winnipeg today?"})["output"]
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Winnipeg snow forecast today
Observation: Winnipeg, MB - 7 Day Forecast - Environment Canada C. Sunrise: 8:12 CST Sunset: 16:28 CST Averages and extremes 03 Dec Average high -6.8 °C Average low -15.6 °C Highest temperature (1938-2007) 6.7 °C 1941 Lowest temperature (1938-2007) -33.3 °C 1964 Greatest precipitation (1938-2007) 6.6 mm 1961 Greatest rainfall (1938-2006) 5.6 mm 1941 Winnipeg is also expected to see some of the white stuff today. Environment Canada's forecast for Friday says periods of rain will begin early in the morning then changing to snow in the... Warnings for wintry weather are in effect. It did, however, make a mark in the record books on Thursday, when it reached a high of 8.6 C. The old record of 5 C was set in 1939. The normal high for ... °F Observed at: Winnipeg Richardson Int'l Airport Date: 8:00 AM CDT Friday 20 October 2023 Condition: Mostly Cloudy Pressure: 100.6 kPa Tendency: Rising Temperature: 2.8°C Dew point: 2.1°C Humidity: 95% Wind: SSE 3 km/h Visibility: 18 km Forecast Hourly Forecast Air Quality Alerts Jet Stream Fri 20 Oct 17°C Detailed forecast for the next 24 hours - temperature, weather conditions, likelihood of precipitation and winds ... Hourly Forecast - Winnipeg . No alerts in effect. Date/Time (CST) Temp. (°C) Weather Conditions Likelihood of precip (%) ... Periods of rain mixed with snow. 100: N 20 : 10:00 : 1 : Periods of rain mixed with snow. 100: N 20 ...
Thought: Do I need to use a tool? No
Final Answer: The forecast for Winnipeg today includes periods of rain early in the morning changing to snow later on. Environment Canada has issued warnings for wintry weather. As of the last observation at Winnipeg Richardson International Airport, the condition was mostly cloudy with a temperature of 2.8°C, and there is a 100% likelihood of precipitation with periods of rain mixed with snow expected. Please note that weather conditions can change rapidly, so it's always a good idea to check the latest forecast if you're planning to go out.
> Finished chain.
The forecast for Winnipeg today includes periods of rain early in the morning changing to snow later on. Environment Canada has issued warnings for wintry weather. As of the last observation at Winnipeg Richardson International Airport, the condition was mostly cloudy with a temperature of 2.8°C, and there is a 100% likelihood of precipitation with periods of rain mixed with snow expected. Please note that weather conditions can change rapidly, so it's always a good idea to check the latest forecast if you're planning to go out.
agent_executor.invoke({"input": "What did I just ask you about?"})["output"]
> Entering new AgentExecutor chain...
```
Thought: Do I need to use a tool? No
Final Answer: You just asked about the forecast for snow in Winnipeg today.
```
> Finished chain.
You just asked about the forecast for snow in Winnipeg today.
agent_executor.invoke({"input": "Will it look like a white Christmas there?"})["output"]
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Winnipeg white Christmas forecast 2023
Observation: With 7 C in forecast, white Christmas may be dream By: Nicole Buffie Posted: 5:42 PM CST Monday, Dec. 4, 2023 Last Modified: 7:38 AM CST Tuesday, Dec. 5, 2023 Updates Winnipeggers hoping... Published Dec. 7, 2023 1:52 p.m. PST Share If you're anything like Michael Bublé or Bing Crosby before him, you're dreaming of a white Christmas, just like the ones you used to know. But as... WATCH: 2023-2024 winter weather forecast -- here's what Canadians can expect - Dec 1, 2023 After three consecutive La Niña winters, a moderate El Niño is now well established in the central... November 30, 2023 Share Facebook Email For daily wit & wisdom, sign up for the Almanac newsletter. Are you dreaming of a White Christmas? In 2023, your dreams might come true! Of course, lots of snow can also affect travel plans. As always, The Old Farmer's Almanac looks ahead with our special Christmas Forecast 2023. Christmas! December is here: Will there be a white Christmas in 2023? Experts weigh in. Doyle Rice USA TODAY 0:00 1:39 It's about time to turn our attention to the December holidays, including...
Thought: Do I need to use a tool? No
Final Answer: Based on the information available, it seems that the chances of a white Christmas in Winnipeg are uncertain, with a forecast of 7°C suggesting that a traditional snowy Christmas may not be guaranteed. Weather conditions can change, so it's always best to check closer to the date for the most accurate forecast.
> Finished chain.
Based on the information available, it seems that the chances of a white Christmas in Winnipeg are uncertain, with a forecast of 7°C suggesting that a traditional snowy Christmas may not be guaranteed. Weather conditions can change, so it's always best to check closer to the date for the most accurate forecast.
And there you have it — how to set up an agent both ways!
In summary, the blog discusses the setup and utilization of Conversational Agents in LangChain, emphasizing their capabilities to engage in interactive dialogues and remember past interactions for contextually informed decision-making.
Key aspects include:
1) Setting up tools, such as DuckDuckGo search, that the agent can call.
2) Using ConversationBufferMemory with a chat_history component to retain context from previous interactions.
3) Building the agent two ways: with the convenient initialize_agent helper, and as an LCEL pipeline run by an AgentExecutor.
These steps provide a comprehensive guide for creating and operating a conversational agent capable of handling complex interactions and tasks in a dynamic environment.