LangChain is an open-source framework which helps you compose a variety of language chains (such as chat systems, QA systems, etc.) with LLMs, and it also provides a framework for building Reasoning + Acting (ReAct) chains.
In this post, I'll briefly show you what ReAct is, how it works, and how to build it with OpenAI GPT and the widely used LangChain.
Many tools and open-source projects – such as Auto-GPT and HuggingGPT (JARVIS) – now automate a variety of tasks by reasoning with LLMs, and this post will give you a hint for building such applications.
ReAct (Reasoning + Acting) – Collaboration between LLMs and external apps
Integrating LLMs (large language models) with external apps is often needed in real business applications.
For instance, Microsoft has integrated OpenAI LLMs with their Office applications in Microsoft 365 Copilot.
Another example is search. LLMs don't always give correct answers, so a method of interacting with an external search engine (for both the internet and an intranet) is often applied in QA systems.
Flexible architectures that combine LLM reasoning with additional experts have been proposed in several papers – such as ReAct (Reasoning + Acting) and MRKL (Modular Reasoning, Knowledge and Language).
ReAct is a framework introduced in this paper (Yao et al., 2022) that improves decision making by integrating LLMs with external tools.
As you'll see in this post, this approach is so flexible that you can apply it to various types of orchestration between LLMs and external applications.
For instance, we can define an action “search[entity]” which returns the sentences from the corresponding Wikipedia entity. A multi-hop question (which cannot be answered from a single Wikipedia entity, but can be answered by referencing multiple entities) can then be disassembled into multiple search actions. All search actions are processed by external tools, and the final answer is obtained by question-answering with the LLM. (See below for the example from the paper.)
From “REACT: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS” (Yao et al., 2022)
Note : In the benchmarks in this paper, a “lookup[entity]” action is also introduced, which returns the next sentence in the page containing the given string, simulating Ctrl+F functionality in a browser.
These actions (search, lookup) simulate how a human would look up information.
This reasoning and acting can be performed with few-shot in-context prompting, or it can be learned by reinforcement learning or imitation learning (with expert data) in more advanced approaches.
In this post, we will see ReAct with few-shot prompting.
Set up the LangChain environment
In this example, I have used Ubuntu Server 20.04 LTS in Microsoft Azure.
This virtual machine already has Python 3.8, so you just need to install the additional packages for running the examples as follows.
# Update pip
sudo apt-get update
sudo apt-get install -y python3-pip
sudo -H pip3 install --upgrade pip
# Install packages
pip3 install openai
pip3 install langchain==0.0.100
pip3 install wikipedia
Before running the application, set the environment variables for Azure OpenAI integration in LangChain. (See this document for how to use Azure OpenAI Service in LangChain.)
export OPENAI_API_TYPE=azure
export OPENAI_API_VERSION=2022-12-01
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
export OPENAI_API_KEY="..."
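If you prefer, the same variables can also be set from inside Python before creating the LLM object. (This is just a minimal sketch; the resource name, API version, and key below are placeholders that you should replace with your own values.)
import os

# Set Azure OpenAI settings for LangChain (placeholder values)
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "..."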
Run Your First ReAct with LangChain
Now let’s start your first ReAct chain.
This is the Wikipedia docstore example included in LangChain, in which the question is disassembled into multiple search actions.
In this demonstration, I have used the text-davinci-003 (GPT-3.5) model with temperature 0.0.
The following code finally executes the ReAct chain with the run() function to get the answer to the following question:
“Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?”
from langchain.docstore.wikipedia import Wikipedia
from langchain.llms import AzureOpenAI
from langchain.agents import initialize_agent, Tool, AgentExecutor
from langchain.agents.react.base import DocstoreExplorer
# build tools
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="Search for a term in the docstore.",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="Lookup a term in the docstore.",
    ),
]

# build LLM
llm = AzureOpenAI(
    deployment_name="davinci003-deploy",
    model_name="text-davinci-003",
    temperature=0,
)

# initialize ReAct agent
react = initialize_agent(tools, llm, agent="react-docstore", verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=react.agent,
    tools=tools,
    verbose=True,
)

# perform question-answering
question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
agent_executor.run(question)
This question cannot be answered by referring to a single Wikipedia article, so it needs multi-hop search actions.
When you run this application, the following verbose output is shown in the logs.
As you can see, the question is disassembled into three search actions (“Search[David Chanoff]”, “Search[U.S. Navy admiral]”, and “Search[Admiral William J. Crowe]”) in order, and the chain gets the final answer “Bill Clinton”.
What is LangChain doing inside? (Demystifying ReAct)
Now let's see how it works inside LangChain, and dive into the implementation of the ReAct framework.
At the bottom of this chain, the following prompt is issued to OpenAI GPT. (You can see the prompt's template by running the following code.)
The first part of the prompt template (which is quite long) consists of few-shot in-context examples. OpenAI GPT learns from these examples how to generate text for the target question.
The last part (the last 2 lines in the output) is the target question; the {input} and {agent_scratchpad} placeholders are replaced by the LangChain framework depending on the user's input and the chain's steps. (I'll show you examples soon.)
# output prompt's template
print(react.agent.llm_chain.prompt.template)
output
Question: What is the elevation range for the area that the eastern sector of the
Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector
of the Colorado orogeny extends into, then find the elevation range of the
area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in
Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern
sector.
Action 2: Lookup[eastern sector]
Observation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called
the Central Plains orogeny.
Thought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I
need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions
Thought 4: I need to instead search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains. From east to west, the
High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130
m).[3]
Thought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer
is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]
Question: Musician and satirist Allie Goertz wrote a song about the "The Simpsons"
character Milhouse, who Matt Groening named after who?
Thought 1: The question simplifies to "The Simpsons" character Milhouse is named after
who. I only need to search Milhouse and find who it is named after.
Action 1: Search[Milhouse]
Observation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated
television series The Simpsons voiced by Pamela Hayden and created by Matt
Groening.
Thought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up
"named after".
Action 2: Lookup[named after]
Observation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose
middle name was Milhous.
Thought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is
Richard Nixon.
Action 3: Finish[Richard Nixon]
Question: Which documentary is about Finnish rock groups, Adam Clayton Powell or The
Saimaa Gesture?
Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which
documentary is about Finnish rock groups.
Action 1: Search[Adam Clayton Powell]
Observation 1: Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell
III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office
Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell
(film)’, ’Giancarlo Esposito’].
Thought 2: To find the documentary, I can search Adam Clayton Powell (film).
Action 2: Search[Adam Clayton Powell (film)]
Observation 2: Adam Clayton Powell is a 1989 American documentary film directed by
Richard Kilberg. The film is about the rise and fall of influential
African-American politician Adam Clayton Powell Jr.[3][4] It was later aired
as part of the PBS series The American Experience.
Thought 3: Adam Clayton Powell (film) is a documentary about an African-American
politician, not Finnish rock groups. So the documentary about Finnish rock
groups must instead be The Saimaa Gesture.
Action 3: Finish[The Saimaa Gesture]
Question: What profession does Nicholas Ray and Elia Kazan have in common?
Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then
find the profession they have in common.
Action 1: Search[Nicholas Ray]
Observation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16,
1979) was an American film director, screenwriter, and actor best known for
the 1955 film Rebel Without a Cause.
Thought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need
to search Elia Kazan next and find his professions.
Action 2: Search[Elia Kazan]
Observation 2: Elia Kazan was an American film and theatre director, producer, screenwriter
and actor.
Thought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor.
So profession Nicholas Ray and Elia Kazan have in common is director,
screenwriter, and actor.
Action 3: Finish[director, screenwriter, actor]
Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for Women, and find which was
started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine (1844-1846) was an American literary periodical published
in Philadelphia in the 19th century.
Thought 2: Arthur’s Magazine was started in 1844. I need to search First for Women
next.
Action 2: Search[First for Women]
Observation 2: First for Women is a woman’s magazine published by Bauer Media Group in the
USA.[1] The magazine was started in 1989.
Thought 3: First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First
for Women), so Arthur’s Magazine was started first.
Action 3: Finish[Arthur’s Magazine]
Question: Were Pavel Urysohn and Leonid Levin known for the same type of work?
Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work,
then find if they are the same.
Action 1: Search[Pavel Urysohn]
Observation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet
mathematician who is best known for his contributions in dimension theory.
Thought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and
find its type of work.
Action 2: Search[Leonid Levin]
Observation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer
scientist.
Thought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn
and Leonid Levin have the same type of work.
Action 3: Finish[yes]
Question: {input}
{agent_scratchpad}
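By the way, you can also reproduce the fully rendered text (the few-shot examples followed by your question) by filling these placeholders yourself. The following is just an illustrative snippet using the react object defined above; the agent normally does this internally.
# Render the full prompt for the target question with an empty scratchpad
filled_prompt = react.agent.llm_chain.prompt.format(
    input="Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?",
    agent_scratchpad="",
)
print(filled_prompt)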
Now I'll show you the actual prompts step by step.
First in this chain, LangChain provides the following prompt to OpenAI GPT.
As you can see below, the first part (the few-shot examples) is the same as above, but the last part is replaced with the target question.
If you have access to the OpenAI playground, paste this prompt (the few-shot examples above followed by the target question) and see the generated text. (Here I use the text-davinci-003 model with temperature 0.0 and max 256 tokens.)
prompt 1
... few-shot examples (omitted – identical to the prompt template shown above)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
OpenAI GPT then generates the following result. (The model's response is the text from “Thought 1:” onward.)
In the LangChain framework, all the text under “Observation 1:” is ignored, and only the “Thought 1:” and “Action 1:” sections are used for the next context.
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is an American author and journalist. He is the co-author of
several books, including Inside the Mirage: America's Fragile Partnership
with Saudi Arabia (2004) and The Game Player: Confessions of the CIA's
Original Political Operative (1989).
Thought 2: I need to search the U.S. Navy admiral David Chanoff collaborated with.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral Charles R. Larson was a United States Navy four-star admiral who
served as the United States Ambassador to the United Kingdom from 1998 to
2001.
Thought 3: Admiral Charles R. Larson served as the United States Ambassador to the
United Kingdom from 1998 to 2001. So the President they served under is
Bill Clinton.
Action 3: Finish[Bill Clinton]
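Conceptually, all the framework needs from this generated text is the first Thought/Action pair. The following is a simplified, hypothetical sketch of such parsing (LangChain's actual implementation differs; see the _extract_tool_and_input() method of the custom agent later in this post).
import re

def parse_next_action(generated_text):
    # Drop the model-invented observation and everything after it
    kept = generated_text.split("\nObservation")[0]
    # Parse "Action N: Tool[argument]" to get the tool name and its input
    match = re.search(r"Action \d+: (\w+)\[(.*?)\]", kept)
    if match is None:
        return None
    return match.group(1), match.group(2)

# For the response above, this would return ("Search", "David Chanoff")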
The LangChain framework parses “Action 1: Search[David Chanoff]” (see above) and searches for “David Chanoff” among Wikipedia entities with the external tool (in this case, the docstore.search() function).
When it has obtained the search result, it fills it into the “Observation 1” section as follows.
Note that this search result differs from the OpenAI-generated text above.
... (omitted)
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Next, the LangChain framework prompts as follows. (I have omitted the large few-shot examples at the beginning, but the prompt includes the same few-shot examples every time, because each call is stateless.)
This time, it includes the previous Observation 1 result and asks for “Thought 2”.
prompt 2
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2:
OpenAI then returns the following response. (The model's response is the text from “Thought 2:” onward.)
Same as above, all the text under “Observation 2:” is ignored, and only the “Thought 2:” and “Action 2:” sections are used in the chain.
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: William J. Crowe Jr. (January 2, 1925 – October 18, 2007) was an admiral in the United States Navy who served as Chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush and then as the United States Ambassador to the United Kingdom.
Thought 3: William J. Crowe Jr. was the ambassador to the United Kingdom under
Presidents Ronald Reagan and George H. W. Bush. So the answer is Ronald
Reagan and George H. W. Bush.
Action 3: Finish[Ronald Reagan and George H. W. Bush]
The LangChain framework then searches for “U.S. Navy admiral” in Wikipedia with the external tool, and adds the result as an observation. (See “Observation 2” below.)
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Again, the LangChain framework prompts as follows.
As you can see, it includes the previous Observation 2 result (the search result) and asks for “Thought 3”.
prompt 3
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3:
OpenAI then returns the following response. (The model's response is the text from “Thought 3:” onward.)
Same as above, all the text under “Observation 3:” is ignored, and only the “Thought 3:” and “Action 3:” sections are used in the chain.
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: Admiral William J. Crowe Jr., KBE, DSC (January 2, 1925 – October 18, 2007) was a United States Navy admiral who served as Chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush and as the United States Ambassador to the United Kingdom under President Bill Clinton.
Thought 4: Admiral William J. Crowe was the ambassador to the United Kingdom under
President Bill Clinton, so the answer is Bill Clinton.
Action 4: Finish[Bill Clinton]
Again, the LangChain framework searches for “Admiral William J. Crowe” in Wikipedia with the external tool, and adds the result as an observation as follows.
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Again, the LangChain framework prompts as follows.
As you can see, it includes the previous Observation 3 result (the search result) and asks for “Thought 4”.
prompt 4
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought 4:
OpenAI then returns the following response. (The model's response is the text from “Thought 4:” onward.)
This time, OpenAI provides the final answer, because the previous search result contains it.
... few-shot examples (omitted)
Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought 4: Admiral William J. Crowe was the ambassador to the United Kingdom under
President Bill Clinton, so the answer is Bill Clinton.
Action 4: Finish[Bill Clinton]
The LangChain framework then recognizes that the final answer is “Bill Clinton”, and the chain is complete.
Here I summarize how the LLM (OpenAI GPT) and the external tools (in this case, the search tool) interact with each other. (See the following diagram.)
The LLM provides the thoughts and the next action based on the history of the previous context, the external tools perform each corresponding action, and the LangChain framework connects these activities into the chain.
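To make this flow concrete, the following is a minimal, hypothetical sketch of such a loop in plain Python (for illustration only; LangChain's AgentExecutor is more general). It assumes llm is a callable returning generated text, tools is a dict mapping tool names to functions, and prompt_template is a template string with {input} and {agent_scratchpad} placeholders like the one shown above.
import re

def react_loop(llm, tools, prompt_template, question, max_steps=10):
    scratchpad = ""  # accumulated Thought / Action / Observation history
    for _ in range(max_steps):
        # 1. Ask the LLM for the next thought and action, given the history so far
        text = llm(prompt_template.format(input=question, agent_scratchpad=scratchpad))
        # 2. Keep only the text up to the first (model-invented) observation
        step = text.split("\nObservation")[0]
        match = re.search(r"Action \d*\s*:?\s*(\w+)\[(.*?)\]", step)
        if match is None:
            raise ValueError("Could not parse an action from: " + step)
        tool_name, tool_input = match.group(1), match.group(2)
        if tool_name == "Finish":
            return tool_input  # final answer, e.g. "Bill Clinton"
        # 3. Run the corresponding external tool and append the real observation
        observation = tools[tool_name](tool_input)
        scratchpad += step + "\nObservation: " + str(observation) + "\nThought: "
    raise RuntimeError("No final answer within max_steps")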
In this example, we have applied few-shot examples in the prompt, but you can also use zero-shot prompting with the langchain.agents.ZeroShotAgent class in LangChain.
The following is an example of the zero-shot prompt template. The prefix and suffix text can be customized in the ZeroShotAgent class. (Note that the following “Search” and “Lookup” lines will also differ depending on the specified tools.)
{prefix text}
Search: Search for a term in the docstore.
Lookup: Lookup a term in the docstore.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search, Lookup]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
{suffix text}
Question: {input}
{agent_scratchpad}
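For instance, a zero-shot agent can be initialized with the same tools, llm, and question objects defined in the docstore example above, by just specifying agent="zero-shot-react-description" (mirroring the pattern used earlier in this post).
# build a zero-shot agent with the same tools and LLM
zero_shot = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
zero_shot_executor = AgentExecutor.from_agent_and_tools(
    agent=zero_shot.agent,
    tools=tools,
    verbose=True,
)
zero_shot_executor.run(question)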
Build Your Own Custom Tools and Agents
You can use the following built-in agents in LangChain.
(I'm using LangChain version 0.0.100, which is the latest version at the time of this writing.)
- zero-shot-react-description : A multi-purpose agent optimized for zero-shot prompting (it uses the langchain.agents.ZeroShotAgent class mentioned above)
- react-docstore : The ReAct agent used in the docstore example (see above)
- self-ask-with-search : An agent which answers open-ended questions with a search tool. LangChain provides a wrapper class to work with SerpApi.
- conversational-react-description : A multi-purpose agent with which you can include the conversation history in the prompt
If these built-in agents don't meet your needs, you can also define your own chains, your own external tools, and your own agents in the LangChain framework.
In this example, I will show you how to ask for the amounts of companies' invoices and request simple calculations.
I'll define the following external tools.
- GetInvoice[name_of_company] : Returns the invoice amount of name_of_company from the company database.
- Diff[list_of_quantity] : A simple calculator that returns the difference between the values in list_of_quantity. The values in list_of_quantity should be delimited by a space character.
- Total[list_of_quantity] : A simple calculator that returns the total (sum) of the values in list_of_quantity. The values should also be delimited by a space character.
With these tools, I'll ask the following question:
“How much is the difference between the total of company C, F and the total of company A, E ?”
In this example, we expect the following procedure.
- Get the invoice amounts of companies C and F (GetInvoice action).
- Calculate the total of C and F (Total action).
- Get the invoice amounts of companies A and E (GetInvoice action).
- Calculate the total of A and E (Total action).
- Calculate the difference between the total of C, F and the total of A, E (Diff action).
Note : OpenAI GPT can perform simple calculations (such as totals and differences) by itself, so you don't strictly need the calculation tools (Diff and Total). I define these primitive tools in this example for learning purposes.
To make the chain-of-thought (CoT) work correctly, I provide the following prompt with few-shot examples to OpenAI GPT (the few-shot examples are included in the first part).
Question: How much is the difference between the invoice of company A and company B ?
Thought : I need to get invoice amount of company A.
Action : GetInvoice[A]
Observation : 2000
Thought : I need to get invoice amount of company B.
Action : GetInvoice[B]
Observation : 1500
Thought : I need to get difference of obtained amount between company A and company B.
Action : Diff[2000 1500]
Observation : 500
Thought : So the answer is 500.
Action : Finish[500]
Question: How much is the total invoice amount of company B, C, and D ?
Thought : I need to get invoice amount of company B.
Action : GetInvoice[B]
Observation : 1500
Thought : I need to get invoice amount of company C.
Action : GetInvoice[C]
Observation : 20000
Thought : I need to get invoice amount of company D.
Action : GetInvoice[D]
Observation : 6700
Thought : I need to get total amount of obtained amount B, C, and D.
Action : Total[1500 20000 6700]
Observation : 28200
Thought : So the answer is 28200.
Action : Finish[28200]
Question: How much is the difference between company C and the total invoice amount of company A, D ?
Thought : I need to get invoice amount of company C.
Action : GetInvoice[C]
Observation : 20000
Thought : I need to get invoice amount of company A.
Action : GetInvoice[A]
Observation : 2000
Thought : I need to get invoice amount of company D.
Action : GetInvoice[D]
Observation : 6700
Thought : I need to get total amount of obtained amount A and D.
Action : Total[2000 6700]
Observation : 8700
Thought : I need to get difference of obtained amount C and the total of A, D.
Action : Diff[20000 8700]
Observation : 11300
Thought : So the answer is 11300.
Action : Finish[11300]
Question: How much is the difference between the total of company C, F and the total of company A, E ?
Thought :
The following is the source code for this custom ReAct chain example.
In this source code, I have defined custom tools and a custom agent, put all these components (tools, agent, and the Azure OpenAI LLM) together, and finally run the chain with agent_executor.run().
A custom agent also helps if you want to handle ReAct prompts in a non-English language.
import re
from typing import Sequence, List, Tuple, Optional
from langchain.agents.agent import Agent
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
from langchain.tools.base import BaseTool
from langchain.agents import Tool, initialize_agent, AgentExecutor
from langchain.llms import AzureOpenAI
##########
# define tools
##########
company_dic = {
    "A": 2000,
    "B": 1500,
    "C": 20000,
    "D": 6700,
    "E": 1000,
    "F": 4100,
}

def get_invoice(company_name):
    return company_dic[company_name]

def diff(value_str):
    str_list = value_str.split(" ")
    assert(len(str_list) == 2)
    int_list = [int(s) for s in str_list]
    return str(abs(int_list[0] - int_list[1]))

def total(value_str):
    str_list = value_str.split(" ")
    int_list = [int(s) for s in str_list]
    return str(sum(int_list))

tools = [
    Tool(
        name="GetInvoice",
        func=get_invoice,
        description="Get invoice amount of trading company.",
    ),
    Tool(
        name="Diff",
        func=diff,
        description="Get difference.",
    ),
    Tool(
        name="Total",
        func=total,
        description="Get total.",
    ),
]
##########
# define agent
##########
EXAMPLES = [
"""Question: How much is the difference between the invoice of company A and company B ?
Thought : I need to get invoice amount of company A.
Action : GetInvoice[A]
Observation : 2000
Thought : I need to get invoice amount of company B.
Action : GetInvoice[B]
Observation : 1500
Thought : I need to get difference of obtained amount between company A and company B.
Action : Diff[2000 1500]
Observation : 500
Thought : So the answer is 500.
Action : Finish[500]""",
"""Question: How much is the total invoice amount of company B, C, and D ?
Thought : I need to get invoice amount of company B.
Action : GetInvoice[B]
Observation : 1500
Thought : I need to get invoice amount of company C.
Action : GetInvoice[C]
Observation : 20000
Thought : I need to get invoice amount of company D.
Action : GetInvoice[D]
Observation : 6700
Thought : I need to get total amount of obtained amount B, C, and D.
Action : Total[1500 20000 6700]
Observation : 28200
Thought : So the answer is 28200.
Action : Finish[28200]""",
"""Question: How much is the difference between company C and the total invoice amount of company A, D ?
Thought : I need to get invoice amount of company C.
Action : GetInvoice[C]
Observation : 20000
Thought : I need to get invoice amount of company A.
Action : GetInvoice[A]
Observation : 2000
Thought : I need to get invoice amount of company D.
Action : GetInvoice[D]
Observation : 6700
Thought : I need to get total amount of obtained amount A and D.
Action : Total[2000 6700]
Observation : 8700
Thought : I need to get difference of obtained amount C and the total of A, D.
Action : Diff[20000 8700]
Observation : 11300
Thought : So the answer is 11300.
Action : Finish[11300]""",
]
SUFFIX = """\nQuestion: {input}
{agent_scratchpad}"""
TEST_PROMPT = PromptTemplate.from_examples(
    EXAMPLES, SUFFIX, ["input", "agent_scratchpad"]
)

class ReActTestAgent(Agent):
    @classmethod
    def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
        return TEST_PROMPT

    @classmethod
    def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
        if len(tools) != 3:
            raise ValueError("The number of tools is invalid.")
        tool_names = {tool.name for tool in tools}
        if tool_names != {"GetInvoice", "Diff", "Total"}:
            raise ValueError("The name of tools is invalid.")

    @property
    def _agent_type(self) -> str:
        return "react-test"

    @property
    def finish_tool_name(self) -> str:
        return "Finish"

    @property
    def observation_prefix(self) -> str:
        return "Observation : "

    @property
    def llm_prefix(self) -> str:
        return "Thought : "

    # This method is called by the framework to parse the generated text
    def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
        action_prefix = "Action : "
        if not text.split("\n")[1].startswith(action_prefix):
            return None
        action_block = text.split("\n")[1]
        action_str = action_block[len(action_prefix) :]
        re_matches = re.search(r"(.*?)\[(.*?)\]", action_str)
        if re_matches is None:
            raise ValueError(f"Could not parse action directive: {action_str}")
        return re_matches.group(1), re_matches.group(2)
##########
# run agent
##########
llm = AzureOpenAI(
    deployment_name="davinci003-deploy",
    model_name="text-davinci-003",
    temperature=0,
)

agent = ReActTestAgent.from_llm_and_tools(
    llm,
    tools,
)

agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
)
question = "How much is the difference between the total of company C, F and the total of company A, E ?"
agent_executor.run(question)
Note : To build this source code, I referred to and modified the docstore example's source code above.
When you run this code, the following verbose outputs are shown.
OpenAI GPT doesn't know the invoice amounts of company E and company F (because I don't provide these values in the few-shot examples), but it can still provide the correct answer, because the external functions (in this case, get_invoice(), diff(), and total()) are properly used in the chain.
This is a trivial example, but with the ReAct framework you can build your own practical logic in which external tools are flexibly integrated into the LLM pipeline.
For instance, users may ask a trip-assistant application for restaurant and dinner recommendations, and the AI agent can show them the results with related information – such as estimated dinner prices, seat availability, etc. – by interacting with an external database.
You can also find several other research papers describing the benefits of using external knowledge in LLM reasoning (e.g., the LLM-Augmenter system).
Comments
Hi,
This article provides the following example of Thought 3 from Observation 2:
…
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
…
It seems incorrect because Observation 2 only talks about Admiral George Dewey and it doesn’t mention anything about Admiral William J. Crowe whatsoever.
When I tried, I got the following Thought 3 from Observation 2:
…
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral George Dewey and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral George Dewey]
…
And it seems correct, even though it leads to the wrong conclusion.
Is there anything I’m missing here?
Thank you for your feedback.
Yes, it seems that the result of “Observation 2” is not referred to in the next thought.
I think that Action 3 comes from GPT's existing knowledge: among the collaborators listed in Observation 1 (Augustus A. White, Joycelyn Elders, …), only William J. Crowe is an American admiral.
I have just asked GPT-3.5 (text-davinci-003) the following question, and it returned the following result. (GPT-3.5 knows that William J. Crowe was an American admiral.)
==========
prompt : Provide knowledge about William J. Crowe
completion : William J. Crowe Jr. (January 2, 1925 – October 18, 2007) was an American admiral who served as the Chairman of the Joint Chiefs of Staff from 1985 to 1989, and as the Ambassador to the United Kingdom from 1994 to 1997. He was the first Navy officer to serve as Chairman of the Joint Chiefs of Staff.
==========
And what you got is right (not wrong).
Please put a few (3 or 4) line breaks (\n) at the beginning of your prompt and try again. You will then see that Action 3 becomes “Search[William J. Crowe]”. (I tried this with text-davinci-003 at temperature 0.0.)
I don't know the exact reason, but the output of a generative model is based on token probabilities, and it can be influenced by small changes to the prompt, as you know.
Your article was a treat –
I'm currently obsessed with CoT and used a ‘Question, Observation, Thought, Action, Finish' chain in my JS rendition of it here:
https://github.com/rocket-pig/autobabytermux
It uses ChatGPT and Node.js. The LLM agent is given a sandboxed REPL with a fake filesystem. I've only been working on it for about a week but it's been so much fun. I wanted a version that didn't have massive deps/reqs and there wasn't one. Anyway, thank you for sharing and reading.
Super impressed by this work. Building an Agent UI web app and would love to chat with you about possibly working together on integration. Back end is node/express so could be a great fit. Email me at jblickjr@gmail.com if you wanna discuss more.
I still can't figure out why the model searches for William J. Crowe at Thought 3, because there is no information about him in Observation 2. Please help~
———-
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Yes, Observation 2 (the result of Action 2) is not referred to in this example.
As I mentioned above (see Aree Abdel-san's question and my answer in the comments), I think that Action 3 (Search[Admiral William J. Crowe]) comes from GPT's existing knowledge: among the collaborators in Observation 1 (Augustus A. White, Joycelyn Elders, …), only William J. Crowe is an American admiral.
(In short, GPT planned to search both “U.S. Navy admiral” and “Admiral William J. Crowe” based only on Observation 1.)
Thank you so much! I'm amazed how few other places illustrate how this principle is actually applied in practice.