
ReAct (Reason+Act) prompting in LLMs

Today, a lot of tools powered by LLMs – such as Auto-GPT, ChatGPT plugins, Copilot, etc. – automate a variety of tasks through reasoning, which splits a complex task into simpler subtasks.

To give you a hint for building such autonomous agents, I'll show you Reasoning+Acting – ReAct for short – by answering the following questions:

  • What is ReAct?
  • How does it work?
  • How do you build it?

ReAct is essential background: it is the origin of much of today's advanced LLM reasoning.

Introduction to ReAct – LLMs Collaborating with External Tools

Integrating LLMs (Large Language Models) with external tools is a frequently used technique in real business applications.
For instance, ChatGPT plugins can interact with external tools in conversations. Microsoft has also integrated LLMs (OpenAI GPT) with external tools (such as Office applications) in its Copilot framework.
Another example is search integration. LLMs don't always give correct answers, and interacting with an external search engine (for both the internet and an intranet) is therefore often applied in real QA systems.
Several flexible architectures combining LLM reasoning with additional tools (experts) have been proposed in papers such as ReAct (Reasoning + Acting) and MRKL (Modular Reasoning, Knowledge and Language).

ReAct is a foundational agent framework, introduced in this paper (Yao et al., 2022).
As you will see in this post, you can apply this method to various types of automation (orchestration) between LLMs and external tools.

For instance, suppose we define an action "search [entity]" which returns sentences from the corresponding Wikipedia entry, and a user asks a multi-hop question – one that cannot be answered from a single Wikipedia entry, but can be answered by referencing multiple entries.
With ReAct prompting, the LLM first disassembles the question into multiple search actions.
All these actions (in this case, search actions) are then performed by external tools – such as search APIs.
Finally, the LLM obtains the answer, with all these threads as context for question-answering. (See below.)

From "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022)

Note : Because of the token limit, it's difficult to feed an entire Wikipedia document to the model. Hence the paper also introduces a "lookup [string]" action, which returns the next sentence on the page containing the given string, simulating Ctrl+F functionality in a web browser.
These 2 actions (search, lookup) mimic how a human would browse. (See the sketch below.)
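To make these concrete, here is a minimal sketch of how the two actions might be implemented with the third-party wikipedia package (installed in the setup section below). This is not the paper's or LangChain's implementation; in particular, the paper's lookup is stateful and returns the next matching sentence on each call, while this simplified sketch just returns the first match.

import wikipedia

def search(entity):
  # Return the leading sentences of the Wikipedia entry for the entity.
  try:
    return wikipedia.summary(entity, sentences=2)
  except wikipedia.exceptions.DisambiguationError as e:
    # Mimic the paper's behavior of suggesting similar entities.
    return f"Could not find [{entity}]. Similar: {e.options[:5]}"

def lookup(page_title, keyword):
  # Simulate Ctrl+F: return the first sentence on the page containing the keyword.
  content = wikipedia.page(page_title).content
  sentences = [s.strip() for s in content.split(". ")]
  hits = [s for s in sentences if keyword.lower() in s.lower()]
  return hits[0] if hits else f"No sentence contains [{keyword}]."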

In this flow, the LLM reasons about which external tool to call, halts while the tool's response is fetched as an observation, and then decides the next action based on all preceding responses. (Later I'll show you the details of this flow, but the whole pattern fits in a short loop, sketched below.)
This technique is also sometimes referred to as Augmented Language Models (ALMs).
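Stripped of framework details, the loop can be written in a few lines. The following is conceptual pseudocode under my own assumptions, not LangChain's implementation: llm is any completion function that accepts a stop list, tools is a dict of callables, and few_shot_examples is the exemplar text.

import re

def parse_action(step):
  # Pull the tool name and its input out of "Action ...: Tool[input]".
  m = re.search(r"Action(?: \d+)?: (\w+)\[(.*)\]", step)
  if m is None:
    raise ValueError(f"No action found in: {step!r}")
  return m.group(1), m.group(2)

def react_loop(question, llm, tools, few_shot_examples="", max_steps=10):
  # Drive the Thought -> Action -> Observation loop until Finish[...].
  prompt = few_shot_examples + f"Question: {question}\nThought:"
  for _ in range(max_steps):
    # Stop generation before the model invents its own observation.
    step = llm(prompt, stop=["\nObservation"])
    prompt += step
    tool_name, tool_input = parse_action(step)
    if tool_name == "Finish":
      return tool_input                         # final answer
    observation = tools[tool_name](tool_input)  # run the external tool
    prompt += f"\nObservation: {observation}\nThought:"
  return None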

This reasoning+acting can be performed with few-shot or zero-shot prompting, or it can be learned by reinforcement learning or imitation learning (with expert data) in more advanced approaches.

In this post, we first walk through a ReAct example with in-context few-shot prompting, and later I'll show an in-context zero-shot version of the same example.

Run Your First ReAct

Before we dive into the details of ReAct's flow, let's run an example (a simple autonomous agent) with the popular LangChain framework and OpenAI GPT.
LangChain is an open-source and widely used tool that helps you compose a variety of language chains with LLMs – such as chat systems, QA systems, advanced RAG, etc.
To start building an agent, I have prepared an Ubuntu Server 20.04 LTS virtual machine in Microsoft Azure.

This virtual machine already has Python 3.8, so you just install the additional packages needed to run the example, as follows.

# Update pip
sudo apt-get update
sudo apt-get install -y python3-pip
sudo -H pip3 install --upgrade pip

# Install packages
pip3 install openai
pip3 install langchain  # see below note
pip3 install wikipedia

Note : LangChain is updated frequently; I used LangChain version 0.0.300 here. (Please pin package versions when installing if the example doesn't run.)

Before running the application, set the environment variables for the OpenAI integration in LangChain.

# Get a token from https://platform.openai.com/account/api-keys
export OPENAI_API_KEY="..."
# Set OpenAI organization if you have multiple organizations
# export OPENAI_ORGANIZATION="..."
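If you want a quick, optional check that the key is picked up, the following one-off script should print a greeting. It assumes the packages installed above; as noted later, swap in gpt-3.5-turbo-instruct if text-davinci-003 is no longer available.

from langchain.llms import OpenAI

# A successful completion confirms the API key and packages are working.
llm = OpenAI(model_name="text-davinci-003", temperature=0)
print(llm("Say hello in one short sentence."))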

Now let’s build your first ReAct chain with LangChain.

This is an example using the Wikipedia docstore included in LangChain, in which the question is disassembled into multiple search actions.
In this demonstration, I used the text-davinci-003 (GPT-3.5) model with temperature 0.0 in OpenAI GPT.

Note : text-davinci-003 is now deprecated in OpenAI, so please use another model, such as gpt-3.5-turbo-instruct.

The following code executes the ReAct chain with the run() function to get the answer to the following question:

“Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?”

This question cannot be answered from a single Wikipedia article; it needs multi-hop search actions to reach the correct answer.

from langchain.docstore.wikipedia import Wikipedia
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool, AgentExecutor
from langchain.agents.react.base import DocstoreExplorer

# build tools
docstore = DocstoreExplorer(Wikipedia())
tools = [
  Tool(
    name="Search",
    func=docstore.search,
    description="Search for a term in the docstore.",
  ),
  Tool(
    name="Lookup",
    func=docstore.lookup,
    description="Lookup a term in the docstore.",
  )
]

# build LLM
llm = OpenAI(
  model_name="text-davinci-003",
  temperature=0,
)

# initialize ReAct agent
react = initialize_agent(tools, llm, agent="react-docstore", verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
  agent=react.agent,
  tools=tools,
  verbose=True,
)

# perform question-answering
question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
agent_executor.run(question)

When you run this application, verbose output of the chain is shown in the logs. (Note that the result will differ slightly when you use other language models or versions; the following is the case for text-davinci-003.)
As you can see, the question is disassembled into 3 search actions ("Search[David Chanoff]", "Search[U.S. Navy admiral]", and "Search[Admiral William J. Crowe]") in order, and the chain reaches the final answer "Bill Clinton".

Demystifying ReAct – How Does It Work Inside?

Now let's see how this works inside LangChain, and dive into the implementation of the ReAct framework.

At the bottom of this chain, the following prompt is issued to OpenAI GPT. (You can see the prompt template by running the following code.)
The former part of the template (a very long text) consists of few-shot in-context examples; the model learns from these examples how to generate text for the target question.
The last part (the last 2 lines of the output) is the target question, where the {input} and {agent_scratchpad} placeholders are replaced by the LangChain framework depending on the user's input and the chain's steps. (I'll show you an example soon.)

# output prompt's template
print(react.agent.llm_chain.prompt.template)

output



Question: What is the elevation range for the area that the eastern sector of the
Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector
of the Colorado orogeny extends into, then find the elevation range of the
area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in
Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern
sector.
Action 2: Lookup[eastern sector]
Observation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called
the Central Plains orogeny.
Thought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I
need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions
Thought 4: I need to instead search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains. From east to west, the
High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130
m).[3]
Thought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer
is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]

Question: Musician and satirist Allie Goertz wrote a song about the "The Simpsons"
character Milhouse, who Matt Groening named after who?
Thought 1: The question simplifies to "The Simpsons" character Milhouse is named after
who. I only need to search Milhouse and find who it is named after.
Action 1: Search[Milhouse]
Observation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated
television series The Simpsons voiced by Pamela Hayden and created by Matt
Groening.
Thought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up
"named after".
Action 2: Lookup[named after]
Observation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose
middle name was Milhous.
Thought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is
Richard Nixon.
Action 3: Finish[Richard Nixon]

Question: Which documentary is about Finnish rock groups, Adam Clayton Powell or The
Saimaa Gesture?
Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which
documentary is about Finnish rock groups.
Action 1: Search[Adam Clayton Powell]
Observation 1: Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell
III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office
Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell
(film)’, ’Giancarlo Esposito’].
Thought 2: To find the documentary, I can search Adam Clayton Powell (film).
Action 2: Search[Adam Clayton Powell (film)]
Observation 2: Adam Clayton Powell is a 1989 American documentary film directed by
Richard Kilberg. The film is about the rise and fall of influential
African-American politician Adam Clayton Powell Jr.[3][4] It was later aired
as part of the PBS series The American Experience.
Thought 3: Adam Clayton Powell (film) is a documentary about an African-American
politician, not Finnish rock groups. So the documentary about Finnish rock
groups must instead be The Saimaa Gesture.
Action 3: Finish[The Saimaa Gesture]

Question: What profession does Nicholas Ray and Elia Kazan have in common?
Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then
find the profession they have in common.
Action 1: Search[Nicholas Ray]
Observation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16,
1979) was an American film director, screenwriter, and actor best known for
the 1955 film Rebel Without a Cause.
Thought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need
to search Elia Kazan next and find his professions.
Action 2: Search[Elia Kazan]
Observation 2: Elia Kazan was an American film and theatre director, producer, screenwriter
and actor.
Thought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor.
So profession Nicholas Ray and Elia Kazan have in common is director,
screenwriter, and actor.
Action 3: Finish[director, screenwriter, actor]

Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for Women, and find which was
started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine (1844-1846) was an American literary periodical published
in Philadelphia in the 19th century.
Thought 2: Arthur’s Magazine was started in 1844. I need to search First for Women
next.
Action 2: Search[First for Women]
Observation 2: First for Women is a woman’s magazine published by Bauer Media Group in the
USA.[1] The magazine was started in 1989.
Thought 3: First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First
for Women), so Arthur’s Magazine was started first.
Action 3: Finish[Arthur’s Magazine]

Question: Were Pavel Urysohn and Leonid Levin known for the same type of work?
Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work,
then find if they are the same.
Action 1: Search[Pavel Urysohn]
Observation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet
mathematician who is best known for his contributions in dimension theory.
Thought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and
find its type of work.
Action 2: Search[Leonid Levin]
Observation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer
scientist.
Thought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn
and Leonid Levin have the same type of work.
Action 3: Finish[yes]


Question: {input}
{agent_scratchpad}

Now I'll show you the actual prompts step by step.

First in this chain, the following prompt is provided to OpenAI GPT.
As you can see below, the former part (the few-shot examples) is the same as above, but the latter part is replaced with the target question.

When you use the OpenAI playground for debugging, paste this text as the prompt and inspect the generated text. (Here I use the text-davinci-003 model with temperature 0.0 and max 256 tokens; a scripted equivalent is sketched below.)
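If you prefer scripting to the playground UI, a rough equivalent with the pre-1.0 openai package looks like the following. Here prompt_text is a placeholder for the full few-shot prompt shown below.

import openai

prompt_text = "..."  # paste the full few-shot prompt shown below

response = openai.Completion.create(
  model="text-davinci-003",  # deprecated; gpt-3.5-turbo-instruct also works
  prompt=prompt_text,
  temperature=0.0,
  max_tokens=256,
)
print(response["choices"][0]["text"])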

prompt 1



Question: What is the elevation range for the area that the eastern sector of the
Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector
of the Colorado orogeny extends into, then find the elevation range of the
area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in
Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern
sector.
Action 2: Lookup[eastern sector]
Observation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called
the Central Plains orogeny.
Thought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I
need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions
Thought 4: I need to instead search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains. From east to west, the
High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130
m).[3]
Thought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer
is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]

Question: Musician and satirist Allie Goertz wrote a song about the "The Simpsons"
character Milhouse, who Matt Groening named after who?
Thought 1: The question simplifies to "The Simpsons" character Milhouse is named after
who. I only need to search Milhouse and find who it is named after.
Action 1: Search[Milhouse]
Observation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated
television series The Simpsons voiced by Pamela Hayden and created by Matt
Groening.
Thought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up
"named after".
Action 2: Lookup[named after]
Observation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose
middle name was Milhous.
Thought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is
Richard Nixon.
Action 3: Finish[Richard Nixon]

Question: Which documentary is about Finnish rock groups, Adam Clayton Powell or The
Saimaa Gesture?
Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which
documentary is about Finnish rock groups.
Action 1: Search[Adam Clayton Powell]
Observation 1: Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell
III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office
Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell
(film)’, ’Giancarlo Esposito’].
Thought 2: To find the documentary, I can search Adam Clayton Powell (film).
Action 2: Search[Adam Clayton Powell (film)]
Observation 2: Adam Clayton Powell is a 1989 American documentary film directed by
Richard Kilberg. The film is about the rise and fall of influential
African-American politician Adam Clayton Powell Jr.[3][4] It was later aired
as part of the PBS series The American Experience.
Thought 3: Adam Clayton Powell (film) is a documentary about an African-American
politician, not Finnish rock groups. So the documentary about Finnish rock
groups must instead be The Saimaa Gesture.
Action 3: Finish[The Saimaa Gesture]

Question: What profession does Nicholas Ray and Elia Kazan have in common?
Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then
find the profession they have in common.
Action 1: Search[Nicholas Ray]
Observation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16,
1979) was an American film director, screenwriter, and actor best known for
the 1955 film Rebel Without a Cause.
Thought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need
to search Elia Kazan next and find his professions.
Action 2: Search[Elia Kazan]
Observation 2: Elia Kazan was an American film and theatre director, producer, screenwriter
and actor.
Thought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor.
So profession Nicholas Ray and Elia Kazan have in common is director,
screenwriter, and actor.
Action 3: Finish[director, screenwriter, actor]

Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for Women, and find which was
started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine (1844-1846) was an American literary periodical published
in Philadelphia in the 19th century.
Thought 2: Arthur’s Magazine was started in 1844. I need to search First for Women
next.
Action 2: Search[First for Women]
Observation 2: First for Women is a woman’s magazine published by Bauer Media Group in the
USA.[1] The magazine was started in 1989.
Thought 3: First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First
for Women), so Arthur’s Magazine was started first.
Action 3: Finish[Arthur’s Magazine]

Question: Were Pavel Urysohn and Leonid Levin known for the same type of work?
Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work,
then find if they are the same.
Action 1: Search[Pavel Urysohn]
Observation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet
mathematician who is best known for his contributions in dimension theory.
Thought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and
find its type of work.
Action 2: Search[Leonid Levin]
Observation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer
scientist.
Thought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn
and Leonid Levin have the same type of work.
Action 3: Finish[yes]


Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?

OpenAI then generates the following result (the generated text starts at "Thought 1:").

In the LangChain framework, all the text under "Observation 1:" is ignored; only the "Thought 1:" and "Action 1:" sections are used for the next context, roughly as sketched below.
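Conceptually, the truncation amounts to something like the following helper. This is a simplified illustration of my own, not LangChain's actual parser (see ReActOutputParser for the real one).

def keep_thought_and_action(generation, step):
  # Drop the model-written "Observation {step}: ..." and everything after it;
  # the real observation comes from the tool, not from the model.
  return generation.split(f"\nObservation {step}:")[0]

# For the generation below, keep_thought_and_action(generation, 1) keeps only
# the "Thought 1:" and "Action 1:" lines.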

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?

Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is an American author and journalist. He is the co-author of
several books, including Inside the Mirage: America's Fragile Partnership
with Saudi Arabia (2004) and The Game Player: Confessions of the CIA's
Original Political Operative (1989).
Thought 2: I need to search the U.S. Navy admiral David Chanoff collaborated with.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral Charles R. Larson was a United States Navy four-star admiral who
served as the United States Ambassador to the United Kingdom from 1998 to
2001.
Thought 3: Admiral Charles R. Larson served as the United States Ambassador to the
United Kingdom from 1998 to 2001. So the President they served under is
Bill Clinton.
Action 3: Finish[Bill Clinton]

The LangChain framework parses "Action 1: Search[David Chanoff]" (see above) and searches for "David Chanoff" among Wikipedia entries with the external tool (in this case, the docstore.search() function).
Once it has obtained the search result, it fills the result into the "Observation 1" section as follows.

Note that this search result differs from the OpenAI-generated response text above.

... (omitted)

Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.

Next, the LangChain framework sends the following prompt.
As you can see, it includes the previous Observation 1 result and then asks for "Thought 2".

Note that I have omitted the large few-shot examples in the former part, but the prompt includes the same few-shot examples every time (i.e., all historical tokens are sent on every call), because each API call is stateless. (The growing scratchpad is sketched below.)
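Roughly, the scratchpad grows like this between calls (question, few_shot_examples, and search_result are placeholders for illustration, not LangChain variables):

scratchpad = ""
scratchpad += "Thought 1: I need to search David Chanoff and the U.S. Navy admiral, ...\n"
scratchpad += "Action 1: Search[David Chanoff]\n"
scratchpad += f"Observation 1: {search_result}\n"  # tool output, not model output
scratchpad += "Thought 2:"                         # cue the model for the next step

# Every call resends the whole context: template + question + scratchpad so far.
prompt_2 = few_shot_examples + f"Question: {question}\n" + scratchpad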

prompt 2

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2:

OpenAI then returns the following response (the generated text starts at "Thought 2:").

As above, all the text under "Observation 2:" is ignored, and only the "Thought 2:" and "Action 2:" sections are used in the chain.

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: William J. Crowe Jr. (January 2, 1925 – October 18, 2007) was an admiral in the United States Navy who served as Chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush and then as the United States Ambassador to the United Kingdom.
Thought 3: William J. Crowe Jr. was the ambassador to the United Kingdom under
Presidents Ronald Reagan and George H. W. Bush. So the answer is Ronald
Reagan and George H. W. Bush.
Action 3: Finish[Ronald Reagan and George H. W. Bush]

The LangChain framework then searches for "U.S. Navy admiral" in Wikipedia with the external tool, and adds the result as an observation. (See "Observation 2" below.)

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.

Again, the LangChain framework sends the following prompt.
As you can see, it includes the previous Observation 2 result (the search result) and asks for "Thought 3".

prompt 3

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3:

OpenAI then returns the following response (the generated text starts at "Thought 3:").

As above, the text under "Observation 3:" is ignored, and only the "Thought 3:" and "Action 3:" sections are used in the chain.

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: Admiral William J. Crowe Jr., KBE, DSC (January 2, 1925 – October 18, 2007) was a United States Navy admiral who served as Chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush and as the United States Ambassador to the United Kingdom under President Bill Clinton.
Thought 4: Admiral William J. Crowe was the ambassador to the United Kingdom under
President Bill Clinton, so the answer is Bill Clinton.
Action 4: Finish[Bill Clinton]

Again, the LangChain framework searches for "Admiral William J. Crowe" in Wikipedia with the external tool, and adds the result as an observation as follows.

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.

Again, the LangChain framework sends the following prompt.
As you can see, it includes the previous Observation 3 result (the search result) and asks for "Thought 4".

prompt 4

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought 4:

OpenAI then returns the following response (the generated text starts at "Thought 4:").
This time, OpenAI provides the final answer, because the previous search result contains it.

... few-shot examples (omitted)

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?
Thought 1: I need to search David Chanoff and the U.S. Navy admiral, find the
ambassador to the United Kingdom, then find the President they served under.
Action 1: Search[David Chanoff]
Observation 1: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought 2: David Chanoff has collaborated with a U.S. Navy admiral. I need to search
the U.S. Navy admiral and find the ambassador to the United Kingdom.
Action 2: Search[U.S. Navy admiral]
Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
United Kingdom.
Action 3: Search[Admiral William J. Crowe]
Observation 3: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought 4: Admiral William J. Crowe was the ambassador to the United Kingdom under
President Bill Clinton, so the answer is Bill Clinton.
Action 4: Finish[Bill Clinton]

The LangChain framework then recognizes that the final answer is "Bill Clinton", and the chain is complete (done!).

To summarize, the LLM (OpenAI GPT-3) and the external tools (in this case, the search tool) interact as follows: the LLM produces a thought and the next action from the history of previous contexts, the external tools perform each corresponding action, and the LangChain framework connects these activities into a chain – LLM → Thought + Action → tool → Observation → LLM → … → Finish.

Zero-shot Approach – Making Tools Pluggable

In the example above, I provided few-shot exemplars in the prompt, but what if the available tools are added, deleted, or modified?
With few-shot exemplars, you have to add or modify the exemplars every time the set of available tools changes.

To gain the flexibility of adding new tools, the following style of prompting without few-shot exemplars is often used in real applications.
This approach is called zero-shot prompting, in contrast to few-shot prompting.

You are an assistant that gives people answers to their questions.
A human may give you a new question with a description, and you should use tools to get the answer, rather than replying directly from the description.

You have access to the following tools:

> Search: Search for a term in Wikipedia and return the first sentences. The input to this tool should be a string to search for.
> Lookup: Look up a term in the Wikipedia article and return the sentence that describes the term. The input to this tool should be a string to look up.

To use a tool, please use the following format:

```
Question: the input question you must answer
Thought: your thought about what to do
Action: the action to take, should be one of [Search, Lookup]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```

Begin!

Now human has provided the following question.

Question: Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?

By using the langchain.agents.ZeroShotAgent class, you can use zero-shot prompting in LangChain as well.
The following is the zero-shot prompt template for a docstore application. (The prefix and suffix text can be customized in the ZeroShotAgent class.)

{prefix text}

Search: Search for a term in the docstore.
Lookup: Lookup a term in the docstore.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search, Lookup]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

{suffix text}

Question: {input}
{agent_scratchpad}

However, with zero-shot reasoning it is sometimes harder to get exact results, and you may then need a more advanced model – such as GPT-4 – for difficult zero-shot reasoning. (A sketch of a zero-shot version of the earlier example follows.)
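For reference, a zero-shot version of the earlier docstore example might look like the following sketch, using the LangChain version from this post. The built-in "zero-shot-react-description" agent generates an exemplar-free prompt like the template above.

from langchain.docstore.wikipedia import Wikipedia
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents.react.base import DocstoreExplorer

docstore = DocstoreExplorer(Wikipedia())
tools = [
  # In zero-shot prompting, the descriptions are all the model sees, so write them carefully.
  Tool(name="Search", func=docstore.search, description="Search for a term in the docstore."),
  Tool(name="Lookup", func=docstore.lookup, description="Lookup a term in the docstore."),
]
llm = OpenAI(model_name="text-davinci-003", temperature=0)

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?")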

Build Your Own Custom Tools and Agents

If the built-in agents don't fit your needs, you can also build your own chains, your own tools, and your own agents in the LangChain framework.

In this example, I'll show you how to ask for the amounts of companies' invoices and perform simple calculations on them.
I'll define the following external tools.

  • GetInvoice[name_of_company] :
    Returns the invoice amount of name_of_company from the company database.
  • Diff[list_of_quantity] :
    A simple calculator that returns the difference between the two quantities in list_of_quantity. The list_of_quantity should be comma-delimited.
  • Total[list_of_quantity] :
    Also a simple calculator; returns the total (sum) of list_of_quantity, which should also be comma-delimited.

With these tools, I’ll ask for the following question :

“How much is the difference between the total of company C, F and the total of company A, E ?”

In this example, we expect the following procedure.

  • Get the invoice amounts for companies C and F. (GetInvoice action)
  • Calculate the total of C and F. (Total action)
  • Get the invoice amounts for companies A and E. (GetInvoice action)
  • Calculate the total of A and E. (Total action)
  • Calculate the difference between the total of C, F and the total of A, E. (Diff action)

Note : An LLM can perform simple arithmetic operations (such as totals and differences) when the numbers are small, but it struggles with large numbers.
It's therefore meaningful to define external tools for these primitive operations – such as Diff and Total.

To make the chain-of-thought (CoT) work correctly, I provide the following prompt to OpenAI GPT, in which the few-shot examples form the former part.

Question: How much is the difference between the invoice of company A and company B ?
Thought: I need to get invoice amount of company A.
Action: GetInvoice["A"]
Observation: 2000
Thought: I need to get invoice amount of company B.
Action: GetInvoice["B"]
Observation: 1500
Thought: I need to get difference of obtained amount between company A and company B.
Action: Diff[2000, 1500]
Observation: 500
Thought: So the answer is 500.
Action: Finish[500]

Question: How much is the total invoice amount of company B, C, and D ?
Thought: I need to get invoice amount of company B.
Action: GetInvoice["B"]
Observation: 1500
Thought: I need to get invoice amount of company C.
Action: GetInvoice["C"]
Observation: 20000
Thought: I need to get invoice amount of company D.
Action: GetInvoice["D"]
Observation: 6700
Thought: I need to get total amount of obtained amount B, C, and D.
Action: Total[1500, 20000, 6700]
Observation: 28200
Thought: So the answer is 28200.
Action: Finish[28200]

Question: How much is the difference between company C and the total invoice amount of company A, D ?
Thought: I need to get invoice amount of company C.
Action: GetInvoice["C"]
Observation: 20000
Thought: I need to get invoice amount of company A.
Action: GetInvoice["A"]
Observation: 2000
Thought: I need to get invoice amount of company D.
Action: GetInvoice["D"]
Observation: 6700
Thought: I need to get total amount of obtained amount A and D.
Action: Total[2000, 6700]
Observation: 8700
Thought: I need to get difference of obtained amount C and the total of A, D.
Action: Diff[20000, 8700]
Observation: 11300
Thought: So the answer is 11300.
Action: Finish[11300]

In each action, you cannot use nested functions, such as Diff[GetInvoice["A"], GetInvoice["B"]].
Instead, you should split this into 3 actions - GetInvoice["A"], GetInvoice["B"], and Diff[2000, 1500].

Let's start.

Question: How much is the difference between the total of company C, F and the total of company A, E ?
Thought:

The following is the source code for this custom ReAct chain example.
In this source code, I define custom tools and a custom agent, put all the components (tools, agent, and the OpenAI LLM) together, and finally run the chain with agent_executor.run().

A custom agent is also helpful when you want to handle ReAct prompting in non-English languages.

import re
from typing import Sequence, List, Tuple, Optional, Any
from langchain.agents.agent import Agent, AgentOutputParser
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
from langchain.tools.base import BaseTool
from langchain.agents import Tool, initialize_agent, AgentExecutor
from langchain.llms import OpenAI
from langchain.agents.react.output_parser import ReActOutputParser

##########
# define tools
##########

company_dic = {
  "A": 2000,
  "B": 1500,
  "C": 20000,
  "D": 6700,
  "E": 1000,
  "F": 4100,
}

def get_invoice(company_name):
  # Strip surrounding double quotes (the model emits e.g. GetInvoice["A"]).
  return company_dic[company_name.strip("\"")]

def diff(value_str):
  # Expects a comma-delimited pair of numbers, e.g. "2000, 1500".
  int_list = [int(x.strip()) for x in value_str.split(",")]
  assert(len(int_list) == 2)
  return str(abs(int_list[0] - int_list[1]))

def total(value_str):
  # Expects a comma-delimited list of numbers, e.g. "1500, 20000, 6700".
  int_list = [int(x.strip()) for x in value_str.split(",")]
  return str(sum(int_list))

tools = [
  Tool(
    name="GetInvoice",
    func=get_invoice,
    description="Get invoice amount of trading company.",
  ),
  Tool(
    name="Diff",
    func=diff,
    description="Get diffrence.",
  ),
  Tool(
    name="Total",
    func=total,
    description="Get total of the list of numbers.",
  ),
]

##########
# define agent
##########

EXAMPLES = [
  """Question: How much is the difference between the invoice of company A and company B ?
Thought: I need to get invoice amount of company A.
Action: GetInvoice["A"]
Observation: 2000
Thought: I need to get invoice amount of company B.
Action: GetInvoice["B"]
Observation: 1500
Thought: I need to get difference of obtained amount between company A and company B.
Action: Diff[2000, 1500]
Observation: 500
Thought: So the answer is 500.
Action: Finish[500]""",
  """Question: How much is the total invoice amount of company B, C, and D ?
Thought: I need to get invoice amount of company B.
Action: GetInvoice["B"]
Observation: 1500
Thought: I need to get invoice amount of company C.
Action: GetInvoice["C"]
Observation: 20000
Thought: I need to get invoice amount of company D.
Action: GetInvoice["D"]
Observation: 6700
Thought: I need to get total amount of obtained amount B, C, and D.
Action: Total[1500, 20000, 6700]
Observation: 28200
Thought: So the answer is 28200.
Action: Finish[28200]""",
  """Question: How much is the difference between company C and the total invoice amount of company A, D ?
Thought: I need to get invoice amount of company C.
Action: GetInvoice["C"]
Observation: 20000
Thought: I need to get invoice amount of company A.
Action: GetInvoice["A"]
Observation: 2000
Thought: I need to get invoice amount of company D.
Action: GetInvoice["D"]
Observation: 6700
Thought: I need to get total amount of obtained amount A and D.
Action: Total[2000, 6700]
Observation: 8700
Thought: I need to get difference of obtained amount C and the total of A, D.
Action: Diff[20000, 8700]
Observation: 11300
Thought: So the answer is 11300.
Action: Finish[11300]""",
]

SUFFIX = """\nIn each action, you cannot use the nested functions, such as Diff[GetInvoice["A"], GetInvoice["B"]].
Instead, you should parse into 3 actions - GetInvoice["A"], GetInvoice["B"], and Diff(2000, 1500).

Let's start.

Question: {input}
{agent_scratchpad}"""

TEST_PROMPT = PromptTemplate.from_examples(
  examples=EXAMPLES,
  suffix=SUFFIX,
  input_variables=["input", "agent_scratchpad"],
)

class ReActTestAgent(Agent):
  @classmethod
  def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
    return ReActOutputParser()

  @classmethod
  def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
    return TEST_PROMPT

  @classmethod
  def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
    if len(tools) != 3:
      raise ValueError("The number of tools is invalid.")
    tool_names = {tool.name for tool in tools}
    if tool_names != {"GetInvoice", "Diff", "Total"}:
      raise ValueError("The name of tools is invalid.")

  @property
  def _agent_type(self) -> str:
    return "react-test"

  @property
  def finish_tool_name(self) -> str:
    return "Finish"

  @property
  def observation_prefix(self) -> str:
    return "Observation: "

  @property
  def llm_prefix(self) -> str:
    return "Thought: "

##########
# run agent
##########

llm = OpenAI(
  model_name="text-davinci-003",
  temperature=0.0,
)
agent = ReActTestAgent.from_llm_and_tools(
  llm,
  tools,
)
agent_executor = AgentExecutor.from_agent_and_tools(
  agent=agent,
  tools=tools,
  verbose=True,
)

question = "How much is the difference between the total of company C, F and the total of company A, E ?"
agent_executor.run(question)

Note : As mentioned above, text-davinci-003 is now deprecated in OpenAI, so please use another model, such as gpt-3.5-turbo-instruct.
You can also use a function-calling-enabled model (gpt-4-0613 or gpt-3.5-turbo-0613) to perform reasoning more reliably with function calling.
See here for details.

Note : In English, you can also use the built-in langchain.agents.react.agent.create_react_agent() method, in which the chain is built with LCEL (LangChain Expression Language); a sketch follows below. See here for the source code.
You can also use LangGraph for more complex and arbitrary workflows (such as ReWOO, LLM Compiler, etc.), including cyclic flows.
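A minimal sketch of that LCEL-based API (LangChain 0.1 or later; hub.pull requires the langchainhub package, and llm, tools, and question are reused from the example above):

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

prompt = hub.pull("hwchase17/react")  # a community ReAct prompt template
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": question})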

Note : For a ReAct implementation in Semantic Kernel, you can use the built-in Stepwise Planner.

When you run this code, the verbose output of each thought, action, and observation is shown in the logs.

OpenAI doesn't know the invoice amounts of company E and company F (because I don't provide these values in the few-shot examples), but the chain still reaches the correct answer, because the external functions (in this case, get_invoice(), diff(), and total()) are properly invoked in the chain.

This is a trivial example, but ReAct-style reasoning can be used in many practical cases. (For instance, a QA agent can gather information from multiple sources – such as an FAQ database, a product documentation database, customer history, etc.)

You can also find several other research works that bring external knowledge into LLM reasoning.

Added note by author :
It's known that multi-agent setups (in which each agent has a specific role and skills, and the agents work with each other) can perform better than a single agent, and multi-agent frameworks are now often used for advanced reasoning and autonomous agents. (See this paper.)
The paper "ReWOO – Reasoning WithOut Observation" (Xu et al., 2023) also proposes dividing the work between a Planner and a Worker, decoupling reasoning from observations, in order to save token consumption in the prompt. (A long prompt may not fit within the token limit and can also degrade response quality.)
A lot of research is advancing today, and ReAct is the foundation (base idea) for these works.

 

[Change log]

Sep 2023 :
– Updated LangChain version 0.0.100 to 0.0.300 (latest)

 

Replies

  1. Hi,

    This article provides the following example of Thought 3 from Observation 2:

    Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
    Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
    United Kingdom.
    Action 3: Search[Admiral William J. Crowe]

    It seems incorrect because Observation 2 only talks about Admiral George Dewey and it doesn’t mention anything about Admiral William J. Crowe whatsoever.

    When I tried, I got the following Thought 3 from Observation 2:

    Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
    Thought 3: I need to search Admiral George Dewey and find the ambassador to the
    United Kingdom.
    Action 3: Search[Admiral George Dewey]

    And that seems correct, even though it leads to the wrong conclusion.

    Is there anything I’m missing here?


    • Thank you for your feedback.
      Yes, it seems that the result of "Observation 2" is not referred to in the next thought.
      I think that action 3 comes from GPT's existing knowledge: among the collaborators in observation 1 (Augustus A. White, Joycelyn Elders, …), only William J. Crowe is an American admiral.
      I have just tried asking GPT-3.5 (text-davinci-003) the following question, and got the following result. (GPT-3.5 knows that William J. Crowe was an American admiral.)

      ==========
      prompt : Provide knowledge about William J. Crowe
      completion : William J. Crowe Jr. (January 2, 1925 – October 18, 2007) was an American admiral who served as the Chairman of the Joint Chiefs of Staff from 1985 to 1989, and as the Ambassador to the United Kingdom from 1994 to 1997. He was the first Navy officer to serve as Chairman of the Joint Chiefs of Staff.
      ==========

      And what you did is right (not wrong).
      Please put a few (3 or 4) line breaks (\n) at the start of your prompt, and try again. You will then see that action 3 is "Search[William J. Crowe]". (I tried this in text-davinci-003 with temperature 0.0.)
      I don't know the exact reason, but the output of a generative model is based on token probabilities, and it can be influenced by even a small change in the prompt, as you know.


  2. Your article was a treat –

    I'm currently obsessed with CoT and used a 'Question, Observation, Thought, Action, Finish' chain in my JS rendition of it here:

    https://github.com/rocket-pig/autobabytermux

    It uses ChatGPT and Node.js. The LLM agent is given a sandboxed REPL with a fake filesystem. I've only been working on it for about a week but it's been so much fun. I wanted a version that didn't have massive deps/reqs and there wasn't one. Anyway, thank you for sharing and reading.


    • Super impressed by this work. Building an Agent UI web app and would love to chat with you about possibly working together on integration. Back end is node/express so could be a great fit. Email me at jblickjr@gmail.com if you wanna discuss more.


  3. I still can't figure out why the model searches for William J. Crowe at Thought 3, since there is no information about him in observation 2. Please help~

    ———-
    Action 2: Search[U.S. Navy admiral]
    Observation 2: Admiral of the Navy was the highest-possible rank in the United States Navy, prior to the creation of fleet admiral in 1944. The rank is considered to be at least equivalent to that of a five-star admiral, with Admiral George Dewey being the only officer to be appointed to the rank.
    Thought 3: I need to search Admiral William J. Crowe and find the ambassador to the
    United Kingdom.


    • Yes, observation 2 (the result of action 2) is not referred to in this example.
      As I mentioned above (see Aree Abdel-san's question and my answer in the comments), I think that action 3 (Search[Admiral William J. Crowe]) is caused by GPT's existing knowledge: among the collaborators in observation 1 (Augustus A. White, Joycelyn Elders, …), only William J. Crowe is an American admiral.
      (In short, GPT planned to search both "U.S. Navy admiral" and "Admiral William J. Crowe" from observation 1 alone.)


  4. Thank you so much! I'm amazed at how few other places illustrate how this principle is actually applied in practice.


  5. I noticed that the web interface to ChatGPT at OpenAI can answer almost all of the prompt questions directly, without using any strategies like ReAct. Do you think maybe they have implemented something like it under the hood, or is ReAct just not necessary for a lot of these sorts of questions?


  6. This is a great article. Thanks.

    However, when I try to run the script, I get stuck at the agent creation step:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    Cell In[11], line 4
          1 ##########
          2 # run agent
          3 ##########
    ----> 4 agent = ReActTestAgent.from_llm_and_tools(
          5     llm,
          6     tools
          7 )

    File …/lib/python3.9/site-packages/langchain/agents/agent.py:559, in Agent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, **kwargs)
        557 tool_names = [tool.name for tool in tools]
        558 _output_parser = output_parser or cls._get_default_output_parser()
    --> 559 return cls(
        560     llm_chain=llm_chain,
        561     allowed_tools=tool_names,
        562     output_parser=_output_parser,
        563     **kwargs,
        564 )

    TypeError: Can't instantiate abstract class ReActTestAgent with abstract method _get_default_output_parser

    Do we have to specifically define the output parser? Any thoughts on this?


    • Thank you for your feedback.
      I'm sorry, but this source code runs on langchain library version 0.0.100 (an old version), and you might have run this program on a recent version.
      In the current langchain version (today, 0.0.288), instead of implementing the _extract_tool_and_input() method, you should move that implementation into a parser class and instantiate the parser class in the _get_default_output_parser() method.
      See ReActDocstoreAgent in [langchain package dir]/agents/react/base.py (which is the implementation of the react-docstore agent above) and ReActOutputParser in [langchain package dir]/agents/react/output_parser.py (which is the implementation of its parser class).

