
【LangChain】Chapter3 - Chains

Preface

This section introduces the most important module in LangChain: the Chain, which is also where the name LangChain comes from. A Chain typically combines an LLM with a prompt, and we can string a number of Chains together to process text or data in sequence. (Video length: 13:07)

Note: All of the example code files are available on the course website (completely free of charge), with a pre-configured Jupyter Notebook environment and a pre-configured OPENAI_API_KEY, so there is no need to rent anything yourself; it is recommended to run the code directly on the course website. Course website

Also, the LLM's results are not always identical, so when you run the code you may get answers that differ slightly from those in the video.


Main Content

We will introduce three types of chains:

  • LLMChain
  • Sequential Chains
    • SimpleSequentialChain
    • SequentialChain
  • Router Chain

Setup

1. Configure the environment variables.

import warnings
warnings.filterwarnings('ignore')

import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read local .env file

# account for deprecation of LLM model
import datetime

# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"
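If you run the notebook locally rather than on the course website, you need to supply your own key. A minimal sketch of the .env file that load_dotenv() picks up (the key name is the standard one read by the OpenAI client; the value is a placeholder to replace with your own key):

OPENAI_API_KEY=<your-openai-api-key>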

2. Import the data file.

import pandas as pd
df = pd.read_csv('Data.csv')

3. Display the first five rows of the imported DataFrame. We can see that the table has two columns, Product on the left and Review on the right, and that each row is a separate record.

# head() returns the first five rows of the DataFrame and, by default, does not modify the original DataFrame.
df.head()  # display the first five rows of the DataFrame

[Figure: the first five rows of the DataFrame]

LLMChain

LLMChain is the most basic Chain.

1. Import the three libraries we need: ChatOpenAI to load the LLM, ChatPromptTemplate to create the prompt template, and LLMChain to create the LLMChain.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

2. Initialize the llm with temperature=0.9; you can change this value yourself.

llm = ChatOpenAI(temperature=0.9, model=llm_model)

3. Set up the prompt template, which asks for the best name for a company that makes {product}.

prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)

4. Initialize the chain, which is the combination of the llm and the prompt.

chain = LLMChain(llm=llm, prompt=prompt)

5. Run the chain; the result is shown in the figure below.

product = "Queen Size Sheet Set"
chain.run(product)

[Figure: output of chain.run(product)]

Sequential Chains

Sequential Chains are among the most commonly used chains; their job is to run a series of chains one after another. Here we will cover two kinds of sequential chains:

  • SimpleSequentialChain: single input, single output
  • SequentialChain: multiple inputs, multiple outputs

SimpleSequentialChain

SimpleSequentialChain is suited to cases with a single input and a single output.
[Figure: diagram of a SimpleSequentialChain]

1. Import SimpleSequentialChain.

from langchain.chains import SimpleSequentialChain

2. Initialize the llm and define prompt template 1 and Chain 1, which names a company based on {product}. Input: product name; output: company name.

llm = ChatOpenAI(temperature=0.9, model=llm_model)
# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

3. Define prompt template 2 and chain 2, which writes a 20-word description of the company {company_name}. Input: company name; output: a 20-word company description.

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)

4. Combine the two chains into a single sequential chain, overall_simple_chain.

overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)

5. Run the sequential chain (product is the Queen Size Sheet Set example from the LLMChain section); the result is shown in the figure below. You can see that SimpleSequentialChain works well in the single-input, single-output case.

overall_simple_chain.run(product)

[Figure: verbose output of overall_simple_chain.run(product)]

SequentialChain

SequentialChain targets the case where we have multiple inputs and multiple outputs at the same time.

[Figure: diagram of a SequentialChain]

1. Import SequentialChain.

from langchain.chains import SequentialChain

2. Initialize the llm and define chain 1, which translates the input review into English. Input: Review; output: English_Review.

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt, output_key="English_Review")

3. Define chain 2, which summarizes the translated English review in one sentence. Input: English_Review; output: summary.

second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt, output_key="summary")

4. Define chain 3, which determines what language the review is written in. Input: Review; output: language.

# prompt template 3: detect the language of the review
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt, output_key="language")

5. Define chain 4, which writes a follow-up response based on the summary and the language. Inputs: summary and language; output: followup_message.

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt, output_key="followup_message")

6. Combine the four chains into one complete sequential chain, overall_chain. Input: Review; outputs: English_Review, summary, and followup_message.

# overall_chain: input= Review
# and output= English_Review, summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True
)

7. Run the sequential chain; the result is shown in the figure below.

review = df.Review[5]
overall_chain(review)

[Figure: output of overall_chain(review), including English_Review, summary, and followup_message]

Note: As you can see, this sequential chain handles multiple inputs and multiple outputs as it runs. To make sure the individual chains connect up correctly, we need to check that the variable names are spelled correctly and that each output_key is set correctly.
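As a quick sanity check (a small sketch that is not part of the original lesson), you can inspect the dictionary the chain returns to confirm that the variable names line up: by default a SequentialChain call should return its inputs together with every declared output variable.

# Inspect the keys of the returned dictionary; they should cover the input
# plus the three declared output_variables.
result = overall_chain(review)
print(list(result.keys()))
# expected something like: ['Review', 'English_Review', 'summary', 'followup_message']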

Router Chain

A Router Chain is a more complex kind of chain. Depending on the input, it routes the input to one of several subchains with different functions and has that subchain produce the output: it first decides which subchain should be used, and then passes the input to that subchain for execution.

[Figure: diagram of a Router Chain dispatching inputs to different subchains]

1. First, define prompt templates for four different kinds of questions: physics, math, history, and computer science.

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts, 
answer the component parts, and then put them together\
to answer the broader question.

Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.

Here is a question:
{input}"""

computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity. 

Here is a question:
{input}"""

2. Collect the information about the four prompt templates; this information is given to the router chain, which uses it to decide which chain to use.

prompt_infos = [
    {"name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template},
    {"name": "math", "description": "Good for answering math questions", "prompt_template": math_template},
    {"name": "History", "description": "Good for answering history questions", "prompt_template": history_template},
    {"name": "computer science", "description": "Good for answering computer science questions", "prompt_template": computerscience_template}
]

3. Import the libraries we need. RouterOutputParser parses the output into a dictionary so that it can be used to decide where to route the input.

from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate
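To make the role of RouterOutputParser concrete, here is a small illustrative sketch (the hand-written sample reply and the expected shape are assumptions based on the JSON format used in the router template defined further below): the parser takes the router LLM's markdown/JSON reply and turns it into a routing dictionary.

# Illustrative sketch only: feed the parser a hand-written reply in the
# format the router template asks for, and look at the parsed result.
parser = RouterOutputParser()
sample_reply = """```json
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
```"""
print(parser.parse(sample_reply))
# expected shape: {'destination': 'physics', 'next_inputs': {'input': 'What is black body radiation?'}}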

4. Initialize the llm.

llm = ChatOpenAI(temperature=0, model=llm_model)

5. Define the destination chains, destination_chains; these are the target subchains the router can choose from.

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
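To see exactly what the router will be told about its candidate chains, you can print the assembled description string (the comment below simply echoes the name/description pairs from prompt_infos above):

print(destinations_str)
# physics: Good for answering questions about physics
# math: Good for answering math questions
# History: Good for answering history questions
# computer science: Good for answering computer science questions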

6. Define the default chain via default_prompt. The default chain is used when the input cannot be matched to any destination subchain; it is a general-purpose fallback.

default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)

7. Set up the router chain, router_chain.

MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to include the ```json)>>"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

router_chain = LLMRouterChain.from_llm(llm, router_prompt)

8. Combine all of the chains above into chain.

chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True)

9. Run the router chain. The results are shown below. You can see that each input question was routed to the subchain of the matching type and answered there.

# Physics
chain.run("What is black body radiation?")

[Figure: routing decision and answer for the physics question]

# Math
chain.run("what is 2 + 2")

[Figure: routing decision and answer for the math question]
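As one more (hypothetical) test, a question that fits none of the four subjects should be sent to the default chain instead:

# Biology is not among the candidate prompts, so the router is expected to
# return "DEFAULT" and let the default chain answer.
chain.run("Why does every cell in our body contain DNA?")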

Summary

This section introduced three common types of Chain. By using Chains, we can combine prompts with an LLM to build richer functionality and have the LLM answer our questions more precisely.
