Basic Usage of DeepSeek with LangChain
- DeepSeek prompt-and-response via the OpenAI API specification
- DeepSeek with LangChain
- Structured output from DeepSeek with LangChain
DeepSeek Prompt-and-Response via the OpenAI API Specification
First, install the OpenAI SDK with pip:
pip install openai
To start, let's get familiar with using the OpenAI API specification to implement basic Q&A against DeepSeek. The code is as follows:
from openai import OpenAI

client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")

def get_completion(prompt, model="deepseek-chat"):
    # messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": prompt},
        ],
        stream=False
    )
    return response

resp = get_completion("What is 1+1?")
print(resp)
print(resp.choices[0].message.content)
Here we ask what 1 + 1 equals, and the model answers as follows:
To reuse functionality, we often need to design a template for a whole class of problems, so that different concrete questions can be slotted in. To show how templating works, we'll have the LLM rewrite a piece of text into a new style:
# Template development
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse,\
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
style = """American English \
in a calm and respectful tone
"""
prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""response = get_completion(prompt)
print(response)
print('------------')print(response.choices[0].message.content)
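The f-string above can be factored into a small reusable helper, so the same template serves different styles and texts (`build_style_prompt` is a hypothetical name for illustration, not part of any library):

```python
def build_style_prompt(style: str, text: str) -> str:
    """Build the style-translation prompt for a given style and input text."""
    return (
        f"Translate the text that is delimited by triple backticks "
        f"into a style that is {style}. "
        f"text: ```{text}```"
    )

prompt = build_style_prompt(
    "American English in a calm and respectful tone",
    "Arrr, I be fuming that me blender lid flew off!",
)
print(prompt)
```

The helper can then be passed straight to `get_completion(prompt)` as above.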
DeepSeek with LangChain
First, install the packages with pip:
pip install langchain_openai langchain
We now implement similar logic with LangChain, using the LLM to restyle the same piece of text:
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key=api_key,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)

template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""

prompt_template = ChatPromptTemplate.from_template(template_string)
customer_style = """American English \
in a calm and respectful tone
"""
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email
)
# Call the LLM to translate to the style of the customer message
# Reference: chat = ChatOpenAI(temperature=0.0)
customer_response = chat.invoke(customer_messages, temperature=0)
print(customer_response.content)

service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""

service_style_pirate = """\
a polite tone \
that speaks in English Pirate\
"""

service_messages = prompt_template.format_messages(
    style=service_style_pirate,
    text=service_reply
)
service_response = chat.invoke(service_messages, temperature=0)
print(service_response.content)
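Note that `ChatPromptTemplate.from_template` infers its input variables (here `style` and `text`) from the `{…}` placeholders in the template string. That discovery step can be sketched with the standard library's `string.Formatter` (an illustration of the idea, not LangChain's actual implementation):

```python
import string

template_string = (
    "Translate the text that is delimited by triple backticks "
    "into a style that is {style}. "
    "text: {text}"
)

# Collect the placeholder names, much as from_template does for input_variables
fields = sorted({name for _, name, _, _ in string.Formatter().parse(template_string)
                 if name})
print(fields)  # → ['style', 'text']
```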
Structured Output from DeepSeek with LangChain
How do we get the LLM to return information in a specific structure, such as JSON? We'll adapt the example above. First, consider what the returned data structure should look like; from that we design the output schema:
gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased "
                "as a gift for someone else? "
                "Answer True if yes, "
                "False if not or unknown."
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days "
                "did it take for the product "
                "to arrive? If this "
                "information is not found, "
                "output -1."
)
response_schemas = [gift_schema, delivery_days_schema]
We have defined the return structure: gift is True or False, and delivery_days is the number of days to delivery, defaulting to -1.
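These defaults (False for gift, -1 for delivery_days) live only in the natural-language descriptions, so the model is merely asked to apply them. A small post-processing step can enforce them on whatever dict the parser returns (`apply_review_defaults` is a hypothetical helper, not part of LangChain):

```python
def apply_review_defaults(parsed: dict) -> dict:
    """Fill in the documented defaults for any keys the model omitted."""
    return {
        "gift": parsed.get("gift", False),
        "delivery_days": parsed.get("delivery_days", -1),
    }

print(apply_review_defaults({"gift": True}))
# → {'gift': True, 'delivery_days': -1}
```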
from langchain_openai import ChatOpenAI
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key=api_key,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)

gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased "
                "as a gift for someone else? "
                "Answer True if yes, "
                "False if not or unknown."
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days "
                "did it take for the product "
                "to arrive? If this "
                "information is not found, "
                "output -1."
)
response_schemas = [gift_schema, delivery_days_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
print(output_parser)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)

customer_review = """\
This leaf blower is pretty amazing. It has four settings:\
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""review_template = """\
For the following text, extract the following information:gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.Format the output as JSON with the following keys:
gift
delivery_daystext: {text}
"""prompt = ChatPromptTemplate.from_template(template=review_template)
messages = prompt.format_messages(text=customer_review,format_instructions=format_instructions)response = chat.invoke(messages, temperature=0)
output_dict = output_parser.parse(response.content)
print(output_dict)
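StructuredOutputParser works by instructing the model to wrap its JSON in a markdown code fence and then extracting and loading that block. The extraction step can be sketched with the standard library (`parse_json_block` is a hypothetical stand-in, not LangChain's implementation):

```python
import json
import re

def parse_json_block(text: str) -> dict:
    """Extract a fenced json code block if present, then parse it as JSON."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

reply = 'Sure! ```json\n{"gift": true, "delivery_days": 2}\n```'
print(parse_json_block(reply))
# → {'gift': True, 'delivery_days': 2}
```

This is also why the `{format_instructions}` text matters: without it, the model may emit bare JSON or extra prose that the parser's fence-based extraction cannot locate reliably.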