
AI Large Model Journey: Building a Local Knowledge Base with LangChain, GLM-4, and FAISS

2024/10/25 9:25:39  Source: https://blog.csdn.net/weixin_43866043/article/details/142393439

The required dependencies are as follows (a conda environment listing):

_libgcc_mutex=0.1=main
_openmp_mutex=5.1=1_gnu
accelerate=0.34.2=pypi_0
aiofiles=23.2.1=pypi_0
aiohappyeyeballs=2.4.0=pypi_0
aiohttp=3.10.5=pypi_0
aiosignal=1.3.1=pypi_0
annotated-types=0.7.0=pypi_0
anyio=4.4.0=pypi_0
attrs=24.2.0=pypi_0
bitsandbytes=0.43.3=pypi_0
blas=1.0=mkl
blinker=1.8.2=pypi_0
bzip2=1.0.8=h5eee18b_6
ca-certificates=2024.7.2=h06a4308_0
certifi=2024.8.30=pypi_0
charset-normalizer=3.3.2=pypi_0
click=8.1.7=pypi_0
contourpy=1.3.0=pypi_0
cuda-cudart=12.4.127=h99ab3db_0
cuda-cudart_linux-64=12.4.127=hd681fbe_0
cuda-nvrtc=12.4.127=h99ab3db_1
cuda-version=12.4=hbda6634_3
cycler=0.12.1=pypi_0
dataclasses-json=0.6.7=pypi_0
distro=1.9.0=pypi_0
einops=0.8.0=pypi_0
expat=2.6.3=h6a678d5_0
faiss-gpu=1.8.0=py3.12_h4c7d538_0_cuda12.1.1
fastapi=0.112.4=pypi_0
ffmpy=0.4.0=pypi_0
filelock=3.15.4=pypi_0
flask=3.0.3=pypi_0
fonttools=4.53.1=pypi_0
frozenlist=1.4.1=pypi_0
fsspec=2024.9.0=pypi_0
gradio=4.43.0=pypi_0
gradio-client=1.3.0=pypi_0
greenlet=3.0.3=pypi_0
h11=0.14.0=pypi_0
httpcore=1.0.5=pypi_0
httpx=0.27.2=pypi_0
huggingface-hub=0.24.6=pypi_0
idna=3.8=pypi_0
importlib-resources=6.4.4=pypi_0
intel-openmp=2023.1.0=hdb19cb5_46306
itsdangerous=2.2.0=pypi_0
jinja2=3.1.4=pypi_0
jiter=0.5.0=pypi_0
joblib=1.4.2=pypi_0
jsonpatch=1.33=pypi_0
jsonpointer=3.0.0=pypi_0
kiwisolver=1.4.7=pypi_0
langchain=0.3.0=pypi_0
langchain-community=0.3.0=pypi_0
langchain-core=0.3.0=pypi_0
langchain-huggingface=0.1.0=pypi_0
langchain-text-splitters=0.3.0=pypi_0
langsmith=0.1.120=pypi_0
ld_impl_linux-64=2.38=h1181459_1
libcublas=12.4.5.8=h99ab3db_1
libfaiss=1.8.0=h046e95b_0_cuda12.1.1
libffi=3.4.4=h6a678d5_1
libgcc-ng=11.2.0=h1234567_1
libgomp=11.2.0=h1234567_1
libstdcxx-ng=11.2.0=h1234567_1
libuuid=1.41.5=h5eee18b_0
markdown-it-py=3.0.0=pypi_0
markupsafe=2.1.5=pypi_0
marshmallow=3.22.0=pypi_0
matplotlib=3.9.2=pypi_0
mdurl=0.1.2=pypi_0
mkl=2023.1.0=h213fc3f_46344
mkl-service=2.4.0=py312h5eee18b_1
mkl_fft=1.3.10=py312h5eee18b_0
mkl_random=1.2.7=py312h526ad5a_0
mpmath=1.3.0=pypi_0
multidict=6.0.5=pypi_0
mypy-extensions=1.0.0=pypi_0
ncurses=6.4=h6a678d5_0
networkx=3.3=pypi_0
numpy=1.26.4=py312hc5e2394_0
numpy-base=1.26.4=py312h0da6c21_0
nvidia-cublas-cu12=12.1.3.1=pypi_0
nvidia-cuda-cupti-cu12=12.1.105=pypi_0
nvidia-cuda-nvrtc-cu12=12.1.105=pypi_0
nvidia-cuda-runtime-cu12=12.1.105=pypi_0
nvidia-cudnn-cu12=9.1.0.70=pypi_0
nvidia-cufft-cu12=11.0.2.54=pypi_0
nvidia-curand-cu12=10.3.2.106=pypi_0
nvidia-cusolver-cu12=11.4.5.107=pypi_0
nvidia-cusparse-cu12=12.1.0.106=pypi_0
nvidia-nccl-cu12=2.20.5=pypi_0
nvidia-nvjitlink-cu12=12.6.68=pypi_0
nvidia-nvtx-cu12=12.1.105=pypi_0
openai=1.44.0=pypi_0
openssl=3.0.15=h5eee18b_0
orjson=3.10.7=pypi_0
outcome=1.3.0.post0=pypi_0
packaging=24.1=py312h06a4308_0
pandas=2.2.2=pypi_0
pillow=10.4.0=pypi_0
pip=24.2=py312h06a4308_0
psutil=6.0.0=pypi_0
pydantic=2.9.0=pypi_0
pydantic-core=2.23.2=pypi_0
pydantic-settings=2.5.2=pypi_0
pydub=0.25.1=pypi_0
pygments=2.18.0=pypi_0
pylzma=0.5.0=pypi_0
pyparsing=3.1.4=pypi_0
pysocks=1.7.1=pypi_0
python=3.12.4=h5148396_1
python-dateutil=2.9.0.post0=pypi_0
python-dotenv=1.0.1=pypi_0
python-multipart=0.0.9=pypi_0
pytz=2024.1=pypi_0
pyyaml=6.0.2=pypi_0
readline=8.2=h5eee18b_0
regex=2024.7.24=pypi_0
requests=2.32.3=pypi_0
rich=13.8.0=pypi_0
ruff=0.6.4=pypi_0
safetensors=0.4.5=pypi_0
scikit-learn=1.5.1=pypi_0
scipy=1.14.1=pypi_0
selenium=4.24.0=pypi_0
semantic-version=2.10.0=pypi_0
sentence-transformers=3.0.1=pypi_0
sentencepiece=0.2.0=pypi_0
setuptools=72.1.0=py312h06a4308_0
shellingham=1.5.4=pypi_0
six=1.16.0=pypi_0
sniffio=1.3.1=pypi_0
sortedcontainers=2.4.0=pypi_0
sqlalchemy=2.0.34=pypi_0
sqlite=3.45.3=h5eee18b_0
sse-starlette=2.1.3=pypi_0
starlette=0.38.4=pypi_0
sympy=1.13.2=pypi_0
tbb=2021.8.0=hdb19cb5_0
tenacity=8.5.0=pypi_0
threadpoolctl=3.5.0=pypi_0
tiktoken=0.7.0=pypi_0
timm=1.0.9=pypi_0
tk=8.6.14=h39e8969_0
tokenizers=0.19.1=pypi_0
tomlkit=0.12.0=pypi_0
torch=2.4.1=pypi_0
torchvision=0.19.1=pypi_0
tqdm=4.66.5=pypi_0
transformers=4.44.0=pypi_0
trio=0.26.2=pypi_0
trio-websocket=0.11.1=pypi_0
triton=3.0.0=pypi_0
typer=0.12.5=pypi_0
typing-extensions=4.12.2=pypi_0
typing-inspect=0.9.0=pypi_0
tzdata=2024.1=pypi_0
undetected-chromedriver=3.5.5=pypi_0
urllib3=2.2.2=pypi_0
uvicorn=0.30.6=pypi_0
websocket-client=1.8.0=pypi_0
websockets=12.0=pypi_0
werkzeug=3.0.4=pypi_0
wheel=0.43.0=py312h06a4308_0
wsproto=1.2.0=pypi_0
xz=5.4.6=h5eee18b_1
yarl=1.11.0=pypi_0
zlib=1.2.13=h5eee18b_1
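
Before running the code below, it can be worth a quick sanity check that the key libraries import cleanly and that a GPU is visible from Python. A minimal sketch (nothing in it is specific to this project; drop it once the environment is known-good):

import torch
import langchain
import transformers

# Confirm the versions match the listing above and that CUDA is usable
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("langchain:", langchain.__version__, "| transformers:", transformers.__version__)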

The Python code is as follows. Note that with the LangChain 0.3 package split, the vector store, embeddings, document loader, and text splitter are imported from langchain_community and langchain_text_splitters rather than the old top-level langchain paths:

from flask import Flask, request, jsonify
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from langchain.chains import RetrievalQA
from langchain_core.caches import InMemoryCache
from langchain.globals import set_llm_cache

# Set up the Flask app
app = Flask(__name__)

# Enable LangChain's in-memory LLM cache
set_llm_cache(InMemoryCache())

# Load the GLM-4 model and tokenizer from local paths
model_name = "/home/ck/llm/ZhipuAI/glm-4-9b-chat/"
embding_name = "/home/ck/llm/iic/nlp_bert_document-segmentation_chinese-base"
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype=torch.float16
).to(device)

# Custom generation function
def custom_generate(text, max_new_tokens=1000, do_sample=False, temperature=0.7, top_p=0.95, top_k=50):
    inputs = tokenizer(text, return_tensors="pt").to(device)
    # max_new_tokens bounds only the generated tokens; the original max_length=1000
    # would fail as soon as the stuffed RAG prompt alone exceeded 1000 tokens
    gen_kwargs = {"max_new_tokens": max_new_tokens, "do_sample": do_sample}
    if do_sample:
        # Sampling parameters are only meaningful when do_sample=True
        gen_kwargs.update(temperature=temperature, top_p=top_p, top_k=top_k)
    output = model.generate(**inputs, **gen_kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Custom pipeline class so HuggingFacePipeline can drive our generator
class CustomPipeline:
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer
        self.task = "text-generation"  # must be a task HuggingFacePipeline accepts

    def __call__(self, text):
        if isinstance(text, list):  # newer LangChain versions pass prompts in batches
            text = text[0]
        generated_text = custom_generate(text)
        return [{"generated_text": generated_text}]

# Hand the custom pipeline to LangChain
generator = CustomPipeline(model=model, tokenizer=tokenizer)
llm = HuggingFacePipeline(pipeline=generator)

# Step 1: build the FAISS knowledge base
# A plain text file serves as the knowledge source
loader = TextLoader('/home/ck/PycharmProjects/output.txt')  # replace with your own knowledge-base file
documents = loader.load()

# Split long documents into smaller, overlapping chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(documents)

# Use a Hugging Face model to generate the embeddings
embeddings = HuggingFaceEmbeddings(model_name=embding_name)

# Build the FAISS index
faiss_index = FAISS.from_documents(texts, embeddings)

# Step 2: plug the FAISS index into LangChain's RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=faiss_index.as_retriever(),  # use the FAISS index as the retriever
    chain_type="stuff",  # "stuff" concatenates the retrieved chunks into one prompt
)

# Step 3: Flask route that queries the knowledge base and generates an answer
@app.route("/query", methods=["POST"])
def query_knowledge_base():
    data = request.get_json()
    query = data.get("query")
    if not query:
        return jsonify({"error": "No query provided"}), 400
    # Query the knowledge base and generate an answer
    try:
        answer = qa_chain.invoke({"query": query})["result"]  # invoke() replaces the deprecated run()
        return jsonify({"query": query, "answer": answer})
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
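
Once the service is running, it can be exercised with a small client. A minimal sketch using the requests library (already in the dependency list); the URL and JSON shape mirror the /query route above, and the question string is a placeholder:

import requests

# Send a question to the local knowledge-base service
resp = requests.post(
    "http://127.0.0.1:5000/query",
    json={"query": "your question here"},  # placeholder; replace with a real question
    timeout=300,  # generation with a 9B model can take a while
)
print(resp.json())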

Test:
[Screenshot of a sample request to /query and the generated answer; image not reproduced here]
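
One practical note: the script above re-embeds output.txt and rebuilds the index on every start. For larger knowledge bases you can persist the index with LangChain's save_local/load_local. A sketch, where the "faiss_index" directory name is illustrative; recent LangChain versions require explicitly opting in to pickle deserialization when loading:

# After building the index once, write it to disk
faiss_index.save_local("faiss_index")

# On later starts, load it instead of re-embedding everything
faiss_index = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # required opt-in for pickle loading
)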

Problems you may run into along the way:

Problem 1: pip cannot find a faiss-gpu wheel:

(ckglm4) ck@insight:~/PycharmProjects/ckpractice$ pip install faiss-gpu
ERROR: Could not find a version that satisfies the requirement faiss-gpu (from versions: none)
ERROR: No matching distribution found for faiss-gpu

Solution: install it with conda instead:

conda install -c pytorch faiss-gpu
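
After the conda install succeeds, you can confirm that the GPU build is actually usable; a quick check (get_num_gpus() should report at least 1 on a machine like the one above):

# Verify the conda-installed faiss-gpu build can see the GPUs
import faiss
print("faiss version:", faiss.__version__)
print("GPUs visible to faiss:", faiss.get_num_gpus())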
