
LLaMA-Factory Installation Tutorial (fixing the "cannot allocate memory in static TLS block" error)

2025/2/7 18:56:34 Source: https://blog.csdn.net/qq_36344652/article/details/145473783

Step 1: Pull the base image

# Configure Docker registry mirrors
vi /etc/docker/daemon.json

# Contents of daemon.json:

{
  "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

systemctl restart docker.service
docker pull swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04
mkdir /root/llama_factory_model
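A malformed daemon.json will prevent the Docker daemon from restarting, so it is worth sanity-checking the file before running systemctl restart. A minimal sketch (the registry URLs are the ones used in this tutorial):

```python
import json

# daemon.json contents from the step above; json.loads raises a
# ValueError if the file is malformed, before Docker ever sees it.
daemon_json = """
{
  "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
"""

cfg = json.loads(daemon_json)
print(sorted(cfg))  # ['insecure-registries', 'registry-mirrors']
```

The same check can be run against the real file with `python3 -m json.tool /etc/docker/daemon.json`.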

Step 2: Create the base container

docker create -it -u root --ipc=host --net=host --name=llama-factory \
  -e LANG="C.UTF-8" \
  --device=/dev/davinci0 \
  --device=/dev/davinci1 \
  --device=/dev/davinci2 \
  --device=/dev/davinci3 \
  --device=/dev/davinci4 \
  --device=/dev/davinci5 \
  --device=/dev/davinci6 \
  --device=/dev/davinci7 \
  --device=/dev/davinci_manager \
  --device=/dev/devmm_svm \
  --device=/dev/hisi_hdc \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
  -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
  -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi \
  -v /mnt/:/mnt/ \
  -v /root/llama_factory_model:/root/llama_factory_model \
  -v /var/log/npu:/usr/slog \
  swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04 \
  /bin/bash
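The long command above is easy to mistype, the eight `--device` flags in particular. As an illustrative sketch (not part of the original tutorial), the repetitive device flags can be generated programmatically and printed for copy-paste:

```python
# Sketch: build the `docker create` device flags for the eight Ascend
# NPUs programmatically; device paths follow the command above.
npu_flags = [f"--device=/dev/davinci{i}" for i in range(8)]
mgmt_flags = [
    "--device=/dev/davinci_manager",
    "--device=/dev/devmm_svm",
    "--device=/dev/hisi_hdc",
]

# Print with backslash continuations, ready to paste into the command.
print(" \\\n  ".join(npu_flags + mgmt_flags))
```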

Step 3: Install LLaMA-Factory

docker start llama-factory
docker exec -it llama-factory bash

# Install LLaMA-Factory
wget https://codeload.github.com/hiyouga/LLaMA-Factory/zip/refs/heads/main -O LLaMA-Factory.zip
unzip LLaMA-Factory.zip
mv LLaMA-Factory-main LLaMA-Factory
cd LLaMA-Factory
pip install -e ".[torch-npu,metrics]"
apt install libsndfile1

# Activate the Ascend environment variables (recommended: add this line to ~/.bashrc)
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify the LLaMA-Factory × Ascend installation with:
llamafactory-cli env

# Run the LLaMA-Factory web UI (served on port 7860 of this machine)
nohup llamafactory-cli webui > llama_factory_output.log 2>&1 &

# Follow the LLaMA-Factory log
tail -f /home/HwHiAiUser/LLaMA-Factory/llama_factory_output.log

Fixing the error

Problem

RuntimeError: Failed to import transformers.generation.utils because of the following error (look up to see its traceback):

/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block

Solution

The error occurs because scikit-learn's bundled libgomp is dlopen-ed after the process has started, when the loader's static TLS pool may already be exhausted. Preloading the library with LD_PRELOAD makes it load at process startup, while static TLS is still available:

vim ~/.bashrc

# Append at the end of the file:
export LD_PRELOAD=/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0

# Then reload the shell configuration:
source ~/.bashrc
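The path in the error message (and in LD_PRELOAD) contains "../.." segments; a quick sketch with os.path.normpath shows the actual file location being preloaded, which is useful when verifying that the library exists on your own system:

```python
import os

# The libgomp path as it appears in the error message; normpath
# collapses the "../.." segments to reveal the real location.
lib = ("/usr/local/python3.10.13/lib/python3.10/site-packages/"
       "sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0")

resolved = os.path.normpath(lib)
print(resolved)
# -> .../site-packages/scikit_learn.libs/libgomp-d22c30c5.so.1.0.0
```

Note that the `libgomp-d22c30c5` hash suffix may differ across scikit-learn builds; check the filename reported by your own error message rather than copying this one verbatim.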
