Hands-On: Deploying DeepSeek Janus-Pro on a Cloud GPU Server to Generate Images


This article walks through how to deploy DeepSeek Janus-Pro on a Tencent Cloud HAI GPU server for text-to-image generation.

Steps

Choose a server with a GPU

Head over to deepseek2025 to try out a server with a GPU.

Download Janus

git clone https://github.com/deepseek-ai/Janus.git

Install dependencies

cd Janus
pip install -e .
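
A quick way to confirm the editable install worked is to import the package. This is only a sanity-check sketch; it assumes the janus.models layout used in the Janus repository.

# Hypothetical sanity check, not part of the repo.
from janus.models import MultiModalityCausalLM, VLChatProcessor

# If these imports succeed, pip install -e . picked up the Janus package.
print("janus imported:", MultiModalityCausalLM.__name__, VLChatProcessor.__name__)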

Install gradio

pip install gradio
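
gradio is what the demo script uses to serve the web UI, and launch(share=True) is what later produces the public *.gradio.live URL. A minimal throwaway app (hypothetical, just to verify the install):

import gradio as gr

# If this starts and prints a local and a shared URL, gradio works.
# The real UI comes from demo/app_januspro.py, not from this snippet.
def echo(text: str) -> str:
    return text

gr.Interface(fn=echo, inputs="text", outputs="text").launch(share=True)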

Install torch

pip uninstall torch torchvision torchaudio -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
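
Before launching the demo it is worth checking that the CUDA 12.1 build of PyTorch was installed and can actually see the GPU, for example:

import torch

# Expect a +cu121 version string, True, and the name of the HAI instance's GPU.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))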

Run the demo

python demo/app_januspro.py --device cuda

Sample output:

Python version is above 3.10, patching the collections module.
/root/miniforge3/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py:594: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
pytorch_model-00001-of-00002.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 9.99G/9.99G [09:34<00:00, 11.9MB/s]
pytorch_model-00002-of-00002.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 4.85G/4.85G [06:46<00:00, 11.9MB/s]
Downloading shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [16:21<00:00, 490.70s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:04<00:00,  2.47s/it]
preprocessor_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 346/346 [00:00<00:00, 3.40MB/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 285/285 [00:00<00:00, 2.94MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.72M/4.72M [00:00<00:00, 18.1MB/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 344/344 [00:00<00:00, 2.93MB/s]
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
processor_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 210/210 [00:00<00:00, 2.00MB/s]
Some kwargs in processor config are unused and will not have any effect: ignore_id, add_special_token, num_image_tokens, mask_prompt, sft_format, image_tag. 
* Running on local URL:  http://127.0.0.1:7860
* Running on public URL: https://xxxxx.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)

You can now open this public URL in a browser.
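
Besides the browser, the shared demo can also be driven from code with gradio_client. This is only a sketch: the URL is the placeholder from the log above, and the endpoint names depend on the demo, so view_api() is used to list them before calling anything.

from gradio_client import Client

# Point the client at the public share URL printed by your own run.
client = Client("https://xxxxx.gradio.live")

# Print the endpoints exposed by demo/app_januspro.py; use client.predict()
# against one of them once you know its name and arguments.
client.view_api()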

Usage example

(Screenshot: generating an image in the Gradio demo)

It takes roughly 120 seconds for an image to be generated; app.py uses the deepseek-ai/Janus-1.3B model.
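
For reference, the demo loads the model roughly as in the sketch below. This is an assumption-based outline following the pattern in the Janus repository's README (VLChatProcessor plus AutoModelForCausalLM with trust_remote_code), not the demo script verbatim; swap the model id between deepseek-ai/Janus-1.3B and deepseek-ai/Janus-Pro-7B as needed.

import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor

# Hypothetical loading sketch; app_januspro.py wires this into the Gradio UI.
model_path = "deepseek-ai/Janus-Pro-7B"
processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = processor.tokenizer

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()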

Summary

Deploying this yourself really involves quite a bit of hassle. I first tried running it on a Mac and hit a CUDA_HOME problem; then I looked for a CPU build and ran into the no-GPU problem; in the end it only ran successfully on a server with a GPU. You run into all kinds of dependency issues, GPU and other configuration issues, plus network access issues. So after all the tinkering, the conclusion is: unless you have special requirements, just use a cloud service's API.

doc

  • deepseek2025
  • DeepSeek multimodal model Janus-Pro-7B: local deployment tutorial with support for image recognition and image generation
