I have been using a Quadro M4000 for LLM workloads, but the card is simply too old: it can only handle small-parameter models such as Phi-3 mini, Qwen 2 2B, GLM 4 8B, and Gemma 2 2B. Even then the speed is far from ideal, and the card frequently throttles from overheating, which makes conversations sluggish.
For home use, the very newest Tesla compute cards are out of reach; instead, it makes sense to look at older platforms with large VRAM — ideally more than 10 GB — so that quantized versions of higher-parameter models can still fit. For inference workloads, FP16 throughput is what matters most; if fine-tuning is also planned, FP32 throughput should be weighed as well.
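To make the ">10 GB" rule of thumb concrete, here is a rough back-of-the-envelope sketch of how much VRAM a quantized model's weights need. The 20% overhead factor for KV cache and activations is my own assumption, not a fixed rule.

```python
# Rough VRAM estimate for a quantized LLM: weight size plus a working margin.
# overhead=0.2 (KV cache, activations, runtime buffers) is an assumed ballpark.
def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 0.2) -> float:
    weight_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return weight_gb * (1 + overhead)

# A 13B model quantized to 4 bits per weight:
print(round(vram_gb(13, 4), 1))  # ~7.8 GB, comfortable on a >10 GB card
# The same model at FP16 would need ~31 GB and no longer fit:
print(round(vram_gb(13, 16), 1))
```

This is why a 24 GB card like the Tesla P40 opens up quantized 30B-class models, while 8 GB cards stay limited to the small models mentioned above.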
I have recently been thinking about picking up a compute card for LLM applications in my Homelab, but there are so many compute cards and GPUs on the market that it is hard to know which ones to even look at. So I compiled the table below from the TechPowerUp database for reference.
Model | Chip | Released | VRAM | Bandwidth | BF16 | FP16 | FP32 | FP64 | TDP (W) |
---|---|---|---|---|---|---|---|---|---|
Quadro M4000 (current card) | GM204 | Jun 29th, 2015 | 8 GB GDDR5 | 192.3 GB/s | N/A | N/A | 2.573 TFlops | 80.39 GFlops | 120 |
Tesla P4 | GP104 | Sep 13th, 2016 | 8 GB GDDR5 | 192.3 GB/s | N/A | 89.12 GFlops | 5.704 TFlops | 178.2 GFlops | 75 |
Tesla P40 | GP102 | Sep 13th, 2016 | 24 GB GDDR5 | 347.1 GB/s | N/A | 183.7 GFlops | 11.76 TFlops | 367.4 GFlops | 250 |
Tesla P100 PCIe | GP100 | Jun 20th, 2016 | 16 GB HBM2 | 732.2 GB/s | N/A | 19.05 TFlops | 9.526 TFlops | 4.763 TFlops | 250 |
Tesla P100 SXM2 | GP100 | Apr 5th, 2016 | 16 GB HBM2 | 732.2 GB/s | N/A | 21.22 TFlops | 10.61 TFlops | 5.304 TFlops | 300 |
GTX 1080 | GP104 | May 27th, 2016 | 8 GB GDDR5X | 320.3 GB/s | N/A | 138.6 GFlops | 8.873 TFlops | 277.3 GFlops | 180 |
RTX 2080 Ti | TU102 | Sep 20th, 2018 | 11 GB GDDR6 | 616.0 GB/s | N/A | 26.9 TFlops | 13.45 TFlops | 420.2 GFlops | 250 |
Tesla V100 PCIe | GV100 | Jun 21st, 2017 | 16 GB HBM2 | 897.0 GB/s | N/A | 28.26 TFlops | 14.13 TFlops | 7.066 TFlops | 300 |
Tesla V100 PCIe | GV100 | Mar 27th, 2018 | 32 GB HBM2 | 897.0 GB/s | N/A | 28.26 TFlops | 14.13 TFlops | 7.066 TFlops | 250 |
Tesla T4 | TU104 | Sep 13th, 2018 | 16 GB GDDR6 | 320.0 GB/s | N/A | 65.13 TFlops | 8.141 TFlops | 254.4 GFlops | 70 |
RTX 3060 | GA104 | Sep 1st, 2021 | 12 GB GDDR6 | 360.0 GB/s | Unknown | 12.74 TFlops | 12.74 TFlops | 199.0 GFlops | 170 |
RTX 3060 | GA106 | Jan 12th, 2021 | 12 GB GDDR6 | 360.0 GB/s | Unknown | 12.74 TFlops | 12.74 TFlops | 199.0 GFlops | 170 |
RTX 3060 Ti | GA104 | Dec 1st, 2020 | 8 GB GDDR6 | 448.0 GB/s | Unknown | 16.2 TFlops | 16.2 TFlops | 253.1 GFlops | 200 |
RTX 3080 Ti | GA102 | Jan 2022 | 20 GB GDDR6X | 760.3 GB/s | Unknown | 34.1 TFlops | 34.1 TFlops | 532.8 GFlops | 350 |
RTX 3090 | GA102 | Sep 1st, 2020 | 24 GB GDDR6X | 936.2 GB/s | Unknown | 35.58 TFlops | 35.58 TFlops | 556.0 GFlops | 350 |
RTX 3090 Ti | GA102 | Jan 27th, 2022 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 40 TFlops | 40 TFlops | 625.0 GFlops | 450 |
A100 PCIe | GA100 | Jun 22nd, 2020 | 40 GB HBM2e | 1.56 TB/s | 311.84 TFlops | 77.97 TFlops | 19.49 TFlops | 9.746 TFlops | 250 |
RTX 4060 | AD107 | May 18th, 2023 | 8 GB GDDR6 | 272.0 GB/s | Unknown | 15.11 TFlops | 15.11 TFlops | 236.2 GFlops | 115 |
RTX 4060 Ti | AD106 | May 18th, 2023 | 16 GB GDDR6 | 288.0 GB/s | Unknown | 22.06 TFlops | 22.06 TFlops | 344.8 GFlops | 165 |
RTX 4070 SUPER | AD104 | Jan 8th, 2024 | 12 GB GDDR6X | 504.2 GB/s | Unknown | 35.48 TFlops | 35.48 TFlops | 554.4 GFlops | 220 |
RTX 4070 Ti SUPER | AD103 | Jan 8th, 2024 | 16 GB GDDR6X | 672.3 GB/s | Unknown | 44.10 TFlops | 44.10 TFlops | 689.0 GFlops | 285 |
RTX 4080 | AD103 | Sep 20th, 2022 | 16 GB GDDR6X | 716.8 GB/s | Unknown | 48.74 TFlops | 48.74 TFlops | 761.5 GFlops | 320 |
RTX 4080 SUPER | AD103 | Jan 8th, 2024 | 16 GB GDDR6X | 736.3 GB/s | Unknown | 52.22 TFlops | 52.22 TFlops | 816.0 GFlops | 320 |
RTX 4090 | AD102 | Sep 20th, 2022 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 82.58 TFlops | 82.58 TFlops | 1,290 GFlops | 450 |
RTX 4090 D | AD102 | Dec 28th, 2023 | 24 GB GDDR6X | 1.01 TB/s | Unknown | 73.54 TFlops | 73.54 TFlops | 1,149 GFlops | 450 |
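The Bandwidth column deserves attention alongside the TFlops columns: single-stream LLM decoding is typically memory-bandwidth bound, since every generated token reads the full set of weights once. That gives a simple theoretical ceiling on token speed, sketched below; real throughput will be lower, and the example numbers (Tesla P40, an 8B model at 4-bit) are just illustrative values from the table above.

```python
# Bandwidth-bound ceiling on single-stream decode speed:
# tokens/s <= memory bandwidth / bytes of weights read per token.
# This is an upper bound; actual speed is reduced by KV-cache traffic,
# kernel overhead, and (on older cards) thermal throttling.
def max_tokens_per_s(bandwidth_gb_s: float, params_b: float, bits_per_weight: float) -> float:
    weight_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return bandwidth_gb_s / weight_gb

# Tesla P40 (347.1 GB/s) running an 8B model at 4-bit (4 GB of weights):
print(round(max_tokens_per_s(347.1, 8, 4)))  # ~87 tokens/s at best
```

This is also why the HBM2 cards (P100, V100) and GDDR6X cards punch above their age for inference: bandwidth, not raw FP32, sets the decode-speed ceiling.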