Configuration

Two bare-metal Ascend 910B4 servers (8 NPUs × 64 GB each), with no virtualization, already interconnected through a switch.

Model and image downloads

Model:

https://modelscope.cn/models/Eco-Tech/Qwen3.5-35B-A3B-w8a8-mtp
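If the servers have direct internet access, the weights can also be pulled on the server itself instead of downloading and uploading by hand; a minimal sketch, assuming the ModelScope CLI is available and using the weights path that appears in the docker run script later:

pip install modelscope
# Download the weights into the directory that will later be mounted as /model_weights
modelscope download --model Eco-Tech/Qwen3.5-35B-A3B-w8a8-mtp --local_dir /data/qwen35_397b_w8a8_mtp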

Image (if your host OS is not openEuler, download the variant without the openeuler suffix):

m.daocloud.io/quay.io/ascend/vllm-ascend:v0.18.0rc1-openeuler
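If the servers can reach the registry mirror directly, the image can be pulled ahead of time on each server; otherwise docker save / docker load it from a machine that can:

docker pull m.daocloud.io/quay.io/ascend/vllm-ascend:v0.18.0rc1-openeuler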

Driver:

https://www.hiascend.com/hardware/firmware-drivers/community?product=4&model=32&cann=8.5.1&driver=Ascend+HDK+25.5.2

The .run files are all you need.

Upload all of the above files to the servers' local disks, for example as sketched below.
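A minimal scp sketch; the server addresses and the /data target directory are placeholders, adjust to your environment:

# Copy the weights directory and the two .run packages to each server
scp -r ./Qwen3.5-35B-A3B-w8a8-mtp ./Ascend-hdk-910b-npu-*.run root@<server1-ip>:/data/
scp -r ./Qwen3.5-35B-A3B-w8a8-mtp ./Ascend-hdk-910b-npu-*.run root@<server2-ip>:/data/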

Server acceptance checks:

First check the NPU status on each of the two servers:

#!/bin/bash

# Check the remote switch ports
for i in {0..7}; do hccn_tool -i $i -lldp -g | grep Ifname; done
# Get the link status of the Ethernet ports (UP or DOWN)
for i in {0..7}; do hccn_tool -i $i -link -g; done
# Check the network health status
for i in {0..7}; do hccn_tool -i $i -net_health -g; done
# View the network detected IP configuration
for i in {0..7}; do hccn_tool -i $i -netdetect -g; done
# View gateway configuration
for i in {0..7}; do hccn_tool -i $i -gateway -g; done
# Show the device IP addresses
for i in {0..7}; do hccn_tool -i $i -ip -g | grep ipaddr; done
# View NPU network configuration
cat /etc/hccn.conf

Note: if any of these commands fails, deployment cannot proceed; have the hardware vendor resolve the problem first.

Verify communication between the two hosts:

From the output of for i in {0..7}; do hccn_tool -i $i -ip -g | grep ipaddr; done on each host, collect the NPU IP addresses, then verify connectivity with the following ping:

for i in {0..7}; do hccn_tool -i $i -ping -g address {IP of any NPU on the other host}; done

If any ping fails, there is a problem with the servers' PCIe links or the switch; it must be resolved before continuing.
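To sweep the whole mesh in one go, the check above can be wrapped in a nested loop; a minimal sketch, where PEER_IPS stands for the eight NPU IPs found on the other host:

#!/bin/bash
# Ping every NPU IP of the peer host from every local NPU device
PEER_IPS="<peer-npu-ip-1> <peer-npu-ip-2> ... <peer-npu-ip-8>"
for i in {0..7}; do
  for ip in $PEER_IPS; do
    echo "device $i -> $ip"
    hccn_tool -i $i -ping -g address $ip
  done
done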

Driver installation:

Because deployment uses a Docker image, installing CANN on the host is not required; only the driver needs to be installed.

Download the driver from the Huawei website; if you do not have commercial-license access, the community edition works just as well. vllm-ascend:v0.18.0rc1 uses CANN 8.5.1, which requires driver version 25.5.2. Download both:

Ascend-hdk-910b-npu-firmware_7.8.0.7.220.run
Ascend-hdk-910b-npu-driver_25.5.2_linux-aarch64.run

What to do next depends on the current state. First run npu-smi info to check the driver version: if npu-smi succeeds, a driver is already installed; if you get a "command not found" error, no driver is installed.

If no driver is installed yet, install:

chmod +x ./*.run
./Ascend-hdk-910b-npu-firmware_7.8.0.7.220.run --install
./Ascend-hdk-910b-npu-driver_25.5.2_linux-aarch64.run --install

If a driver is installed but the version is too old, upgrade:

chmod +x ./*.run
./Ascend-hdk-910b-npu-firmware_7.8.0.7.220.run --upgrade
./Ascend-hdk-910b-npu-driver_25.5.2_linux-aarch64.run --upgrade

Whether you installed or upgraded, reboot once afterwards; a plain reboot command is enough.
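The two cases can be folded into one idempotent script; a minimal sketch that installs when npu-smi is absent and upgrades otherwise, using the filenames listed above:

#!/bin/bash
chmod +x ./*.run
# If npu-smi exists, a driver is already installed, so upgrade; otherwise install
if command -v npu-smi >/dev/null 2>&1; then
  ACTION="--upgrade"
else
  ACTION="--install"
fi
./Ascend-hdk-910b-npu-firmware_7.8.0.7.220.run $ACTION
./Ascend-hdk-910b-npu-driver_25.5.2_linux-aarch64.run $ACTION
echo "Done. Remember to reboot."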

Pre-deployment preparation

Run the script below on both servers to bring them into a consistent state; if a server reboots at any point, re-run it.

Note: skipping this step can lead to rank errors or hangs during multi-node deployment; if you see such symptoms, try re-running this state-alignment step.

for i in {0..7}; do hccn_tool -i $i -tls -s enable 0 ; done
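Assuming hccn_tool also supports querying the TLS state with -tls -g, the result can be spot-checked on both hosts to confirm the settings really match:

for i in {0..7}; do hccn_tool -i $i -tls -g; done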

Start the vllm-ascend Docker container

Use the following script to start the container on each of the two servers:

Note: replace /data/qwen35_397b_w8a8_mtp with the actual path where your model weights are stored. Also, since the container uses --net=host, the -p port mapping below is effectively ignored; the service is reached directly on the host's port.

#!/bin/sh
NAME=model-vllm
PORT=10020
DEVICES="0,1,2,3,4,5,6,7"
IMAGE="m.daocloud.io/quay.io/ascend/vllm-ascend:v0.18.0rc1-openeuler"    # image to load
docker run -itd -u 0  --ipc=host  --privileged \
 -e VLLM_USE_MODELSCOPE=True -e PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256 \
 -e  ASCEND_RT_VISIBLE_DEVICES=$DEVICES \
 --name $NAME \
 --net=host \
 --shm-size=100g \
 --device /dev/davinci_manager \
 --device /dev/devmm_svm \
 --device /dev/hisi_hdc \
 -v /usr/local/dcmi:/usr/local/dcmi \
 -v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
 -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
 -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
 -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
 -v /etc/ascend_install.info:/etc/ascend_install.info \
 -v /home/:/home/ \
 -v /opt/data/:/opt/data/ \
 -p $PORT:11025 \
 -v /data/.cache:/root/.cache \
 -v /data/qwen35_397b_w8a8_mtp:/model_weights \
 $IMAGE bash
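After the script runs, it is worth confirming the container is up and can see the NPUs (npu-smi is mounted into the container by the script above):

docker ps --filter name=model-vllm
docker exec model-vllm npu-smi info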

启动模型

Enter the container on each server; it may hang for a few seconds after entering, just wait.

docker exec -it model-vllm /bin/bash

Start the master

Inside the master's container, run the master start script start-master-service.sh:

#!/bin/bash
export HCCL_IF_IP=<this host's IP>
# The NIC name below is whichever NIC carries this host's IP
export GLOO_SOCKET_IFNAME="<NIC name>"
export TP_SOCKET_IFNAME="<NIC name>"
export HCCL_SOCKET_IFNAME="<NIC name>"
export HCCL_BUFFSIZE=1024
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=0

export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

export TASK_QUEUE_ENABLE=1
export VLLM_API_KEY=SK1234567890987654321

vllm serve /model_weights \
 --served-model-name "qwen35" \
 --host 0.0.0.0 \
 --port 11025 \
 --tensor-parallel-size 8 \
 --data-parallel-size 2 \
 --data-parallel-size-local 1 \
 --data-parallel-start-rank 0 \
 --data-parallel-address <this host's IP> \
 --data-parallel-rpc-port 13071 \
 --max-num-seqs 64 \
 --max-model-len 262144 \
 --max-num-batched-tokens 16384 \
 --gpu-memory-utilization 0.92 \
 --enable-chunked-prefill \
 --async-scheduling \
 --api-key $VLLM_API_KEY \
 --enable-expert-parallel \
 --trust-remote-code \
 --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY","cudagraph_capture_sizes":[1,2,4,8,16,32,64,80,96,128]}' \
 --mm_processor_cache_type="shm" \
 --quantization ascend \
 --allowed-local-media-path / \
 --no-enable-prefix-caching \
 --speculative_config '{"method": "qwen3_5_mtp", "num_speculative_tokens": 3, "enforce_eager": true}' \
 --additional-config '{"enable_cpu_binding":true,"multistream_overlap_shared_expert": true}' \
 --default-chat-template-kwargs '{"enable_thinking": false}' \
 --enable-auto-tool-choice \
 --tool-call-parser qwen3_coder
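If you are unsure which NIC name to put into the *_SOCKET_IFNAME variables, it can be looked up from the host IP; a minimal sketch, where HOST_IP is a placeholder:

HOST_IP=<this host's IP>
# Print the interface that carries HOST_IP
ip -o -4 addr show | awk -v ip="$HOST_IP" '$4 ~ ip {print $2}'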

Then wrap it in a daemon script:

#!/bin/bash

nohup ./start-master-service.sh > ./service.log 2>&1 &
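Make both scripts executable, start the daemon, and watch the log; the daemon script name run-master.sh below is my own choice, adjust to whatever you saved it as:

chmod +x start-master-service.sh run-master.sh
./run-master.sh
tail -f service.log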

Start the worker

Then, inside the worker's container, run the worker start script start-worker-service.sh:

#!/bin/bash
export HCCL_IF_IP=<this host's IP>
# The NIC name below is whichever NIC carries this host's IP
export GLOO_SOCKET_IFNAME="<NIC name>"
export TP_SOCKET_IFNAME="<NIC name>"
export HCCL_SOCKET_IFNAME="<NIC name>"
export HCCL_BUFFSIZE=1024
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True

export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=0

export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

export TASK_QUEUE_ENABLE=1
export VLLM_API_KEY=SK1234567890987654321

vllm serve /model_weights \
 --served-model-name "qwen35" \
 --host 0.0.0.0 \
 --port 11025 \
 --headless \
 --tensor-parallel-size 8 \
 --data-parallel-size 2 \
 --data-parallel-size-local 1 \
 --data-parallel-start-rank 1 \
 --data-parallel-address <master host's IP> \
 --data-parallel-rpc-port 13071 \
 --max-num-seqs 64 \
 --max-model-len 262144 \
 --max-num-batched-tokens 16384 \
 --gpu-memory-utilization 0.92 \
 --enable-chunked-prefill \
 --async-scheduling \
 --api-key $VLLM_API_KEY \
 --enable-expert-parallel \
 --trust-remote-code \
 --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY","cudagraph_capture_sizes":[1,2,4,8,16,32,64,80,96,128]}' \
 --mm_processor_cache_type="shm" \
 --quantization ascend \
 --allowed-local-media-path / \
 --no-enable-prefix-caching \
 --speculative_config '{"method": "qwen3_5_mtp", "num_speculative_tokens": 3, "enforce_eager": true}' \
 --additional-config '{"enable_cpu_binding":true,"multistream_overlap_shared_expert": true}' \
 --default-chat-template-kwargs '{"enable_thinking": false}' \
 --enable-auto-tool-choice \
 --tool-call-parser qwen3_coder

Then create the worker's daemon script:

#!/bin/bash

nohup ./start-worker-service.sh > ./service.log 2>&1 &

Start the master script first, then the worker. On success, the master side will log:

(APIServer pid=373) INFO:     Started server process [373]
(APIServer pid=373) INFO:     Waiting for application startup.
(APIServer pid=373) INFO:     Application startup complete.
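Before the full chat test below, a quick way to confirm the API server is answering is the OpenAI-compatible model list endpoint, using the API key from the start script:

curl http://{master node IP}:11025/v1/models \
  -H "Authorization: Bearer SK1234567890987654321"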

Verification

#!/bin/bash
curl -N -XPOST http://{master node IP}:11025/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer SK1234567890987654321" \
  -d '{
    "model": "qwen35",
    "messages": [{"role": "user", "content": "Please introduce the qwen3.5?"}],
    "stream": true,
    "temperature": 0.7,
    "top_p": 0.8,
    "max_tokens": 1500
  }'

The first request will be noticeably slow.
