openEuler × Qwen: Fast & Easy Qwen3/Qwen3-MoE Model Deployment on openEuler

📢 Breaking news [April 29]: Alibaba has launched the next-gen Qwen3 and Qwen3-MoE models, bringing major upgrades in both scale and performance.


The OpenAtom openEuler community, in collaboration with the vLLM community, has already validated Qwen3/Qwen3-MoE, so developers can run inference with openEuler and vLLM today. Support landed shortly after the vLLM v0.8.4 release, with openEuler becoming a default OS and a streamlined openEuler-based container image published for easy deployment. 👏

Experience this powerful new model on openEuler (download🔗) today and see its advanced capabilities in action! 🙌

 

Now, let's dive into the v0.8.4rc2 📰 release notes, which make this possible... 👀

🆕 Key Feature Updates

☑️ Powerful Qwen3 & Qwen3-MoE supported 📰

  • Qwen3: Qwen3-0.6B, Qwen3-1.7B, Qwen3-4B, and Qwen3-8B
  • Qwen3-MoE: Qwen3-MoE-15B-A2B, Qwen3-30B-A3B, and Qwen3-235B-A22B


☑️ W8A8 quantization 📰

☑️ PyTorch 2.5.1 integrated, no need to manually install torch-npu

☑️ torch.compile graph supported

☑️ openEuler container image supported

☑️ LoRA support (see the sketch below)
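To illustrate the LoRA feature, here is a minimal sketch of how an adapter is typically attached through vLLM's offline Python API; the adapter name and path are hypothetical placeholders for your own fine-tuned adapter.

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Enable LoRA support when constructing the engine.
llm = LLM(model="Qwen/Qwen3-8B", enable_lora=True)

sampling_params = SamplingParams(temperature=0.7, max_tokens=64)

# "my-qwen3-adapter" and its path are hypothetical placeholders.
lora_request = LoRARequest("my-qwen3-adapter", 1, "/path/to/my-qwen3-adapter")

outputs = llm.generate(
    ["Write a short greeting."],
    sampling_params,
    lora_request=lora_request,
)
print(outputs[0].outputs[0].text)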

 

With all the exciting new features in v0.8.4rc2, it's now easier than ever to experience Qwen3 on openEuler; you're just a few steps away from running it on your local setup. Let's get started with the hands-on deployment guide! 🙌

Before Getting Started

Make sure your firmware and drivers are correctly installed. You can confirm with the following command:

npu-smi info        

Once everything is set, you can use the following command to quickly pull up the vLLM-Ascend container image based on openEuler:

# Update DEVICE according to your device (/dev/davinci[0-7]).
export DEVICE=/dev/davinci0

# Update the openeuler-vllm-ascend image.
export IMAGE=quay.io/ascend/vllm-ascend:v0.8.4rc2-openeuler

docker run --rm \
  --name openeuler-vllm-ascend \
  --device $DEVICE \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -v /root/.cache:/root/.cache \
  -p 8000:8000 \
  -it $IMAGE bash

After entering the container environment, use the ModelScope platform 🔗 to accelerate the download:

export VLLM_USE_MODELSCOPE=true        

Online Inference

You can easily deploy an online inference service with vLLM using a simple command:

vllm serve Qwen/Qwen3-8B        

Once the service is up and running, use a curl request to generate content:

curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3-8B", "prompt": "The future of AI is", "max_tokens": 5, "temperature": 0}' | python3 -m json.tool

Offline Inference

For offline inference, call the vLLM Python API directly. Here's an example script (example.py):

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The future of AI is",
]

# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="Qwen/Qwen3-8B")

# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Run the script to start inference:

# export VLLM_USE_MODELSCOPE=true to speed up download if Hugging Face is not reachable.

python example.py        

Each prompt and its generated continuation will be printed to the console.
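Qwen3 checkpoints also ship with a chat template, so you can run chat-style offline inference with vLLM's chat helper. The snippet below is a minimal sketch under the same container setup; for the larger Qwen3-MoE variants (e.g. Qwen3-235B-A22B) you would typically also pass tensor_parallel_size to shard the model across multiple NPUs.

from vllm import LLM, SamplingParams

# For large MoE variants, add e.g. tensor_parallel_size=8 to shard across NPUs.
llm = LLM(model="Qwen/Qwen3-8B")

sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)

# llm.chat() applies the model's chat template to the messages before generation.
messages = [
    {"role": "user", "content": "Give me a one-sentence introduction to openEuler."},
]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)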

 

More Questions or Issues?

If you encounter any issues while deploying or running Qwen3 on openEuler, feel free to report them on the official openEuler forum 💬 under the dedicated thread for Qwen3 on openEuler 👉 Qwen3 on openEuler - Discussion & Feedback 🔗, or simply drop a comment below.

 

More details:
