This repository serves as the Docker image pack center for GPUStack Runner. It provides a collection of Dockerfiles to build images for various inference services across different accelerated backends.
- Onboard Services
- Directory Structure
- Dockerfile Convention
- Docker Image Naming Convention
- Integration Process
Tip
- The list below shows the accelerated backends and inference services available in the latest release. For support of backends or services not shown here, please refer to previous release tags.
- Deprecated inference service versions in the latest release are marked with ~~strikethrough~~ formatting. They may still be available in previous releases, but they are not recommended for new deployments.
- Polished inference service versions in the latest release are marked with **bold** formatting. If they are used in your deployment, it is recommended to pull the latest images and upgrade.
The following tables list the supported accelerated backends and their corresponding inference services with versions.
Important
- Applied ATB model patch to MindIE 2.2.rc1/2.1.rc2.
- Applied av package to MindIE 2.2.rc1/2.1.rc2.
- Updated vLLM 0.11.0 with the stable vLLM Ascend plugin.
| CANN Version (Variant) | MindIE | vLLM | SGLang |
|---|---|---|---|
| 8.3 (A3/910C) | 2.2.rc1 | 0.12.0, 0.11.0 | 0.5.6.post2 |
| 8.3 (910B) | 2.2.rc1 | 0.12.0, 0.11.0 | 0.5.6.post2 |
| 8.3 (310P) | 2.2.rc1 | | |
| 8.2 (A3/910C) | 2.1.rc2 | **0.11.0**, ~~0.10.2~~, 0.10.1.1 | 0.5.2, 0.5.1.post3 |
| 8.2 (910B) | 2.1.rc2, 2.1.rc1 | **0.11.0**, ~~0.10.2~~, 0.10.1.1, 0.10.0, 0.9.2, 0.9.1 | 0.5.2, 0.5.1.post3 |
| 8.2 (310P) | 2.1.rc2, 2.1.rc1 | 0.10.0, 0.9.2 | |
| CoreX Version (Variant) | vLLM |
|---|---|
| 4.2 | 0.8.3 |
Note
- CUDA 12.9 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0 10.0 10.3 12.0 12.1+PTX`.
- CUDA 12.8 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX`.
- CUDA 12.6/12.4 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0+PTX`.
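These capability lists are the values a `{BACKEND}_ARCHS`-style build argument (see the Dockerfile Convention below) would carry. A minimal sketch, assuming PyTorch's standard `TORCH_CUDA_ARCH_LIST` variable is the eventual consumer (the variable is PyTorch's, not this repository's):

```bash
# Sketch: convert the CUDA 12.8 capability list from the note above into
# PyTorch's TORCH_CUDA_ARCH_LIST format (semicolon-separated, +PTX kept).
CUDA_ARCHS="7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX"
export TORCH_CUDA_ARCH_LIST="${CUDA_ARCHS// /;}"
echo "$TORCH_CUDA_ARCH_LIST"   # 7.5;8.0+PTX;8.9;9.0;10.0+PTX;12.0+PTX
```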
Important
- Applied Qwen2.5 VL patch to vLLM 0.11.2.
- Applied vLLM[audio] packages to vLLM 0.11.2.
| CUDA Version (Variant) | vLLM | SGLang | VoxBox |
|---|---|---|---|
| 12.9 | 0.12.0, 0.11.2 | 0.5.6.post2 | |
| 12.8 | 0.12.0, 0.11.2, 0.11.0, 0.10.2, 0.10.1.1, 0.10.0 | 0.5.6.post2, 0.5.5.post3, 0.5.5, 0.5.4.post3 | 0.0.20 |
| 12.6 | 0.12.0, 0.11.2, 0.11.0, 0.10.2, 0.10.1.1, 0.10.0 | 0.5.6.post2 | 0.0.20 |
| 12.4 | 0.11.0, 0.10.2, 0.10.1.1, 0.10.0 | | 0.0.20 |
| DTK Version (Variant) | vLLM |
|---|---|
| 25.04 | 0.9.2, 0.8.5 |
| MACA Version (Variant) | vLLM |
|---|---|
| 3.2 | 0.10.2 |
| 3.0 | 0.9.1 |
Note
- ROCm 7.0 supports LLVM targets: `gfx908 gfx90a gfx942 gfx950 gfx1030 gfx1100 gfx1101 gfx1200 gfx1201 gfx1150 gfx1151`.
- ROCm 6.4/6.3 supports LLVM targets: `gfx908 gfx90a gfx942 gfx1030 gfx1100`.
Warning
- ROCm 7.0 vLLM `0.11.2`/`0.11.0` reuse the official ROCm 6.4 PyTorch 2.9 wheel package rather than a ROCm 7.0 specific PyTorch build. Although vLLM `0.11.2`/`0.11.0` support ROCm 7.0, `gfx1150`/`gfx1151` are not supported yet.
- SGLang supports `gfx942` only.
Important
- Applied vLLM[audio] packages to vLLM 0.11.2.
- Applied petit-kernel package to vLLM 0.11.2 and SGLang 0.5.5.post3.
| ROCm Version (Variant) | vLLM | SGLang |
|---|---|---|
| 7.0 | 0.12.0, 0.11.2, 0.11.0 | 0.5.6.post2 |
| 6.4 | 0.12.0, 0.11.2, 0.10.2 | 0.5.6.post2, 0.5.5.post3 |
| 6.3 | 0.10.1.1, 0.10.0 | |
The pack skeleton is organized by backend:
```
pack
├── {BACKEND 1}
│   └── Dockerfile
├── {BACKEND 2}
│   └── Dockerfile
├── {BACKEND 3}
│   └── Dockerfile
├── ...
│   └── Dockerfile
└── {BACKEND N}
    └── Dockerfile
```
Each Dockerfile follows these conventions:
- Begin with comments describing the package logic in steps and the usage of build arguments (`ARG`s).
- Use `ARG` for all required and optional build arguments. If a required argument is unused, mark it as `(PLACEHOLDER)`.
- Use heredoc syntax for `RUN` commands to improve readability.
```dockerfile
# Describe package logic and ARG usage.
#
ARG PYTHON_VERSION=...                   # REQUIRED
ARG CMAKE_MAX_JOBS=...                   # REQUIRED
ARG {OTHERS}                             # OPTIONAL

ARG {BACKEND}_VERSION=...                # REQUIRED
ARG {BACKEND}_ARCHS=...                  # REQUIRED
ARG {BACKEND}_{OTHERS}=...               # OPTIONAL

ARG {SERVICE}_BASE_IMAGE=...             # REQUIRED
ARG {SERVICE}_VERSION=...                # REQUIRED
ARG {SERVICE}_{OTHERS}=...               # OPTIONAL
ARG {SERVICE}_{FRAMEWORK}_VERSION=...    # REQUIRED
ARG {SERVICE}_{FRAMEWORK}_{OTHERS}=...   # OPTIONAL

# Stage Bake Runtime
FROM {BACKEND DEVEL IMAGE} AS runtime
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]

ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG ...

RUN <<EOF
# TODO: install runtime dependencies
EOF

# Stage Install Service
FROM {SERVICE}_BASE_IMAGE AS {service}
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]

ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG ...

RUN <<EOF
# TODO: install service and dependencies
EOF

WORKDIR /

ENTRYPOINT [ "tini", "--" ]
```
The Docker image naming convention is as follows:
- Multi-architecture image names: `{NAMESPACE}/{REPOSITORY}:{TAG}`.
- Single-architecture image tags: `{BACKEND}{BACKEND_VERSION%.*}[-{BACKEND_VARIANT}]-{SERVICE}{SERVICE_VERSION}-{OS}-{ARCH}`.
- Multi-architecture image tags: `{BACKEND}{BACKEND_VERSION%.*}[-{BACKEND_VARIANT}]-{SERVICE}{SERVICE_VERSION}[-dev]`.
- All names and tags must be lowercase.
- NAMESPACE: `gpustack`
- REPOSITORY: `runner`
| Accelerated Backend | OS/ARCH | Inference Service | Single-Arch Image Name | Multi-Arch Image Name |
|---|---|---|---|---|
| Ascend CANN 910b | linux/amd64 | vLLM | gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64 | gpustack/runner:cann8.1-910b-vllm0.9.2 |
| Ascend CANN 910b | linux/arm64 | vLLM | gpustack/runner:cann8.1-910b-vllm0.9.2-linux-arm64 | gpustack/runner:cann8.1-910b-vllm0.9.2 |
| NVIDIA CUDA 12.8 | linux/amd64 | vLLM | gpustack/runner:cuda12.8-vllm0.9.2-linux-amd64 | gpustack/runner:cuda12.8-vllm0.9.2 |
| NVIDIA CUDA 12.8 | linux/arm64 | vLLM | gpustack/runner:cuda12.8-vllm0.9.2-linux-arm64 | gpustack/runner:cuda12.8-vllm0.9.2 |
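The `%.*` in the tag templates is shell-style suffix trimming (drop everything from the last dot). A minimal bash sketch of assembling a single-arch tag, with illustrative values:

```bash
BACKEND=cann BACKEND_VERSION=8.1.rc1 BACKEND_VARIANT=910b   # illustrative values
SERVICE=vllm SERVICE_VERSION=0.9.2 OS=linux ARCH=amd64

# ${BACKEND_VERSION%.*} drops the last version component: 8.1.rc1 -> 8.1
TAG="${BACKEND}${BACKEND_VERSION%.*}${BACKEND_VARIANT:+-${BACKEND_VARIANT}}-${SERVICE}${SERVICE_VERSION}-${OS}-${ARCH}"
echo "gpustack/runner:${TAG,,}"   # ${TAG,,} lowercases (bash 4+), per the convention
# -> gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64
```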
- Build single-architecture images for each OS/ARCH, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64`.
- Combine the single-architecture images into a multi-architecture image, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2-dev`.
- After testing, rename the multi-architecture image to the final tag, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2`.
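A hedged sketch of these three steps with Docker Buildx (image names follow the examples above; the flags are illustrative, not this project's actual CI configuration):

```bash
# 1. Build and push one single-architecture image per OS/ARCH.
docker buildx build --platform linux/amd64 --push \
  --tag gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64 pack/cann
docker buildx build --platform linux/arm64 --push \
  --tag gpustack/runner:cann8.1-910b-vllm0.9.2-linux-arm64 pack/cann

# 2. Combine the single-architecture images under the -dev tag.
docker buildx imagetools create \
  --tag gpustack/runner:cann8.1-910b-vllm0.9.2-dev \
  gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64 \
  gpustack/runner:cann8.1-910b-vllm0.9.2-linux-arm64

# 3. After testing, promote the multi-architecture image to the final tag.
docker buildx imagetools create \
  --tag gpustack/runner:cann8.1-910b-vllm0.9.2 \
  gpustack/runner:cann8.1-910b-vllm0.9.2-dev
```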
To add support for a new accelerated backend:
- Create a new directory under `pack/` named after the new backend.
- Add a `Dockerfile` in the new directory following the Dockerfile Convention.
- Update pack.yml to include the new backend in the build matrix.
- Update matrix.yml to include the new backend and its variants.
- Update `_RE_DOCKER_IMAGE` in runner.py to recognize the new backend.
- [Optional] Update tests if necessary.
To add support for a new inference service:
- Modify the relevant backend's `Dockerfile` (`pack/{BACKEND}/Dockerfile`) to include the new service.
- Update pack.yml to include the new service in the build matrix.
- Update matrix.yml to include the new service.
- Update `_RE_DOCKER_IMAGE` in runner.py to recognize the new service.
- [Optional] Update tests if necessary.
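For orientation, the kind of pattern `_RE_DOCKER_IMAGE` needs to match can be sketched as a bash regex check. The pattern below is hypothetical and only illustrates the tag grammar; the real expression lives in runner.py:

```bash
# Hypothetical approximation of the tag grammar; not the actual pattern.
RE='^(cann|corex|cuda|dtk|maca|rocm)[0-9.]+(-[a-z0-9]+)?-(vllm|sglang|mindie|voxbox)[0-9a-z.]+(-linux-(amd64|arm64))?(-dev)?$'
TAG="cann8.1-910b-vllm0.9.2-linux-amd64"
[[ "$TAG" =~ $RE ]] && echo "recognized: $TAG"
```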
Copyright (c) 2025 The GPUStack authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.