Popular repositories
- triton-inference-server (forked from triton-inference-server/server; C++): The Triton Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs.
- sagemaker-pytorch-inference-toolkit (forked from aws/sagemaker-pytorch-inference-toolkit; Python): Toolkit for allowing inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch containers are at https://github.com/aws/deep-learning-containers.
- sagemaker-inference-toolkit (forked from aws/sagemaker-inference-toolkit; Python): Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
- deep-learning-containers (forked from aws/deep-learning-containers; Python): AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet.
- multi-model-server (forked from awslabs/multi-model-server; Java): Multi Model Server is a tool for serving neural net models for inference.