Leveling up AI collaboration and compute.
Users and organizations already use the Hub as a collaboration platform,
and we’re making it easy to seamlessly and scalably launch ML compute directly from the Hub.
HF Hub
Collaborate on Machine Learning
-
Host unlimited models, datasets, and Spaces
-
Create unlimited orgs and private repos
-
Access the latest ML tools and open source
-
Community support
Forever Free
Pro Account
Unlock advanced HF features
-
ZeroGPU and Dev Mode for Spaces
-
Higher rate limits for serverless inference
-
Get early access to upcoming features
-
Show your support with a Pro badge
Subscribe for $9/month
Enterprise Hub
Accelerate your AI roadmap
-
SSO and SAML support
-
Select data location with Storage Regions
-
Precise action reviews with Audit logs
-
Granular access control with Resource groups
-
Centralized token control and approval
-
Dataset Viewer for private datasets
-
Advanced compute options for Spaces
-
Deploy Inference on your own Infra
-
Managed billing with yearly commits
-
Priority support
Starting at $20 per user per month
Spaces Hardware
Upgrade your Space compute
-
Free CPUs
-
Build more advanced Spaces
-
7 optimized hardware options available
-
From CPU to GPU to Accelerators
Starting at $0/hour
Inference Endpoints
Deploy models on fully managed infrastructure
-
Deploy dedicated Endpoints in seconds
-
Keep your costs low
-
Fully-managed autoscaling
-
Enterprise security
Starting at $0.032/hour
Need support to accelerate AI in your organization? View our Expert Support.
Hugging Face Hub
Free
The HF Hub is the central place to explore, experiment, collaborate and build technology with Machine Learning.
Join the open source Machine Learning movement!
Create with ML
Packed with ML features, like model evaluation, the Dataset Viewer, and much more.
Collaborate
Git based and designed for collaboration at its core.
Play and learn
Learn by experimenting and sharing with our awesome community.
Build your ML portfolio
Share your work with the world and build your own ML profile.
Spaces Hardware
Starting at $0
Spaces are one of the most popular ways to share ML applications and demos with the world.
Upgrade your Spaces with our selection of custom on-demand hardware:
Name | CPU | Memory | Accelerator | VRAM | Hourly price |
---|---|---|---|---|---|
CPU Basic | 2 vCPU | 16 GB | - | - | FREE |
CPU Upgrade | 8 vCPU | 32 GB | - | - | $0.03 |
Nvidia T4 - small | 4 vCPU | 15 GB | Nvidia T4 | 16 GB | $0.40 |
Nvidia T4 - medium | 8 vCPU | 30 GB | Nvidia T4 | 16 GB | $0.60 |
1x Nvidia L4 | 8 vCPU | 30 GB | Nvidia L4 | 24 GB | $0.80 |
4x Nvidia L4 | 48 vCPU | 186 GB | Nvidia L4 | 96 GB | $3.80 |
1x Nvidia L40S | 8 vCPU | 62 GB | Nvidia L40S | 48 GB | $1.80 |
4x Nvidia L40S | 48 vCPU | 382 GB | Nvidia L40S | 192 GB | $8.30 |
8x Nvidia L40S | 192 vCPU | 1534 GB | Nvidia L40S | 384 GB | $23.50 |
Nvidia A10G - small | 4 vCPU | 15 GB | Nvidia A10G | 24 GB | $1.00 |
Nvidia A10G - large | 12 vCPU | 46 GB | Nvidia A10G | 24 GB | $1.50 |
2x Nvidia A10G - large | 24 vCPU | 92 GB | Nvidia A10G | 48 GB | $3.00 |
4x Nvidia A10G - large | 48 vCPU | 184 GB | Nvidia A10G | 96 GB | $5.00 |
Nvidia A100 - large | 12 vCPU | 142 GB | Nvidia A100 | 40 GB | $4.00 |
TPU v5e 1x1 | 22 vCPU | 44 GB | Google TPU v5e | 16 GB | $1.38 |
TPU v5e 2x2 | 110 vCPU | 186 GB | Google TPU v5e | 64 GB | $5.50 |
TPU v5e 2x4 | 220 vCPU | 380 GB | Google TPU v5e | 128 GB | $11.00 |
Custom | on demand | on demand | on demand | on demand | on demand |
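As a rough sanity check on the table above, an hourly rate converts to a monthly bill by multiplying by the hours the Space stays up. A minimal sketch in Python, using rates copied from the table (the dictionary keys are illustrative labels, not official hardware identifiers):

```python
# Hourly rates for a few Space hardware tiers, taken from the pricing table above.
# Keys are illustrative labels for this sketch, not official API identifiers.
SPACE_HOURLY_RATES = {
    "CPU Upgrade": 0.03,
    "Nvidia T4 - small": 0.40,
    "Nvidia A10G - small": 1.00,
    "Nvidia A100 - large": 4.00,
}

def estimated_monthly_cost(hardware: str, hours_per_day: float = 24, days: int = 30) -> float:
    """Estimate the monthly bill for a Space kept running hours_per_day, every day."""
    return round(SPACE_HOURLY_RATES[hardware] * hours_per_day * days, 2)

# A T4-small Space running around the clock: 0.40 * 24 * 30 = 288.00 per month
print(estimated_monthly_cost("Nvidia T4 - small"))
```

Spaces only bill while running, so pausing a Space or letting it sleep reduces the effective `hours_per_day` well below 24.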
Spaces Persistent Storage
All Spaces get ephemeral storage for free, but you can upgrade and add persistent storage at any time.
Name | Storage | Monthly price |
---|---|---|
Small | 20 GB | $5 |
Medium | 150 GB | $25 |
Large | 1 TB | $100 |
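Because the tiers are fixed sizes, picking one is just a matter of finding the smallest tier that fits your data. A small sketch, assuming 1 TB is treated as 1000 GB (tier names and sizes are taken from the table above):

```python
# Persistent-storage tiers from the table above: (name, size in GB, USD per month).
# Assumes 1 TB = 1000 GB for this sketch.
STORAGE_TIERS = [("Small", 20, 5), ("Medium", 150, 25), ("Large", 1000, 100)]

def cheapest_tier(needed_gb: float):
    """Return (name, monthly price) of the smallest listed tier that fits needed_gb."""
    for name, size_gb, price in STORAGE_TIERS:
        if needed_gb <= size_gb:
            return name, price
    return None  # more than 1 TB: no listed tier fits

print(cheapest_tier(50))  # a 50 GB dataset needs the Medium tier
```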
Building something cool as a side project? We also offer community GPU grants.
Inference Endpoints
Starting at $0.033/hour
Inference Endpoints (dedicated) offers a secure production solution to easily deploy any ML model on dedicated and autoscaling infrastructure, right from the HF Hub.
→ Learn more
CPU instances
Provider | Architecture | vCPUs | Memory | Hourly rate |
---|---|---|---|---|
aws | Intel Sapphire Rapids | 1 | 2GB | $0.03 |
aws | Intel Sapphire Rapids | 2 | 4GB | $0.07 |
aws | Intel Sapphire Rapids | 4 | 8GB | $0.13 |
aws | Intel Sapphire Rapids | 8 | 16GB | $0.27 |
azure | Intel Xeon | 1 | 2GB | $0.06 |
azure | Intel Xeon | 2 | 4GB | $0.12 |
azure | Intel Xeon | 4 | 8GB | $0.24 |
azure | Intel Xeon | 8 | 16GB | $0.48 |
gcp | Intel Sapphire Rapids | 1 | 2GB | $0.07 |
gcp | Intel Sapphire Rapids | 2 | 4GB | $0.14 |
gcp | Intel Sapphire Rapids | 4 | 8GB | $0.28 |
gcp | Intel Sapphire Rapids | 8 | 16GB | $0.56 |
Accelerator instances
Provider | Architecture | Topology | Accelerator Memory | Hourly rate |
---|---|---|---|---|
aws | Inf2 Neuron | x1 | 14.5GB | $0.75 |
aws | Inf2 Neuron | x12 | 760GB | $12.00 |
gcp | TPU v5e | 1x1 | 16GB | $1.38 |
gcp | TPU v5e | 2x2 | 64GB | $5.50 |
gcp | TPU v5e | 2x4 | 128GB | $11.00 |
GPU instances
Provider | Architecture | GPUs | GPU Memory | Hourly rate |
---|---|---|---|---|
aws | NVIDIA T4 | 1 | 14GB | $0.50 |
aws | NVIDIA T4 | 4 | 56GB | $3.00 |
aws | NVIDIA L4 | 1 | 24GB | $0.80 |
aws | NVIDIA L4 | 4 | 96GB | $3.80 |
aws | NVIDIA L40S | 1 | 48GB | $1.80 |
aws | NVIDIA L40S | 4 | 192GB | $8.30 |
aws | NVIDIA L40S | 8 | 384GB | $23.50 |
aws | NVIDIA A10G | 1 | 24GB | $1.00 |
aws | NVIDIA A10G | 4 | 96GB | $5.00 |
aws | NVIDIA A100 | 1 | 80GB | $4.00 |
aws | NVIDIA A100 | 2 | 160GB | $8.00 |
aws | NVIDIA A100 | 4 | 320GB | $16.00 |
aws | NVIDIA A100 | 8 | 640GB | $32.00 |
gcp | NVIDIA T4 | 1 | 16GB | $0.50 |
gcp | NVIDIA L4 | 1 | 24GB | $1.00 |
gcp | NVIDIA L4 | 4 | 96GB | $5.00 |
gcp | NVIDIA A100 | 1 | 80GB | $6.00 |
gcp | NVIDIA A100 | 2 | 160GB | $12.00 |
gcp | NVIDIA A100 | 4 | 320GB | $24.00 |
gcp | NVIDIA A100 | 8 | 640GB | $48.00 |
gcp | NVIDIA H100 | 1 | 80GB | $12.50 |
gcp | NVIDIA H100 | 2 | 160GB | $25.00 |
gcp | NVIDIA H100 | 4 | 320GB | $50.00 |
gcp | NVIDIA H100 | 8 | 640GB | $100.00 |
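With the same GPU available from several providers at different rates, a common exercise is finding the cheapest instance that meets a memory requirement. A minimal sketch over a subset of the single-GPU rows from the table above (the tuple layout is an assumption of this sketch, not an API):

```python
# A few single-GPU options from the table above:
# (provider, gpu, gpu_count, total GPU memory in GB, hourly rate in USD).
GPU_INSTANCES = [
    ("aws", "NVIDIA T4", 1, 14, 0.50),
    ("aws", "NVIDIA A10G", 1, 24, 1.00),
    ("aws", "NVIDIA A100", 1, 80, 4.00),
    ("gcp", "NVIDIA L4", 1, 24, 1.00),
    ("gcp", "NVIDIA A100", 1, 80, 6.00),
    ("gcp", "NVIDIA H100", 1, 80, 12.50),
]

def cheapest_with_vram(min_vram_gb: float):
    """Return the lowest-hourly-rate instance with at least min_vram_gb of GPU memory."""
    candidates = [row for row in GPU_INSTANCES if row[3] >= min_vram_gb]
    return min(candidates, key=lambda row: row[4]) if candidates else None

print(cheapest_with_vram(24))  # ('aws', 'NVIDIA A10G', 1, 24, 1.0)
```

Note this ranks purely by hourly rate; in practice throughput per dollar also depends on the GPU generation, so an H100 can be cheaper per token served despite the higher rate.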
Pro Account
PRO
A monthly subscription to access powerful features.
→ Get Pro ($9/month)
-
ZeroGPU: Get 5x usage quota and highest GPU queue priority.
-
Spaces Hosting: Create ZeroGPU Spaces with A100 hardware.
-
Dev Mode: Fast iterations via SSH/VS Code for Spaces.
-
Dataset Viewer: Activate it on private datasets.
-
Inference API: Get higher rate limits for serverless inference.
-
Blog Articles: Publish articles to the Hugging Face blog.
-
Social Posts: Share short updates with the community.
-
Features Preview: Get early access to upcoming features.
-
PRO Badge: Show your support on your profile.