Command Line Interface
The Ultralytics command line interface (CLI) provides a straightforward way to use Ultralytics YOLO models without needing a Python environment. The CLI supports running various tasks directly from the terminal using the yolo command, requiring no customization or Python code.
Watch: Mastering Ultralytics YOLO: CLI
Ultralytics yolo commands use the following syntax:
```bash
yolo TASK MODE ARGS
```
See all ARGS in the full Configuration Guide or with yolo cfg.
Where:
- `TASK` (optional) is one of `[detect, segment, classify, pose, obb]`. If not explicitly passed, YOLO will attempt to infer the `TASK` from the model type.
- `MODE` (required) is one of `[train, val, predict, export, track, benchmark]`.
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults. For a full list of available `ARGS`, see the Configuration page and `default.yaml`.
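One way to picture the `TASK` inference is via the naming convention of the official models, where a suffix like `-seg` or `-pose` indicates the task. The sketch below is a simplification based purely on filenames; the actual Ultralytics logic inspects the loaded model itself, and this helper is hypothetical:

```python
def guess_task(model_name: str) -> str:
    """Illustrative sketch: guess TASK from an official model's filename suffix.

    Not the actual Ultralytics implementation, which infers the task
    from the model's architecture rather than its name.
    """
    stem = model_name.rsplit(".", 1)[0]  # drop the .pt / .onnx extension
    for suffix, task in (("-seg", "segment"), ("-pose", "pose"),
                         ("-cls", "classify"), ("-obb", "obb")):
        if stem.endswith(suffix):
            return task
    return "detect"  # plain model names default to detection

print(guess_task("yolo26n-seg.pt"))  # segment
print(guess_task("yolo26n.pt"))      # detect
```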
Arguments must be passed as `arg=val` pairs, separated by an equals sign `=` and delimited by spaces between pairs. Do not use `--` argument prefixes or commas `,` between arguments.
```bash
yolo predict model=yolo26n.pt imgsz=640 conf=0.25          # ✅ correct
yolo predict model yolo26n.pt imgsz 640 conf 0.25          # ❌ missing '='
yolo predict --model yolo26n.pt --imgsz 640 --conf 0.25    # ❌ '--' prefixes not allowed
```
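Conceptually, each `arg=value` pair becomes one entry in an overrides mapping applied on top of the defaults. A minimal Python sketch of that parsing rule (a hypothetical helper, not the actual Ultralytics parser):

```python
def parse_overrides(args):
    """Hypothetical sketch of arg=value parsing; not the real Ultralytics code."""
    overrides = {}
    for pair in args:
        if "=" not in pair:
            raise SyntaxError(f"'{pair}' is not an arg=value pair")
        key, value = pair.split("=", 1)  # split on the first '=' only,
        overrides[key] = value           # so values like URLs may contain '='
    return overrides

print(parse_overrides(["model=yolo26n.pt", "imgsz=640", "conf=0.25"]))
# {'model': 'yolo26n.pt', 'imgsz': '640', 'conf': '0.25'}
```

Splitting only on the first `=` is why a pair like `source=https://example.com/a=b` would still parse cleanly.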
Train
Train YOLO on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments, see the Configuration page.
Start training YOLO26n on COCO8 for 100 epochs at image size 640:
```bash
yolo detect train data=coco8.yaml model=yolo26n.pt epochs=100 imgsz=640
```

Val
Validate the accuracy of the trained model on the COCO8 dataset. No arguments are needed as the model retains its training data and arguments as model attributes.
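The "retained arguments" behavior can be pictured with plain dicts: explicit CLI arguments win, and anything not passed falls back to what the checkpoint stored at training time. The attribute names below are hypothetical stand-ins, not the actual Ultralytics internals:

```python
# Illustrative only: stand-ins for attributes a trained checkpoint is
# assumed to carry; not the actual Ultralytics model internals.
saved_args = {"data": "coco8.yaml", "imgsz": 640}  # stored at training time
cli_args = {}  # `yolo detect val model=yolo26n.pt` passes no extra arguments

# Explicit CLI arguments take precedence; missing keys fall back to saved values.
effective = {**saved_args, **cli_args}
print(effective)  # {'data': 'coco8.yaml', 'imgsz': 640}
```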
Validate an official YOLO26n model:
```bash
yolo detect val model=yolo26n.pt
```

Predict
Use a trained model to run predictions on images.
Predict with an official YOLO26n model:
```bash
yolo detect predict model=yolo26n.pt source='https://ultralytics.com/images/bus.jpg'
```

Export
Export a model to a different format like ONNX or CoreML.
Export an official YOLO26n model to ONNX format:
```bash
yolo export model=yolo26n.pt format=onnx
```

Available Ultralytics export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`.
| Format | format Argument | Model | Metadata | Arguments |
|---|---|---|---|---|
| PyTorch | - | yolo26n.pt | ✅ | - |
| TorchScript | torchscript | yolo26n.torchscript | ✅ | imgsz, half, dynamic, optimize, nms, batch, device |
| ONNX | onnx | yolo26n.onnx | ✅ | imgsz, half, dynamic, simplify, opset, nms, batch, device |
| OpenVINO | openvino | yolo26n_openvino_model/ | ✅ | imgsz, half, dynamic, int8, nms, batch, data, fraction, device |
| TensorRT | engine | yolo26n.engine | ✅ | imgsz, half, dynamic, simplify, workspace, int8, nms, batch, data, fraction, device |
| CoreML | coreml | yolo26n.mlpackage | ✅ | imgsz, dynamic, half, int8, nms, batch, device |
| TF SavedModel | saved_model | yolo26n_saved_model/ | ✅ | imgsz, keras, int8, nms, batch, data, fraction, device |
| TF GraphDef | pb | yolo26n.pb | ❌ | imgsz, batch, device |
| TF Lite | tflite | yolo26n.tflite | ✅ | imgsz, half, int8, nms, batch, data, fraction, device |
| TF Edge TPU | edgetpu | yolo26n_edgetpu.tflite | ✅ | imgsz, int8, data, fraction, device |
| TF.js | tfjs | yolo26n_web_model/ | ✅ | imgsz, half, int8, nms, batch, data, fraction, device |
| PaddlePaddle | paddle | yolo26n_paddle_model/ | ✅ | imgsz, batch, device |
| MNN | mnn | yolo26n.mnn | ✅ | imgsz, batch, int8, half, device |
| NCNN | ncnn | yolo26n_ncnn_model/ | ✅ | imgsz, half, batch, device |
| IMX500 | imx | yolo26n_imx_model/ | ✅ | imgsz, int8, data, fraction, nms, device |
| RKNN | rknn | yolo26n_rknn_model/ | ✅ | imgsz, batch, name, device |
| ExecuTorch | executorch | yolo26n_executorch_model/ | ✅ | imgsz, batch, device |
| Axelera | axelera | yolo26n_axelera_model/ | ✅ | imgsz, batch, int8, data, fraction, device |
| DeepX | deepx | yolo26n_deepx_model/ | ✅ | imgsz, int8, data, optimize, device |
See full export details on the Export page.
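As the Model column in the table shows, each `format` value maps to a predictable artifact name derived from the model's stem. A rough sketch of that mapping for a few formats (a hypothetical helper based only on the table above, not the actual Ultralytics export code):

```python
# Illustrative only: artifact name patterns taken from the export table above.
SUFFIXES = {
    "torchscript": "{stem}.torchscript",
    "onnx": "{stem}.onnx",
    "engine": "{stem}.engine",           # TensorRT
    "openvino": "{stem}_openvino_model/",
}

def export_artifact(model_path: str, fmt: str) -> str:
    """Hypothetical: predict the exported artifact name for a given format."""
    stem = model_path.rsplit(".", 1)[0]  # e.g. 'yolo26n.pt' -> 'yolo26n'
    return SUFFIXES[fmt].format(stem=stem)

print(export_artifact("yolo26n.pt", "onnx"))      # yolo26n.onnx
print(export_artifact("yolo26n.pt", "openvino"))  # yolo26n_openvino_model/
```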
Overriding Default Arguments
Override default arguments by passing them in the CLI as arg=value pairs.
Train a detection model for 10 epochs with a learning rate of 0.01:
```bash
yolo detect train data=coco8.yaml model=yolo26n.pt epochs=10 lr0=0.01
```

Overriding Default Config File
Override the default.yaml configuration file entirely by passing a new file with the cfg argument, such as cfg=custom.yaml.
To do this, first create a copy of default.yaml in your current working directory with the yolo copy-cfg command, which creates a default_copy.yaml file.
You can then pass this file as cfg=default_copy.yaml along with any additional arguments, like imgsz=320 in this example:
```bash
# Step 1: Create default_copy.yaml in the current working directory
yolo copy-cfg

# Step 2: Use the copied config with additional argument overrides
yolo cfg=default_copy.yaml imgsz=320
```

Solutions Commands
Ultralytics provides ready-to-use solutions for common computer vision applications through the CLI. The yolo solutions command exposes object counting, cropping, blurring, workout monitoring, heatmaps, instance segmentation, VisionEye, speed estimation, queue management, analytics, Streamlit inference, and zone-based tracking — see the Solutions page for the full catalog. Run yolo solutions help to list every supported solution and its arguments.
Count objects in a video or live stream:
```bash
# Count objects and display the output
yolo solutions count show=True

# Specify a video file path as the source
yolo solutions count source="path/to/video.mp4"
```

For more information on Ultralytics solutions, visit the Solutions page.
FAQ
How do I use the Ultralytics YOLO command line interface (CLI) for model training?
To train a model using the CLI, execute a single-line command in the terminal. For example, to train a detection model for 10 epochs with a learning rate of 0.01, run:
```bash
yolo train data=coco8.yaml model=yolo26n.pt epochs=10 lr0=0.01
```

This command uses the train mode with specific arguments. For a full list of available arguments, refer to the Configuration Guide.
What tasks can I perform with the Ultralytics YOLO CLI?
The Ultralytics YOLO CLI supports various tasks, including detection, segmentation, classification, pose estimation, and oriented bounding box detection. You can also perform operations like:
- **Train a Model:** Run `yolo train data=<data.yaml> model=<model.pt> epochs=<num>`.
- **Run Predictions:** Use `yolo predict model=<model.pt> source=<data_source> imgsz=<image_size>`.
- **Export a Model:** Execute `yolo export model=<model.pt> format=<export_format>`.
- **Use Solutions:** Run `yolo solutions <solution_name>` for ready-made applications.
Customize each task with various arguments. For detailed syntax and examples, see the respective sections like Train, Predict, and Export.
How can I validate the accuracy of a trained YOLO model using the CLI?
To validate a model's accuracy, use the val mode. For example, to validate a pretrained detection model with a batch size of 1 and an image size of 640, run:
```bash
yolo val model=yolo26n.pt data=coco8.yaml batch=1 imgsz=640
```

This command evaluates the model on the specified dataset and provides performance metrics like mAP, precision, and recall. For more details, refer to the Val section.
What formats can I export my YOLO models to using the CLI?
You can export YOLO models to various formats including ONNX, TensorRT, CoreML, TensorFlow, and more. For instance, to export a model to ONNX format, run:
```bash
yolo export model=yolo26n.pt format=onnx
```

The export command supports numerous options to optimize your model for specific deployment environments. For complete details on all available export formats and their specific parameters, visit the Export page.
How do I use the pre-built solutions in the Ultralytics CLI?
Ultralytics provides ready-to-use solutions through the solutions command. For example, to count objects in a video:
```bash
yolo solutions count source="path/to/video.mp4"
```

These solutions require minimal configuration and provide immediate functionality for common computer vision tasks. To see all available solutions, run `yolo solutions help`. Each solution has specific parameters that can be customized to fit your needs.