Command Line Interface

The Ultralytics command line interface (CLI) provides a straightforward way to use Ultralytics YOLO models straight from the terminal. The yolo command runs training, validation, prediction, export, and other tasks without requiring any Python code or customization.



Watch: Mastering Ultralytics YOLO: CLI
Example

Ultralytics yolo commands use the following syntax:

yolo TASK MODE ARGS

Where:

  • TASK (optional) is one of [detect, segment, classify, pose, obb]. If not explicitly passed, YOLO will attempt to infer the TASK from the model type.
  • MODE (required) is one of [train, val, predict, export, track, benchmark]
  • ARGS (optional) are any number of custom arg=value pairs like imgsz=320 that override defaults. For a full list of available ARGS, see the Configuration page and default.yaml, or run yolo cfg.
Warning

Arguments must be passed as arg=val pairs, separated by an equals sign = and delimited by spaces between pairs. Do not use -- argument prefixes or commas , between arguments.

  • yolo predict model=yolo26n.pt imgsz=640 conf=0.25 ✅ (correct)
  • yolo predict model yolo26n.pt imgsz 640 conf 0.25 ❌ (missing = between argument and value)
  • yolo predict --model yolo26n.pt --imgsz 640 --conf 0.25 ❌ (-- prefixes are not accepted)
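The arg=value convention above can be illustrated with a short parsing sketch. This is only an illustration of the syntax rule, not the actual Ultralytics parser, and the numeric type coercion shown is an assumption:

```python
def parse_cli_args(tokens):
    """Parse a list of arg=value tokens into a dict, coercing numeric values."""
    overrides = {}
    for token in tokens:
        if "=" not in token:
            raise SyntaxError(f"'{token}' is missing '='; arguments must be arg=value pairs")
        key, value = token.split("=", 1)
        # Coerce common scalar types; anything non-numeric stays a string.
        for cast in (int, float):
            try:
                value = cast(value)
                break
            except ValueError:
                continue
        overrides[key] = value
    return overrides

print(parse_cli_args(["model=yolo26n.pt", "imgsz=640", "conf=0.25"]))
# → {'model': 'yolo26n.pt', 'imgsz': 640, 'conf': 0.25}
```

Note that a token like "imgsz 640" (space instead of =) fails this check, which is why the second example above is rejected.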

Train

Train YOLO on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments, see the Configuration page.

Example

Start training YOLO26n on COCO8 for 100 epochs at image size 640:

yolo detect train data=coco8.yaml model=yolo26n.pt epochs=100 imgsz=640

Val

Validate the accuracy of the trained model on the COCO8 dataset. No arguments are needed as the model retains its training data and arguments as model attributes.

Example

Validate an official YOLO26n model:

yolo detect val model=yolo26n.pt

Predict

Use a trained model to run predictions on images.

Example

Predict with an official YOLO26n model:

yolo detect predict model=yolo26n.pt source='https://ultralytics.com/images/bus.jpg'

Export

Export a model to a different format like ONNX or CoreML.

Example

Export an official YOLO26n model to ONNX format:

yolo export model=yolo26n.pt format=onnx

Available Ultralytics export formats are in the table below. You can export to any format using the format argument, e.g., format='onnx' or format='engine'.

| Format | format Argument | Model | Metadata | Arguments |
| --- | --- | --- | --- | --- |
| PyTorch | - | yolo26n.pt | ✅ | - |
| TorchScript | torchscript | yolo26n.torchscript | ✅ | imgsz, half, dynamic, optimize, nms, batch, device |
| ONNX | onnx | yolo26n.onnx | ✅ | imgsz, half, dynamic, simplify, opset, nms, batch, device |
| OpenVINO | openvino | yolo26n_openvino_model/ | ✅ | imgsz, half, dynamic, int8, nms, batch, data, fraction, device |
| TensorRT | engine | yolo26n.engine | ✅ | imgsz, half, dynamic, simplify, workspace, int8, nms, batch, data, fraction, device |
| CoreML | coreml | yolo26n.mlpackage | ✅ | imgsz, dynamic, half, int8, nms, batch, device |
| TF SavedModel | saved_model | yolo26n_saved_model/ | ✅ | imgsz, keras, int8, nms, batch, data, fraction, device |
| TF GraphDef | pb | yolo26n.pb | ✅ | imgsz, batch, device |
| TF Lite | tflite | yolo26n.tflite | ✅ | imgsz, half, int8, nms, batch, data, fraction, device |
| TF Edge TPU | edgetpu | yolo26n_edgetpu.tflite | ✅ | imgsz, int8, data, fraction, device |
| TF.js | tfjs | yolo26n_web_model/ | ✅ | imgsz, half, int8, nms, batch, data, fraction, device |
| PaddlePaddle | paddle | yolo26n_paddle_model/ | ✅ | imgsz, batch, device |
| MNN | mnn | yolo26n.mnn | ✅ | imgsz, batch, int8, half, device |
| NCNN | ncnn | yolo26n_ncnn_model/ | ✅ | imgsz, half, batch, device |
| IMX500 | imx | yolo26n_imx_model/ | ✅ | imgsz, int8, data, fraction, nms, device |
| RKNN | rknn | yolo26n_rknn_model/ | ✅ | imgsz, batch, name, device |
| ExecuTorch | executorch | yolo26n_executorch_model/ | ✅ | imgsz, batch, device |
| Axelera | axelera | yolo26n_axelera_model/ | ✅ | imgsz, batch, int8, data, fraction, device |
| DeepX | deepx | yolo26n_deepx_model/ | ✅ | imgsz, int8, data, optimize, device |

See full export details on the Export page.
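As the table shows, the format value determines the name of the exported artifact. A small sketch of that naming pattern (a hypothetical helper for illustration only, not part of the ultralytics package; only a subset of formats is shown):

```python
# Suffix produced per export format, taken from the table above (subset shown).
EXPORT_SUFFIXES = {
    "torchscript": ".torchscript",
    "onnx": ".onnx",
    "engine": ".engine",
    "coreml": ".mlpackage",
    "tflite": ".tflite",
}

def exported_name(model_stem: str, fmt: str) -> str:
    """Predict the exported file name for a given model stem and format."""
    try:
        return model_stem + EXPORT_SUFFIXES[fmt]
    except KeyError:
        raise ValueError(f"unknown format: {fmt!r}") from None

print(exported_name("yolo26n", "onnx"))  # → yolo26n.onnx, matching the ONNX row
```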

Overriding Default Arguments

Override default arguments by passing them in the CLI as arg=value pairs.

Tip

Train a detection model for 10 epochs with a learning rate of 0.01:

yolo detect train data=coco8.yaml model=yolo26n.pt epochs=10 lr0=0.01

Overriding Default Config File

Override the default.yaml configuration file entirely by passing a new file with the cfg argument, such as cfg=custom.yaml.

To do this, first create a copy of default.yaml in your current working directory with the yolo copy-cfg command, which creates a default_copy.yaml file.

You can then pass this file as cfg=default_copy.yaml along with any additional arguments, like imgsz=320 in this example:

Example
yolo copy-cfg
yolo cfg=default_copy.yaml imgsz=320
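Conceptually, arg=value pairs passed alongside cfg are applied on top of the keys loaded from that file. A minimal sketch of this merge behavior, with config values assumed for illustration rather than read from default_copy.yaml:

```python
def merge_overrides(cfg: dict, overrides: dict) -> dict:
    """Return a copy of cfg with CLI overrides applied on top."""
    merged = dict(cfg)
    merged.update(overrides)
    return merged

# Values as they might appear in default_copy.yaml (assumed for illustration).
cfg = {"imgsz": 640, "epochs": 100, "lr0": 0.01}

# The CLI pair imgsz=320 wins over the file's value; other keys pass through.
print(merge_overrides(cfg, {"imgsz": 320}))
# → {'imgsz': 320, 'epochs': 100, 'lr0': 0.01}
```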

Solutions Commands

Ultralytics provides ready-to-use solutions for common computer vision applications through the CLI. The yolo solutions command exposes object counting, cropping, blurring, workout monitoring, heatmaps, instance segmentation, VisionEye, speed estimation, queue management, analytics, Streamlit inference, and zone-based tracking. See the Solutions page for the full catalog, or run yolo solutions help to list every supported solution and its arguments.

Example

Count objects in a video or live stream:

yolo solutions count show=True
yolo solutions count source="path/to/video.mp4" # specify video file path

For more information on Ultralytics solutions, visit the Solutions page.

FAQ

How do I use the Ultralytics YOLO command line interface (CLI) for model training?

To train a model using the CLI, execute a single-line command in the terminal. For example, to train a detection model for 10 epochs with a learning rate of 0.01, run:

yolo train data=coco8.yaml model=yolo26n.pt epochs=10 lr0=0.01

This command uses the train mode with specific arguments. For a full list of available arguments, refer to the Configuration Guide.

What tasks can I perform with the Ultralytics YOLO CLI?

The Ultralytics YOLO CLI supports various tasks, including detection, segmentation, classification, pose estimation, and oriented bounding box detection. You can also perform operations like:

  • Train a Model: Run yolo train data=<data.yaml> model=<model.pt> epochs=<num>.
  • Run Predictions: Use yolo predict model=<model.pt> source=<data_source> imgsz=<image_size>.
  • Export a Model: Execute yolo export model=<model.pt> format=<export_format>.
  • Use Solutions: Run yolo solutions <solution_name> for ready-made applications.

Customize each task with various arguments. For detailed syntax and examples, see the respective sections like Train, Predict, and Export.

How can I validate the accuracy of a trained YOLO model using the CLI?

To validate a model's accuracy, use the val mode. For example, to validate a pretrained detection model with a batch size of 1 and an image size of 640, run:

yolo val model=yolo26n.pt data=coco8.yaml batch=1 imgsz=640

This command evaluates the model on the specified dataset and provides performance metrics like mAP, precision, and recall. For more details, refer to the Val section.
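As a refresher on two of the reported metrics, precision and recall are computed from true positives, false positives, and false negatives (standard definitions, independent of the Ultralytics implementation):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of predictions that are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of ground truths that are found
    return precision, recall

print(precision_recall(80, 20, 20))  # → (0.8, 0.8)
```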

What formats can I export my YOLO models to using the CLI?

You can export YOLO models to various formats including ONNX, TensorRT, CoreML, TensorFlow, and more. For instance, to export a model to ONNX format, run:

yolo export model=yolo26n.pt format=onnx

The export command supports numerous options to optimize your model for specific deployment environments. For complete details on all available export formats and their specific parameters, visit the Export page.

How do I use the pre-built solutions in the Ultralytics CLI?

Ultralytics provides ready-to-use solutions through the solutions command. For example, to count objects in a video:

yolo solutions count source="path/to/video.mp4"

These solutions require minimal configuration and provide immediate functionality for common computer vision tasks. To see all available solutions, run yolo solutions help. Each solution has specific parameters that can be customized to fit your needs.
