New tutorial | Real-time inference with Ultralytics YOLO11 + Streamlit ⚡
Real-time inference is essential for many computer vision applications. In this tutorial, we walk through how to build and run a live inference application using YOLO11 and Streamlit.
What'll be covered:
✅ Running the application using both CLI and Python
✅ Configuring application settings and switching models
✅ Testing real-time object detection and instance segmentation
A practical guide to building interactive, real-time computer vision applications with YOLO11.
Watch now ➡️ https://lnkd.in/e_CHMAXP
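As a rough idea of what such an app boils down to, here is a minimal sketch of a live YOLO11 + Streamlit loop. It is not the tutorial's exact application; it assumes a local webcam at index 0 and the `yolo11n.pt` weights.

```python
# Minimal sketch of a live YOLO11 + Streamlit loop (not the tutorial's exact app).
# Assumptions: a local webcam at index 0 and the yolo11n.pt weights.
import cv2
import streamlit as st
from ultralytics import YOLO

st.title("YOLO11 real-time inference")
model = YOLO("yolo11n.pt")   # swap in yolo11n-seg.pt for instance segmentation

frame_slot = st.empty()      # placeholder updated once per frame
cap = cv2.VideoCapture(0)    # webcam stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)        # run inference on the BGR frame
    annotated = results[0].plot()                # draw boxes/masks on the frame
    frame_slot.image(annotated, channels="BGR")  # display in the Streamlit app

cap.release()
```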
About us
Ultralytics is a leading AI company dedicated to creating transformative, open-source computer vision solutions. As creators of YOLO, the world's most popular real-time object detection framework, we empower millions globally—from individual developers to enterprise innovators—with advanced, accessible, and easy-to-use AI tools. Driven by relentless innovation and a commitment to execution, we continuously push AI boundaries, making it faster, lighter, and more accurate. Our mission is to democratize access to cutting-edge technology, providing everyone an equal opportunity to improve their lives and impact the world positively. Acta Non Verba—actions, not words.
- Website
- http://www.ultralytics.com
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- London
- Type
- Privately Held
- Founded
- 2022
- Specialties
- AI, Deep Learning, Data Science, Artificial Intelligence, Machine Learning, ML, SaaS, LLM, Computer Vision, and YOLO
Locations
- Primary: 8 Devonshire Square, London, EC2M 4YJ, GB
- Calle de las Huertas 41, Madrid, Madrid 28014, ES
- 9-1 Kefa Road, Shenzhen, Guangdong 518063, CN
Updates
-
Train Ultralytics YOLO11 using the KITTI dataset! 🚗
The KITTI dataset remains one of the most influential benchmarks in computer vision, powering research in autonomous driving, traffic analysis, and urban scene understanding. With Ultralytics, you can load the entire dataset instantly using the ready-to-use KITTI YAML file and build models capable of understanding complex driving environments, from traffic flow to vulnerable road-user detection.
Start now ➡️ https://bit.ly/485lB3t
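A minimal training sketch, assuming the YAML from the post is exposed as `kitti.yaml` (adjust the name or path to match the file you use):

```python
# Hedged sketch: train YOLO11 on KITTI via the ready-to-use dataset YAML.
# "kitti.yaml" is an assumption; point `data` at the YAML file you downloaded.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")    # start from pretrained YOLO11 weights
model.train(
    data="kitti.yaml",        # dataset config: image paths + class names
    epochs=100,
    imgsz=640,
)
metrics = model.val()         # evaluate mAP on the validation split
```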
-
New release Ultralytics v8.3.240 | More reliable SAM 2/3 video segmentation on Apple Silicon 🚀
This stability release improves SAM 3 on MPS, bounds long-video memory growth, and smooths export + docs for faster day-to-day workflows.
Minor updates:
✅ SAM 3 now runs reliably on Apple Silicon (MPS) with an MPS-safe rotary encoding path
✅ SAM 2/3 video inference memory is now bounded to reduce VRAM growth on long runs
✅ `pip install ultralytics[export]` now pulls `onnxslim` automatically for cleaner ONNX exports
Ultralytics v8.3.240 release notes ➡️ https://lnkd.in/e_Qn-2dT
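A minimal export sketch, assuming the export extras are installed; `simplify` is the flag that invokes graph cleanup (which uses `onnxslim`):

```python
# After `pip install "ultralytics[export]"` (which now pulls onnxslim),
# a standard ONNX export with graph simplification enabled.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
onnx_path = model.export(format="onnx", simplify=True)  # returns the output path
print(onnx_path)
```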
-
Crack segmentation with Ultralytics YOLO11!
With the segmentation task, you can go beyond detecting surface cracks to outline their shape and length precisely, even when cracks are thin or partially hidden in noisy textures. This level of pixel-accurate detail is essential for structural health monitoring and predictive maintenance. You can also integrate this into real-time pipelines for bridges, buildings, machinery parts, and more.
Read more ➡️ https://bit.ly/49t29yK
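A short sketch of the idea, assuming the `crack-seg.yaml` dataset config from the Ultralytics docs and an illustrative test image name:

```python
# Sketch: fine-tune a YOLO11 segmentation model on the crack segmentation
# dataset, then outline cracks on a new image.
# "crack-seg.yaml" and "concrete_wall.jpg" are assumptions for illustration.
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")                # segmentation variant of YOLO11
model.train(data="crack-seg.yaml", epochs=100, imgsz=640)

results = model("concrete_wall.jpg")          # run inference on a test image
results[0].save("cracks_outlined.jpg")        # save the image with mask overlays
```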
-
New release v8.3.239 | SAM 3 text prompts now plug-and-play 🧩
This update removes extra SAM 3 tokenizer setup, adds real-time disk/network throughput monitoring, and improves ONNX export compatibility (including Jetson JetPack 6) for smoother deploys.
Minor updates:
✅ SAM 3 no longer needs manual BPE vocab downloads or `bpe_path`, aligning with the simplified API in the SAM 3 docs https://lnkd.in/eFgH7Eff
✅ `SystemLogger.get_metrics(rates=True)` adds live disk + network MB/s for better bottleneck diagnosis
✅ ONNX export now supports newer ONNX versions up to `<2.0.0`, improving Jetson/JetPack 6 workflows
Ultralytics v8.3.239 release notes ➡️ https://lnkd.in/eaXRe97i
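A heavily hedged sketch of what "plug-and-play" means in practice: no tokenizer or `bpe_path` setup before prompting. The weights name and the text-prompt keyword below are assumptions; check the SAM 3 docs linked above for the exact call.

```python
# Hedged sketch: SAM 3 text-prompted segmentation with no manual BPE vocab
# download or `bpe_path` step (removed in this release).
# ASSUMPTIONS: the "sam3.pt" weights name and the `prompt=` keyword; verify
# both against the SAM 3 docs before use.
from ultralytics import SAM

model = SAM("sam3.pt")
results = model("street.jpg", prompt="pedestrian")  # text-prompt kwarg assumed
results[0].show()
```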
-
FlipUD & FlipLR: Simple augmentations making Ultralytics YOLO models stronger! 🔁 In computer vision, small transformations can create big improvements. Two of the most effective, yet underrated, data augmentation techniques are flipud (up–down flip) and fliplr (left–right flip). Why they matter: ✅ fliplr: Horizontal flip helps your model generalise when objects appear mirrored, like vehicles approaching from different directions or people facing left vs. right. ✅ flipud: Vertical flip is useful for aerial, industrial, or top-down vision tasks where objects can appear upside-down, boosting robustness in drones, robotics, and surveillance. By flipping existing images, you instantly create new, realistic variations, reducing overfitting and improving real-world performance without collecting extra data. Learn more ➡️ https://bit.ly/3JY0zur
-
New release Ultralytics v8.3.238 | SAM 3 concept/video segmentation is more robust and smoother to use 🧠🧩
This refinement-focused update improves SAM 3 batching stability, macOS (MPS) reliability, and installs, while making exports (TFLite/ONNX) more predictable for deployment.
Minor updates:
✅ Fixed SAM 3 “encode once, query many times” prompt-batching cache bug for stable multi-prompt workflows
✅ Added SAM 3 tokenizer dependency auto-checks + improved Apple Silicon (MPS) attention reliability
✅ Hardened export paths (TFLite dtype fixes, safer ONNX FP16 guards, clearer version checks)
Ultralytics v8.3.238 release notes ➡️ https://lnkd.in/ekNi5GPW
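For reference, a minimal sketch of the two export paths this release hardens:

```python
# Sketch of the export paths hardened in v8.3.238.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="tflite")            # TFLite export (dtype handling fixed)
model.export(format="onnx", half=True)   # FP16 ONNX export, now guarded more safely
```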
-
Ultralytics reposted this
Ultralytics SAM3 integration is live | Text prompt, visual prompt supported 🎉💙
Meta recently released a foundation model for promptable concept segmentation, which is now fully integrated into the ultralytics package, providing native support for concept segmentation with text prompts, image exemplar prompts, and video tracking capabilities.
What's included:
✅ Segmentation with text prompts
✅ Object tracking with text prompts
✅ Video inference support
✅ Visual prompts, i.e., bbox and point, also supported
Read more about SAM3 ➡️ https://lnkd.in/d_miXPQN
All you need to do is "pip install ultralytics". Give it a try and share your thoughts in the comments. We are always looking forward to improving the package 👇
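A hedged sketch of the visual-prompt side, following the keyword arguments used by the package's existing SAM 2 interface (`bboxes` / `points` / `labels`); the `sam3.pt` weights name is an assumption, so check the SAM3 docs linked above:

```python
# Hedged sketch: visual prompts with the integrated SAM model, using the
# kwargs from the existing SAM 2 predict interface.
# ASSUMPTION: the "sam3.pt" weights name; verify against the SAM3 docs.
from ultralytics import SAM

model = SAM("sam3.pt")

# Segment the object inside a bounding box (x1, y1, x2, y2).
results = model("image.jpg", bboxes=[100, 100, 400, 360])

# Segment the object under a point; label 1 marks a foreground click.
results = model("image.jpg", points=[250, 220], labels=[1])
```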
-
New release v8.3.237 | SAM 3 image + video segmentation now supported 🚀
Ultralytics adds full SAM 3 support for concept segmentation (text + exemplar prompts) and video tracking, plus smoother exports and day-to-day training/validation improvements. See the new SAM 3 docs for a quick start ➡️ https://lnkd.in/eFgH7Eff
Minor updates:
✅ ONNX export: FP16 supported on CPU with graceful fallback
✅ More resume-training overrides (e.g., `workers`, `cache`, `patience`, `val`, `plots`)
✅ More robust Edge TPU/IMX dependency handling during export
Ultralytics v8.3.237 release notes ➡️ https://lnkd.in/eRgGJ_k3
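A short sketch of the resume-override improvement, assuming the default checkpoint location (your `last.pt` path may differ):

```python
# Sketch: resume an interrupted run while overriding settings that
# v8.3.237 newly allows changing on resume (workers, cache, patience, ...).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/last.pt")  # checkpoint path may differ
model.train(
    resume=True,     # pick up the interrupted run where it stopped
    workers=4,       # override dataloader workers on resume
    patience=50,     # override early-stopping patience
    plots=False,     # skip plot generation this time
)
```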
-
New tutorial | Train Ultralytics YOLO11 on the VisDrone dataset 🚁
The VisDrone dataset is one of the leading benchmarks for drone-based object detection and aerial scene understanding. In this tutorial, we show how to train YOLO11 on VisDrone and evaluate its performance on real aerial imagery.
We cover:
✅ Dataset structure, classes & YAML configuration
✅ Setting up Ultralytics in Google Colab
✅ Training YOLO11 step by step on the VisDrone dataset
✅ Running predictions on drone imagery (cars, pedestrians, bicycles & more)
A complete walkthrough for building aerial object detection models with YOLO11.
Watch now ➡️ https://lnkd.in/dUpuywZN
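The core of the walkthrough fits in a few lines, since `VisDrone.yaml` ships with the package and fetches the dataset on first use (the test image name below is illustrative):

```python
# Sketch: train YOLO11 on VisDrone via the bundled dataset YAML,
# then predict on an aerial image ("drone_view.jpg" is illustrative).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="VisDrone.yaml", epochs=100, imgsz=640)

results = model("drone_view.jpg")  # detect cars, pedestrians, bicycles, ...
results[0].show()
```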
-