Challenges in Training YOLO Models with Large Images for Coral TPU Deployment #22923
👋 Hello @heisenburrger, thanks for starting this detailed discussion and for using Ultralytics YOLO for your Coral TPU deployment work 🚀 This is an automated response to help move things forward quickly; an Ultralytics engineer will also review your post and follow up with more specific guidance soon 🙂 In the meantime, to help us better understand your scenario (large images, aspect ratio differences, and the Coral TPU 416-size requirement), please share as many details about your setup as you can, such as your exact training command, your dataset configuration, and the Ultralytics version you are using.
We also recommend checking the Ultralytics Docs at https://docs.ultralytics.com while you wait, as many similar topics around image sizes and training setups are covered there. If this discussion eventually reveals a training-related issue or limitation, it is very helpful to narrow things down to a small, clearly described configuration (for example, a subset of your data and a minimal training command); that makes it much easier for us to reason about the behavior and reproduce any potential problems. If at any point this turns into a 🐛 Bug Report (for example, if you see inconsistent behavior between image sizes or environments), please provide a minimum reproducible example so our team can investigate efficiently. If this remains a custom training ❓ Question (which it sounds like for now), please continue to share your training commands, results, and any follow-up details in this thread.
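For reference, a minimal reproducible configuration could look something like the sketch below; the starting weights, data.yaml path, and epoch count are placeholders for your own setup, not a prescribed recipe.

```python
# Minimal sketch of a reproducible training run (placeholder paths/values).
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # placeholder: use the checkpoint you actually start from
model.train(
    data="data.yaml",   # placeholder: your dataset YAML
    imgsz=416,          # swap to 1792 for the large-image comparison run
    epochs=100,
)
```

Sharing two such runs (one at imgsz=416 and one at your larger size) together with their validation metrics makes the comparison much easier to discuss.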
You’re also very welcome to discuss tiling strategies and deployment considerations with the wider Ultralytics community.
Upgrade
Before we dive deeper, please ensure you are running the latest ultralytics release by upgrading with pip install -U ultralytics.

Environments
YOLO may be run in any of the up-to-date verified environments listed in the docs (with all dependencies including CUDA/CUDNN, Python, and PyTorch preinstalled).
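After upgrading, you can confirm the installed version and your environment with a quick check; ultralytics.checks() prints the package version along with Python, PyTorch, CUDA, and hardware details.

```python
# Quick environment report after running: pip install -U ultralytics
import ultralytics

ultralytics.checks()  # prints Ultralytics version, Python/torch versions, CUDA and RAM info
```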
Using one of these verified environments can help rule out environment-specific issues when comparing training at large image sizes versus 416 for Coral TPU deployment.

Status
If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
I have a YOLO model trained on my custom dataset. I originally set the image size to 1792 because my dataset images have a resolution of 1512×2040. However, I need to run the model on a Google Coral TPU, which requires an input image size of 416.
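For context, I plan to convert the trained model with the standard Ultralytics Edge TPU export, roughly like the snippet below; the checkpoint path is just the default training output location, and I am fixing imgsz at 416 to match what my Coral pipeline expects.

```python
# Rough sketch of the Edge TPU export I'm targeting (placeholder checkpoint path).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # placeholder: my trained weights
model.export(format="edgetpu", imgsz=416)          # produces an *_edgetpu.tflite model
```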
When I train the model using an image size of 416, the accuracy becomes very poor. I think this might be because my dataset images have a different aspect ratio than the square 416×416 input, so resizing them down to 416 may not be ideal. But since my dataset is already prepared and annotated at 1512×2040, I’m not sure what to do.
For inference, I know I can use tiling, but can I also use tiling for training? Do I need to split every image into 416×416 tiles along with their corresponding annotations?
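If tiling for training is the way to go, I imagine the preprocessing would look roughly like the sketch below: cut each image into 416×416 tiles with some overlap, clip each box to the tile it falls in, and re-normalise the labels to tile coordinates. The tile size, overlap, visibility threshold, and paths here are placeholders, and I understand libraries such as SAHI provide similar dataset-slicing utilities.

```python
# Sketch: split one image + its YOLO-format labels into 416x416 training tiles.
# Placeholder values: TILE, OVERLAP, MIN_VISIBLE, and all paths.
from pathlib import Path

import cv2

TILE = 416          # tile size required by my Coral deployment
OVERLAP = 0.2       # fractional overlap between neighbouring tiles
MIN_VISIBLE = 0.3   # keep a box only if this fraction of its area lies inside the tile


def tile_origins(full: int, tile: int, step: int) -> list[int]:
    """Tile start positions that cover the full dimension without running past the edge."""
    if full <= tile:
        return [0]
    origins = list(range(0, full - tile, step))
    origins.append(full - tile)  # final tile flush with the far edge
    return origins


def tile_image(img_path: Path, label_path: Path, out_dir: Path) -> None:
    img = cv2.imread(str(img_path))
    h, w = img.shape[:2]

    # Load YOLO labels (class cx cy bw bh, normalised) and convert to pixel corners.
    boxes = []
    for line in label_path.read_text().splitlines():
        cls, cx, cy, bw, bh = line.split()
        cx, cy, bw, bh = float(cx) * w, float(cy) * h, float(bw) * w, float(bh) * h
        boxes.append((cls, cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2))

    step = int(TILE * (1 - OVERLAP))
    out_dir.mkdir(parents=True, exist_ok=True)
    for y0 in tile_origins(h, TILE, step):
        for x0 in tile_origins(w, TILE, step):
            x1, y1 = x0 + TILE, y0 + TILE
            tile_labels = []
            for cls, bx0, by0, bx1, by1 in boxes:
                # Intersect the box with this tile and drop boxes that are mostly cut off.
                ix0, iy0 = max(bx0, x0), max(by0, y0)
                ix1, iy1 = min(bx1, x1), min(by1, y1)
                if ix1 <= ix0 or iy1 <= iy0:
                    continue
                visible = (ix1 - ix0) * (iy1 - iy0) / ((bx1 - bx0) * (by1 - by0))
                if visible < MIN_VISIBLE:
                    continue
                # Re-normalise the clipped box to tile coordinates.
                ncx = ((ix0 + ix1) / 2 - x0) / TILE
                ncy = ((iy0 + iy1) / 2 - y0) / TILE
                nbw = (ix1 - ix0) / TILE
                nbh = (iy1 - iy0) / TILE
                tile_labels.append(f"{cls} {ncx:.6f} {ncy:.6f} {nbw:.6f} {nbh:.6f}")
            if not tile_labels:
                continue  # could also keep a few empty tiles as background samples
            stem = f"{img_path.stem}_{x0}_{y0}"
            cv2.imwrite(str(out_dir / f"{stem}.jpg"), img[y0:y1, x0:x1])
            (out_dir / f"{stem}.txt").write_text("\n".join(tile_labels) + "\n")
```

The overlap is there so that objects cut at a tile border still appear fully in a neighbouring tile, and each source image would produce several tile images plus matching .txt files that a new dataset YAML could point to. I'm not sure what overlap or visibility threshold is sensible, though.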
What is the best approach here?