usage/cfg/ #8471
Replies: 26 comments 79 replies
-
How can I print the confusion matrix after training a YOLOv8 model, using the saved weights file?
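A minimal sketch of one way to do this, assuming the standard ultralytics Python API; the weights and data.yaml paths are placeholders, and whether the metrics object exposes the raw matrix can vary between versions:

```python
from ultralytics import YOLO

# Load your trained weights (path is a placeholder)
model = YOLO("runs/detect/train/weights/best.pt")

# Validation saves confusion_matrix.png in the run directory when plots=True
metrics = model.val(data="data.yaml", plots=True)

# In recent versions the raw counts are also available on the returned metrics object
print(metrics.confusion_matrix.matrix)
```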
-
I have trained a YOLOv8 pretrained model on 1200 images. Now I want to improve the accuracy, so I want to train the model on 4500 new images. Can I train the best.pt model on the 4500 images? I want to keep its initial learning from the 1200 images and update that learning with the new images. If I train the model on only the new data, will the initial learning be overridden, or will it be kept as it is? Or do I have to train best.pt on the old + new data so that it learns from both?
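A minimal sketch of the usual approach: fine-tune from best.pt on a dataset YAML that points to both the old and the new images (combined.yaml is a hypothetical file you would create); training on the new images alone risks degrading what was learned from the original 1200.

```python
from ultralytics import YOLO

# Start from the previously trained weights so the earlier learning is the initialization
model = YOLO("best.pt")

# Point the dataset YAML at the combined old + new images to reduce forgetting
model.train(data="combined.yaml", epochs=100, imgsz=640)
```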
-
Traceback (most recent call last): What is the new syntax for extracting the coordinates of the bounding boxes? xyxy doesn't seem to work, please help 😣
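A minimal sketch of the current Results API, assuming a detection model; the weights and image paths are placeholders:

```python
from ultralytics import YOLO

model = YOLO("best.pt")       # placeholder weights
results = model("image.jpg")  # placeholder image

for r in results:
    xyxy = r.boxes.xyxy.cpu().numpy()  # (N, 4) array of [x1, y1, x2, y2]
    conf = r.boxes.conf.cpu().numpy()  # confidence per box
    cls = r.boxes.cls.cpu().numpy()    # class index per box
    print(xyxy, conf, cls)
```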
-
How can I modify the class names in a YOLOv8 model? For example, I have 3 classes, "Dog", "Cat", "Chicken", and I want to change them to "Small dog", "Small cat", "Small chicken" and save the new model.
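A minimal sketch of one way to do this by editing the checkpoint directly; it assumes the usual ultralytics checkpoint layout, where ckpt["model"] is the module carrying a .names dict, and the file names are placeholders:

```python
import torch

# Load the checkpoint (add weights_only=False on newer torch versions if required)
ckpt = torch.load("best.pt", map_location="cpu")

# Overwrite the class-name mapping and save a new weights file
ckpt["model"].names = {0: "Small dog", 1: "Small cat", 2: "Small chicken"}
torch.save(ckpt, "best_renamed.pt")
```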
-
I have the following folder structure: nc: 2, and under train/images/ I have a folder for each class: yolo_data/train/images/benign and yolo_data/train/images/malignant. The *.txt label files are under yolo_data/train/labels/benign and .../malignant, with the same structure for val. I keep getting FileNotFoundError: Found no valid file for the classes labels. Supported extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp. I'm not sure what I'm missing. The error message is also confusing to me, as it asks for the classes labels, but I thought the label files needed to be .txt, and .txt is not listed among the supported extensions. Do you have any ideas on what I can do to troubleshoot further? I have tried different folder structures and different content for data.yaml, as I found a few variations of data.yaml content online. Not sure what else to try.
-
Hello, I am working on a project in which it is necessary to recognise whether the same rider is in the frame or a different one. I have trained a model that detects cyclists, and for the similarity test I have tried to work with the "embed" configuration, because it is stated there that it can be used to perform similarity search.
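A minimal sketch of using embed mode for a similarity check, assuming a recent ultralytics version that provides model.embed; the image paths are placeholders, and since the embedding shape may vary the vectors are flattened before comparison:

```python
import torch.nn.functional as F
from ultralytics import YOLO

model = YOLO("best.pt")  # your trained cyclist detector

# embed mode returns one feature tensor per input image
emb = model.embed(["rider_a.jpg", "rider_b.jpg"])

# Cosine similarity between the two embeddings as a simple re-identification score
score = F.cosine_similarity(emb[0].flatten(), emb[1].flatten(), dim=0).item()
print(f"similarity: {score:.3f}")
```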
-
I am doing a project on underwater trash detection and used YOLOv8 for it. I trained it successfully and got a best model, but at a low mAP, so I want to change the backbone architecture to EfficientDet. How can I change it to improve my segmentation performance, and what is the procedure required to do it? I also saw a yolov8-cls-resnet50 YAML; how can I use that YAML file to train my dataset? Please help.
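A minimal sketch of training from an architecture YAML rather than pretrained weights; yolov8-cls-resnet50.yaml is a classification-model definition that ships with some ultralytics releases, the dataset path is a placeholder, and swapping in an entirely different detector such as EfficientDet is not a YAML-only change:

```python
from ultralytics import YOLO

# Build the model from a model-definition YAML (random weights) and train it
model = YOLO("yolov8-cls-resnet50.yaml")
model.train(data="path/to/classification_dataset", epochs=100, imgsz=224)
```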
-
Hello! I tried this command, but the labels/confidence levels are not shown. How do I fix this? model.predict(source='/content/test/images/3500_png.rf.4b9f17eb4355efb4e69c91649cf181ef.jpg', show=True, save=True, hide_labels=False, hide_conf=False, conf=0.5, iou=0.5, save_txt=False, save_crop=False, show_boxes=False)
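A minimal sketch of the likely fix: hide_labels/hide_conf are legacy argument names, and show_boxes=False also suppresses the labels drawn on the boxes; current ultralytics versions use show_labels/show_conf/show_boxes (the image path is the one from the question, the weights are a placeholder):

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights

model.predict(
    source="/content/test/images/3500_png.rf.4b9f17eb4355efb4e69c91649cf181ef.jpg",
    show=True,
    save=True,
    conf=0.5,
    iou=0.5,
    show_boxes=True,   # boxes must be drawn for their labels to appear
    show_labels=True,  # draw class names
    show_conf=True,    # draw confidence scores
)
```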
-
Hello,
-
The labels in the test set are there to validate whether the objects detected by YOLO are correct or not. We know there are humans in the images, but how is YOLO's output checked? Through the annotations; that is why the test set also needs annotations for validation purposes.

Original message from Nitya Pandey (20/04/2024 00:32):
Hello,
I didn't understand the difference between the test and train datasets. The model requires labels along with the images in the test dataset, but why is that so? I mean, the test should be run on new images where we want to detect objects, so why do we need labels for those objects?
-
I can't train my dataset, can you help?
Ultralytics YOLOv8.2.2 🚀 Python-3.10.12 torch-2.2.1+cu121 CPU (Intel Xeon 2.20GHz)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
-
There are graphs generated while training the models; a video from the channel Computer Vision Engineer explains how to check whether your training is going in a good direction or not. Also, how many epochs are you training for? Accuracy depends on that as well.

Original message from Nitya Pandey (21/04/2024 02:34):
Hey Glenn,
I have some more questions, if you don't mind.
I did fine-tuning of the pre-trained model, and the results were as follows:
Accuracy: 0.70
Precision: 0.85
Recall (completeness): 0.79
False positive rate (FPR): 0.14
My doubt here is what we should consider a well-trained model. And one more thing: when I deployed this model to the rest of my datasets, the results were terrible; only a few of the detected features were real. Could you help me with this?
Thank you so much.
-
Yes, training and testing sets should be separate from each other.

Original message from Nitya Pandey (20/04/2024 20:12):
Hello Glenn,
Thank you for the clarification, that was extremely helpful.
I still have some more questions:
Should the train and test datasets be entirely different from each other?
How can I deploy my model to the sets of new images to find the features I am interested in? I am working with astronomical images, which are around 5000 in number.
Thank you again, I appreciate your time.
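On the deployment question above, a minimal sketch of running prediction over a folder of new images, assuming the standard ultralytics Python API; the weights and directory paths are placeholders, and stream=True keeps memory bounded over thousands of images:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder trained weights

# Iterate over an entire folder; annotated outputs are saved to the run directory
for r in model.predict(source="path/to/new_images", stream=True, save=True, conf=0.25):
    print(r.path, len(r.boxes))
```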
-
Dear Team:
-
Hello, thank you for the information. I have one question regarding the cfg parameters: despite the fact that I manually set some parameters, when it starts to train the model it automatically removes my manual setup. For example, I set lr0: 0.001 and it changes it to 0.0043... How can I fix this?
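A minimal sketch, assuming the usual cause: with the default optimizer="auto", ultralytics chooses lr0 and momentum itself and ignores the values you pass, so naming the optimizer explicitly makes your lr0 stick (data.yaml is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# optimizer="auto" (the default) picks lr0/momentum automatically;
# setting an explicit optimizer keeps the lr0 you pass
model.train(data="data.yaml", epochs=100, optimizer="SGD", lr0=0.001)
```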
-
When I start training the model, all the images are being classified as background. It shows the error "no labels found in ...../labels.cache", and training is not working currently. The directories are organised as: The number of files in images and labels is the same, and the complete path to train is provided in the data.yaml file. The validation set is set up the same way, but that one works properly. I haven't been able to figure it out in hours. Please help!
-
To see the range of values, see this section: https://docs.ultralytics.com/usage/cfg/#train-settings
-
This starts the hyperparameter evolution process for 300 generations. Each generation tries different hyperparameters and selects the best-performing ones based on validation results. How can I do this?
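A minimal sketch using the built-in tuner, following the pattern in the Ultralytics hyperparameter-tuning docs; the dataset YAML and per-iteration settings are placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Each iteration ("generation") trains briefly with mutated hyperparameters and
# keeps the best-performing set based on the validation fitness
model.tune(
    data="coco8.yaml",  # placeholder dataset
    epochs=30,          # epochs per iteration
    iterations=300,     # number of generations
    optimizer="AdamW",
    plots=False,
    save=False,
    val=False,
)
```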
-
How can I show all of my image detection results and labels when validating?
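A minimal sketch of two options, assuming the standard ultralytics API; plots=True saves ground-truth vs. prediction mosaics for the validation batches, while running predict over the validation images saves one annotated image per file (paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("best.pt")

# Saves val_batch*_labels.jpg and val_batch*_pred.jpg mosaics in the run directory
metrics = model.val(data="data.yaml", plots=True)

# For one annotated image per file, predict over the validation images instead
model.predict(source="path/to/val/images", save=True, conf=0.25)
```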
-
project.version(dataset.version).deploy(model_type="yolov10", model_path=f"/content/datasets/runs/detect/train/")
An error occurred when getting the model upload URL: 404 Client Error: Not Found for url: https://api.roboflow.com/roboflow-jvuqo/football-players-detection-3zvbc/12/uploadModel?api_key=***&modelType=yolov10&nocache=true {
How can I solve this problem?
-
Is it possible to use CutMix and MixUp data augmentations for training YOLOv8, analogously to this PyTorch example? https://pytorch.org/vision/main/auto_examples/transforms/plot_cutmix_mixup.html#after-the-dataloader
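A minimal sketch: ultralytics exposes MixUp (and copy-paste for segmentation labels) as built-in training hyperparameters rather than as a DataLoader wrapper like the torchvision example; whether CutMix is available depends on the version you have installed, and the dataset YAML is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# mixup blends pairs of training images and labels with the given probability;
# copy_paste pastes segment instances between images (segmentation datasets only)
model.train(data="data.yaml", epochs=100, mixup=0.2, copy_paste=0.1)
```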
-
Is there any way to reduce the font size and hide the box in YOLO object tracking?
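A minimal sketch using Results.plot() to control the drawing while tracking, assuming the usual ultralytics + OpenCV loop; the video path is a placeholder, and exactly how font_size is honoured can differ between versions:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")  # placeholder source

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.track(frame, persist=True, verbose=False)
    # Smaller label text and thinner boxes; boxes=False would hide the boxes
    # entirely (their labels disappear with them)
    annotated = results[0].plot(font_size=0.4, line_width=1)
    cv2.imshow("tracking", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```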
-
Hi YOLO team,
-
Hello,
-
How can I use focal loss in YOLOv11 CLI training to handle an imbalanced dataset?
-
Greetings. I'm trying to understand the best combination of training and inference parameters related to image size for my specific imagery. The training dataset consists of 16-bit depth images of an identical static size, 720 x 1280, and the images coming in for inference will have the same specification. The model will be quantized to FP16 and deployed on an edge device. My understanding is that for the best speed it is better to specify the exact image size for training and for inference. How do I do this? When I try, I get a WARNING; there is also a warning that it cannot be done on multiple GPUs and will default to rect=False, which I can solve by specifying only one CUDA device. Please advise on how exactly to set the parameters for training and for inference so the model performs best on static-size imagery on an edge device.
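A minimal sketch of one common setup for fixed 720×1280 inputs, assuming single-GPU rectangular training and an FP16 export; image sizes are padded to multiples of the model stride (32), so 720 becomes 736 at inference, and the exact imgsz forms accepted vary by mode and version:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Rectangular training at the long-side size; rect=True is only supported on a single GPU
model.train(data="data.yaml", epochs=100, imgsz=1280, rect=True, device=0)

# Inference at a fixed rectangular size (720 is padded up to 736, a multiple of stride 32)
results = model.predict(source="frame.png", imgsz=(736, 1280), half=True, device=0)

# FP16 export at the same fixed input size; the format depends on your edge runtime
model.export(format="engine", half=True, imgsz=(736, 1280))
```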
-
usage/cfg/
Master YOLOv8 settings and hyperparameters for improved model performance. Learn to use YOLO CLI commands, adjust training settings, and optimize YOLO tasks & modes.
https://docs.ultralytics.com/usage/cfg/