modes/train/ #8075
215 comments · 569 replies
-
How can I print IoU and F1-score with the training results?
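The training log prints precision, recall and mAP rather than F1 or IoU directly, but F1 can be derived from the printed precision and recall. A minimal sketch in pure Python (no ultralytics dependency assumed):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: plug in the P and R values printed for a class.
print(f1_score(0.8, 0.6))  # -> 0.6857142857142857
```

Note that mAP@0.5 already summarizes the precision-recall trade-off at an IoU threshold of 0.5, so it is often used in place of a single F1/IoU number.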
-
How can we save sample labels and predictions on the validation set during training? I remember it being easy in YOLOv5, but I have not been able to figure it out with YOLOv8.
-
If I am not mistaken, the logs shown during training also contain the box (P, R, mAP@0.5 and mAP@0.5:0.95) and mask (P, R, mAP@0.5 and mAP@0.5:0.95) metrics for the validation set at each epoch. Then why am I getting worse metrics when running model.val() with best.pt? From the training and validation curves it is clear that the model is overfitting on the segmentation task, but that overfitting is a separate issue. Can you please help me out with this?
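A minimal sketch of re-running validation on a saved checkpoint (paths here are illustrative). One common cause of a gap versus the training printout is simply that best.pt is an earlier checkpoint than the final epoch whose metrics you saw; differing val settings (image size, split) can also shift the numbers:

```python
# Hypothetical paths; assumes ultralytics is installed.
def validate(ckpt="best.pt", data="data.yaml"):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO(ckpt)
    # Use the same imgsz and split as during training for a fair comparison.
    return model.val(data=data, split="val")
```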
-
So, does imgsz work differently when training than when predicting? For train: if it's an Is this right?
-
Hi all, I have a segmentation model trained on custom data with a single class, but the last several training runs show a trend toward overfitting. I tried adding more data in the training set, which reduced box_loss and cls_loss on val, but dfl_loss is increasing. Are there any suggestions for tuning the model? Thanks a lot.
-
I have a question about training the segmentation model. I have objects in my dataset that occlude each other, such that the top object splits the segmentation mask of the bottom object into two independent parts. As far as I can see, the coordinates of each point are listed sequentially in the label file. If I append the points of the two masks one after the other on the same object's label line, will that solve the problem?
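For reference, a YOLO segmentation label line is the class id followed by normalized x,y pairs of one polygon. A pure-Python sketch of the concatenation idea (this only shows the file format; whether the loader treats the concatenated parts as one clean instance is worth verifying on a few samples):

```python
def seg_label_line(cls_id, polygons):
    """Join several normalized polygons into one YOLO segment label line."""
    coords = [f"{v:.6f}" for poly in polygons for (x, y) in poly for v in (x, y)]
    return " ".join([str(cls_id)] + coords)

# Two triangle parts of the same occluded object, written on one line:
line = seg_label_line(0, [[(0.1, 0.1), (0.2, 0.1), (0.2, 0.2)],
                          [(0.5, 0.5), (0.6, 0.5), (0.6, 0.6)]])
```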
-
Hello there!
-
Hello, I am working on a project for Android devices. The GPU and CPU of my device are weak. Will it speed things up if I set imgsz to 320 for training? Or what do you recommend? What happens if the imgsz parameter is 640 for training and 320 for prediction? And what changes if imgsz is 320 for both training and prediction? Sorry for my English. Note: I converted the model to TFLite. Thanks, you are amazing.
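A minimal sketch of fixing a smaller input size at export time, which is what matters most for on-device speed (checkpoint path and value here are illustrative, not a recommendation):

```python
# Hypothetical paths; assumes ultralytics and the TFLite export deps are installed.
def export_for_mobile(ckpt="best.pt", imgsz=320):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO(ckpt)
    # The exported TFLite model will expect inputs at this resolution.
    return model.export(format="tflite", imgsz=imgsz)
```

Smaller inference resolutions generally run faster at some accuracy cost; keeping train and predict/export imgsz the same avoids a train/inference resolution mismatch.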
-
I've come to rely on YOLOv8 in my daily work; it's remarkably user-friendly. Thank you to the Ultralytics team for your excellent work on these models! I'm currently tackling a project focused on detecting minor defects on automobile engine parts. Since the defects are small objects in a given frame, could you offer guidance on training arguments or techniques that might improve performance for this type of data? I'm also interested in exploring attention mechanisms to enhance model performance, but I'd appreciate help understanding how to implement them. Special appreciation to the Ultralytics team.
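A sketch of a common starting point for small objects: a larger training resolution plus scale and mosaic augmentation. imgsz, mosaic, scale and epochs are real train arguments, but the values here are just guesses to tune from, not a verified recipe:

```python
# Assumes ultralytics is installed and `data` points to a dataset YAML.
def train_small_objects(data, imgsz=1280):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO("yolov8s.pt")
    # Larger imgsz gives small defects more pixels; mosaic/scale add variety.
    return model.train(data=data, imgsz=imgsz, mosaic=1.0, scale=0.5, epochs=300)
```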
-
Running the provided example led me to this Stack Overflow question: https://stackoverflow.com/q/75111196/815507 There are solutions on Stack Overflow; I wonder if you could help and update the guide to provide the best resolution?
-
We need to disable blur augmentation. I filed an issue, and Glenn suggested using blur=0, but it is not a valid argument. #8824
-
How can I train YOLOv8 with my custom dataset?
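A minimal sketch of fine-tuning on a custom dataset; "data.yaml" is a placeholder for your dataset YAML listing the train/val paths and class names:

```python
# Assumes ultralytics is installed.
def train_custom(data_yaml="data.yaml", epochs=100, imgsz=640):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO("yolov8n.pt")   # start from a pretrained checkpoint
    return model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
```

Results land under runs/detect/train by default; see https://docs.ultralytics.com/modes/train/ for the full argument list.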
-
Hey, I was trying out training a custom object detection model using a pretrained YOLOv8 model, and it shows:
0% 0/250 [00:00<?, ?it/s]
-
Hi! I'm working on a project where I plan to use YOLOv8 as the backbone for object detection, but I need a more hands-on approach during the training phase. How do I train the model manually: looping through epochs, performing forward propagation, calculating the loss, backpropagating, and updating weights? At the moment model.train() seems to handle all of this automatically in the background. The end goal is knowledge distillation, but to start I need access to these pieces. I haven't been able to find any examples of YOLOv8 being used in this way; some code and tips would be helpful.
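A hypothetical sketch of the loop structure, not the internal ultralytics API: it assumes you have extracted an underlying torch.nn.Module (e.g. YOLO("yolov8n.pt").model) and have a `criterion` mapping (predictions, batch) to a scalar loss, which is where a distillation term would be added:

```python
# Generic manual training loop; `model`, `criterion` and `dataloader`
# are assumptions supplied by the caller, not ultralytics names.
def manual_train(model, criterion, dataloader, epochs=10, lr=1e-3):
    import torch  # imported here so the sketch stays self-contained
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            preds = model(batch["img"])      # forward pass
            loss = criterion(preds, batch)   # e.g. detection loss + distillation term
            optimizer.zero_grad()
            loss.backward()                  # backpropagate
            optimizer.step()                 # update weights
    return model
```

Reproducing the full YOLOv8 loss and batch format takes more work than this sketch shows; the trainer and loss classes in the ultralytics source are the place to study for the exact details.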
-
I'm trying to understand the concept of training. I would like to extend the default classes with helmet, gloves, etc.
Thanks in advance
-
When training, is it necessary to rename all images to 1, 2, 3, ..., or is it enough that each corresponding annotation and image have the same name?
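YOLO pairs images and labels by file stem ("img_001.jpg" goes with "img_001.txt"), so arbitrary names are fine as long as stems match. A pure-Python helper to sanity-check a dataset (name chosen here for illustration):

```python
from pathlib import Path

def unmatched_stems(image_names, label_names):
    """Return stems of images that have no corresponding label file."""
    label_stems = {Path(n).stem for n in label_names}
    return sorted(Path(n).stem for n in image_names
                  if Path(n).stem not in label_stems)

# Example: "b.jpg" has no "b.txt" label.
missing = unmatched_stems(["a.jpg", "b.jpg"], ["a.txt"])
```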
-
ValueError: Invalid CUDA 'device=0,1' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.
torch.cuda.is_available(): True
torch.cuda.device_count(): 1
os.environ['CUDA_VISIBLE_DEVICES']: None
I have two GPUs available and I am trying to use both, but the code gives me this error. What should I do in this case? One GPU is dedicated and one is external; both are the same model, an NVIDIA RTX 4060 Ti.
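The printed torch.cuda.device_count(): 1 is the key: 'device=0,1' is rejected because torch only sees one GPU, often due to drivers or an environment hiding the second card. A small pure-Python helper (the name is illustrative) that builds a device argument from whatever count torch reports:

```python
def device_arg(n_visible):
    """Build a YOLO-style device string from the visible GPU count."""
    if n_visible == 0:
        return "cpu"
    return ",".join(str(i) for i in range(n_visible))

# Usage: device_arg(torch.cuda.device_count()) -> "0" when only one GPU
# is visible, which is why 'device=0,1' fails until both cards show up.
```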
-
When fixing the double-detection issue, I removed the overlapping and duplicate detections from the dataset and prepared it such that each frame has only one class. After training the model with this dataset, when I give it a frame containing more than one class as input, it detects only one class. Why does this happen?
-
Hello, I couldn't find the forward function in the tasks.py file. If I want to add a module before the image is fed into the detection network, and ensure that this module's parameters are updated during training, how should I proceed?
-
Will image augmentation be applied automatically?
-
I trained YOLO on Windows, and it works fine there. However, when I try to detect objects using the same model on Ubuntu, it doesn't work. How can I solve this issue?
-
Hi! I am running the following code in my training notebook: from ultralytics import YOLO Change the directory While running, the training automatically generates the folder path .../runs/classify/train. My problem is that I've tried different configurations for the training, therefore I have something like Thank you!
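The run directory can be controlled with the project, name, and exist_ok train arguments, so each configuration gets its own labelled folder instead of runs/classify/train, train2, and so on. A sketch with illustrative paths:

```python
# Assumes ultralytics is installed; `data` and `run_name` come from the caller.
def train_named(data, run_name, project="runs/classify"):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO("yolov8n-cls.pt")
    # exist_ok=True reuses the folder instead of appending 2, 3, ...
    return model.train(data=data, project=project, name=run_name, exist_ok=True)
```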
-
Hi, thank you for Ultralytics!
-
Hey~ Thanks!
-
I'm a little bit confused about "Resuming Interrupted Trainings". It's clear to me if training was unexpectedly interrupted and I want to continue it. What is not clear to me is continuing training afterwards. For example, I trained my model based on a pre-trained model with a completely different dataset than the COCO dataset used to train the original model. Now I have new images for object detection
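For the interrupted case, a minimal sketch (checkpoint path is illustrative): resume=True picks an unfinished run back up from its saved state. Training on new images afterwards is a different operation, started as a fresh run initialised from the old weights:

```python
# Assumes ultralytics is installed.
def resume_interrupted(last_ckpt="runs/detect/train/weights/last.pt"):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO(last_ckpt)      # load the partially trained checkpoint
    # Only valid for a run that did not finish its scheduled epochs.
    return model.train(resume=True)
```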
-
Hi
-
Hi, my initial CLI was: Then I tried: or: yolo detect train data=coco.yaml model=person_vehicle/run_1/weights/last.pt epochs=150 imgsz=640 batch=-1 classes=[0,2] name=run_1 project=person_vehicle device=0 resume=True But I'm getting: How can I continue the training for more epochs without losing training state like gradients etc.? I would appreciate your help. Thanks!
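A finished run cannot be resumed past its original epoch count; to train longer, the usual approach is to start a new run initialised from last.pt, which keeps the learned weights (though not the optimizer state). A sketch using the paths from the command above:

```python
# Assumes ultralytics is installed.
def continue_training(ckpt, data="coco.yaml", more_epochs=150):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO(ckpt)  # e.g. "person_vehicle/run_1/weights/last.pt"
    # A fresh run from the saved weights; do NOT pass resume=True here.
    return model.train(data=data, epochs=more_epochs, imgsz=640)
```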
-
Excuse me, I want to convert a trained YOLOv8 segmentation model from a .pt file to a .tflite file. Because I need the compressed file in int8 format, I've noticed that the model accuracy drops significantly after compression. Both the training and conversion image sizes are fixed at 256×256, so I'm not sure if the noticeable drop in accuracy is mainly due to the float-to-int conversion. If part of the issue does stem from that, can I configure model.train() to train with int data types from the start? Or is there some misunderstanding in how I'm setting the training parameters?
Additionally, since you designed the model and have a deeper understanding of how training works, I have a question. During segmentation training, my dataset consists of grayscale images of varying sizes, and the task is to delineate the lighter, white regions. Considering that the file needs to be converted to a low-precision .tflite, should the original training images be scaled up using the imgsz hyperparameter, or would it yield better results to pad the images with a black border at the bottom right to reach 256×256, assuming no changes to the label coordinates? Thanks
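Training itself always runs in floating point; int8 is applied at export time via post-training quantization, and supplying representative calibration data usually reduces the accuracy drop. A sketch with illustrative paths:

```python
# Assumes ultralytics plus the TensorFlow export dependencies are installed.
def export_int8_tflite(ckpt="best.pt", data="data.yaml", imgsz=256):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO(ckpt)
    # `data` provides calibration images for the int8 quantization.
    return model.export(format="tflite", int8=True, imgsz=imgsz, data=data)
```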
-
Hi,
-
Does it support distributed training across multiple machines and GPUs?
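Multi-GPU DDP training on a single machine is selected simply by passing several device indices; ultralytics handles the DDP launch under the hood. A sketch (multi-machine setups are a separate topic not covered here):

```python
# Assumes ultralytics is installed and the listed GPUs are visible.
def train_multi_gpu(data, devices=(0, 1)):
    from ultralytics import YOLO  # imported here so the sketch stays self-contained
    model = YOLO("yolov8n.pt")
    # A list of indices triggers DistributedDataParallel training.
    return model.train(data=data, epochs=100, device=list(devices))
```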
-
modes/train/
Step-by-step guide to train YOLOv8 models with Ultralytics YOLO including examples of single-GPU and multi-GPU training
https://docs.ultralytics.com/modes/train/