index #8012
Replies: 97 comments 169 replies
-
Hey everyone, Glenn here! 🚀 Dive into our comprehensive guide on YOLOv8, the pinnacle of real-time object detection and image segmentation technology. Whether you're just starting out or you're deep into the machine learning world, this page is your go-to resource for installing, predicting, and training with YOLOv8. Got questions or insights? This is the perfect spot to share your thoughts and learn from others in the community. Let's make the most of YOLOv8 together! 💡👥
-
Hello, I am super new to computer vision, and I want to know if there is a way to isolate the detected text regions (like in Roboflow, where all the detected text areas are split out) so I can use them in my text extraction model. Thank you.
-
If I pass the model an image, how can I extract the class ID and class name from the result?
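A minimal sketch of pulling class IDs and names out of a result, assuming a recent ultralytics release (the weights file and image path below are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # placeholder: any YOLOv8 weights
results = model("image.jpg")  # placeholder image path

for result in results:
    for box in result.boxes:
        cls_id = int(box.cls[0])         # numeric class ID
        cls_name = result.names[cls_id]  # human-readable class name
        conf = float(box.conf[0])        # confidence score
        print(cls_id, cls_name, conf)
```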
-
Hello, |
-
I tried using result.show() but it says it has no attribute 'show', and using your code, it says the list has no attribute 'pred'.
…On Sun, Mar 10, 2024, 3:34 AM Glenn Jocher ***@***.***> wrote:
@Zulkazeem <https://github.com/Zulkazeem> hey there! 👋 It looks like your code is almost there, but if you're not getting any detections, there might be a few things to check:
1. *Model Confidence:* Ensure your model's confidence threshold isn't set too high, which might prevent detections. Try lowering the conf argument in your predict call.
2. *Image Path:* Double-check the image path to ensure it's correct and the image is accessible.
3. *Model Compatibility:* Make sure the model you're using is appropriate for the task. If it's trained on a very different dataset or for a different task, it might not perform well on your images.
4. *Looping Through Results:* The way you're iterating through pred and then r seems a bit off. After calling predict, you should directly access the detections, like so:
```python
results = pred_model.predict(source=img_path)
for result in results:
    for *xyxy, conf, cls in result.pred[0]:
        # Process each detection here
```
5. *Visualization:* Before trying to save or further process detections, simply try visualizing them with result.show() to ensure detections are being made.
If you've checked these and still face issues, it might be helpful to share more details or error messages you're encountering. Keep experimenting, and don't hesitate to reach out for more help! 🚀
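A note for anyone hitting the same errors: result.pred is the older YOLOv5-style interface, and predict() returns a list, so calling .show() on the list itself also fails. With recent ultralytics versions the detections live in result.boxes; a hedged sketch (weights and image path are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="image.jpg", conf=0.25)

for result in results:  # predict() returns a list of Results
    result.show()       # visualize this single result, not the list
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(int(box.cls[0]), float(box.conf[0]), (x1, y1, x2, y2))
```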
-
Hi, I'm new to running models myself, and the last time I did any image training was about 15 years ago, though I am a Python veteran. I'd like to try running the building footprint models. Do you have a video series that can take me through setting up YOLOv8 and then running the model to extract footprints?
-
Hi, can someone please help? Thanks in advance.
-
Hi, |
-
Hi, I'm a student and I'm doing this for my undergraduate thesis. I'm implementing a YOLOv8 model in an Android gallery's search mechanism. The purpose of the YOLOv8 model is to scan media files and return images that have a bounding-box label matching the search query. I can make it work with yolov5s.torchscript.ptl using org.pytorch:pytorch_android_lite:1.10.0 and org.pytorch:pytorch_android_torchvision_lite:1.10.0, but it won't work with yolov8s.torchscript. The yolov5s.torchscript.ptl setup has a function to load the model and the classes.txt; doesn't the YOLOv8 model need that?
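For reference, exporting YOLOv8 to TorchScript goes through the export API; whether the resulting file runs under the older pytorch_android_lite 1.10 runtime is something to verify separately, but the export step itself is roughly:

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
# Saves yolov8s.torchscript next to the weights; optimize=True targets mobile in recent versions
model.export(format="torchscript")
```

Unlike the YOLOv5 .ptl workflow there is no separate classes.txt; in Python the class names are in model.names, so on Android you would typically bundle that mapping yourself.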
-
When I am training a YOLOv8 model, how can I store the current epoch number in a variable that can be used wherever I want?
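One way to do this is a training callback; a minimal sketch assuming the on_train_epoch_end hook available in recent ultralytics versions (dataset and weights are placeholders):

```python
from ultralytics import YOLO

current_epoch = {"value": 0}  # mutable holder you can read from elsewhere

def on_train_epoch_end(trainer):
    # trainer.epoch is the zero-based index of the epoch that just finished
    current_epoch["value"] = trainer.epoch
    print(f"Finished epoch {trainer.epoch + 1}")

model = YOLO("yolov8n.pt")
model.add_callback("on_train_epoch_end", on_train_epoch_end)
model.train(data="coco128.yaml", epochs=10)
```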
-
Hey Glenn, I just want to obtain the metrics for the lower half of the images. I tried modifying the label and annotation files of the validation split to contain only those bounding boxes that are in the lower half, but this doesn't seem to work. Any suggestions?
-
Hi, I have a question about the YOLOv8 model. In the pre-trained model, there are labels like "person" and others, but if I create a new model with only the "person" label, will there be a performance difference on my computer between the pre-trained model and the model I create?
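For what it's worth, if you only need the "person" class you don't necessarily have to train a new model: the pretrained weights can be filtered to that class at inference time (class 0 is "person" in COCO). Inference speed is essentially unchanged, since the same network still runs; a single-class retrain of the same architecture would also run at roughly the same speed.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# classes=[0] keeps only COCO class 0 ("person") in the predictions
results = model.predict(source="image.jpg", classes=[0])
```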
-
In YOLO v8.1 I can't find the confusion matrix and results.png. Where are they stored? This is how I started my training: %cd /kaggle/working/HOME/YOLO_V8_OUTPUT !yolo train model=yolov8l.pt data=/kaggle/working/HOME/FRUITS_AND_VEGITABLES_NITHIN-6/data.yaml epochs=100 imgsz=640 patience=10 device=0,1 project=/kaggle/working/HOME/YOLO_V8_OUTPUT
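In case it helps: with the project argument shown above, the run outputs (results.png, confusion_matrix.png, weights/, ...) should land in a run folder under that project path, e.g. <project>/train/ by default. From Python you can also ask where a run was saved, assuming recent versions expose the trainer's save_dir:

```python
from ultralytics import YOLO

model = YOLO("yolov8l.pt")
model.train(data="data.yaml", epochs=100, imgsz=640, project="YOLO_V8_OUTPUT")
# After training, this directory holds results.png, confusion_matrix.png, weights/, etc.
print(model.trainer.save_dir)
```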
-
I am tasked with developing a shelf management system tailored for a specific brand. This system aims to automate the process of sales personnel visiting stores to assess product stock levels and required replenishments. Utilizing object detection, I intend to accurately count the products on the shelves and inform the salesperson of the quantities needed to refill. One major challenge to address is product occlusion, where items may partially or fully obscure others, complicating accurate counting. I'm particularly interested in exploring how YOLOv8, a popular object detection model, can be employed to tackle this problem effectively. Any guidance or insights on implementing such a solution would be greatly appreciated.
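Once a model has been fine-tuned on the shelf products, the counting step itself is small; the occlusion problem is mostly a training-data and (for video) tracking question. A hedged sketch of per-class counting (the weights file and image are placeholders):

```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("shelf_products.pt")  # hypothetical weights fine-tuned on your products
results = model.predict(source="shelf.jpg", conf=0.25)

for result in results:
    counts = Counter(result.names[int(c)] for c in result.boxes.cls)
    print(counts)  # per-class counts for this image
```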
-
Hi @glenn-jocher, can you please have a look at this Google Doc? I have tried to explain, with screenshots, the problem I am facing after fine-tuning the model. I would really appreciate your kind guidance. https://docs.google.com/document/d/1WJ5SBdunWSqyd3FjgYgrZn2KjeYxezlel2LeXspmWAQ/edit?usp=sharing
-
Hi everyone, could someone please share some ideas? (Sorry for my English; greetings from Ecuador.)
-
Hello, I have some questions about YOLOv8. I have split the train, validation, and test datasets myself. I want the relevant results and metrics of YOLOv8 on the test dataset. How should I achieve this? I haven't found a method or related instructions yet.
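If your data.yaml defines a test: split, one approach (assuming val() accepts a split argument, as recent ultralytics versions do) is:

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # placeholder path to your trained weights
# Evaluate on the "test" split from data.yaml instead of the default "val" split
metrics = model.val(data="data.yaml", split="test")
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```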
-
Shall I keep pretrained=False or pretrained=True while training the model?
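For context, a common pattern is to load pretrained .pt weights and fine-tune (in which case the flag matters little), and to use pretrained=False only when deliberately training from scratch from a YAML config. A sketch:

```python
from ultralytics import YOLO

# Fine-tune from pretrained COCO weights (usually converges faster on custom data)
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100)

# Train from scratch with randomly initialized weights
model_scratch = YOLO("yolov8n.yaml")
model_scratch.train(data="data.yaml", epochs=100, pretrained=False)
```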
-
"I need to determine the length, breadth, and height of an object from an image using YOLO. I'm also allowed to use a LiDAR sensor for depth estimation. Can anyone provide guidance on how to achieve this? |
-
Hello, I am using Ubuntu 18.04 on a Jetson Nano and I have installed ultralytics many times (the install returned successful), but when I run the code I get a "no module found" error. So I switched to Ubuntu 20.04, and now my code does not run but there is no error. Please help me.
-
I want to know where the image is actually processed during inference, i.e. where torch model(frame) is called. Where is that exact line of code in the libraries you import?
-
In the docs' brief history of YOLO section, the YOLOv8 link still jumps to YOLO11. This is an incorrect link. Where is the link to YOLOv8?
-
Hi, I would like to integrate YOLO into my application, which will be for sale. Can I use YOLO directly, or do I need to get a license?
-
Hello, |
-
Hello team Ultralytics, |
-
Thank you, I got it. So using different YOLO models involves loading different YAML files (yolov8n.yaml, yolo11n.yaml), doesn't it?
Sent from my iPhone
…------------------ Original ------------------
From: Glenn Jocher ***@***.***>
Date: Tue,Nov 19,2024 2:30 AM
To: ultralytics/ultralytics ***@***.***>
Cc: xiuyuan mao ***@***.***>, Mention ***@***.***>
Subject: Re: [ultralytics/ultralytics] index (Discussion #8012)
@maoxiuyuan hello, thank you for your question. YOLOv8 has not been merged into YOLOv11 files; they are separate models. You can find YOLOv8 in the Ultralytics repository, and I recommend checking the Ultralytics documentation for instructions on downloading and using YOLOv8 models. If you have further questions, feel free to check the YOLOv8 documentation.
-
I really appreciate your reply; it's very useful for me.
Sent from my iPhone
…------------------ Original ------------------
From: Glenn Jocher ***@***.***>
Date: Wed,Nov 20,2024 2:44 AM
To: ultralytics/ultralytics ***@***.***>
Cc: xiuyuan mao ***@***.***>, Mention ***@***.***>
Subject: Re: [ultralytics/ultralytics] index (Discussion #8012)
@maoxiuyuan yes, that's correct. Different YOLO models require loading different YAML files (e.g., yolov8n.yaml for YOLOv8 or yolo11n.yaml for YOLO11) to configure the model architecture and parameters. For detailed guidance, you can refer to the respective model documentation.
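To make the above concrete, a small sketch of selecting between the two model families (assuming a recent ultralytics release that ships both config sets):

```python
from ultralytics import YOLO

# Build architectures from their config files (untrained)
yolov8 = YOLO("yolov8n.yaml")
yolo11 = YOLO("yolo11n.yaml")

# Or load pretrained weights directly
yolov8_pt = YOLO("yolov8n.pt")
yolo11_pt = YOLO("yolo11n.pt")
```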
-
I want to use YOLO for object detection in images. Is there a ready pre-trained model that has many classes (at least 500) rather than the standard COCO model with only 90 classes? Also, I would prefer a model that can detect objects in images of any size.
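One option worth checking: Ultralytics publishes YOLOv8 weights pretrained on Open Images V7, which covers roughly 600 classes instead of the standard COCO label set (this comes up again further down the thread). Assuming the yolov8n-oiv7.pt naming from the docs:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-oiv7.pt")  # Open Images V7 pretrained weights, ~600 classes
results = model("image.jpg")     # inputs are resized to the model's imgsz automatically
print(len(model.names))          # number of classes the model can detect
```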
-
Thank you for your help. My problem is that if I go to the YOLOv8 page here https://docs.ultralytics.com/models/yolov8/#supported-tasks-and-modes I can download the models, but once I click the link "Detection Docs" in the sentence "See Detection Docs for usage examples", I am redirected to the YOLO11 documentation, and I cannot really find the YOLOv8 documentation.
…On Thu, 21 Nov 2024 at 07:06, Glenn Jocher ***@***.***> wrote:
@zlelik <https://github.com/zlelik> for YOLO11, pre-trained models are primarily trained on the COCO dataset. However, Ultralytics YOLO models can be fine-tuned on other datasets to meet specific requirements. Regarding the YOLOv8 model trained with Open Images v7, the output shape you mentioned indicates a prediction for multiple classes and bounding boxes. You can find documentation on interpreting these outputs in the Ultralytics documentation. If you need further details, please refer to the YOLOv8 documentation <https://docs.ultralytics.com/models/yolov8/>.
-
Hi, I believe there is a link misdirection in Datasets. After clicking on ImageNet in the docs I am getting redirected to |
-
index
Explore a complete guide to Ultralytics YOLOv8, a high-speed, high-accuracy object detection & image segmentation model. Installation, prediction, training tutorials and more.
https://docs.ultralytics.com/