Does Ultralytics use the GPU if a GPU is available? #2643
-
I'm running on a GPU-enabled instance. Do I need to move the image to the GPU before inference? Also, when I print the result:

```python
image = cv2.imread('./img.jpg')
result = model.predict(image)
print(result)
```

this is the output I get. What do these values represent?
Replies: 1 comment
-
@nuhmanpk hello! Yes, Ultralytics YOLOv8 will automatically use a GPU if one is available and CUDA is properly installed on your system. You don't need to manually move the image to the GPU before inference; the model's `predict` method handles that for you.
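If you want to double-check this, a minimal sketch like the one below works. Note that `yolov8n.pt` is only a placeholder for whatever weights you are actually loading, and the `device` argument simply overrides the automatic selection:

```python
import cv2
import torch
from ultralytics import YOLO

# Sanity check: is CUDA visible to PyTorch? Ultralytics picks the GPU
# automatically when it is, so this print is informational only.
print("CUDA available:", torch.cuda.is_available())

model = YOLO("yolov8n.pt")  # placeholder weights; substitute your own model
image = cv2.imread("./img.jpg")

# Optional: pin inference to a specific device instead of relying on
# automatic selection. device=0 is the first GPU, device="cpu" forces CPU.
result = model.predict(image, device=0 if torch.cuda.is_available() else "cpu")
```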
Regarding the output you're seeing, it represents the detection results for your image. Here's a brief explanation of the components:

- `type: torch.Tensor`: indicates that the boxes are stored in a PyTorch tensor.
- `shape: torch.Size([1, 6])`: the shape of the tensor, where 1 is the number of detections and 6 is the number of attributes per detection.
- `dtype: torch.float32`: the data type of the tensor elements, which is 32-bit floating-point.

The rest of the output provides additional information such as the original image, the path to the image file, and the inference speed. For more detailed explanations of the output format and how to interpret it, please refer to our documentation at https://docs.ultralytics.com. 😊
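As a rough illustration of how those pieces can be read from the returned object: the attribute names below (`boxes.xyxy`, `boxes.conf`, `boxes.cls`, `orig_img`, `path`, `speed`) follow the Ultralytics `Results` API, and the `[x1, y1, x2, y2, confidence, class]` layout is the usual interpretation of the 6 box attributes; please verify against the docs linked above.

```python
# Assuming `result = model.predict(image)` as in the snippet above.
res = result[0]  # one Results object per input image

# Each detection row is typically [x1, y1, x2, y2, confidence, class_id],
# which is where the 6 attributes per detection come from.
for xyxy, conf, cls in zip(res.boxes.xyxy, res.boxes.conf, res.boxes.cls):
    print(f"box={xyxy.tolist()} conf={conf.item():.2f} class={int(cls)}")

# The "rest of the output" mentioned above:
print(res.orig_img.shape)  # the original image you passed in (NumPy array)
print(res.path)            # path of the source image, if one was given
print(res.speed)           # preprocess/inference/postprocess times in ms
```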