spelling fixes 4 #436

Merged · 1 commit · Jan 6, 2025
27 changes: 27 additions & 0 deletions .wordlist.txt
Original file line number Diff line number Diff line change
@@ -2155,6 +2155,7 @@ vcLQKY
Roboflow
SRDF
OpenVINO’s
OpenVINO's
oakd
MRD
oAxaV
@@ -2166,4 +2167,30 @@ OpenAI's
PRjhA
jBE
gnhUcwYqrI
RTU
shrimplets
TetrisBot
serv
pairplot
CLK
datacenter
Printables
Dall
overpredicting
Autotuning
RTU
shrimplets
TetrisBot
pairplot
CLK
datacenter
Printables
Autotuning
Powerstrip
Chardev
tera
HailoTracker
Roboflow’s
Lescaudron
ECR

@@ -52,7 +52,7 @@ Even though this underwater air bubble and water pollution detection device is c

Then, I employed the web application to communicate with UNIHIKER to generate a pre-formatted CSV file from the stored sample text files (ultrasonic scan data records) and transfer the latest neural network model detection result (ultrasonic scan buffer and the detected label) via an HTTP GET request.
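The CSV-generation step described above can be sketched roughly as follows; the record layout, header names, and `build_csv` helper are illustrative assumptions, not the project's actual file format:

```
# Minimal sketch (hypothetical record layout): merge stored ultrasonic
# scan records (one comma-separated line per sample text file) into a
# single pre-formatted CSV buffer, as the web application does before
# transferring results to UNIHIKER.
import csv
import io

def build_csv(sample_records, header=("scan_id", "label", "data_points")):
    """Combine raw record lines into one CSV string with a fixed header."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(header)
    for record in sample_records:
        # Each stored text file contributes one comma-separated record.
        writer.writerow(record.strip().split(","))
    return buffer.getvalue()

records = ["1,bubble,0.42", "2,pollution,0.87"]
print(build_csv(records))
```

In the project itself, the assembled CSV and the latest detection result are then fetched over an HTTP GET request rather than printed.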

- As mentioned repeatedly, each generated ultrasonic scan buffer provides 400 data points as a 20 x 20 ultrasonic image despite the fact that Nano ESP32 cannot utilize the given buffer to produce an ultrasonic image after running the neural network model with the Ridge classifier. Therefore, after receiving the latest model detection result via the web application, I employed UNIKIHER to modify a template image (black square) via the built-in OpenCV functions to convert the given ultrasonic scan buffer to a JPG file and save the modified image to visualize the latest aquatic ultrasonic scan with thoroughly encoded pixels.
+ As mentioned repeatedly, each generated ultrasonic scan buffer provides 400 data points as a 20 x 20 ultrasonic image despite the fact that Nano ESP32 cannot utilize the given buffer to produce an ultrasonic image after running the neural network model with the Ridge classifier. Therefore, after receiving the latest model detection result via the web application, I employed UNIHIKER to modify a template image (black square) via the built-in OpenCV functions to convert the given ultrasonic scan buffer to a JPG file and save the modified image to visualize the latest aquatic ultrasonic scan with thoroughly encoded pixels.
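The buffer-to-image conversion can be sketched as below; the normalization scheme is an assumption, and NumPy stands in for the OpenCV calls (noted in comments) that the project uses on the UNIHIKER:

```
# Minimal sketch: reshape a flat 400-point ultrasonic scan buffer into
# a 20 x 20 grayscale pixel grid. The value scaling is an assumption;
# the project persists the result as a JPG via OpenCV (cv2.imwrite).
import numpy as np

def buffer_to_image(scan_buffer):
    """Map 400 readings onto a 20 x 20 uint8 pixel grid."""
    data = np.asarray(scan_buffer, dtype=np.float32)
    assert data.size == 400, "expected a 400-point scan buffer"
    # Normalize readings into the 0-255 pixel range before encoding.
    span = float(data.max() - data.min()) or 1.0
    pixels = (data - data.min()) / span * 255.0
    return pixels.reshape(20, 20).astype(np.uint8)

image = buffer_to_image(np.random.rand(400))
# cv2.imwrite("latest_scan.jpg", image)  # how the project saves the JPG
print(image.shape)
```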

Since the RetinaNet object detection model provides accurate bounding box measurements, I also utilized UNIHIKER to modify the resulting images to draw the associated bounding boxes and save the modified resulting images as JPG files for further inspection.
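A minimal illustration of the bounding-box overlay step (coordinates and the `draw_box` helper are illustrative; the project itself relies on OpenCV drawing functions to annotate the RetinaNet detections):

```
# Minimal sketch: draw a one-pixel bounding-box outline onto a
# grayscale image array. In the project this is done with OpenCV
# (e.g. cv2.rectangle) before saving the annotated JPG.
import numpy as np

def draw_box(image, x1, y1, x2, y2, value=255):
    """Set the border pixels of the box spanning (x1, y1)-(x2, y2)."""
    out = image.copy()
    out[y1, x1:x2 + 1] = value      # top edge
    out[y2, x1:x2 + 1] = value      # bottom edge
    out[y1:y2 + 1, x1] = value      # left edge
    out[y1:y2 + 1, x2] = value      # right edge
    return out

canvas = np.zeros((20, 20), dtype=np.uint8)
annotated = draw_box(canvas, 2, 3, 10, 12)
print(annotated[3, 2], annotated[12, 10])
```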

@@ -372,7 +372,7 @@ $ mo --model_name ei-pnp_yolov5n_320 \
--input_model ei-pnp_yolov5n_320_batch32_epoch100_prune.onnx
```

- After converting the model to OpenVINOs IR format, run the following script to compile it into a `.blob` file, which can be deployed to the OAK-D device.
+ After converting the model to OpenVINO's IR format, run the following script to compile it into a `.blob` file, which can be deployed to the OAK-D device.

```
import blobconverter
```
@@ -474,7 +474,7 @@ Planning groups in MoveIt 2 semantically describe different parts of the robot,

![moveit2\_assistant\_4](../../.gitbook/assets/robotic-arm-sorting-arduino-braccio/moveit2_assistant_4.png)

- The Setup Assistant allows us to add predefined poses to the robots configuration, which can be useful for defining specific initial or ready poses. Later, the robot can be commanded to move to these poses using the MoveIt API. Click on the **Add Pose** and choose a name for the pose.
+ The Setup Assistant allows us to add predefined poses to the robot's configuration, which can be useful for defining specific initial or ready poses. Later, the robot can be commanded to move to these poses using the MoveIt API. Click on the **Add Pose** and choose a name for the pose.
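For reference, a predefined pose added through the Setup Assistant is stored as a `<group_state>` entry in the generated SRDF; a hedged fragment with placeholder group, joint names, and values for the Braccio setup:

```
<group_state name="ready" group="arm">
  <!-- Joint names and values below are illustrative placeholders -->
  <joint name="base_joint" value="1.5708"/>
  <joint name="shoulder_joint" value="0.7854"/>
  <joint name="elbow_joint" value="1.0472"/>
</group_state>
```

Commanding the robot to such a pose later amounts to setting the named configuration as the goal state through the MoveIt API.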

![moveit2\_assistant\_5](../../.gitbook/assets/robotic-arm-sorting-arduino-braccio/moveit2_assistant_5.png)
