diff --git a/embedded_ml_exercise.qmd b/embedded_ml_exercise.qmd
index f9f0aa3f4..aa9104053 100644
--- a/embedded_ml_exercise.qmd
+++ b/embedded_ml_exercise.qmd
@@ -1,4 +1,4 @@
-### **Introduction**
+# CV on Nicla Vision {.unnumbered}
 
 As we initiate our studies into embedded machine learning or tinyML,
 it\'s impossible to overlook the transformative impact of Computer
@@ -31,7 +31,7 @@ computational load.
 By the end of this tutorial, you\'ll have a working prototype capable of
 classifying images in real time, all running on a low-power embedded
 system based on the Arduino Nicla Vision board.
 
-### **Computer Vision**
+## Computer Vision
 
 At its core, computer vision aims to enable machines to interpret and
 make decisions based on visual data from the world---essentially
@@ -51,7 +51,7 @@ height="2.8333333333333335in"}
 Both models can be implemented on tiny devices like the Arduino Nicla
 Vision and used on real projects. Let\'s start with the first one.
 
-### **Image Classification Project**
+## Image Classification Project
 
 The first step in any ML project is to define our goal. In this case, it
 is to detect and classify two specific objects present in one image. For
@@ -62,7 +62,7 @@ Brazilian parrot (named *Periquito*). Also, we will collect images of a
 ![](images_4/media/image36.jpg){width="6.5in"
 height="3.638888888888889in"}
 
-### **Data Collection**
+## Data Collection
 
 Once you have defined your Machine Learning project goal, the next and
 most crucial step is the dataset collection. You can use the Edge
@@ -120,7 +120,7 @@ height="2.2083333333333335in"}
 You should return to Edge Impulse Studio and upload the dataset to your
 project.
 
-### **Training the model with Edge Impulse Studio**
+## Training the model with Edge Impulse Studio
 
 We will use the Edge Impulse Studio for training our model. Enter your
 account credentials at Edge Impulse and create a new project:
@@ -131,7 +131,7 @@ height="4.263888888888889in"}
 > *Here, you can clone a similar project:*
 > *[NICLA-Vision_Image_Classification](https://studio.edgeimpulse.com/public/273858/latest).*
 
-### **Dataset**
+## Dataset
 
 Using the EI Studio (or *Studio*), we will go through four main steps to
 have our model ready for use on the Nicla Vision board: Dataset,
@@ -176,7 +176,7 @@ data seems OK.
 ![](images_4/media/image44.png){width="6.5in"
 height="4.263888888888889in"}
 
-### **The Impulse Design**
+## The Impulse Design
 
 In this phase, we should define how to:
@@ -232,7 +232,7 @@ Press \[Save parameters\] and Generate all features:
 ![](images_4/media/image5.png){width="6.5in"
 height="4.263888888888889in"}
 
-### **Model Design**
+## Model Design
 
 In 2017, Google introduced
 [[MobileNetV1]{.underline}](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html),
@@ -328,7 +328,7 @@ height="3.5in"}
 The result is excellent, with 77ms of latency, which should result in
 13fps (frames per second) during inference.
 
-### **Model Testing**
+## Model Testing
 
 ![](images_4/media/image10.jpg){width="6.5in"
 height="3.8472222222222223in"}
@@ -344,7 +344,7 @@ The result was, again, excellent.
 ![](images_4/media/image12.png){width="6.5in"
 height="4.263888888888889in"}
 
-### **Deploying the model**
+## Deploying the model
 
 At this point, we can deploy the trained model as .tflite and use the
 OpenMV IDE to run it using MicroPython, or we can deploy it as a C/C++
@@ -690,7 +690,7 @@ the result, deploying the models as Arduino\'s Library:
 ![](images_4/media/image4.jpg){width="6.5in"
 height="3.4444444444444446in"}
 
-### **Conclusion**
+## Conclusion
 
 Before we finish, consider that Computer Vision is more than just image
 classification. For example, you can develop Edge Machine Learning
diff --git a/embedded_sys_exercise.qmd b/embedded_sys_exercise.qmd
index 9253c89a2..728755487 100644
--- a/embedded_sys_exercise.qmd
+++ b/embedded_sys_exercise.qmd
@@ -1,4 +1,4 @@
-# Introduction
+# Setup Nicla Vision {.unnumbered}
 
 The [Arduino Nicla
 Vision](https://docs.arduino.cc/hardware/nicla-vision) (sometimes called
@@ -21,7 +21,7 @@ acting as a user interface.
 ![](images_2/media/image29.jpg){width="6.5in"
 height="3.861111111111111in"}
 
-### **Two Parallel Cores**
+## Two Parallel Cores
 
 The central processor is the dual-core
 [STM32H747,](https://content.arduino.cc/assets/Arduino-Portenta-H7_Datasheet_stm32h747xi.pdf?_gl=1*6quciu*_ga*MTQ3NzE4Mjk4Mi4xNjQwMDIwOTk5*_ga_NEXN8H46L5*MTY0NzQ0NTg1My4xMS4xLjE2NDc0NDYzMzkuMA..)
@@ -41,7 +41,7 @@ all the on-chip peripherals and can run:
 ![](images_2/media/image22.jpg){width="5.78125in"
 height="5.78125in"}
 
-### **Memory**
+## Memory
 
 Memory is crucial for embedded machine learning projects. The NiclaV
 board can host up to 16 MB of QSPI Flash for storage. However, it is
@@ -50,7 +50,7 @@ machine learning inferences; the STM32H747 is only 1MB, shared by both
 processors. This MCU also incorporates 2MB of FLASH, mainly for code
 storage.
 
-### **Sensors**
+## Sensors
 
 - **Camera**: A GC2145 2 MP Color CMOS Camera.
@@ -91,7 +91,7 @@ see the Nicla on Port and select it.
 > Upload button. You should see the Built-in LED (green RGB) blinking,
 > which means the Nicla board is correctly installed and functional!*
 
-### **Testing the Microphone**
+## Testing the Microphone
 
 On Arduino IDE, go to Examples \> PDM \> PDMSerialPlotter, open and run
 the sketch. Open the Plotter and see the audio representation from the
@@ -103,7 +103,7 @@ height="4.361111111111111in"}
 > *Vary the frequency of the sound you generate and confirm that the mic
 > is working correctly.*
 
-### **Testing the IMU**
+## Testing the IMU
 
 Before testing the IMU, it will be necessary to install the LSM6DSOX
 library. For that, go to Library Manager and look for LSM6DSOX. Install
@@ -139,7 +139,7 @@ object in front of it (max of 4m).
 ![](images_2/media/image13.jpg){width="6.5in"
 height="4.847222222222222in"}
 
-### **Testing the Camera**
+## Testing the Camera
 
 We can also test the camera using, for example, the code provided on
 Examples \> Camera \> CameraCaptureRawBytes. We cannot see the image
@@ -149,7 +149,7 @@ camera.
 In any case, the best test with the camera is to see a live image. For
 that, we will use another IDE, the OpenMV.
 
-### **Installing the OpenMV IDE**
+## Installing the OpenMV IDE
 
 OpenMV IDE is the premier integrated development environment for use
 with OpenMV Cameras, including the one on the Portenta. It features a
 powerful
@@ -295,7 +295,7 @@ In [[the GitHub, You can find other Python scripts]{.underline}](https://github.com/Mjrovai/Arduino_Nicla_Vision/tree/main/Micropython).
 Try to test the onboard sensors.
 
-### **Connecting the Nicla Vision to Edge Impulse Studio**
+## Connecting the Nicla Vision to Edge Impulse Studio
 
 We will use the Edge Impulse Studio later in other exercises. [Edge
 Impulse](https://www.edgeimpulse.com/) is a leading development platform
@@ -469,7 +469,7 @@ The ADC can be used for other valuable sensors, such as
 > only introduce how to connect external devices with the Nicla Vision
 > board using MicroPython.*
 
-### **Conclusion**
+## Conclusion
 
 The Arduino Nicla Vision is an excellent *tiny device* for industrial
 and professional uses! Moreover, it is powerful, trustworthy, low power,
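One of the context passages in the first patch states that "77ms of latency ... should result in 13fps (frames per second) during inference." That conversion is just the reciprocal of the per-inference latency. A minimal plain-Python sketch of the arithmetic (the helper name `latency_ms_to_fps` is ours, not part of the tutorial code):

```python
def latency_ms_to_fps(latency_ms: float) -> float:
    """Convert a per-inference latency in milliseconds to frames per second."""
    return 1000.0 / latency_ms

# The 77 ms figure quoted in the tutorial text gives roughly 13 fps.
fps = latency_ms_to_fps(77)
print(f"{fps:.1f} fps")  # -> 13.0 fps
```

Note this is a best-case ceiling: it assumes inference is the only work per frame, so any image capture or preprocessing time on the Nicla Vision would lower the effective frame rate.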