diff --git a/ embedded_ml_exercise.qmd b/ embedded_ml_exercise.qmd new file mode 100644 index 00000000..4d3673e3 --- /dev/null +++ b/ embedded_ml_exercise.qmd @@ -0,0 +1,747 @@ +4 Embedded AI + +# Exercise - Image Classification + +### **Introduction** + +As we initiate our studies into embedded machine learning or tinyML, +it\'s impossible to overlook the transformative impact of Computer +Vision (CV) and Artificial Intelligence (AI) in our lives. These two +intertwined disciplines redefine what machines can perceive and +accomplish, from autonomous vehicles and robotics to healthcare and +surveillance. + +More and more, we are facing an artificial intelligence (AI) revolution +where, as stated by Gartner, **Edge AI** has a very high impact +potential, and **it is for now**! + +![](images_4/media/image2.jpg){width="4.729166666666667in" +height="4.895833333333333in"} + +In the \"bull-eye\" of emerging technologies, radar is the *Edge +Computer Vision*, and when we talk about Machine Learning (ML) applied +to vision, the first thing that comes to mind is **Image +Classification**, a kind of ML \"Hello World\"! + +This exercise will explore a computer vision project utilizing +Convolutional Neural Networks (CNNs) for real-time image classification. +Leveraging TensorFlow\'s robust ecosystem, we\'ll implement a +pre-trained MobileNet model and adapt it for edge deployment. The focus +will be optimizing the model to run efficiently on resource-constrained +hardware without sacrificing accuracy. + +We\'ll employ techniques like quantization and pruning to reduce the +computational load. By the end of this tutorial, you\'ll have a working +prototype capable of classifying images in real time, all running on a +low-power embedded system based on the Arduino Nicla Vision board. + +### **Computer Vision** + +At its core, computer vision aims to enable machines to interpret and +make decisions based on visual data from the world---essentially +mimicking the capability of the human optical system. Conversely, AI is +a broader field encompassing machine learning, natural language +processing, and robotics, among other technologies. When you bring AI +algorithms into computer vision projects, you supercharge the system\'s +ability to understand, interpret, and react to visual stimuli. + +When discussing Computer Vision projects applied to embedded devices, +the most common applications that come to mind are *Image +Classification* and *Object Detection*. + +![image.png](images_4/media/image15.jpg){width="6.5in" +height="2.8333333333333335in"} + +Both models can be implemented on tiny devices like the Arduino Nicla +Vision and used on real projects. Let\'s start with the first one. + +### **Image Classification Project** + +The first step in any ML project is to define our goal. In this case, it +is to detect and classify two specific objects present in one image. For +this project, we will use two small toys: a *robot* and a small +Brazilian parrot (named *Periquito*). Also, we will collect images of a +*background* where those two objects are absent. + +![image.png](images_4/media/image36.jpg){width="6.5in" +height="3.638888888888889in"} + +### **Data Collection** + +Once you have defined your Machine Learning project goal, the next and +most crucial step is the dataset collection. You can use the Edge +Impulse Studio, the OpenMV IDE we installed, or even your phone for the +image capture. Here, we will use the OpenMV IDE for that. 
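Image capture can also be scripted directly in MicroPython if you want to automate it. The sketch below is only an illustrative alternative (the class folder name, image count, and delay are arbitrary choices for this example, not part of the official workflow); the next section shows the recommended path using the IDE's Dataset Editor.

```python
# Illustrative alternative: capture a batch of images from a MicroPython script
# running on the Nicla Vision (OpenMV firmware). The class folder name, image
# count, and delay below are arbitrary choices for this example.
import sensor, time, uos

CLASS_NAME = "periquito"   # change for each class you capture
NUM_IMAGES = 60
DELAY_MS = 1000

sensor.reset()                          # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)     # Color images
sensor.set_framesize(sensor.QVGA)       # 320x240, as used in this exercise
sensor.skip_frames(time=2000)           # Let the camera adjust.

try:
    uos.mkdir(CLASS_NAME)               # create the folder on the board's flash
except OSError:
    pass                                # folder already exists

for i in range(NUM_IMAGES):
    img = sensor.snapshot()
    img.save("%s/%s_%03d.jpg" % (CLASS_NAME, CLASS_NAME, i))
    print("Saved image", i + 1, "of", NUM_IMAGES)
    time.sleep_ms(DELAY_MS)             # time to reposition the object
```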
+ +**Collecting Dataset with OpenMV IDE** + +First, create in your computer a folder where your data will be saved, +for example, \"data.\" Next, on the OpenMV IDE, go to Tools \> Dataset +Editor and select New Dataset to start the dataset collection: + +![image.png](images_4/media/image29.png){width="6.291666666666667in" +height="4.010416666666667in"} + +The IDE will ask you to open the file where your data will be saved and +choose the \"data\" folder that was created. Note that new icons will +appear on the Left panel. + +![image.png](images_4/media/image46.png){width="0.9583333333333334in" +height="1.5208333333333333in"} + +Using the upper icon (1), enter with the first class name, for example, +\"periquito\": + +![image.png](images_4/media/image22.png){width="3.25in" +height="2.65625in"} + +Run the dataset_capture_script.py, and clicking on the bottom icon (2), +will start capturing images: + +![image.png](images_4/media/image43.png){width="6.5in" +height="4.041666666666667in"} + +Repeat the same procedure with the other classes + +![image.png](images_4/media/image6.jpg){width="6.5in" +height="3.0972222222222223in"} + +> *We suggest around 60 images from each category. Try to capture +> different angles, backgrounds, and light conditions.* + +The stored images use a QVGA frame size 320x240 and RGB565 (color pixel +format). + +After capturing your dataset, close the Dataset Editor Tool on the Tools +\> Dataset Editor. + +On your computer, you will end with a dataset that contains three +classes: periquito, robot, and background. + +![image.png](images_4/media/image20.png){width="6.5in" +height="2.2083333333333335in"} + +You should return to Edge Impulse Studio and upload the dataset to your +project. + +### **Training the model with Edge Impulse Studio** + +We will use the Edge Impulse Studio for training our model. Enter your +account credentials at Edge Impulse and create a new project: + +![image.png](images_4/media/image45.png){width="6.5in" +height="4.263888888888889in"} + +> *Here, you can clone a similar project:* +> *[NICLA-Vision_Image_Classification](https://studio.edgeimpulse.com/public/273858/latest).* + +### **Dataset** + +Using the EI Studio (or *Studio*), we will pass over four main steps to +have our model ready for use on the Nicla Vision board: Dataset, +Impulse, Tests, and Deploy (on the Edge Device, in this case, the +NiclaV). + +![image.png](images_4/media/image41.jpg){width="6.5in" +height="4.194444444444445in"} + +Regarding the Dataset, it is essential to point out that our Original +Dataset, captured with the OpenMV IDE, will be split into three parts: +Training, Validation, and Test. The Test Set will be divided from the +beginning and left a part to be used only in the Test phase after +training. The Validation Set will be used during training. + +![image.png](images_4/media/image7.jpg){width="6.5in" +height="4.763888888888889in"} + +On Studio, go to the Data acquisition tab, and on the UPLOAD DATA +section, upload from your computer the files from chosen categories: + +![image.png](images_4/media/image39.png){width="6.5in" +height="4.263888888888889in"} + +Left to the Studio to automatically split the original dataset into +training and test and choose the label related to that specific data: + +![image.png](images_4/media/image30.png){width="6.5in" +height="4.263888888888889in"} + +Repeat the procedure for all three classes. 
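If you prefer to upload from a script instead of the web uploader, the Edge Impulse ingestion service can also receive the image files directly. Here is an illustrative Python sketch run from your computer; the endpoint, header names, API key placeholder, and folder layout are assumptions based on the Edge Impulse documentation, so verify them against the current docs before use.

```python
# Illustrative sketch: bulk-upload captured images to an Edge Impulse project
# through the ingestion service (an alternative to the Studio's web uploader).
# Endpoint, headers, and folder layout are assumptions -- check the current docs.
import os
import requests

API_KEY = "ei_xxxxxxxxxxxxxxxx"      # your project API key (Dashboard > Keys)
DATA_DIR = "data"                    # folder created during the OpenMV capture
LABELS = ["periquito", "robot", "background"]

for label in LABELS:
    folder = os.path.join(DATA_DIR, label)   # adjust to the folder names on your disk
    for fname in sorted(os.listdir(folder)):
        if not fname.lower().endswith((".jpg", ".jpeg", ".png", ".bmp")):
            continue
        with open(os.path.join(folder, fname), "rb") as f:
            res = requests.post(
                "https://ingestion.edgeimpulse.com/api/training/files",
                headers={"x-api-key": API_KEY, "x-label": label},
                files={"data": (fname, f, "image/jpeg")},
            )
        print(label, fname, res.status_code)
```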
At the end, you should see your "raw data" in the Studio:

![image.png](images_4/media/image11.png){width="6.5in"
height="4.263888888888889in"}

The Studio allows you to explore your data, showing a complete view of
all the data in your project. You can clear, inspect, or change labels
by clicking on individual data items. In our case, a simple project, the
data looks fine.

![image.png](images_4/media/image44.png){width="6.5in"
height="4.263888888888889in"}

### **The Impulse Design**

In this phase, we should define how to:

- Pre-process our data, which consists of resizing the individual
  images and choosing the color depth to use (RGB or Grayscale), and

- Design a Model, in this case "Transfer Learning (Images)," to
  fine-tune a pre-trained MobileNet V2 image classification model on
  our data. This method performs well even with relatively small
  image datasets (around 150 images in our case).

![image.png](images_4/media/image23.jpg){width="6.5in"
height="4.0in"}

Transfer Learning with MobileNet offers a streamlined approach to model
training, which is especially beneficial for resource-constrained
environments and projects with limited labeled data. MobileNet, known
for its lightweight architecture, is a pre-trained model that has
already learned valuable features from a large dataset (ImageNet).

![image.png](images_4/media/image9.jpg){width="6.5in"
height="1.9305555555555556in"}

By leveraging these learned features, you can train a new model for your
specific task with less data and less computation yet achieve
competitive accuracy.

![image.png](images_4/media/image32.jpg){width="6.5in"
height="2.3055555555555554in"}

This approach significantly reduces training time and computational
cost, making it ideal for quick prototyping and for deployment on embedded
devices where efficiency is paramount.

Go to the Impulse Design Tab and create the *impulse*, defining an image
size of 96x96 and squashing the images (square form, without cropping). Select
the Image and Transfer Learning blocks. Save the Impulse.

![image.png](images_4/media/image16.png){width="6.5in"
height="4.263888888888889in"}

### **Image Pre-Processing**

All input QVGA/RGB565 images will be converted to 27,648 features
(96x96x3).

![image.png](images_4/media/image17.png){width="6.5in"
height="4.319444444444445in"}

Press [Save parameters] and Generate all features:

![image.png](images_4/media/image5.png){width="6.5in"
height="4.263888888888889in"}

### **Model Design**

In 2017, Google introduced
[MobileNetV1](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html),
a family of general-purpose computer vision neural networks designed
with mobile devices in mind to support classification, detection, and
more. MobileNets are small, low-latency, low-power models parameterized
to meet the resource constraints of various use cases. In 2018, Google
launched [MobileNetV2: Inverted Residuals and Linear
Bottlenecks](https://arxiv.org/abs/1801.04381).

MobileNet V1 and MobileNet V2 both target efficient mobile and embedded
vision applications but differ in architectural complexity and
performance. While both use depthwise separable convolutions to reduce
the computational cost, MobileNet V2 introduces Inverted Residual Blocks
and Linear Bottlenecks to enhance performance.
These new features allow
V2 to capture more complex features using fewer parameters, making it
computationally more efficient and generally more accurate than its
predecessor. Additionally, V2 employs a non-linear activation in the
intermediate expansion layer but a linear activation for the
bottleneck layer, a design choice found to better preserve important
information through the network. MobileNet V2 offers a more
optimized architecture for higher accuracy and efficiency and will be
used in this project.

Although the base MobileNet architecture is already tiny and has low
latency, a specific use case or application may require the
model to be even smaller and faster. MobileNets introduce a straightforward
parameter α (alpha), called the width multiplier, to construct these smaller,
less computationally expensive models. The role of the width multiplier
α is to thin the network uniformly at each layer.

Edge Impulse Studio offers MobileNetV1 (96x96 images) and V2
(96x96 and 160x160 images), with several different **α** values (from
0.05 to 1.0). For example, you will get the highest accuracy with V2,
160x160 images, and α=1.0. Of course, there is a trade-off. The higher
the accuracy, the more memory (around 1.3M RAM and 2.6M ROM) will be
needed to run the model, implying more latency. The smallest footprint
will be obtained at the other extreme, with MobileNetV1 and α=0.10 (around
53.2K RAM and 101K ROM).

![image.png](images_4/media/image27.jpg){width="6.5in"
height="3.5277777777777777in"}

For this project, we will use **MobileNetV2 96x96 0.1**, which has an
estimated memory cost of 265.3 KB of RAM. This model should be OK for the
Nicla Vision with its 1MB of SRAM. On the Transfer Learning Tab, select this
model:

![image.png](images_4/media/image24.png){width="6.5in"
height="4.263888888888889in"}

Another valuable technique to use with Deep Learning is **Data
Augmentation**. Data augmentation is a method that can help improve the
accuracy of machine learning models by creating additional artificial
data. A data augmentation system makes small, random changes to your
training data during the training process (such as flipping, cropping,
or rotating the images).

Under the hood, here you can see how Edge Impulse implements a data
augmentation policy on your data:

```python
# Implements the data augmentation policy
# (tf, random, math, and INPUT_SHAPE are defined earlier in the Studio's training code)
def augment_image(image, label):
    # Flips the image randomly
    image = tf.image.random_flip_left_right(image)

    # Increase the image size, then randomly crop it down to
    # the original dimensions
    resize_factor = random.uniform(1, 1.2)
    new_height = math.floor(resize_factor * INPUT_SHAPE[0])
    new_width = math.floor(resize_factor * INPUT_SHAPE[1])
    image = tf.image.resize_with_crop_or_pad(image, new_height, new_width)
    image = tf.image.random_crop(image, size=INPUT_SHAPE)

    # Vary the brightness of the image
    image = tf.image.random_brightness(image, max_delta=0.2)

    return image, label
```

Exposure to these variations during training can help prevent your model
from taking shortcuts by "memorizing" superficial clues in your
training data, meaning it may better reflect the deep underlying
patterns in your dataset.
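To make the Transfer Learning block more concrete, here is a minimal Keras sketch of the kind of model the Studio assembles: a frozen MobileNetV2 feature extractor plus a small classification head. It is only an illustration of the idea, not the Studio's actual training code; note that stock Keras ships ImageNet weights only for α ≥ 0.35, while Edge Impulse provides its own pre-trained weights for smaller α values such as 0.1, so α=0.35 is used below.

```python
# Minimal transfer-learning sketch (illustrative only, not the Studio's code):
# a frozen MobileNetV2 feature extractor with a small classification head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="relu"),    # mirrors the 12-neuron dense layer used here
    tf.keras.layers.Dropout(0.15),                   # 15% dropout, as configured in the Studio
    tf.keras.layers.Dense(3, activation="softmax"),  # periquito, robot, background
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```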
+ +The final layer of our model will have 12 neurons with a 15% dropout for +overfitting prevention. Here is the Training result: + +![image.png](images_4/media/image31.jpg){width="6.5in" +height="3.5in"} + +The result is excellent, with 77ms of latency, which should result in +13fps (frames per second) during inference. + +### **Model Testing** + +![image.png](images_4/media/image10.jpg){width="6.5in" +height="3.8472222222222223in"} + +Now, you should take the data put apart at the start of the project and +run the trained model having them as input: + +![image.png](images_4/media/image34.png){width="3.1041666666666665in" +height="1.7083333333333333in"} + +The result was, again, excellent. + +![image.png](images_4/media/image12.png){width="6.5in" +height="4.263888888888889in"} + +### **Deploying the model** + +At this point, we can deploy the trained model as.tflite and use the +OpenMV IDE to run it using MicroPython, or we can deploy it as a C/C++ +or an Arduino library. + +![image.png](images_4/media/image28.jpg){width="6.5in" +height="3.763888888888889in"} + +**Arduino Library** + +First, Let\'s deploy it as an Arduino Library: + +![image.png](images_4/media/image48.png){width="6.5in" +height="4.263888888888889in"} + +You should install the library as.zip on the Arduino IDE and run the +sketch nicla_vision_camera.ino available in Examples under your library +name. + +> *Note that Arduino Nicla Vision has, by default, 512KB of RAM +> allocated for the M7 core and an additional 244KB on the M4 address +> space. In the code, this allocation was changed to 288 kB to guarantee +> that the model will run on the device +> (malloc_addblock((void\*)0x30000000, 288 \* 1024);).* + +The result was good, with 86ms of measured latency. + +![image.png](images_4/media/image25.jpg){width="6.5in" +height="3.4444444444444446in"} + +Here is a short video showing the inference results: +[[https://youtu.be/bZPZZJblU-o]{.underline}](https://youtu.be/bZPZZJblU-o) + +**OpenMV** + +It is possible to deploy the trained model to be used with OpenMV in two +ways: as a library and as a firmware. + +Three files are generated as a library: the.tflite model, a list with +the labels, and a simple MicroPython script that can make inferences +using the model. + +![image.png](images_4/media/image26.png){width="6.5in" +height="1.0in"} + +Running this model as a.tflite directly in the Nicla was impossible. So, +we can sacrifice the accuracy using a smaller model or deploy the model +as an OpenMV Firmware (FW). As an FW, the Edge Impulse Studio generates +optimized models, libraries, and frameworks needed to make the +inference. Let\'s explore this last one. + +Select OpenMV Firmware on the Deploy Tab and press \[Build\]. + +![image.png](images_4/media/image3.png){width="6.5in" +height="4.263888888888889in"} + +On your computer, you will find a ZIP file. Open it: + +![Pasted Graphic +64.png](images_4/media/image33.png){width="6.5in" +height="2.625in"} + +Use the Bootloader tool on the OpenMV IDE to load the FW on your board: + +![Pasted Graphic +63.png](images_4/media/image35.jpg){width="6.5in" +height="3.625in"} + +Select the appropriate file (.bin for Nicla-Vision): + +![Pasted Graphic +65.png](images_4/media/image8.png){width="6.5in" +height="1.9722222222222223in"} + +After the download is finished, press OK: + +![DFU firmware update +complete!.png](images_4/media/image40.png){width="3.875in" +height="5.708333333333333in"} + +If a message says that the FW is outdated, DO NOT UPGRADE. Select +\[NO\]. 
+ +![image.png](images_4/media/image42.png){width="4.572916666666667in" +height="2.875in"} + +Now, open the script **ei_image_classification.py** that was downloaded +from the Studio and the.bin file for the Nicla. + +![image.png](images_4/media/image14.png){width="6.5in" +height="4.0in"} + +And run it. Pointing the camera to the objects we want to classify, the +inference result will be displayed on the Serial Terminal. + +![image.png](images_4/media/image37.png){width="6.5in" +height="3.736111111111111in"} + +**Changing Code to add labels:** + +The code provided by Edge Impulse can be modified so that we can see, +for test reasons, the inference result directly on the image displayed +on the OpenMV IDE. + +[[Upload the code from +GitHub,]{.underline}](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Micropython/nicla_image_classification.py) +or modify it as below: + + ----------------------------------------------------------------------- + \# Marcelo Rovai - NICLA Vision - Image Classification\ + \# Adapted from Edge Impulse - OpenMV Image Classification Example\ + \# \@24Aug23\ + \ + import sensor, image, time, os, tf, uos, gc\ + \ + sensor.reset() \# Reset and initialize the sensor.\ + sensor.set_pixformat(sensor.RGB565) \# Set pxl fmt to RGB565 (or + GRAYSCALE)\ + sensor.set_framesize(sensor.QVGA) \# Set frame size to QVGA (320x240)\ + sensor.set_windowing((240, 240)) \# Set 240x240 window.\ + sensor.skip_frames(time=2000) \# Let the camera adjust.\ + \ + net = None\ + labels = None\ + \ + try:\ + \# Load built in model\ + labels, net = tf.load_builtin_model(\'trained\')\ + except Exception as e:\ + raise Exception(e)\ + \ + clock = time.clock()\ + while(True):\ + clock.tick() \# Starts tracking elapsed time.\ + \ + img = sensor.snapshot()\ + \ + \# default settings just do one detection\ + for obj in net.classify(img,\ + min_scale=1.0,\ + scale_mul=0.8,\ + x_overlap=0.5,\ + y_overlap=0.5):\ + fps = clock.fps()\ + lat = clock.avg()\ + \ + print(\"\*\*\*\*\*\*\*\*\*\*\\nPrediction:\")\ + img.draw_rectangle(obj.rect())\ + \# This combines the labels and confidence values into a list of + tuples\ + predictions_list = list(zip(labels, obj.output()))\ + \ + max_val = predictions_list\[0\]\[1\]\ + max_lbl = \'background\'\ + for i in range(len(predictions_list)):\ + val = predictions_list\[i\]\[1\]\ + lbl = predictions_list\[i\]\[0\]\ + \ + if val \> max_val:\ + max_val = val\ + max_lbl = lbl\ + \ + \# Print label with the highest probability\ + if max_val \< 0.5:\ + max_lbl = \'uncertain\'\ + print(\"{} with a prob of {:.2f}\".format(max_lbl, max_val))\ + print(\"FPS: {:.2f} fps ==\> latency: {:.0f} ms\".format(fps, lat))\ + \ + \# Draw label with highest probability to image viewer\ + img.draw_string(\ + 10, 10,\ + max_lbl + \"\\n{:.2f}\".format(max_val),\ + mono_space = False,\ + scale=2\ + ) + ----------------------------------------------------------------------- + + ----------------------------------------------------------------------- + +Here you can see the result: + +![image.png](images_4/media/image47.jpg){width="6.5in" +height="2.9444444444444446in"} + +Note that the latency (136 ms) is almost double what we got directly +with the Arduino IDE. This is because we are using the IDE as an +interface and the time to wait for the camera to be ready. If we start +the clock just before the inference: + +![image.png](images_4/media/image13.jpg){width="6.5in" +height="2.0972222222222223in"} + +The latency will drop to only 71 ms. 
+ +![image.png](images_4/media/image1.jpg){width="3.5520833333333335in" +height="1.53125in"} + +### ***OpenMV Cam runs about half as fast when connected to the IDE. The FPS should increase once disconnected.*** + +### **Post-Processing with LEDs** + +When working with embedded machine learning, we are looking for devices +that can continually proceed with the inference and result, taking some +action directly on the physical world and not displaying the result on a +connected computer. To simulate this, we will define one LED to light up +for each one of the possible inference results. + +![image.png](images_4/media/image38.jpg){width="6.5in" +height="3.236111111111111in"} + +For that, we should [[upload the code from +GitHub]{.underline}](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Micropython/nicla_image_classification_LED.py) +or change the last code to include the LEDs: + + ----------------------------------------------------------------------- + \# Marcelo Rovai - NICLA Vision - Image Classification with LEDs\ + \# Adapted from Edge Impulse - OpenMV Image Classification Example\ + \# \@24Aug23\ + \ + import sensor, image, time, os, tf, uos, gc, pyb\ + \ + ledRed = pyb.LED(1)\ + ledGre = pyb.LED(2)\ + ledBlu = pyb.LED(3)\ + \ + sensor.reset() \# Reset and initialize the sensor.\ + sensor.set_pixformat(sensor.RGB565) \# Set pixl fmt to RGB565 (or + GRAYSCALE)\ + sensor.set_framesize(sensor.QVGA) \# Set frame size to QVGA (320x240)\ + sensor.set_windowing((240, 240)) \# Set 240x240 window.\ + sensor.skip_frames(time=2000) \# Let the camera adjust.\ + \ + net = None\ + labels = None\ + \ + ledRed.off()\ + ledGre.off()\ + ledBlu.off()\ + \ + try:\ + \# Load built in model\ + labels, net = tf.load_builtin_model(\'trained\')\ + except Exception as e:\ + raise Exception(e)\ + \ + clock = time.clock()\ + \ + \ + def setLEDs(max_lbl):\ + \ + if max_lbl == \'uncertain\':\ + ledRed.on()\ + ledGre.off()\ + ledBlu.off()\ + \ + if max_lbl == \'periquito\':\ + ledRed.off()\ + ledGre.on()\ + ledBlu.off()\ + \ + if max_lbl == \'robot\':\ + ledRed.off()\ + ledGre.off()\ + ledBlu.on()\ + \ + if max_lbl == \'background\':\ + ledRed.off()\ + ledGre.off()\ + ledBlu.off()\ + \ + \ + while(True):\ + img = sensor.snapshot()\ + clock.tick() \# Starts tracking elapsed time.\ + \ + \# default settings just do one detection.\ + for obj in net.classify(img,\ + min_scale=1.0,\ + scale_mul=0.8,\ + x_overlap=0.5,\ + y_overlap=0.5):\ + fps = clock.fps()\ + lat = clock.avg()\ + \ + print(\"\*\*\*\*\*\*\*\*\*\*\\nPrediction:\")\ + img.draw_rectangle(obj.rect())\ + \# This combines the labels and confidence values into a list of + tuples\ + predictions_list = list(zip(labels, obj.output()))\ + \ + max_val = predictions_list\[0\]\[1\]\ + max_lbl = \'background\'\ + for i in range(len(predictions_list)):\ + val = predictions_list\[i\]\[1\]\ + lbl = predictions_list\[i\]\[0\]\ + \ + if val \> max_val:\ + max_val = val\ + max_lbl = lbl\ + \ + \# Print label and turn on LED with the highest probability\ + if max_val \< 0.8:\ + max_lbl = \'uncertain\'\ + \ + setLEDs(max_lbl)\ + \ + print(\"{} with a prob of {:.2f}\".format(max_lbl, max_val))\ + print(\"FPS: {:.2f} fps ==\> latency: {:.0f} ms\".format(fps, lat))\ + \ + \# Draw label with highest probability to image viewer\ + img.draw_string(\ + 10, 10,\ + max_lbl + \"\\n{:.2f}\".format(max_val),\ + mono_space = False,\ + scale=2\ + ) + ----------------------------------------------------------------------- + + 
----------------------------------------------------------------------- + +Now, each time that a class gets a result superior of 0.8, the +correspondent LED will be light on as below: + +- Led Red 0n: Uncertain (no one class is over 0.8) + +- Led Green 0n: Periquito \> 0.8 + +- Led Blue 0n: Robot \> 0.8 + +- All LEDs Off: Background \> 0.8 + +Here is the result: + +![image.png](images_4/media/image18.jpg){width="6.5in" +height="3.6527777777777777in"} + +In more detail + +![image.png](images_4/media/image21.jpg){width="6.5in" +height="2.0972222222222223in"} + +### **Image Classification (non-official) Benchmark** + +Several development boards can be used for embedded machine learning +(tinyML), and the most common ones for Computer Vision applications +(with low energy), are the ESP32 CAM, the Seeed XIAO ESP32S3 Sense, the +Arduinos Nicla Vison, and Portenta. + +![image.png](images_4/media/image19.jpg){width="6.5in" +height="4.194444444444445in"} + +Using the opportunity, the same trained model was deployed on the +ESP-CAM, the XIAO, and Portenta (in this one, the model was trained +again, using grayscaled images to be compatible with its camera. Here is +the result, deploying the models as Arduino\'s Library: + +![image.png](images_4/media/image4.jpg){width="6.5in" +height="3.4444444444444446in"} + +### **Conclusion** + +Before we finish, consider that Computer Vision is more than just image +classification. For example, you can develop Edge Machine Learning +projects around vision in several areas, such as: + +- **Autonomous Vehicles**: Use sensor fusion, lidar data, and computer + > vision algorithms to navigate and make decisions. + +- **Healthcare**: Automated diagnosis of diseases through MRI, X-ray, + > and CT scan image analysis + +- **Retail**: Automated checkout systems that identify products as + > they pass through a scanner. + +- **Security and Surveillance**: Facial recognition, anomaly + > detection, and object tracking in real-time video feeds. + +- **Augmented Reality**: Object detection and classification to + > overlay digital information in the real world. + +- **Industrial Automation**: Visual inspection of products, predictive + > maintenance, and robot and drone guidance. + +- **Agriculture**: Drone-based crop monitoring and automated + > harvesting. + +- **Natural Language Processing**: Image captioning and visual + > question answering. + +- **Gesture Recognition**: For gaming, sign language translation, and + > human-machine interaction. + +- **Content Recommendation**: Image-based recommendation systems in + > e-commerce. diff --git a/embedded_sys_exercise.qmd b/embedded_sys_exercise.qmd new file mode 100644 index 00000000..d4d36577 --- /dev/null +++ b/embedded_sys_exercise.qmd @@ -0,0 +1,496 @@ +--- +title: "[]{#_6ugxbtjss6rg .anchor}2 Embedded Systems" +--- + +# Exercise - The Nicla Vision + +**Introduction** + +The [Arduino Nicla +Vision](https://docs.arduino.cc/hardware/nicla-vision) (sometimes called +*NiclaV*) is a development board that includes two processors that can +run tasks in parallel. It is part of a family of development boards with +the same form factor but designed for specific tasks, such as the [Nicla +Sense +ME](https://www.bosch-sensortec.com/software-tools/tools/arduino-nicla-sense-me/) +and the [Nicla +Voice](https://store-usa.arduino.cc/products/nicla-voice?_gl=1*l3abc6*_ga*MTQ3NzE4Mjk4Mi4xNjQwMDIwOTk5*_ga_NEXN8H46L5*MTY5NjM0Mzk1My4xMDIuMS4xNjk2MzQ0MjQ1LjAuMC4w). +The *Niclas* can efficiently run processes created with TensorFlow™ +Lite. 
For example, one of the cores of the NiclaV computing a computer +vision algorithm on the fly (inference), while the other leads with +low-level operations like controlling a motor and communicating or +acting as a user interface. + +> *The onboard wireless module allows the management of WiFi and +> Bluetooth Low Energy (BLE) connectivity simultaneously.* + +![image.png](images_2/media/image29.jpg){width="6.5in" +height="3.861111111111111in"} + +### **Two Parallel Cores** + +The central processor is the dual-core +[STM32H747,](https://content.arduino.cc/assets/Arduino-Portenta-H7_Datasheet_stm32h747xi.pdf?_gl=1*6quciu*_ga*MTQ3NzE4Mjk4Mi4xNjQwMDIwOTk5*_ga_NEXN8H46L5*MTY0NzQ0NTg1My4xMS4xLjE2NDc0NDYzMzkuMA..) +including a Cortex® M7 at 480 MHz and a Cortex® M4 at 240 MHz. The two +cores communicate via a Remote Procedure Call mechanism that seamlessly +allows calling functions on the other processor. Both processors share +all the on-chip peripherals and can run: + +- Arduino sketches on top of the Arm® Mbed™ OS + +- Native Mbed™ applications + +- MicroPython / JavaScript via an interpreter + +- TensorFlow™ Lite + +![image.png](images_2/media/image22.jpg){width="5.78125in" +height="5.78125in"} + +### **Memory** + +Memory is crucial for embedded machine learning projects. The NiclaV +board can host up to 16 MB of QSPI Flash for storage. However, it is +essential to consider that the MCU SRAM is the one to be used with +machine learning inferences; the STM32H747 is only 1MB, shared by both +processors. This MCU also has incorporated 2MB of FLASH, mainly for code +storage. + +### **Sensors** + +- **Camera**: A GC2145 2 MP Color CMOS Camera. + +- **Microphone**: A + > [MP34DT05,](https://content.arduino.cc/assets/Nano_BLE_Sense_mp34dt05-a.pdf?_gl=1*12fxus9*_ga*MTQ3NzE4Mjk4Mi4xNjQwMDIwOTk5*_ga_NEXN8H46L5*MTY0NzQ0NTg1My4xMS4xLjE2NDc0NDc3NzMuMA..) + > an ultra-compact, low-power, omnidirectional, digital MEMS + > microphone built with a capacitive sensing element and an IC + > interface. + +- **6-Axis IMU**: 3D gyroscope and 3D accelerometer data from the + > LSM6DSOX 6-axis IMU. + +- **Time of Flight Sensor**: The VL53L1CBV0FY Time-of-Flight sensor + > adds accurate and low power-ranging capabilities to the Nicla + > Vision. The invisible near-infrared VCSEL laser (including the + > analog driver) is encapsulated with receiving optics in an + > all-in-one small module below the camera. + +### **HW Installation (Arduino IDE)** + +Start connecting the board (USB-C) to your computer : + +![image.png](images_2/media/image14.jpg){width="6.5in" +height="3.0833333333333335in"} + +Install the Mbed OS core for Nicla boards in the Arduino IDE. Having the +IDE open, navigate to Tools \> Board \> Board Manager, look for Arduino +Nicla Vision on the search window, and install the board. + +![image.png](images_2/media/image2.jpg){width="6.5in" +height="2.7083333333333335in"} + +Next, go to Tools \> Board \> Arduino Mbed OS Nicla Boards and select +Arduino Nicla Vision. Having your board connected to the USB, you should +see the Nicla on Port and select it. + +> *Open the Blink sketch on Examples/Basic and run it using the IDE +> Upload button. You should see the Built-in LED (green RGB) blinking, +> which means the Nicla board is correctly installed and functional!* + +### **Testing the Microphone** + +On Arduino IDE, go to Examples \> PDM \> PDMSerialPlotter, open and run +the sketch. 
Open the Plotter and see the audio representation from the +microphone: + +![image.png](images_2/media/image9.png){width="6.5in" +height="4.361111111111111in"} + +> *Vary the frequency of the sound you generate and confirm that the mic +> is working correctly.* + +### **Testing the IMU** + +Before testing the IMU, it will be necessary to install the LSM6DSOX +library. For that, go to Library Manager and look for LSM6DSOX. Install +the library provided by Arduino: + +![image.png](images_2/media/image19.jpg){width="6.5in" +height="2.4027777777777777in"} + +Next, go to Examples \> Arduino_LSM6DSOX \> SimpleAccelerometer and run +the accelerometer test (you can also run Gyro and board temperature): + +![image.png](images_2/media/image28.png){width="6.5in" +height="4.361111111111111in"} + +### **Testing the ToF (Time of Flight) Sensor** + +As we did with IMU, installing the ToF library, the VL53L1X is +necessary. For that, go to Library Manager and look for VL53L1X. Install +the library provided by Pololu: + +![image.png](images_2/media/image15.jpg){width="6.5in" +height="2.4583333333333335in"} + +Next, run the sketch +[proximity_detection.ino](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Micropython/distance_image_meter.py): + +![image.png](images_2/media/image12.png){width="4.947916666666667in" +height="4.635416666666667in"} + +On the Serial Monitor, you will see the distance from the camera and an +object in front of it (max of 4m). + +![image.png](images_2/media/image13.jpg){width="6.5in" +height="4.847222222222222in"} + +### **Testing the Camera** + +We can also test the camera using, for example, the code provided on +Examples \> Camera \> CameraCaptureRawBytes. We can not see the image +directly, but it is possible to get the raw image data generated by the +camera. + +Anyway, the best test with the camera is to see a live image. For that, +we will use another IDE, the OpenMV. + +### **Installing the OpenMV IDE** + +OpenMV IDE is the premier integrated development environment for use +with OpenMV Cameras and the one on the Portenta. It features a powerful +text editor, debug terminal, and frame buffer viewer with a histogram +display. We will use MicroPython to program the camera. + +Go to the [OpenMV IDE page](https://openmv.io/pages/download), download +the correct version for your Operating System, and follow the +instructions for its installation on your computer. + +![image.png](images_2/media/image21.png){width="6.5in" +height="4.791666666666667in"} + +The IDE should open, defaulting the helloworld_1.py code on its Code +Area. If not, you can open it from Files \> Examples \> HelloWord \> +helloword.py + +![image.png](images_2/media/image7.png){width="6.5in" +height="4.444444444444445in"} + +Any messages sent through a serial connection (using print() or error +messages) will be displayed on the **Serial Terminal** during run time. +The image captured by a camera will be displayed in the **Camera +Viewer** Area (or Frame Buffer) and in the Histogram area, immediately +below the Camera Viewer. + +OpenMV IDE is the premier integrated development environment with OpenMV +Cameras and the Arduino Pro boards. It features a powerful text editor, +debug terminal, and frame buffer viewer with a histogram display. We +will use MicroPython to program the Nicla Vision. + +> *Before connecting the Nicla to the OpenMV IDE, ensure you have the +> latest bootloader version. 
To that, go to your Arduino IDE, select the +> Nicla board, and open the sketch on Examples \> STM_32H747_System +> STM_32H747_updateBootloader. Upload the code to your board. The Serial +> Monitor will guide you.* + +After updating the bootloader, put the Nicla Vision in bootloader mode +by double-pressing the reset button on the board. The built-in green LED +will start fading in and out. Now return to the OpenMV IDE and click on +the connect icon (Left ToolBar): + +![image.png](images_2/media/image23.jpg){width="4.010416666666667in" +height="1.0520833333333333in"} + +A pop-up will tell you that a board in DFU mode was detected and ask you +how you would like to proceed. First, select \"Install the latest +release firmware.\" This action will install the latest OpenMV firmware +on the Nicla Vision. + +![image.png](images_2/media/image10.png){width="6.5in" +height="2.6805555555555554in"} + +You can leave the option of erasing the internal file system unselected +and click \[OK\]. + +Nicla\'s green LED will start flashing while the OpenMV firmware is +uploaded to the board, and a terminal window will then open, showing the +flashing progress. + +![image.png](images_2/media/image5.png){width="4.854166666666667in" +height="3.5416666666666665in"} + +Wait until the green LED stops flashing and fading. When the process +ends, you will see a message saying, \"DFU firmware update complete!\". +Press \[OK\]. + +![image.png](images_2/media/image1.png){width="3.875in" +height="5.708333333333333in"} + +A green play button appears when the Nicla Vison connects to the Tool +Bar. + +![image.png](images_2/media/image18.jpg){width="4.791666666666667in" +height="1.4791666666666667in"} + +Also, note that a drive named "NO NAME" will appear on your computer.: + +![image.png](images_2/media/image3.png){width="6.447916666666667in" +height="2.4166666666666665in"} + +Every time you press the \[RESET\] button on the board, it automatically +executes the main.py script stored on it. You can load the +[main.py](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Micropython/main.py) +code on the IDE (File \> Open File\...). + +![image.png](images_2/media/image16.png){width="4.239583333333333in" +height="3.8229166666666665in"} + +> *This code is the \"Blink\" code, confirming that the HW is OK.* + +For testing the camera, let\'s run helloword_1.py. For that, select the +script on File \> Examples \> HelloWorld \> helloword.py, + +When clicking the green play button, the MicroPython script +(hellowolrd.py) on the Code Area will be uploaded and run on the Nicla +Vision. On-Camera Viewer, you will start to see the video streaming. The +Serial Monitor will show us the FPS (Frames per second), which should be +around 14fps. + +![image.png](images_2/media/image6.png){width="6.5in" +height="3.9722222222222223in"} + +Let\'s go through the [helloworld.py](http://helloworld.py/) script: + + ----------------------------------------------------------------------- + \# Hello World Example\ + \#\ + \# Welcome to the OpenMV IDE! 
Click on the green run arrow button below + to run the script!\ + \ + import sensor, image, time\ + \ + sensor.reset() \# Reset and initialize the sensor.\ + sensor.set_pixformat(sensor.RGB565) \# Set pixel format to RGB565 (or + GRAYSCALE)\ + sensor.set_framesize(sensor.QVGA) \# Set frame size to QVGA (320x240)\ + sensor.skip_frames(time = 2000) \# Wait for settings take effect.\ + clock = time.clock() \# Create a clock object to track the FPS.\ + \ + while(True):\ + clock.tick() \# Update the FPS clock.\ + img = sensor.snapshot() \# Take a picture and return the image.\ + print(clock.fps()) + ----------------------------------------------------------------------- + + ----------------------------------------------------------------------- + +In GitHub, you can find the Python scripts used here. + +The code can be split into two parts: + +- **Setup**: Where the libraries are imported and initialized, and the + > variables are defined and initiated. + +- **Loop**: (while loop) part of the code that runs continually. The + > image (img variable) is captured (a frame). Each of those frames + > can be used for inference in Machine Learning Applications. + +To interrupt the program execution, press the red \[X\] button. + +> *Note: OpenMV Cam runs about half as fast when connected to the IDE. +> The FPS should increase once disconnected.* + +In [[the GitHub, You can find other Python +scripts]{.underline}](https://github.com/Mjrovai/Arduino_Nicla_Vision/tree/main/Micropython). +Try to test the onboard sensors. + +### **Connecting the Nicla Vision to Edge Impulse Studio** + +We will use the Edge Impulse Studio later in other exercises. [Edge +Impulse I](https://www.edgeimpulse.com/)s a leading development platform +for machine learning on edge devices. + +Edge Impulse officially supports the Nicla Vision. So, for starting, +please create a new project on the Studio and connect the Nicla to it. +For that, follow the steps: + +- Download the [last EI + > Firmware](https://cdn.edgeimpulse.com/firmware/arduino-nicla-vision.zip) + > and unzip it. + +- Open the zip file on your computer and select the uploader related + > to your OS: + +![image.png](images_2/media/image17.png){width="4.416666666666667in" +height="1.5520833333333333in"} + +- Put the Nicla-Vision on Boot Mode, pressing the reset button twice. + +- Execute the specific batch code for your OS for uploading the binary + > (arduino-nicla-vision.bin) to your board. + +Go to your project on the Studio, and on the Data Acquisition tab, +select WebUSB (1). A window will appear; choose the option that shows +that the Nicla is pared (2) and press \[Connect\] (3). + +![image.png](images_2/media/image27.png){width="6.5in" +height="4.319444444444445in"} + +In the Collect Data section on the Data Acquisition tab, you can choose +what sensor data you will pick. + +![image.png](images_2/media/image25.png){width="6.5in" +height="4.319444444444445in"} + +For example. IMU data: + +![image.png](images_2/media/image8.png){width="6.5in" +height="4.319444444444445in"} + +Or Image: + +![image.png](images_2/media/image4.png){width="6.5in" +height="4.319444444444445in"} + +And so on. You can also test an external sensor connected to the Nicla +ADC (pin 0) and the other onboard sensors, such as the microphone and +the ToF. 
### **Expanding the Nicla Vision Board (optional)**

One last item worth exploring: during prototyping, it is often
essential to experiment with external sensors and devices, and an
excellent expansion for the Nicla is the [Arduino MKR Connector Carrier
(Grove
compatible)](https://store-usa.arduino.cc/products/arduino-mkr-connector-carrier-grove-compatible).

The shield has 14 Grove connectors: five single analog inputs, one
double analog input, five single digital I/Os, one double digital I/O,
one I2C, and one UART. All connectors are 5V compatible.

> *Note that even with all 17 Nicla Vision pins wired to the Shield's
> Grove connectors, some Grove connections remain unconnected.*

![image.png](images_2/media/image20.jpg){width="6.5in"
height="4.875in"}

This shield is MKR compatible and can be used with the Nicla Vision and
the Portenta.

![image.png](images_2/media/image26.jpg){width="4.34375in"
height="5.78125in"}

For example, suppose that on a TinyML project, you want to send
inference results using a LoRaWan device and add information about local
luminosity. Also, for offline operation, a local low-power display
such as an OLED is advisable. This setup can be seen here:

![image.png](images_2/media/image11.jpg){width="6.5in"
height="4.708333333333333in"}

The [Grove Light
Sensor](https://wiki.seeedstudio.com/Grove-Light_Sensor/) would be
connected to one of the single Analog pins (A0/PC4), the [LoRaWan
device](https://wiki.seeedstudio.com/Grove_LoRa_E5_New_Version/) to the
UART, and the [OLED](https://arduino.cl/producto/display-oled-grove/) to
the I2C connector.

The Nicla Pins 3 (Tx) and 4 (Rx) are connected to the Shield's Serial
connector. This UART connection is used with the LoRaWan device. Here
is a simple code snippet to use the UART:

```python
# UART Test - By: marcelo_rovai - Sat Sep 23 2023

import time
from pyb import UART
from pyb import LED

redLED = LED(1)  # built-in red LED

# Init UART object.
# Nicla Vision's UART (TX/RX pins) is on "LP1"
uart = UART("LP1", 9600)

while True:
    uart.write("Hello World!\r\n")
    redLED.toggle()
    time.sleep_ms(1000)
```

To verify that the UART is working, you can, for example, connect
another device such as an [Arduino
UNO](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Arduino-IDE/teste_uart_UNO/teste_uart_UNO.ino)
and display the "Hello World!" messages it receives.

![B3D78F51-83F9-413D-8BF9-ED46FAA82F49.GIF](images_2/media/image24.gif){width="2.8125in"
height="3.75in"}

Here is a Hello World code to be used with the I2C OLED. The MicroPython
SSD1306 OLED driver (ssd1306.py), created by Adafruit, should also be
uploaded to the Nicla (the
[ssd1306.py](https://github.com/Mjrovai/Arduino_Nicla_Vision/blob/main/Micropython/ssd1306.py)
can be found in GitHub).

```python
# Nicla_OLED_Hello_World - By: marcelo_rovai - Sat Sep 30 2023

# Save on device: MicroPython SSD1306 OLED driver, I2C and SPI
# interfaces, created by Adafruit
import ssd1306

from machine import I2C
i2c = I2C(1)

oled_width = 128
oled_height = 64
oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)

oled.text('Hello, World', 10, 10)
oled.show()
```

Finally, here is a simple script to read the ADC value on pin "PC4"
(Nicla pin A0):

```python
# Light Sensor (A0) - By: marcelo_rovai - Wed Oct 4 2023

import pyb
from time import sleep

adc = pyb.ADC(pyb.Pin("PC4"))  # create an analog object from a pin
val = adc.read()               # read an analog value

while True:
    val = adc.read()
    print("Light={}".format(val))
    sleep(1)
```

The ADC can be used for other valuable sensors, such as a
[Temperature
sensor](https://wiki.seeedstudio.com/Grove-Temperature_Sensor_V1.2/).

> *Note that the above scripts ([downloaded from
> GitHub](https://github.com/Mjrovai/Arduino_Nicla_Vision/tree/main/Micropython))
> only introduce how to connect external devices to the Nicla Vision
> board using MicroPython.*

### **Conclusion**

The Arduino Nicla Vision is an excellent *tiny device* for industrial
and professional uses! It is powerful, trustworthy, low-power, and has
suitable sensors for the most common embedded machine learning
applications such as vision, movement, sensor fusion, and sound.
+ +> *On the* *[GitHub +> repository,](https://github.com/Mjrovai/Arduino_Nicla_Vision/tree/main) +> you will find the last version of all the codes used or commented on +> in this exercise.* diff --git a/images_2/media/image1.png b/images_2/media/image1.png new file mode 100644 index 00000000..fc6af05b Binary files /dev/null and b/images_2/media/image1.png differ diff --git a/images_2/media/image10.png b/images_2/media/image10.png new file mode 100644 index 00000000..1abb3211 Binary files /dev/null and b/images_2/media/image10.png differ diff --git a/images_2/media/image11.jpg b/images_2/media/image11.jpg new file mode 100644 index 00000000..ec677570 Binary files /dev/null and b/images_2/media/image11.jpg differ diff --git a/images_2/media/image12.png b/images_2/media/image12.png new file mode 100644 index 00000000..827cd7db Binary files /dev/null and b/images_2/media/image12.png differ diff --git a/images_2/media/image13.jpg b/images_2/media/image13.jpg new file mode 100644 index 00000000..ef54e2b8 Binary files /dev/null and b/images_2/media/image13.jpg differ diff --git a/images_2/media/image14.jpg b/images_2/media/image14.jpg new file mode 100644 index 00000000..08380d3a Binary files /dev/null and b/images_2/media/image14.jpg differ diff --git a/images_2/media/image15.jpg b/images_2/media/image15.jpg new file mode 100644 index 00000000..89f94595 Binary files /dev/null and b/images_2/media/image15.jpg differ diff --git a/images_2/media/image16.png b/images_2/media/image16.png new file mode 100644 index 00000000..e42200ce Binary files /dev/null and b/images_2/media/image16.png differ diff --git a/images_2/media/image17.png b/images_2/media/image17.png new file mode 100644 index 00000000..7e91e833 Binary files /dev/null and b/images_2/media/image17.png differ diff --git a/images_2/media/image18.jpg b/images_2/media/image18.jpg new file mode 100644 index 00000000..2befeb33 Binary files /dev/null and b/images_2/media/image18.jpg differ diff --git a/images_2/media/image19.jpg b/images_2/media/image19.jpg new file mode 100644 index 00000000..48ea5478 Binary files /dev/null and b/images_2/media/image19.jpg differ diff --git a/images_2/media/image2.jpg b/images_2/media/image2.jpg new file mode 100644 index 00000000..cc513fc7 Binary files /dev/null and b/images_2/media/image2.jpg differ diff --git a/images_2/media/image20.jpg b/images_2/media/image20.jpg new file mode 100644 index 00000000..78af95e3 Binary files /dev/null and b/images_2/media/image20.jpg differ diff --git a/images_2/media/image21.png b/images_2/media/image21.png new file mode 100644 index 00000000..ca21a4e3 Binary files /dev/null and b/images_2/media/image21.png differ diff --git a/images_2/media/image22.jpg b/images_2/media/image22.jpg new file mode 100644 index 00000000..082fce1a Binary files /dev/null and b/images_2/media/image22.jpg differ diff --git a/images_2/media/image23.jpg b/images_2/media/image23.jpg new file mode 100644 index 00000000..01abff78 Binary files /dev/null and b/images_2/media/image23.jpg differ diff --git a/images_2/media/image24.gif b/images_2/media/image24.gif new file mode 100644 index 00000000..0ed868cb Binary files /dev/null and b/images_2/media/image24.gif differ diff --git a/images_2/media/image25.png b/images_2/media/image25.png new file mode 100644 index 00000000..37bfebed Binary files /dev/null and b/images_2/media/image25.png differ diff --git a/images_2/media/image26.jpg b/images_2/media/image26.jpg new file mode 100644 index 00000000..065f5fa0 Binary files /dev/null and 
b/images_2/media/image26.jpg differ diff --git a/images_2/media/image27.png b/images_2/media/image27.png new file mode 100644 index 00000000..f086876a Binary files /dev/null and b/images_2/media/image27.png differ diff --git a/images_2/media/image28.png b/images_2/media/image28.png new file mode 100644 index 00000000..31d21367 Binary files /dev/null and b/images_2/media/image28.png differ diff --git a/images_2/media/image29.jpg b/images_2/media/image29.jpg new file mode 100644 index 00000000..ae46fa75 Binary files /dev/null and b/images_2/media/image29.jpg differ diff --git a/images_2/media/image3.png b/images_2/media/image3.png new file mode 100644 index 00000000..43e6542f Binary files /dev/null and b/images_2/media/image3.png differ diff --git a/images_2/media/image4.png b/images_2/media/image4.png new file mode 100644 index 00000000..e6aa3c2b Binary files /dev/null and b/images_2/media/image4.png differ diff --git a/images_2/media/image5.png b/images_2/media/image5.png new file mode 100644 index 00000000..16e03f14 Binary files /dev/null and b/images_2/media/image5.png differ diff --git a/images_2/media/image6.png b/images_2/media/image6.png new file mode 100644 index 00000000..7b467707 Binary files /dev/null and b/images_2/media/image6.png differ diff --git a/images_2/media/image7.png b/images_2/media/image7.png new file mode 100644 index 00000000..31869aa1 Binary files /dev/null and b/images_2/media/image7.png differ diff --git a/images_2/media/image8.png b/images_2/media/image8.png new file mode 100644 index 00000000..2add8235 Binary files /dev/null and b/images_2/media/image8.png differ diff --git a/images_2/media/image9.png b/images_2/media/image9.png new file mode 100644 index 00000000..22ebf5be Binary files /dev/null and b/images_2/media/image9.png differ diff --git a/images_4/media/image1.jpg b/images_4/media/image1.jpg new file mode 100644 index 00000000..48985805 Binary files /dev/null and b/images_4/media/image1.jpg differ diff --git a/images_4/media/image10.jpg b/images_4/media/image10.jpg new file mode 100644 index 00000000..8cf6eb84 Binary files /dev/null and b/images_4/media/image10.jpg differ diff --git a/images_4/media/image11.png b/images_4/media/image11.png new file mode 100644 index 00000000..613f3581 Binary files /dev/null and b/images_4/media/image11.png differ diff --git a/images_4/media/image12.png b/images_4/media/image12.png new file mode 100644 index 00000000..24d869c4 Binary files /dev/null and b/images_4/media/image12.png differ diff --git a/images_4/media/image13.jpg b/images_4/media/image13.jpg new file mode 100644 index 00000000..5298efc0 Binary files /dev/null and b/images_4/media/image13.jpg differ diff --git a/images_4/media/image14.png b/images_4/media/image14.png new file mode 100644 index 00000000..d8702a68 Binary files /dev/null and b/images_4/media/image14.png differ diff --git a/images_4/media/image15.jpg b/images_4/media/image15.jpg new file mode 100644 index 00000000..5aea170d Binary files /dev/null and b/images_4/media/image15.jpg differ diff --git a/images_4/media/image16.png b/images_4/media/image16.png new file mode 100644 index 00000000..af218e99 Binary files /dev/null and b/images_4/media/image16.png differ diff --git a/images_4/media/image17.png b/images_4/media/image17.png new file mode 100644 index 00000000..8fc0973d Binary files /dev/null and b/images_4/media/image17.png differ diff --git a/images_4/media/image18.jpg b/images_4/media/image18.jpg new file mode 100644 index 00000000..38d81100 Binary files /dev/null and 
b/images_4/media/image18.jpg differ diff --git a/images_4/media/image19.jpg b/images_4/media/image19.jpg new file mode 100644 index 00000000..c60501e6 Binary files /dev/null and b/images_4/media/image19.jpg differ diff --git a/images_4/media/image2.jpg b/images_4/media/image2.jpg new file mode 100644 index 00000000..f5c2e439 Binary files /dev/null and b/images_4/media/image2.jpg differ diff --git a/images_4/media/image20.png b/images_4/media/image20.png new file mode 100644 index 00000000..dd254917 Binary files /dev/null and b/images_4/media/image20.png differ diff --git a/images_4/media/image21.jpg b/images_4/media/image21.jpg new file mode 100644 index 00000000..b552ec21 Binary files /dev/null and b/images_4/media/image21.jpg differ diff --git a/images_4/media/image22.png b/images_4/media/image22.png new file mode 100644 index 00000000..71987ffd Binary files /dev/null and b/images_4/media/image22.png differ diff --git a/images_4/media/image23.jpg b/images_4/media/image23.jpg new file mode 100644 index 00000000..48bcaef2 Binary files /dev/null and b/images_4/media/image23.jpg differ diff --git a/images_4/media/image24.png b/images_4/media/image24.png new file mode 100644 index 00000000..918d3a66 Binary files /dev/null and b/images_4/media/image24.png differ diff --git a/images_4/media/image25.jpg b/images_4/media/image25.jpg new file mode 100644 index 00000000..f4cf0e5d Binary files /dev/null and b/images_4/media/image25.jpg differ diff --git a/images_4/media/image26.png b/images_4/media/image26.png new file mode 100644 index 00000000..e6752d55 Binary files /dev/null and b/images_4/media/image26.png differ diff --git a/images_4/media/image27.jpg b/images_4/media/image27.jpg new file mode 100644 index 00000000..bc2635d3 Binary files /dev/null and b/images_4/media/image27.jpg differ diff --git a/images_4/media/image28.jpg b/images_4/media/image28.jpg new file mode 100644 index 00000000..b857f18f Binary files /dev/null and b/images_4/media/image28.jpg differ diff --git a/images_4/media/image29.png b/images_4/media/image29.png new file mode 100644 index 00000000..272a0fe8 Binary files /dev/null and b/images_4/media/image29.png differ diff --git a/images_4/media/image3.png b/images_4/media/image3.png new file mode 100644 index 00000000..bdc1b381 Binary files /dev/null and b/images_4/media/image3.png differ diff --git a/images_4/media/image30.png b/images_4/media/image30.png new file mode 100644 index 00000000..92857046 Binary files /dev/null and b/images_4/media/image30.png differ diff --git a/images_4/media/image31.jpg b/images_4/media/image31.jpg new file mode 100644 index 00000000..7b4d60bd Binary files /dev/null and b/images_4/media/image31.jpg differ diff --git a/images_4/media/image32.jpg b/images_4/media/image32.jpg new file mode 100644 index 00000000..b3764526 Binary files /dev/null and b/images_4/media/image32.jpg differ diff --git a/images_4/media/image33.png b/images_4/media/image33.png new file mode 100644 index 00000000..e6549013 Binary files /dev/null and b/images_4/media/image33.png differ diff --git a/images_4/media/image34.png b/images_4/media/image34.png new file mode 100644 index 00000000..28f85e7d Binary files /dev/null and b/images_4/media/image34.png differ diff --git a/images_4/media/image35.jpg b/images_4/media/image35.jpg new file mode 100644 index 00000000..9797154d Binary files /dev/null and b/images_4/media/image35.jpg differ diff --git a/images_4/media/image36.jpg b/images_4/media/image36.jpg new file mode 100644 index 00000000..5bfffd04 Binary files /dev/null and 
b/images_4/media/image36.jpg differ diff --git a/images_4/media/image37.png b/images_4/media/image37.png new file mode 100644 index 00000000..a91610b7 Binary files /dev/null and b/images_4/media/image37.png differ diff --git a/images_4/media/image38.jpg b/images_4/media/image38.jpg new file mode 100644 index 00000000..1c5e92a8 Binary files /dev/null and b/images_4/media/image38.jpg differ diff --git a/images_4/media/image39.png b/images_4/media/image39.png new file mode 100644 index 00000000..2c908993 Binary files /dev/null and b/images_4/media/image39.png differ diff --git a/images_4/media/image4.jpg b/images_4/media/image4.jpg new file mode 100644 index 00000000..2a0f5dfc Binary files /dev/null and b/images_4/media/image4.jpg differ diff --git a/images_4/media/image40.png b/images_4/media/image40.png new file mode 100644 index 00000000..28100ac2 Binary files /dev/null and b/images_4/media/image40.png differ diff --git a/images_4/media/image41.jpg b/images_4/media/image41.jpg new file mode 100644 index 00000000..6b9b7890 Binary files /dev/null and b/images_4/media/image41.jpg differ diff --git a/images_4/media/image42.png b/images_4/media/image42.png new file mode 100644 index 00000000..3d4bbead Binary files /dev/null and b/images_4/media/image42.png differ diff --git a/images_4/media/image43.png b/images_4/media/image43.png new file mode 100644 index 00000000..04bbb50a Binary files /dev/null and b/images_4/media/image43.png differ diff --git a/images_4/media/image44.png b/images_4/media/image44.png new file mode 100644 index 00000000..2514285d Binary files /dev/null and b/images_4/media/image44.png differ diff --git a/images_4/media/image45.png b/images_4/media/image45.png new file mode 100644 index 00000000..ccc0a398 Binary files /dev/null and b/images_4/media/image45.png differ diff --git a/images_4/media/image46.png b/images_4/media/image46.png new file mode 100644 index 00000000..deff43bc Binary files /dev/null and b/images_4/media/image46.png differ diff --git a/images_4/media/image47.jpg b/images_4/media/image47.jpg new file mode 100644 index 00000000..d38a5128 Binary files /dev/null and b/images_4/media/image47.jpg differ diff --git a/images_4/media/image48.png b/images_4/media/image48.png new file mode 100644 index 00000000..050992b0 Binary files /dev/null and b/images_4/media/image48.png differ diff --git a/images_4/media/image5.png b/images_4/media/image5.png new file mode 100644 index 00000000..585a0644 Binary files /dev/null and b/images_4/media/image5.png differ diff --git a/images_4/media/image6.jpg b/images_4/media/image6.jpg new file mode 100644 index 00000000..20205865 Binary files /dev/null and b/images_4/media/image6.jpg differ diff --git a/images_4/media/image7.jpg b/images_4/media/image7.jpg new file mode 100644 index 00000000..a7de9ffe Binary files /dev/null and b/images_4/media/image7.jpg differ diff --git a/images_4/media/image8.png b/images_4/media/image8.png new file mode 100644 index 00000000..d5648cfa Binary files /dev/null and b/images_4/media/image8.png differ diff --git a/images_4/media/image9.jpg b/images_4/media/image9.jpg new file mode 100644 index 00000000..60c44fae Binary files /dev/null and b/images_4/media/image9.jpg differ