From cd72deebae815f033bdbd6de16c571c833e2c9b0 Mon Sep 17 00:00:00 2001
From: Rybk <39853010+RVillarraso@users.noreply.github.com>
Date: Wed, 10 Apr 2024 19:01:23 +0200
Subject: [PATCH] Delete docs/_posts directory
---
docs/_posts/2023-09-25-Week-0.md | 48 -----------
docs/_posts/2023-10-05-Week-1.md | 75 -----------------
docs/_posts/2023-10-11-Week-2.md | 53 ------------
docs/_posts/2023-10-19-Week-3-4.md | 42 ----------
docs/_posts/2023-11-15-Week-5-6.md | 57 -------------
docs/_posts/2023-11-30-Week-6-7.md | 44 ----------
docs/_posts/2023-12-14-Week-8-9.md | 61 --------------
docs/_posts/2024-01-11-Week-10.md | 55 -------------
docs/_posts/2024-01-18-Week-11.md | 56 -------------
docs/_posts/2024-02-02-Week-12.md | 68 ---------------
docs/_posts/2024-02-09-Week-13.md | 60 --------------
docs/_posts/2024-02-22-Week-14.md | 68 ---------------
docs/_posts/2024-03-01-Week-15.md | 74 -----------------
docs/_posts/2024-03-08-Week-16.md | 82 ------------------
docs/_posts/2024-03-22-Week-17-18.md | 54 ------------
docs/_posts/2024-04-05-Week-19.md | 119 ---------------------------
16 files changed, 1016 deletions(-)
delete mode 100644 docs/_posts/2023-09-25-Week-0.md
delete mode 100644 docs/_posts/2023-10-05-Week-1.md
delete mode 100644 docs/_posts/2023-10-11-Week-2.md
delete mode 100644 docs/_posts/2023-10-19-Week-3-4.md
delete mode 100644 docs/_posts/2023-11-15-Week-5-6.md
delete mode 100644 docs/_posts/2023-11-30-Week-6-7.md
delete mode 100644 docs/_posts/2023-12-14-Week-8-9.md
delete mode 100644 docs/_posts/2024-01-11-Week-10.md
delete mode 100644 docs/_posts/2024-01-18-Week-11.md
delete mode 100644 docs/_posts/2024-02-02-Week-12.md
delete mode 100644 docs/_posts/2024-02-09-Week-13.md
delete mode 100644 docs/_posts/2024-02-22-Week-14.md
delete mode 100644 docs/_posts/2024-03-01-Week-15.md
delete mode 100644 docs/_posts/2024-03-08-Week-16.md
delete mode 100644 docs/_posts/2024-03-22-Week-17-18.md
delete mode 100644 docs/_posts/2024-04-05-Week-19.md
diff --git a/docs/_posts/2023-09-25-Week-0.md b/docs/_posts/2023-09-25-Week-0.md
deleted file mode 100644
index 50cf9da..0000000
--- a/docs/_posts/2023-09-25-Week-0.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: "Week 0 - TFM Proposal study"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - github pages
----
-
-# Summary
-
-In my initial meeting with my advisor, we outlined the core objectives of my Master's Thesis (TFM).
-
-# TFM proposal study
-Development of a robot perception system for unstructured environments.
-
-
-# Work planning:
- 1. Contact the Goose dataset developers to find out about data availability. Study the dataset structure.
- 2. Obtain the RELLIS-3D dataset and study its structure.
- 3. Develop a semantic segmentation algorithm for testing.
- 4. Study other datasets (RUGD, FIRE, CITYSCAPES).
- 5. Prepare the work environment on GitHub.
-
-## Key Topics in Autonomous Driving
-
-- [Computer Vision](#computer-vision)
-- [Neural Networks](#neural-networks)
-
----
-
-## Computer Vision
-
-Computer Vision is a branch of artificial intelligence focused on the processing of visual information. In the context of autonomous driving, it is used to detect objects in the vehicle's environment and to extract relevant information from images captured by the vehicle's cameras. Techniques such as image segmentation and feature detection are employed to process visual information.
-
----
-
-## Neural Networks
-
-Neural Networks are a mathematical model inspired by the biological behavior of neurons and how these are organized in the brain. In the realm of autonomous driving, they are used to make decisions based on processed visual information. Various architectures of neural networks are described, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), along with different databases and simulators used in research.
-
----
-
-# References
-
-* [Goose Database](https://goose-dataset.de/)
-
-
diff --git a/docs/_posts/2023-10-05-Week-1.md b/docs/_posts/2023-10-05-Week-1.md
deleted file mode 100644
index f24d0e0..0000000
--- a/docs/_posts/2023-10-05-Week-1.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: "Week 1 - Preparing environment and initial Semantic Segmentation"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - Goose Dataset
- - Contact to Goose developers
- - Work environment on GitHub
- - Study of other datasets
- - Semantic segmentation
----
-
-## Objectives
-
-1. Contact the Goose dataset developers to find out about data availability. Study the dataset structure.
-2. Prepare the work environment on GitHub.
-3. Study other datasets (RUGD, FIRE, CITYSCAPES).
-4. Obtain the RELLIS-3D dataset and study its structure.
-5. Develop a semantic segmentation algorithm for testing.
----
-
-### 1. Goose Dataset
-
-This week I explored the Goose dataset website, studied its data structure, read related works, and reviewed the neural network models used for semantic segmentation.
-
----
-
-### 2. Contact to Goose developers
-
-I contacted the developers of the Goose dataset to congratulate them and ask when the 2D images for semantic segmentation and the LIDAR data would be available. They kindly replied that the 2D images would be available within a couple of weeks and the rest of the data at the end of May/June.
-
----
-
-### 3. Work environment on GitHub
-
-Added different sections on GitHub to store useful information about the different datasets, along with documentation and a tutorial covering the steps needed to work with "SuperGradients" on Goose.
-
-### 4. Study of other datasets
-
-The following data sets were downloaded to the local computer:
- - RUGD
- - FIRE
- - CITYSCAPES
- - RELLIS 3D
- - INSTANCE-LEVEL_HUMAN_PARSING
-
----
-
-### 5. Semantic Segmentation
-
-- DeepLabV3+ semantic segmentation on the "instance-level_human_parsing" dataset. (DeepLabV3+ is an encoder-decoder architecture built on a pretrained ResNet50 backbone.)
-- RELLIS-3D (a Multi-modal Dataset for Off-Road Robotics) (images and LIDAR) obtained and preprocessed from: https://gamma.umd.edu/publication.
-- Reproduction of GA-Nav semantic segmentation of RELLIS-3D images according to: https://github.com/unmannedlab/RELLIS-3D
-
-- Ongoing: adapting the DeepLabV3+ model to train on the RELLIS-3D dataset.
-
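-For reference, the inference step involved looks roughly like the following. This is only a minimal sketch using torchvision's pretrained DeepLabV3 (ResNet50 backbone), not the exact training or inference code used in these experiments:
-
-```python
-# Minimal sketch: per-pixel class prediction with a pretrained DeepLabV3 model.
-import torch
-from torchvision import models, transforms
-from PIL import Image
-
-model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
-model.eval()
-
-preprocess = transforms.Compose([
-    transforms.ToTensor(),
-    transforms.Normalize(mean=[0.485, 0.456, 0.406],
-                         std=[0.229, 0.224, 0.225]),
-])
-
-img = Image.open("example.png").convert("RGB")          # any test image
-with torch.no_grad():
-    out = model(preprocess(img).unsqueeze(0))["out"]    # (1, classes, H, W)
-pred = out.argmax(1).squeeze(0)                         # per-pixel class ids
-```
-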
----
-
-# Next Week Work Planning
-
- 1. Continue adapting DeepLabV3+ with the RELLIS-3D and RUGD datasets.
- 2. Perform Semantic Segmentation on Cityscapes dataset.
- 3. GitHub Pages.
- 4. Study the metrics and models proposed on the Cityscapes dataset website: https://www.cityscapes-dataset.com/benchmarks/
-
----
-
-# References
-
-* [RUGD Dataset](http://rugd.vision/)
-* [FIRE Dataset](https://universe.roboflow.com/fire-dataset-tp9jt/fire-detection-sejra)
-* [CITYSCAPES Dataset](https://www.cityscapes-dataset.com/)
-* [INSTANCE-LEVEL_HUMAN_PARSING Dataset](https://paperswithcode.com/paper/instance-level-human-parsing-via-part)
-* [RELLIS-3D Dataset](https://github.com/unmannedlab/RELLIS-3D)
diff --git a/docs/_posts/2023-10-11-Week-2.md b/docs/_posts/2023-10-11-Week-2.md
deleted file mode 100644
index 5b3f9eb..0000000
--- a/docs/_posts/2023-10-11-Week-2.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: "Week 2 - Semantic segmentation and Github Pages"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - Semantic segmentation
- - Github pages
- - Cityscapes metrics
-
----
-
-# Objectives
-
-1. Continue adapting DeepLabV3+ with the RELLIS-3D and RUGD datasets.
-2. GitHub Pages.
-3. Perform Semantic Segmentation on Cityscapes dataset.
-4. Study the metrics and models proposed on the Cityscapes dataset website: https://www.cityscapes-dataset.com/benchmarks/
-
----
-
-# Progress This Week
-
-## 1. DeepLabV3+ on Rellis-3D
-
-This week I ran into several problems. I was not able to adapt the DeepLabV3 model to the RELLIS-3D dataset, since it requires image preprocessing that has not yet been applied. I was also unable to successfully create the GitHub Pages sections.
-
-## 2. Github Pages
-
-During today's meeting I met Sergio, who will help me with the development of the TFM. He explained how to manage GitHub Pages by cloning the "docs" folder so that I can publish weekly posts.
-
-## 3. Cityscapes Semantic segmentation
-
-Performed semantic segmentation with DeepLabV3 on Cityscapes dataset.
-
-## 4. Cityscapes metrics
-
-In the studies comparing semantic segmentation models that I have been able to read (Cityscapes among them), in addition to the conventional metrics (F1-score, TP, TN, FP, FN), the IoU (Intersection over Union) metric is usually reported, both per class and per category.
-
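-For reference, per-class IoU can be computed from integer label masks roughly like this (a minimal sketch, not the official Cityscapes evaluation script):
-
-```python
-import numpy as np
-
-def iou_per_class(pred, target, num_classes):
-    """Intersection over Union for each class, given integer label masks."""
-    ious = []
-    for c in range(num_classes):
-        p, t = (pred == c), (target == c)
-        union = np.logical_or(p, t).sum()
-        inter = np.logical_and(p, t).sum()
-        ious.append(inter / union if union else float("nan"))
-    return ious
-```
-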
----
-
-# Next Week Work Planning
-
- 1. Continue adapting DeepLabV3+ and GA-Nav with the RELLIS-3D and RUGD datasets.
- 2. GitHub Pages cloning repository.
- 3. Create post for the current and the last 3 weeks work progress.
-
-
----
-
-# References
-
-* [Cityscapes Metrics](https://www.cityscapes-dataset.com/benchmarks/#instance-level-scene-labeling-task)
diff --git a/docs/_posts/2023-10-19-Week-3-4.md b/docs/_posts/2023-10-19-Week-3-4.md
deleted file mode 100644
index 53f14db..0000000
--- a/docs/_posts/2023-10-19-Week-3-4.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: "Week 3-4: First Rolls"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - github pages
- - Neural Networks
- - Autonomous Driving
- - Computer Vision
- - ROS
- - Gazebo
----
-
-# Summary
-
-Over the past two weeks, I've made significant progress by successfully programming two pilots, named Pilot 1 and Pilot 1.5. These pilots were designed to follow a red line on a racing circuit autonomously. By tweaking the PID controller parameters, I managed to optimize the lap time of the car. Alongside this practical work, I also deepened my understanding of autonomous driving by reading a second master's thesis.
-
-## Progress This Week
-
-### Pilot Development
-
-Last time, I was focused on getting the car to complete a circuit while adhering to a red line. After fine-tuning the PID controller's parameters, Pilot 1 was able to complete the lap in 1:45 minutes without significant oscillations. Although the car was stable, it was relatively slow. This led me to develop Pilot 1.5, which completed the lap in about 1 minute. The challenge here was to minimize oscillations while maintaining a higher speed.
-
-### Speed Optimization
-
-Our next objective is to create Pilot 2.0 with a secondary PID controller for linear speed management. The aim is to maximize speed during straight paths and moderate it during turns.
-
-### Tools and Frameworks
-
-I utilized ROS for implementing the control algorithms and Gazebo for simulation. These tools are instrumental in creating a controlled environment for training and testing the pilots.
-
-### Upcoming Work
-
-I am currently working on Pilot 2.0, which will serve as the basis for generating a dataset containing images and sensor data. This dataset will be used to train our first neural network-based autopilot.
-
----
-
-## References
-
-* [Master's Thesis by Enrique Y. Shinohara Soto](https://gsyc.urjc.es/jmplaza/students/tfm-deeplearning-autonomous_driving-enrique_shinohara-2023.pdf)
-* [Unibotics Academy](https://unibotics.org/academy/)
diff --git a/docs/_posts/2023-11-15-Week-5-6.md b/docs/_posts/2023-11-15-Week-5-6.md
deleted file mode 100644
index 6b9cefd..0000000
--- a/docs/_posts/2023-11-15-Week-5-6.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title: "Week 5-6 - First Set"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - github pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
----
-
-# Summary
-
-Building upon the foundation laid in my initial post, this week was dedicated to developing Pilot 2.0, a more advanced version controlled by two PID controllers – one for linear speed and another for angular speed.
-
-# Progress This Week
-
-I engaged in the practical application of combining ROS2 with our simple circuit simulation in Gazebo. The primary task involved utilizing an ROS2 HAL class to process images from the car's camera and to transmit speed commands, effectively bringing Pilot 2.0 to life.
-
-## Overcoming PID Controller Challenges
-
-The first major hurdle was inverting the existing PID controller logic. In its original form, a larger error increased the speed, which was contrary to the desired functionality. After testing various approaches, including experimenting with a 1/PID formula, I opted for a more straightforward solution:
-
-- Inversion Formula: `maxspeed + minimumspeed - PID`
-
-This method ensured the output remained within the predefined speed limits. For instance, with a max speed of 10 and min speed of 5:
-
-- PID output of 5 resulted in 10 (10 + 5 - 5)
-- PID output of 10 led to 5 (10 + 5 - 10)
-- PID output of 7 yielded 8 (10 + 5 - 7)
-
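-In code, the inversion amounts to something like this (a sketch with hypothetical variable names, not the actual pilot code):
-
-```python
-MAX_SPEED = 10.0
-MIN_SPEED = 5.0
-
-def linear_speed(pid_output):
-    """Map the PID output back into [MIN_SPEED, MAX_SPEED], inverted:
-    a large error (large PID output) yields a low linear speed."""
-    pid_output = max(MIN_SPEED, min(MAX_SPEED, pid_output))  # clamp first
-    return MAX_SPEED + MIN_SPEED - pid_output
-
-assert linear_speed(5) == 10 and linear_speed(10) == 5 and linear_speed(7) == 8
-```
-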
-## Tuning the P, I, D Parameters
-
-Fine-tuning the proportional (P), integral (I), and derivative (D) parameters was a complex task due to the six-variable scenario – three each for the linear and angular PIDs. Despite these challenges, the adjustments paid off, enabling Pilot 2.0 to complete the simulated circuit in just over a minute.
-
-## Data Collection
-
-With the successful implementation of Pilot 2.0, I utilized ROS2 bag to collect data from this experiment. This dataset will be crucial for analysis.
-
-## Analyzing the data
-
-Rosbag is great for recording the datasets; however, we need a format that can be read by PyTorch. For this, there is a solution on Stack Overflow that works quite well.
-
-Starting from there, we can extract the linear and angular speeds and the images associated with them, which is what we need in order to train a neural network.
-
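-The conversion follows the usual rosbag2_py reading pattern (the one from the Stack Overflow answer referenced below). A rough sketch, assuming a sqlite3 bag; topic names and message types depend on the recording:
-
-```python
-import rosbag2_py
-from rclpy.serialization import deserialize_message
-from rosidl_runtime_py.utilities import get_message
-
-def read_bag(bag_path):
-    """Yield (topic, message, timestamp) tuples from a ROS2 bag."""
-    reader = rosbag2_py.SequentialReader()
-    reader.open(
-        rosbag2_py.StorageOptions(uri=bag_path, storage_id="sqlite3"),
-        rosbag2_py.ConverterOptions(input_serialization_format="cdr",
-                                    output_serialization_format="cdr"),
-    )
-    # map topic name -> message type so each record can be deserialized
-    type_map = {t.name: t.type for t in reader.get_all_topics_and_types()}
-    while reader.has_next():
-        topic, raw, stamp = reader.read_next()
-        yield topic, deserialize_message(raw, get_message(type_map[topic])), stamp
-```
-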
-## Starting with PyTorch and PilotNet
-
-Using the example code, we tried to train our first neural network. The first problem we found was that a few things had to be changed in order to run the code without a CUDA-capable Nvidia GPU.
-
-Once the code was adapted, we tried to prepare the data as described above, balance it (there was almost three times more straight-line data than curve data), and train the model. But we found a strange problem: our bag recorded with rosbag somehow seemed to have more pairs of velocities (linear, angular) than it had images, which meant that even though we had the data, we had no idea what the result would be.
-
-# References
-
-* [Stack Overflow - Reading Custom Message Type with ROS2 Bag](https://stackoverflow.com/questions/73420147/how-to-read-custom-message-type-using-ros2bag)
-
diff --git a/docs/_posts/2023-11-30-Week-6-7.md b/docs/_posts/2023-11-30-Week-6-7.md
deleted file mode 100644
index 2c02d09..0000000
--- a/docs/_posts/2023-11-30-Week-6-7.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: "Week 6-7 - A New Beginning with Enhanced Capabilities"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - github pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Summary
-
-A significant upgrade this week: leveraging Black Friday sales, I finally purchased a new PC, promising enhanced performance for my projects. This advancement meant a shift from training neural networks without a dedicated graphics card to a more powerful setup, albeit with some initial reconfiguration required.
-
-# Progress This Week
-
-## Setting Up the New System
-
-The arrival of the new PC marked the start of a technical journey. Firstly, establishing a dual-boot system with Ubuntu, a task that was unexpectedly challenging due to Windows partitioning issues. Once Ubuntu was successfully installed, the necessary software, including ROS2 and PyTorch, was set up, enabling me to resume work on the autonomous driving project.
-
-## Revisiting the Expert Pilot and Data Collection
-
-With the new system operational, I re-initiated the expert pilot program. The data collection process, executed via ROS2 bag, was repeated to ensure compatibility with the upgraded setup. However, an anomaly emerged: the data counts from the "/cam_f1_left/image_raw" and "/cmd_vel" streams were misaligned, an issue warranting further investigation.
-
-## Enhanced PID Tuning
-
-The new PC's capabilities allowed for a more efficient tuning process of the PID parameters. With doubled processing frequency, adjustments that were previously time-consuming were now expedited, facilitating a smoother and more effective tuning phase.
-
-## Data Analysis and Neural Network Training
-
-Adopting the same analytical approach as before, I examined the discrepancies between the number of images and velocity recordings. To mitigate this issue, I filtered the data to include only those instances where both images and velocity data were available. The subsequent neural network training, enhanced by the new graphics card, was significantly faster.
-
-## PilotNet's Trial and Future Directions
-
-Despite the speedier training process, PilotNet's initial tests were not entirely successful. The model struggled to keep the vehicle on track, leading to a collision with a wall. This outcome underscores the need for further refinement and research.
-
-In the coming weeks, my focus will shift to investigating the root cause of the data misalignment and exploring the potential benefits of a larger and more varied dataset, possibly including more laps and different track configurations. The goal is to iteratively improve the model's performance and adaptability.
-
----
diff --git a/docs/_posts/2023-12-14-Week-8-9.md b/docs/_posts/2023-12-14-Week-8-9.md
deleted file mode 100644
index e1a9750..0000000
--- a/docs/_posts/2023-12-14-Week-8-9.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: "Week 8-9 - Finishing first autopilot"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - github pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Summary
-
-After a lot of work, I have solved the mismatched message counts in the dataset and trained a new pilot with it.
-
-# Progress This Week
-
-## Topic frequency
-
-The first thing we checked was the frequency of the topics being recorded with the ros2 bag. What I found was that the image topic was being published at over 14 Hz while the velocity commands were being published by my program at 10 Hz. This explained why I was recording a different number of messages from each topic.
-
-## Adjusting cmd_vel frequency
-
-Given that the topic I had the most control over was the velocity commands, I tried to adjust it. However, with the image being published at such an odd rate (14.1...), it was quite difficult, and even though I managed to get a very similar number of published messages, it was not an elegant solution, so I decided to try other options, keeping this one saved in case I did not find anything better.
-
-## Adjusting image frequency
-
-Looking into the image frequency for the next solution, I found that the frequency of the published image was declared in the F1 car's SDF, which oddly was set to 20 Hz even though it was being published at approximately 14 Hz. Losing nothing by trying, I changed it to 10 Hz so it would match my own topic. This actually worked, and I got exactly the same amount of data from each topic. However, lowering the image frequency made the car perform worse on the circuit, so once again I saved the solution and kept looking for a better one.
-
-For the record, I tried to raise the frequency, but I never found out why it got stuck at 14 Hz instead of the 20 Hz established in the SDF, something that did not happen with my own topic.
-
-## Time stamps and synchronization
-
-The next thing I tried, suggested by my coordinator, was to use the timestamps that ROS2 topics carry. I had tried other things first because this was not trivial, but after some work on my code I was able to take the timestamp of each image and look in the array of velocities for the one closest to it... and it worked!
-
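-The matching itself is simple once both message lists carry timestamps. A sketch, assuming lists of (timestamp, data) tuples with timestamps from the same clock:
-
-```python
-def match_velocities(images, velocities):
-    """Pair each image with the velocity command whose timestamp is closest."""
-    pairs = []
-    for t_img, img in images:
-        _, vel = min(velocities, key=lambda v: abs(v[0] - t_img))
-        pairs.append((img, vel))
-    return pairs
-```
-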
-I was quite happy; after all that work it had proven to be worth it. Feeling inspired, I decided to go a little further and discard some of the images. Now that my computer can process the data faster I can be picky, and I did not want lots of images sharing the same velocities, because if the number of images was greater, the velocities would have to be reused... and that would probably make my trained pilot a lot worse.
-
-So I went ahead and added some time conditions: if the closest velocity was not close enough, it would be considered not to be an answer to the image, and the image would be discarded. Happy with this idea, I tried it and... none of the images were picked. I had overlooked something quite important: the timestamps of the images and the velocities were similar, but they had to be taking their references from different places, because there were a few minutes of difference between them.
-
-I knew from looking into the SDFs and launchers that the images were using the Gazebo simulation time, which looked fine. However, I had not specified which clock my velocities were using. Thinking it would be easy, I imported ROSClock from rclpy.clock and used the code found in some forums to add a concrete timestamp. However, the kind of message I was publishing (Twist) did not accept that. I tried changing it to TwistStamped, but the car was expecting a Twist and did not move, and I did not know how to change that, because it relied on a plugin I did not dare to start touching.
-
-I also found an interesting conversation in the ROS2 forums asking to change the message type used in navigation2 and other packages from Twist to TwistStamped for similar reasons.
-
-However, all the work was not in vain. I had gotten to know the code and the ROS2 topics much better, and I had an idea that I had suggested before but which had been discarded...
-
-## Just doing it
-
-I could not meddle with the image topic to change its frequency, because that lowered the quality of the pilot, and I could not use the timestamps because, honestly, I did not know how to make them actually relate (I now guess I could have calculated the time difference between two images, for example the first two, assigning the first image the first velocity and the second one the velocity corresponding to that time difference...). So I created another topic: every time my program received an image it would copy it, and at the moment of sending the velocities to cmd_vel it would send the last image received to car_image. This way I would ensure the same frequency, and at least I would know that every image was close in time to the velocity sent at the same instant.
-
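-The idea boils down to a small rclpy node: cache the newest camera frame and republish it on a side topic at the same instant each velocity command is sent. A sketch (class and method names are made up; the topic names are the ones mentioned in these posts):
-
-```python
-from rclpy.node import Node
-from sensor_msgs.msg import Image
-from geometry_msgs.msg import Twist
-
-class MirroringPilot(Node):
-    def __init__(self):
-        super().__init__("mirroring_pilot")
-        self.last_image = None
-        self.create_subscription(Image, "/cam_f1_left/image_raw", self.on_image, 10)
-        self.vel_pub = self.create_publisher(Twist, "/cmd_vel", 10)
-        self.img_pub = self.create_publisher(Image, "/car_image", 10)
-
-    def on_image(self, msg):
-        self.last_image = msg   # just cache the newest frame
-
-    def send(self, twist):
-        # publish the velocity and the frame it was computed from together,
-        # so the bag records exactly one image per velocity command
-        self.vel_pub.publish(twist)
-        if self.last_image is not None:
-            self.img_pub.publish(self.last_image)
-```
-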
-After recording a few rosbags on the simple circuit and the many-curves one, I had quite the dataset, with the same amount of images and velocities, each corresponding to the other... It was looking good.
-
-
-## PilotNet
-
-I went to train it again; once more the training finished way too fast, and when I tried it with a program that sent the images to the neural net, asked for the actions, and sent those to cmd_vel, it crashed on the first curve. I had failed, but this time I did not think it was the dataset: I was testing the pilot on a simple circuit, and I had used a many-curves one to ensure there was plenty of data on curves. So I looked again at my trainer and realised that, because my laptop was not nearly as fast, I had greatly reduced the number of epochs, just to be able to check whether things worked. Acknowledging this, I set it to run for 500 epochs, and in just a few minutes (10? 20? I was writing this, so I am not sure how much time passed) I had it trained. I tested again... and you can see the results in this link: [https://youtu.be/zCucXRBVxG4](https://youtu.be/zCucXRBVxG4).
-
----
diff --git a/docs/_posts/2024-01-11-Week-10.md b/docs/_posts/2024-01-11-Week-10.md
deleted file mode 100644
index 5d42f6d..0000000
--- a/docs/_posts/2024-01-11-Week-10.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: "Week 10 - A Few More Pilots"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Summary
-
-I have resumed my Master's Thesis (TFM) after the Christmas break with some tasks left pending.
-
-# Progress This Week
-
-## Objective
-
-This week, on the suggestion of my advisor to get more accustomed to training PilotNets, I aimed to train more networks. Specifically, I wanted to create one specialized in straight lines (the first one I created will now be used as the one specialized in curves, since it was trained on an "only curves" circuit). After that, I planned to train another one to identify whether I'm on a straight or curved part of the circuit and request the prediction from the appropriate network.
-
-## Straight Net
-
-I did not want to use the same dataset, as I wanted there to be a difference when using the straight line expert or the curve expert. To achieve this, I started by adding a condition to my neural pilot: if the angular speed was within a certain range, then we were on a straight line. In such cases, I manually established the speed instead of using the one predicted by the net. With this, I recorded another dataset of around 18,500 pairs of images and velocity commands and trained a 500-epoch net.
-
-This took a while, but the results were worth it. I had a pilot that worked quite decently (the fastest yet) and accelerated significantly on curves.
-
-## Decision Making
-
-Not having yet implemented the decision-making neural net, we created a double neural pilot program that requested predictions from both networks and decided which to use based on the angular speeds. However, this wasn't exactly what I was looking for. Yes, the pilot performed better with the combination than with any of them alone. However, I knew that when the decision was wrong, it didn't matter much because even if the aggressive net was specialized in straight lines, it also knew how to turn... So I did it again, this time with even faster speeds in straight line situations.
-
-## Too Aggressive Net
-
-To save time, resources, and actually divide the job, I wanted the net to only learn how to go on a straight line. To do this, I generated another dataset in a similar way to the first one. However, this time, when preparing the data for training, I made a change: instead of balancing the data as before, I unbalanced it, taking only the data that had low angular speed, indicating that they were taken on a straight line.
-
-With this, and knowing that going straight did not require a too large dataset, I soon had a "2aggressive" net.
-
-## Graphing
-
-Midway through the training of the first aggressive net, I realized I had not implemented a way to evaluate the network so that training would stop when it stopped improving, nor did I have any graphs to show its progress. So, before the second training, I implemented both, and here is the graph of the 2aggressive net:
-
-*(Figure: PilotNet Training Graph)*
-
-## Result and Pending Work
-
-The result is a really fast pilot that detects when it's on a straight line and hands control over to an aggressive net, and when it's on a curve, it gives control to a safe net. This is the best pilot yet. However, we are not done. Right now, I am working on another net that decides which of them to use, a TripleNeuralPilot...
-
----
diff --git a/docs/_posts/2024-01-18-Week-11.md b/docs/_posts/2024-01-18-Week-11.md
deleted file mode 100644
index 0bd6a16..0000000
--- a/docs/_posts/2024-01-18-Week-11.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: "Week 11 - Triple Neural Pilot and Reorganization"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Summary
-
-As stated last week, I finished the pilot that receives an image as input, uses a CNN to decide whether it has to be aggressive or conservative, and then passes the image to the corresponding CNN to pick the command velocities for the car. I have also updated the repository, cleaned it up a bit, and reorganized it.
-
-# Progress This Week
-
-## Objective
-
-This week, as established in last week's post, I intended to finish the triple neural pilot. After my meeting with my tutor, I was asked to organize and update the repository, make clear how each dataset was created and how each neural network was trained, and create an explicit expert pilot usable for re-training the nets of the triple neural pilot.
-
-## Triple Neural Pilot
-
-I actually did this right after finishing the last post, because I was quite hyped up and did not have another exam for almost a week, unlike today. To do it, I generated a dataset with the double neural pilot and altered it during training so the linear speed was 1 or 2 depending on the angular speed. In retrospect this was not very clean: each net has been trained on the previous one plus heavy interference from myself, carrying errors along and making it meaningless to compare whether three nets are really better than just one. That is why we will redo it, starting from a single dataset created by a pilot that actually manages to speed up on straight lines.
-
-However, the results were quite good, managing to finish the circuit in just under a minute: https://youtu.be/BAQ3ZfrG3EQ
-
-## Reorganizing and updating
-
-I cleaned up a lot of things and uploaded my workspace to the repository. I also made it a usable ROS2 workspace, so from now on I will be able to work from there. This was actually helpful, as I had a lot of things there I did not need and some initial tests that were not usable. Just in case, I have saved a backup of everything.
-
-## My datasets and nets
-
-To understand how my datasets and neural networks are related, you can see these tables:
-
-*(Table: My datasets)*
-
-*(Table: My CNNs)*
-
-It may be noticed that the agressive dataset is never used and that there is no second net for the double neural pilot. This is because another model, agressive, existed; it was trained on the dataset with the same name and was used to generate the 2agressive dataset. However, having been replaced by 2agressive, it was deleted.
-
-## Result and Pending Work
-
-The result is a really fast pilot and a renewed, clean repository, with every asset well organized so I can keep going. I would have liked to get more done and actually retrain all the nets with the explicit pilot, but that will have to be next week's work.
-
----
diff --git a/docs/_posts/2024-02-02-Week-12.md b/docs/_posts/2024-02-02-Week-12.md
deleted file mode 100644
index 3c75b30..0000000
--- a/docs/_posts/2024-02-02-Week-12.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: "Week 12 - Happy Birthday to Me"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Summary
-
-At the start of last week I managed to destroy some important Ubuntu system files, which meant that I spent a week reinstalling and re-testing everything. Luckily for me, last time I had followed my coordinator's suggestions and uploaded most things to the repository (datasets and models being the exceptions), so I did not have to redo most of the work. After finishing, I still had a few problems I don't understand, but I managed to move forward and fulfill the objectives set last time.
-
-# Progress This Week
-
-## Objective
-
-As stated in the last post, the objective was to have an expert pilot that accelerated on straight lines. With this pilot we wanted to generate a single dataset to train a Neural and a TripleNeural pilot, in order to compare them and decide whether it was worth dividing the tasks between experts or better to just have a single good general pilot.
-
-## Expert Pilot
-
-For the expert pilot I used the same one I had used for the previous models, but this time I created a classification system with three outputs: on a hard curve it used the speed given by the PID, on an easy curve it multiplied it by 1.3, and on a straight line by 2.
-
-This way we obtained a less precise (it had a bit of inertia and sometimes confused small curves with straight ones) but quite fast pilot that completed the simple circuit (the one we will use for testing for now) in 1 minute and 1 second.
-
-You can see the video here: https://youtu.be/yMTMOGzndFk
-
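-The three-way policy reduces to scaling the PID output by how straight the track looks. A sketch with hypothetical threshold values (the real classification criteria were tuned by hand and are not shown here):
-
-```python
-def expert_linear_speed(pid_speed, angular_speed,
-                        hard_curve=0.5, easy_curve=0.2):  # hypothetical thresholds
-    """Keep the PID speed on hard curves, boost it a bit on easy curves,
-    and double it on straight lines."""
-    if abs(angular_speed) > hard_curve:
-        return pid_speed            # hard curve
-    if abs(angular_speed) > easy_curve:
-        return pid_speed * 1.3      # easy curve
-    return pid_speed * 2.0          # straight line
-```
-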
-## General Neural Pilot
-
-With the expert pilot done, I recorded a dataset on the many-curves circuit, given that curves are the hardest part for any pilot.
-
-Afterwards I trained a pilot with the full set of data, obtaining a General Neural Pilot that was tested on the simple circuit. This pilot was faster and less precise than the general pilot we had before, but this was to be expected, given that the same changes had been observed in the pilots that generated the datasets. It showed safer behaviour, however, and completed the testing circuit in 1 minute and 18 seconds.
-
-You can see the video here: https://youtu.be/4kDps2CMJWM
-
-## Triple Neural Pilot
-
-This pilot works as described in previous posts, with two experts and a selector that chooses which one to ask:
-
-### Curves expert
-
-This expert was trained using the same dataset as the general one, but instead of balancing it, unbalancing it, which means only picking the samples considered to be curves (easy or hard). The result was a net that could complete the circuit alone in 1 minute and 20 seconds. This is similar to the general one, given that the training circuit was mostly curves; however, in the videos you may notice the difference on straight lines.
-
-You can see the video here: https://youtu.be/2WUf97NpPeQ
-
-### Straight lines expert
-
-Similarly to the curves expert, I used the same dataset but fed the net only the straight-line values, obtaining a crazy-for-speed net that could not get past the first curve of the circuit.
-
-You can see the video here: https://youtu.be/hvcO9IyZAx0
-
-### Selector
-
-As last time, the selector was trained with the same dataset, but with the linear speed changed to 1 or 2 depending on the curve criteria used for the other nets, letting it return a value between both numbers. The closer the output is to 1, the straighter it thinks the track is. This means that if I place the threshold at 1.5 it will be fairly accurate, at 1.75 it will tend to accelerate more, and at 1.25 it will try not to accelerate unless it is sure.
-
-## Result
-
-The result (with the threshold at 1.5) was a pilot that completed the testing circuit in 58 seconds, the fastest yet; and although it did not stick to the red line, it clearly followed the circuit.
-
-You can see the video here: https://youtu.be/E39KqsddeQA
-
----
diff --git a/docs/_posts/2024-02-09-Week-13.md b/docs/_posts/2024-02-09-Week-13.md
deleted file mode 100644
index 9c73927..0000000
--- a/docs/_posts/2024-02-09-Week-13.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: "Week 13 - Going for Perfection"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-This week, the goal was to create an Expert Pilot that followed the line perfectly and to replicate the same experiment as last week.
-
-## Expert Pilot
-
-To better follow the line, the PID controller I was using proved inefficient as I couldn't stop the inertia in time. To solve this, I experimented with a classification system. Depending on the angular speed, there were 5 classes with different linear speeds (12, 10, 9, 7, and 5). This approach yielded good results, although not perfect.
-
-You can see the video here: [https://youtu.be/bJKTk8FWSfY](https://youtu.be/bJKTk8FWSfY)
-
-## General Neural Pilot
-
-With the expert pilot completed, I recorded a dataset on the many curves circuit given that the curves are the hardest part for any pilot.
-
-Afterwards, I trained a pilot with the full set of data, achieving a General Neural Pilot similar to last week. However, this pilot had no regard for the central line and did not perform well. I think it might be a data issue, but I am not sure. What was obvious was that it would not accelerate on straight lines.
-
-## Triple Neural Pilot
-
-This pilot operated as described in previous posts, having two experts and a selector to choose which one to consult:
-
-### Curves Expert
-
-Trained as in other Triple Neural Pilots, I obtained an expert on following curves. It could complete the circuit alone, this time much better than the General Neural Pilot.
-
-You can see the video here: [https://youtu.be/yCArSqjyS64](https://youtu.be/yCArSqjyS64)
-
-### Straight Lines Expert
-
-Similarly to the curves expert, I used the same dataset but only fed the network with the straight line values. However, this time it accelerated at the start but stopped doing so at some point, making it ineffective as a straight lines expert.
-
-### Selector
-
-The selector functioned as always, and it doesn't really change much.
-
-## Result
-
-The results were discouraging. The Expert Pilot seemed to work fine, but the General Neural Pilot was a disaster, the Straight Lines expert was inadequate, and thus the Triple Pilot was ineffective.
-
-I think this might be due to the classification system for linear speed, making it difficult for the neural network to find the right values. However, I am not sure and will continue testing different approaches.
-
-All in all, it has been a really busy and challenging week, and I hope to have better results next time.
-
----
\ No newline at end of file
diff --git a/docs/_posts/2024-02-22-Week-14.md b/docs/_posts/2024-02-22-Week-14.md
deleted file mode 100644
index 58ff3e1..0000000
--- a/docs/_posts/2024-02-22-Week-14.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: "Week 14 - Changing everything"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-This time, instead of focusing on creating new pilots, we are going to focus on how we create them and what we know about them.
-
-It is worth mentioning that we slightly improved the expert pilot and generated a new rosbag.
-
-## Data analysis
-
-First I refined the graphs generated from the recorded rosbags. The current dataset has this distribution:
-
-*(Figure: My datasets)*
-
-Looking at it, I realized the amount of curve data for each side was different, so I balanced the data:
-
-*(Figure: My datasets, after balancing)*
-
-With this I trained a pilot to test it; I won't bother with videos or graphs because these pilots have been discarded for now.
-
-## Training
-
-One of the many problems I have been having was that the trainer only supported small datasets. This was because I was opening the rosbag and all the data at once and then splitting it for training, so I had thousands of images open at the same time, which caused a crash. I was also balancing and augmenting the data right before training (not storing it, just processing the data straight from the rosbag), which is bad practice and also makes things harder for the PC.
-
-In summary: I was doing the training all wrong and had to redo it.
-
-Most of the code used is again not mine but from https://github.com/JdeRobot/DeepLearningStudio with some alterations to fit my needs.
-
-### Storing the data
-
-This was not too hard, given that I had already worked a lot on reading the data from the rosbag. All I needed was to store the velocity commands in a CSV, which was easy to create, and save the images. To feed the neural net properly, I cropped the images, cutting off the top half, which had no relevant information.
-
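-In practice this is little more than writing one cropped image and one CSV row per sample. A sketch (the file names and the exact crop are assumptions):
-
-```python
-import csv
-import os
-import cv2
-
-def store_dataset(out_dir, samples):
-    """samples: iterable of (image, v_lin, v_ang) tuples, with OpenCV images."""
-    os.makedirs(out_dir, exist_ok=True)
-    with open(os.path.join(out_dir, "labels.csv"), "w", newline="") as f:
-        writer = csv.writer(f)
-        for idx, (image, v_lin, v_ang) in enumerate(samples):
-            cropped = image[image.shape[0] // 2:, :]   # drop the top half
-            name = f"{idx:06d}.png"
-            cv2.imwrite(os.path.join(out_dir, name), cropped)
-            writer.writerow([name, v_lin, v_ang])
-```
-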
-### Training the PilotNet
-
-First I had to generate the Dataset by loading all the velocity commands and images and applying preprocessing and data augmentation as needed. A useful trick in this particular case is to flip the images around their vertical axis and negate the angular velocity. In other cases this might not work, but in these simple circuits, where the only relevant information is the red line in the center, it actually does (see the sketch below).
-
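-The flip is the cheapest augmentation available here: mirror the frame and negate the angular velocity, so every left curve also yields a right curve. A sketch assuming OpenCV images and (linear, angular) labels:
-
-```python
-import cv2
-
-def flip_sample(image, v_lin, v_ang):
-    """Mirror the frame horizontally; the angular velocity changes sign,
-    the linear velocity stays the same."""
-    return cv2.flip(image, 1), v_lin, -v_ang
-```
-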
-After that, not much differs in selecting the loss criterion, the optimizer, splitting the data into training and validation...
-
-And so I started my first pilot. It went backward; I restarted, and it went forward... It was inconsistent, but always at really small speeds. It made no sense... until I realized I was giving it the image without cropping, so I cropped it and tried again. Same result.
-
-I checked my CSV, all correct; the data I was loading, all correct; the dataset... The way I was flipping the data was changing things I did not want it to change, so I commented it out (I intend to go back to it when the rest works) and tried again... No change.
-
-I kept tracing and checking things and could find nothing. I can't say how much time I spent not knowing why my net not only did not improve but was inconsistent from the start, its outputs seemingly completely random... Well, it turns out they were: I was not loading the actual saved net.
-
-With that fixed, I was able to train a pilot that went forward. It still doesn't work well, and it is not turning at the moment, but at least it doesn't seem random.
-
----
\ No newline at end of file
diff --git a/docs/_posts/2024-03-01-Week-15.md b/docs/_posts/2024-03-01-Week-15.md
deleted file mode 100644
index 71243de..0000000
--- a/docs/_posts/2024-03-01-Week-15.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "Week 15 - No Improvement"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-The objective was to achieve a well-functioning neural pilot. Although most attempts started well, with the vehicle taking curves correctly, it would suddenly stop turning and crash into the wall.
-
-## Data
-
-To improve the pilot, I acquired more data for training. It is important to note that the amount of data doubled due to data augmentation. Specifically, images were flipped along their vertical axis, and the angular speed was inverted. This not only doubled the data but also ensured a perfect balance between right and left curves. Training without this augmentation did not yield better results.
-
-We have three different datasets:
-
-### Many Curves
-
-This main dataset was taken on a circuit with many curves to both sides. It consists of 3522 pairs of images and velocity commands. The data distribution is as follows:
-
-*(Figure: Dataset Overview)*
-
-### Simple Circuit
-
-This dataset was generated on the circuit where we intend to test and compare the models. It has mainly been used to test the neural network after training, but also for training in some attempts. It consists of 1338 pairs of images and velocity commands. The data distribution is as follows:
-
-*(Figure: Dataset Overview)*
-
-### Montreal Circuit
-
-This dataset was generated on the Montreal circuit and consists of 1338 pairs of images and velocity commands. The data distribution is as follows:
-
-*(Figure: Dataset Overview)*
-
-### Montmelo Circuit
-
-This dataset was generated on the Montmelo circuit. It has not been used because the expert pilot did not perform consistently well. Furthermore, if we refer to past general models, we should not need additional data to make it work.
-
-## Training
-
-I have made numerous attempts, changing the learning rate and the number and combination of datasets. However, I observed a tendency for the loss to plateau between 1 and 0.7 and not decrease further. This pattern is evident in the training curves:
-
-*(Figure: Training Curve)*
-
-*(Figure: Improved Training Curve)*
-
-The second curve is better but not satisfactory. In both cases, it can be seen that from epoch 50 onwards, there appears to be a horizontal asymptote.
-
----
\ No newline at end of file
diff --git a/docs/_posts/2024-03-08-Week-16.md b/docs/_posts/2024-03-08-Week-16.md
deleted file mode 100644
index 3bad272..0000000
--- a/docs/_posts/2024-03-08-Week-16.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: "Week 16 - Improving and Testing"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-The objective was to achieve a well-functioning neural pilot. Although not much progress has been made, I feel I am closer to finding a solution.
-
-## Offline Testing
-
-The first thing I did was conduct offline testing, feeding the code a dataset. It ran the images through the network and compared outputs. Quite surprisingly, every network performed quite well on this test, even though the dataset I am using was recorded on the same circuit where I am testing them, and they are failing. The network I had from last week resulted in this graph:
-
-*(Figure: Dataset Overview)*
-
-## Normalization
-
-To get better networks in less time, I implemented normalization based on the max and min speeds of the expert pilot. I think the angular max speed was never reached, but this should mean only that instead of being compressed between 0 and 1, the data would be compressed between 0.25 and 0.75, which, given the amount of decimals we work with, should give no problems to the denormalization.
-
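-The normalization itself is a plain min-max mapping from the expert pilot's speed range into [0, 1], undone after inference. A sketch with hypothetical limits:
-
-```python
-V_LIN_MIN, V_LIN_MAX = 0.0, 12.0    # hypothetical linear speed limits
-V_ANG_MIN, V_ANG_MAX = -3.0, 3.0    # hypothetical angular speed limits
-
-def normalize(value, lo, hi):
-    return (value - lo) / (hi - lo)      # label stored in [0, 1] for training
-
-def denormalize(norm, lo, hi):
-    return norm * (hi - lo) + lo         # inverse mapping after inference
-```
-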
-This proved correct after I tested it offline (adding denormalization in the offline test and in the Neural Pilot):
-
-*(Figure: Dataset Overview)*
-
-It trained much faster this way, as can be appreciated in the training graph. This one was exaggerated, but in other trainings, it was appreciated too:
-
-*(Figure: Dataset Overview)*
-
-However, this one did not work either, crashing into the wall.
-
-## Affine Transformation
-
-Given that all worked greatly in offline testing and none worked in the real scenario, I assumed that the Neural Pilot was encountering itself with an unknown situation (maybe using less speed in straight lines but similar angular speeds it found itself a bit far off the line). To solve this, I used affine transformation, which allowed me to translate the images on the horizontal axis while maintaining the center of the image, giving the impression that the car was a bit outside the line:
-
-*(Figure: Dataset Overview)*
-
-This was done before flipping the image so every affined image would also get a flipped image, increasing the amount of data up to 14 thousand. When translating the image, the angular speed was increased proportionally to the translation.
-
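-The translation is a 2x3 affine matrix applied with OpenCV, with the label corrected in proportion to the shift. A sketch (the correction gain is a made-up constant):
-
-```python
-import cv2
-import numpy as np
-
-def shift_sample(image, v_ang, tx, gain=0.01):
-    """Translate the frame tx pixels along the horizontal axis, as if the car
-    were off the line, and increase the angular velocity proportionally."""
-    h, w = image.shape[:2]
-    M = np.float32([[1, 0, tx], [0, 1, 0]])     # pure horizontal translation
-    return cv2.warpAffine(image, M, (w, h)), v_ang + gain * tx
-```
-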
-The training went fast, maybe it could have gotten better with more epochs:
-
-*(Figure: Dataset Overview)*
-
-And the offline test was great too:
-
-*(Figure: Dataset Overview)*
-
-However, in the real training, it performed better to start, but it ended up crashing. Adding this experience to the previous ones, I had the feeling that it has a problem turning left. At first, I thought it might be a problem in the denormalization. But if that were the case, I assume I would have seen it in the offline test as well.
-
-The result can be seen in: https://youtu.be/7FUQ7VWs6tQ
-
-I let it go after crashing because it turns right correctly, but it does not turn left. Every other attempt this week crashed into the right side too.
-
----
\ No newline at end of file
diff --git a/docs/_posts/2024-03-22-Week-17-18.md b/docs/_posts/2024-03-22-Week-17-18.md
deleted file mode 100644
index 1b9a2d9..0000000
--- a/docs/_posts/2024-03-22-Week-17-18.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: "Week 17-18 - Driving again"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-The objective was to achieve a well-functioning neural pilot, which was finally achieved.
-
-## Trying External Pilot
-
-After implementing normalization and affining in the preprocessing of the data, getting great training and offline graphs, and still not getting a working pilot, I needed something else to find where the problem was.
-
-In order to do this, I asked for a pilot and some datasets from another student ahead of me, [Alejandro Moncalvillo](https://github.com/RoboticsLabURJC/2022-tfg-alejandro-moncalvillo/tree/main). With his model, one I knew had to work, I was able to find some problems in the Neural Pilot, the biggest one regarding color. After solving this and training again, I got a model that worked, but only most of the time. One out of every three laps (first one included), it would crash in the first curve.
-
-Knowing now what I had to work on, I proceeded to the next step of this external approach.
-
-## Training with External Data
-
-I decided to start training with the data the external pilot had been trained on. This was initially challenging because we had different approaches to recording the datasets, so I needed some alterations in order to actually start training. For this, every step was traced, especially the image management; the images had to remain consistent with what the Neural Pilot was going to receive.
-
-After some failures, I tried training it with data from quite a lot of datasets from different circuits. This was actually easier than I thought thanks to some Python magic, and I ended up training with a dataset containing 56,500 samples before any preprocessing, doubling that amount with the flipping augmentation.
-
-The result might be seen here: [https://youtu.be/dum_p3uuFuk](https://youtu.be/dum_p3uuFuk)
-
-After that, I tried to add affine, which resulted in a pilot that failed quite a lot: [https://youtu.be/UvbVKGQogVk](https://youtu.be/UvbVKGQogVk)
-
-However, affining made it more complex and doubled the data; maybe I could make it find a good model faster using normalization, which, as seen before, obtained great results in accelerating training.
-
-The result was this: [https://youtu.be/a7uMsLUp8L0](https://youtu.be/a7uMsLUp8L0)
-
-Every training was done with 100 epochs.
-
-## For the Future
-
-Given the differences between datasets, I did not perform offline testing and did not include graphics because they were basically the same as in other posts.
-
-However, next time I will be using my own datasets and documenting everything properly.
-
-I also attempted to do the preprocessing during training instead of during data loading, but it proved incredibly difficult given that most of the preprocessing did not change the image but created another image with different characteristics.
-
----
diff --git a/docs/_posts/2024-04-05-Week-19.md b/docs/_posts/2024-04-05-Week-19.md
deleted file mode 100644
index 8d7df81..0000000
--- a/docs/_posts/2024-04-05-Week-19.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-title: "Week 19 - Neural Pilot 2.2"
-image: ../assets/images/logo.png
-categories:
- - Weekly Log
-tags:
- - GitHub Pages
- - ROS2
- - Gazebo
- - PID Controllers
- - Autonomous Driving
- - Neural Networks
- - Machine Learning
----
-
-# Progress This Week
-
-## Objective
-
-The objective was to achieve a well-functioning neural pilot, this time with my own datasets.
-
-## Summary
-
-With all the tests and improvements done in the last few weeks and more data for the training of the nets, I ran a final batch of Neural Pilots.
-
-In order to be able to compare them, all of them were trained with the same data (15534 pairs of labels and images) and parameters:
-- num_epochs = 100
-- batch_size = 128
-- learning_rate = 1e-3
-- val_split = 0.1
-- shuffle_dataset = True
-- save_iter = 50
-- random_seed = 471
-
-The only exception was the data processing.
-
-## Expert Pilot
-
-The pilot which generated the datasets can be watched here: [https://youtu.be/um7ybjK6Sj0](https://youtu.be/um7ybjK6Sj0)
-
-## Without Processing
-
-Training graph:
-
-*(Figure: Training Graph)*
-
-Offline test:
-
-*(Figure: Offline Test)*
-
-Result ([https://youtu.be/voo4HVciFUY](https://youtu.be/voo4HVciFUY)): It goes surprisingly smoothly, but when it drifts a bit away from the line, it no longer knows what to do. This shows the dataset doesn't have enough data about these extreme cases. This surprised me a bit, because one of the datasets, Montmelo, was generated in a run I thought was quite bad, since the Expert Pilot kept stepping off the line and having to find its way back. However, this is exactly why I implemented the affine transformation in my trainings, so let's check it out.
-
-## Affine
-
-In my previous attempt with someone else's data, the affine did not help much and ended up being detrimental. However, after a few adjustments, we got a useful tool.
-
-Training graph:
-
-*(Figure: Training Graph)*
-
-Offline test:
-
-*(Figure: Offline Test)*
-
-Result ([https://youtu.be/oLDNV-7qnoc](https://youtu.be/oLDNV-7qnoc)): With this, I finally achieved a pilot generated completely by myself (in this second round of code; I had working ones before) that worked quite well.
-
-## Normalization
-
-Training graph:
-
-*(Figure: Training Graph)*
-
-Offline test:
-
-*(Figure: Offline Test)*
-
-Result ([https://youtu.be/2d9Fo25WvdE](https://youtu.be/2d9Fo25WvdE)): As it could be expected, it accelerated the training, but ended up with a similar behavior to the neural pilot without processing.
-
-## Affine + Normalization
-
-Training graph:
-
-*(Figure: Training Graph)*
-
-Offline test:
-
-*(Figure: Offline Test)*
-
-Result ([https://youtu.be/4V87uoV7UQI](https://youtu.be/4V87uoV7UQI)): The training was really similar to the affine one, but the result was worse, although still better than without it. It may be that compressing the data makes the net make more mistakes, and since the angular velocity range is not that big (-1.5 to 1.5), those little mistakes end up being big ones.
-
-## Conclusion
-
-Given the great results the affine has brought, I should keep using it. On the other hand, having a powerful computer keeps the trainings to roughly half an hour each, which means accelerating them with normalization is not necessary, and since it has proved detrimental to the results, I will park it for now unless given other instructions.
-
----