Commit: Semana 16

Adrimapo committed Mar 7, 2024
1 parent fe593b7 commit 1fffebf

Showing 16 changed files with 294 additions and 201 deletions.
12 changes: 8 additions & 4 deletions README.md
@@ -10,11 +10,12 @@ Versions
Ros Humble
Gazebo 11.10.2 -->

<!-- Instalación de tmux -->
<!-- Tmux installation -->

<!-- ---
--- -->
## Manually

1. To launch only the world

@@ -30,13 +31,14 @@ ros2 ros2 launch drone_driver expertPilot.launch.py out_dir:=TRACES_DIR trace:=B
## Automation scripts

### Dataset generator

1. Go to the utils directory

```bash
cd PATH_TO_WORKSPACE/src/drone_driver/utils
```

2. Open a tmux sesion
2. Open a tmux session

```bash
tmux
```

@@ -47,19 +49,21 @@
```bash
./generateDataset.sh TIME_RECORDING DATASET_PATH
```

* **Time recording:** Int -> Time spent recording the drone's topics.
* **Dataset path:** String -> Path where the dataset and profiling data will be saved.

---

### Training script

1. Go to the utils directory

```bash
cd PATH_TO_WORKSPACE/src/drone_driver/utils
```

2. Open a tmux sesion
2. Open a tmux session

```bash
tmux
```

@@ -73,4 +77,4 @@

* **Dataset path:** String -> Path where the dataset is located.
* **Profiling path:** String -> Path where the profiling data is located.
* **Model path:** String -> Path where the network model will be saved.
39 changes: 39 additions & 0 deletions docs/_posts/2024-03-07-Semana-16.md
@@ -0,0 +1,39 @@
---
title: "Semana 16 - Final simulation training"
categories:
  - Weekly Log
tags:
  - ROS2
  - Machine Learning
---

## Index

- [Index](#index)
- [Main training](#main-training)
- [Improvement](#improvement)

---

## Main training

Another expert pilot was created to be more stable and faster. However, after numerous tests the results remained consistent, although the automation made testing significantly faster.

Several tests were carried out:

1. **Increasing the number of augmented labels:** The solution took much longer to converge, resulting in several hours of training without reaching an optimal solution.

2. **Lowering the learning rate or batch size:** This had little effect, although it was observed that with an excessively large batch size the drone did not learn properly.

3. **Changing the labels:** The labels were redistributed more effectively thanks to the plots from previous weeks, which improved the results, but not enough (a rough sketch of the bucketing idea follows this list).

4. **Lower loss:** At the end of training, the lowest loss was kept, since there did not appear to be any overfitting and the network would learn better that way.
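
As a rough illustration of the relabeling idea above (a sketch only, not the project code: `bucket_index` and the sample values are invented here, while the thresholds are the angular ones introduced this week in `data.py`, following its `label < threshold` convention):

```python
ANGULAR_THRESHOLDS = [-0.5, -0.2, 0, 0.2, 0.5, float('inf')]  # label < threshold

def bucket_index(angular_vel, thresholds=ANGULAR_THRESHOLDS):
    """Return the index of the first threshold bin the label falls under."""
    for i, upper in enumerate(thresholds):
        if angular_vel < upper:
            return i
    return len(thresholds) - 1

# Example: -0.42 falls in bin 1, 0.18 in bin 3, 0.73 in the last (open-ended) bin.
print([bucket_index(v) for v in (-0.42, 0.18, 0.73)])  # -> [1, 3, 5]
```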

---

## Improvement

On reflection, I realized that training a neural network with a single expert pilot is not entirely optimal: the most it could achieve would be to imitate that pilot perfectly and follow its steps, whereas what we are after is an improvement.

That is why it occurred to me to configure the parameters of each expert pilot so that it is optimal on its circuit, obtain the dataset from each expert pilot, and then train the same network with the images generated by all of them.
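
A minimal sketch of that idea, assuming hypothetical helpers (`load_samples`, `train`) and invented paths that are not part of the repository:

```python
import random

def merge_datasets(per_pilot_samples):
    """Concatenate the (image, velocity) samples recorded with each expert pilot."""
    merged = [sample for samples in per_pilot_samples for sample in samples]
    random.shuffle(merged)  # mix circuits so the network does not see one pilot at a time
    return merged

# Usage sketch (names and paths are placeholders):
# samples_a = load_samples("datasets/pilot_circuit_a")
# samples_b = load_samples("datasets/pilot_circuit_b")
# training_set = merge_datasets([samples_a, samples_b])
# train(model, training_set)
```
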
106 changes: 52 additions & 54 deletions src/drone_driver/include/data.py
@@ -1,8 +1,5 @@
#!/usr/bin/env python3

# Execute as => python3 train.py --n network2.tar

from rosbags.rosbag2 import Reader as ROS2Reader
import torch
import matplotlib.pyplot as plt
import random
@@ -17,12 +14,13 @@
import os
from PIL import Image

ANGULAR_UMBRALS = [-0.3, -0.1, 0, 0.1, 0.3, float('inf')] # label < umbral
LINEAR_UMBRALS = [5, 5.5, float('inf')]
ANGULAR_UMBRALS = [-0.5, -0.2, 0, 0.2, 0.5, float('inf')] # label < umbral
LINEAR_UMBRALS = [4.5, 5.5, float('inf')]

DATA_PATH = "../training_dataset"
LOWER_LIMIT = 0
UPPER_LIMIT = 3

USE_WEIGHTS = True # 0 false 1 false 2 True
USE_AUGMENTATION = True # 0 false 1 true 3 True

def get_image_dataset(folder_path):
images = []
@@ -48,7 +46,7 @@ def get_image_dataset(folder_path):
images.append(image_array)

except FileNotFoundError:
print("Error: Carpeta no encontrada.")
print("Error: File not found.")

return images

@@ -74,19 +72,19 @@ def get_vels(folder_path):
content = file.read()
numbers = [float(num) for num in content.split(',')]

vels.append([numbers[0], numbers[1]*10])
vels.append([numbers[0], numbers[1]*3])

except FileNotFoundError:
print("Error: Carpeta no encontrada.")
print("Error: File not found.")

return vels

class rosbagDataset(Dataset):
def __init__(self, main_dir, transform, boolAug=False, dataAument=1) -> None:
def __init__(self, main_dir, transform, boolAug=USE_AUGMENTATION, dataAugment=1) -> None:
self.main_dir = main_dir
self.transform = transform
self.minSamples = 25
self.dataAument = dataAument
self.minSamples = 10
self.dataAugment = dataAugment

self.imgData = get_image_dataset(main_dir + "/frontal_images")
self.velData = get_vels(main_dir + "/labels")
@@ -102,7 +100,7 @@ def get_dataset(self):
dataset = list(((self.imgData[i], self.velData[i]) for i in range(len(self.velData))))

original_size = len(dataset)
new_size = int(original_size * self.dataAument)
new_size = int(original_size * self.dataAugment)

for _ in range(new_size - original_size):
randIndex = random.randint(0, original_size - 1)
@@ -115,72 +113,73 @@ def get_dataset(self):
def __getitem__(self, item):
device = torch.device("cuda:0")

# Applys augmentation
# Apply augmentation
if self.applyAug:
image_tensor = torch.tensor(self.dataset[item][0]).to(device)
image_tensor, vel_tensor = self.applayAugmentation(device, image_tensor, self.dataset[item][1])
image_tensor, vel_tensor = self.applyAugmentation(device, image_tensor, self.dataset[item][1])

# Not applys augmentation
# Not apply augmentation
else:
image_tensor = self.transform(self.dataset[item][0]).to(device)
vel_tensor = torch.tensor(self.dataset[item][1]).to(device)

return (vel_tensor, image_tensor)


def applayAugmentation(self, device, imageTensor, velocityValue):
mov = 10
def applyAugmentation(self, device, imgTensor, velocityValue):
mov = 2

# Apply color augmentation
augmented_image_tensor = self.colorAugmentation()(image=imgTensor.cpu().numpy())['image']
imageTensor = torch.tensor(augmented_image_tensor).clone().detach().to(device)

# Aplys color augmentation
augmented_image_tensor = self.colorAugmeentation()(image=imageTensor.cpu().numpy())['image']
imgTensor = torch.tensor(augmented_image_tensor).to(device)

randNum = random.randint(0, 7)

# Flips the image so the angular will be in opposite way
if randNum == 3 or randNum == 4: # 2 / 8 chance
imgTensor = torch.flip(imgTensor, dims=[-1])
imageTensor = torch.flip(imageTensor, dims=[-1])
velTensor = torch.tensor((velocityValue[0], -velocityValue[1])).to(device)

# Horizontal movement
elif randNum == 2: # 1 / 8 chance
elif randNum == 20: # 1 / 8 chance
randMove = random.randint(-mov, mov)
imgTensor = F.affine(imgTensor, angle=0, translate=(randNum, 0), scale=1, shear=0)
imageTensor = F.affine(imageTensor, angle=0, translate=(randNum, 0), scale=1, shear=0)
velTensor = torch.tensor((velocityValue[0], velocityValue[1] + randMove/(mov*2))).to(device)

# Vertical movement
elif randNum == 1: # 1 / 8 chance
elif randNum == 10: # 1 / 8 chance
randMove = random.randint(-mov, mov)
imgTensor = F.affine(imgTensor, angle=0, translate=(0, randMove), scale=1, shear=0)
imageTensor = F.affine(imageTensor, angle=0, translate=(0, randMove), scale=1, shear=0)
velTensor = torch.tensor((velocityValue[0] - randMove/(mov*1.5), velocityValue[1])).to(device)

else: # No changes # 4 / 8 chance
velTensor = torch.tensor(velocityValue).to(device)

return imgTensor, velTensor
finalImgTensor = self.transform(imageTensor.cpu().numpy()).to(device)
return finalImgTensor, velTensor

def colorAugmeentation(self, p=0.5):
def colorAugmentation(self, p=0.5):
return A.Compose([
A.HorizontalFlip(p=p),
A.RandomBrightnessContrast(),
A.Normalize(),
ToTensorV2(),
A.Normalize()
])


def subSample(self, label, percent):
toDel = int(len(label) * percent)
subsampled = random.sample(label, len(label) - toDel)
subsample = random.sample(label, len(label) - toDel)

return subsampled
return subsample

def getLinearSubset(self, dataset, lowerBound, upperBound):
linearSub = [(img, vel) for img, vel in dataset if lowerBound < vel[0] <= upperBound]

return linearSub

def getAngularSubset(self, dataset, lowerBound, upperBound):
angularSub = [(img, vel) for img, vel in dataset if lowerBound < vel[1]/10 <= upperBound]
angularSub = [(img, vel) for img, vel in dataset if lowerBound < vel[1]/3 <= upperBound]

# Gets the linear subsets
angularLabeled = [self.getLinearSubset(angularSub, float('-inf'), LINEAR_UMBRALS[0])]
@@ -192,14 +191,14 @@ def getAngularSubset(self, dataset, lowerBound, upperBound):


def balancedDataset(self):
useWeights = False
useWeights = USE_WEIGHTS
# < 5 5.5 inf
weights = [(0.80, 0.0, 0.0), # < -0.6
(0.90, 0.7, 0.4), # < -0.3
(0.50, 0.5, 0.99), # < 0
(0.50, 0.5, 0.99), # < 0.3
(0.80, 0.7, 0.4), # < 0.6
(0.90, 0.0, 0.0)] # < inf
weights = [(0.99, 0.3, 0.2), # < -0.6
(0.80, 0.8, 0.4), # < -0.3
(0.60, 0.7, 0.99), # < 0
(0.60, 0.7, 0.99), # < 0.3
(0.80, 0.8, 0.4), # < 0.6
(0.99, 0.3, 0.2)] # < inf


# Gets all the subsets
@@ -216,10 +215,10 @@ def balancedDataset(self):
if len(linLabeled) > maxSamples:
maxSamples = len(linLabeled)

# Auments the data
maxSamples = int(maxSamples * self.dataAument)
# Augments the data
maxSamples = int(maxSamples * self.dataAugment)

# Balances all the clases
# Balances all the classes
balancedDataset = []

for i, angLabeled in enumerate(labeledDataset):
@@ -231,7 +230,7 @@

if currentSubsetLen >= self.minSamples:

# Checks if thers enought samples
# Checks if there's enough samples
if currentSubsetLen < maxSamples:

# If the current length is less than maxSamples, we replicate the elements
@@ -243,7 +242,7 @@
# If the current length is greater than or equal to maxSamples, we subsample
balancedSubset = random.sample(currentSubset, maxSamples)

# Subsamples if it is
# Subsample
if useWeights:
# Adjusts to weights
balancedSubset = self.subSample(balancedSubset, 1 - weights[i][j])
Expand All @@ -260,7 +259,7 @@ def balancedDataset(self):
for data in linLabeled:
rawDataset.append(data)


self.dataset = rawDataset
return rawDataset


@@ -272,19 +271,18 @@ def balancedDataset(self):

if __name__ == "__main__":

# Crear una instancia de tu conjunto de datos
dataset = rosbagDataset(main_dir=DATA_PATH, transform=dataset_transforms, boolAug=True, dataAument=2)
# Creates data instance
dataset = rosbagDataset(main_dir=DATA_PATH, transform=dataset_transforms, boolAug=True, dataAugment=2)

# Mostrar algunas imágenes y sus velocidades asociadas
for i in range(15): # Mostrar las primeras 5 imágenes
# Shows some images with the augmentation
for i in range(15):
idx = random.randint(0, len(dataset) - 1)
velocity, image = dataset[idx]

# Deshacer el cambio de forma y la normalización para mostrar la imagen correctamente
image = image.permute(1, 2, 0).cpu().numpy()
image = (image * 0.5) + 0.5 # Desnormalizar
image = (image * 0.5) + 0.5

# Mostrar la imagen y su velocidad asociada
# Shows the image and the associated velocity
plt.imshow(image)
plt.title(f"Velocidad: {velocity}")
plt.show()
2 changes: 1 addition & 1 deletion src/drone_driver/include/models.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python3waas
#!/usr/bin/env python3
from torch import nn


6 changes: 3 additions & 3 deletions src/drone_driver/launch/as2_sim_circuit.launch.py
@@ -38,7 +38,7 @@ def generate_launch_description():
name="gazebo",
additional_env=env_vars,

# Closes the tmux sesion on gazebo exit
# Closes the tmux session on gazebo exit
on_exit=[
OpaqueFunction(
function=exit_process_function,
@@ -60,10 +60,10 @@
)

tmuxAttach = ExecuteProcess(
# Aerostack2 depuration
# Aerostack2 terminal
# cmd=['gnome-terminal', '--', 'tmux', 'attach-session', '-t', namespace],

# No aditional window
# No additional window
cmd=['tmux', 'attach-session', '-t', namespace],
name="attach",
)