
Commit

Merge branch 'main' into 383-planning-evaluate-and-document-motion-planning
niklasr22 authored Nov 2, 2024
2 parents df2d0a0 + 2ea043f commit a707d15
Showing 9 changed files with 15,544 additions and 5 deletions.
340 changes: 340 additions & 0 deletions doc/assets/planning/planning_structure.drawio

Large diffs are not rendered by default.

Binary file added doc/assets/planning/planning_structure.png
Binary file added doc/assets/research_assets/node_path_ros.png
10,221 changes: 10,221 additions & 0 deletions doc/assets/research_assets/rosgraph.svg
4,669 changes: 4,669 additions & 0 deletions doc/assets/research_assets/rosgraph_leaf_topics.svg
14 changes: 9 additions & 5 deletions doc/planning/README.md
@@ -1,14 +1,16 @@
# Planning Wiki

![Planning](../assets/planning/planning_structure.png)

## Overview

### [Preplanning](./Preplanning.md)
### [OpenDrive Converter (preplanning_trajectory.py)](./Preplanning.md)

Preplanning is very close to the global plan. The challenge of the preplanning focuses on creating a trajectory out of
This module focuses on creating a trajectory out of
an OpenDrive map (ASAM OpenDrive). As input it receives an xodr file (OpenDrive format) and the target points
from the leaderboard with the corresponding actions. For example, action number 3 means: drive through the intersection.

### [Global plan](./Global_Planner.md)
### [Global Planning (PrePlanner)](./Global_Planner.md)

The global planner is responsible for collecting and preparing all data from the leaderboard and other internal
components that is needed for the preplanning component.
@@ -28,7 +30,9 @@ decision tree, which is easy to adapt and to expand.

### [Local Planning](./Local_Planning.md)

The Local Planning component is responsible for evaluating short term decisions in the local environment of the ego vehicle. It containes components responsible for detecting collisions and reacting e. g. lowering speed.
The local planning also executes behaviors e. g. changes the trajectory for an overtake.
This module includes the nodes ACC, CollisionCheck, and MotionPlanner.

The Local Planning package is responsible for evaluating short-term decisions in the local environment of the ego vehicle. It contains components responsible for detecting collisions and reacting, e.g. by lowering speed.
The local planning also executes behaviors, e.g. changing the trajectory for an overtake.

![Overtake](../assets/planning/Overtake_car_trajectory.png)
183 changes: 183 additions & 0 deletions doc/research/paf24/general/current_state.md
@@ -0,0 +1,183 @@
# Current state of the simulation

**Summary:** The current state of the simulation is assessed with three runs of 20 minutes each (real-world time), during which all mistakes and anomalies are written down.

- [Goal](#goal)
- [Methodology](#methodology)
- [Observed Errors Grouped by Domains](#observed-errors-grouped-by-domains)
- [Infrastructure](#infrastructure)
- [Testing and Validation](#testing-and-validation)
- [Perception](#perception)
- [Localization and Mapping](#localization-and-mapping)
- [Decision-Making](#decision-making)
- [Path Planning](#path-planning)
- [Control](#control)
- [Raw notes](#raw-notes)
- [Run 1](#run-1)
- [Run 2](#run-2)
- [Run 3](#run-3)

## Goal

In order to understand the current state of the agent, it is crucial to assess the status quo and note the challenges it faces.

## Methodology

This assessment was done with three leaderboard runs in the CARLA simulator using the handover state for PAF24. During the runs, all mistakes made by the agent were noted, as well as any anomalies occurring during the inspection.
After the review, the mistakes were grouped by the roles defined in the project to make it easier to address the challenges in the respective domains. **Note:** Some mistakes overlap, and communication is key when tackling these issues.

## Observed Errors Grouped by Domains

### Infrastructure

These issues relate to foundational aspects of the simulation environment and underlying software stability:

- **Simulator Performance Degradation:**
- Simulation slows down over time (from .33 to .29 rate), potentially impacting reaction times and sensor data processing.
- **Vehicle Despawning:**
- Random despawning of cars and potential timeout for stuck vehicles may interfere with the agent’s perception and response.

---

### Testing and Validation

These errors highlight the gaps in the testing and validation process, particularly areas that may need further testing to ensure proper functioning in the real environment:

- **Consistency in Object Detection:**
- Image segmentation flickering (e.g., police car with indicators), suggesting inadequate validation for dynamic objects with flashing lights.
- **Vision Node Stability:**
- Vision node appears to freeze occasionally, indicating possible untested scenarios or bugs in the perception pipeline.
- **Unrealistic Emergency Braking and Recovery Testing:**
- Unstable lane holding and recovery, resulting in inappropriate emergency braking maneuvers, suggests insufficient validation in complex recovery scenarios.
- **Misclassification of Tree Trunks:**
- Trees being detected as cars, indicating the need for validation of object detection in diverse environmental conditions.

---

### Perception

Errors within perception involve how the agent senses and understands its surroundings:

- **Object Misclassification and Collision:**
- Tree trunks mistakenly detected as cars.
- Crashes into bikers and parked cars, suggesting perception failures in identifying and avoiding static and moving obstacles.
- **Segmentation and Detection Instability:**
- Vision node freezing.
- Flickering segmentation for objects like police cars with indicators.
- **Lane Detection and Holding Errors:**
- Difficulty in stable lane holding, leading to unexpected lane deviations and emergency braking.
- Misinterpretation of open car doors, causing lane intrusions without sufficient clearance.

---

### Localization and Mapping

Issues with localization and mapping involve understanding and positioning within the environment:

- **Positioning Errors in Turns:**
- Turns are too wide, leading the agent onto the walkway, indicating potential localization issues in tight maneuvers.
- **Lane Holding and Position Drift:**
- Unstable lane holding with constant left and right drifting suggests potential mapping or localization inaccuracies.

---

### Decision-Making

Errors in decision-making relate to the agent's ability to make appropriate choices in response to various scenarios:

- **Right of Way Violations:**
- Fails to yield to oncoming traffic when turning left and when merging into traffic.
- Ignores open car doors when passing parked cars, causing dangerous close passes.
- **Erroneous Stopping and Acceleration:**
- Stops unnecessarily at green lights and struggles to resume smoothly after stopping.
- Abrupt stopping and starting at green lights, potentially due to aggressive speed control.
- **Repeated Mistakes in Overtaking and Lane Changes:**
- Treats temporary parked cars as regular vehicles to overtake without checking oncoming traffic, leading to unsafe lane changes.

---

### Path Planning

Path planning issues include errors in determining the correct and safest path:

- **Incorrect Overtaking Paths:**
- Attempts to overtake trees and temporary parked cars without considering oncoming traffic, showing flaws in path generation.
- **Wide Turning Paths:**
- Takes overly wide turns that lead to walkway intrusions.
- **Aggressive Lane Changes:**
- Lane change planning is overly aggressive, causing the vehicle to abruptly veer, triggering emergency stops to avoid collisions.

---

### Control

Control-related issues concern the vehicle’s execution of planned actions, like maintaining speed and stability:

- **Abrupt and Aggressive Speed Control:**
- Speed controller is too aggressive when accelerating from green lights, leading to abrupt stopping and starting.
- **Instability in Lane Holding:**
- Inconsistent lane holding, particularly after getting unstuck, results in unexpected deviations onto walkways.
- **Inconsistent Recovery Behavior:**
- Repeatedly gets stuck in various situations (e.g., speed limit signs or temporary parked cars) and fails to recover smoothly, indicating control issues in re-engaging the driving path.

---

## Raw notes

Here are the raw notes, in case misunderstandings were made when grouping the mistakes.

### Run 1

- Scared to get out of parking spot
- lane not held causing problems when avoiding open car door
- stopping for no apparent reason
- does not keep lane (going left and right)
- driving into still standing car at red light
- impatient when waiting for light to turn green (after the crash, going back and forth)
- abrupt stopping and going when light turns green without reason → speed controller too aggressive?
- Problems to keep lane is causing emergency(?) brake maneuvers
- vision node seems to be frozen ?
- Detects bikers, crashes into them nonetheless
- lane change very aggressive causing emergency stop in order to not go into oncoming traffic
- gets stuck as a result
- simulator despawns cars randomly
- left turn does not give way to oncoming traffic when seeing them
- does the turn too wide, gets onto walkway
- simulation gets slower as time progresses, started at .33 rate, now at .29
- gets stuck in front of speed limit sign after doing turn too wide
- gets unstuck, lane holding too aggressive goes onto walkway again (integrator windup while being stuck?)
- gets stuck again (→ unstuck behavior bad)
- when getting unstuck, merges onto street without giving way to traffic on the road
- drives into oncoming traffic, traffic on the same lane overtakes on the right side and does not stop
- really stuck now

### Run 2

- merges without giving way to traffic
- does not respect open car door
- crashes into car in front when going after stop at red light
- stops at green light
- crashes into bikers
- kid runs onto street, agent crashes into oncoming traffic, gets stuck
- nudges away from the car it crashed into
- is now free but does not move
- crashes again
- police car with indicators on standing on the side is crashed into
- image segmentation for police car seems to be flickering
- tree trunk has bounding box (are trees detected as cars?)

### Run 3

- does not give way when exiting a parking spot
- LIDAR detects floor
- trajectory for overtaking is wrong / no overtake needed
- stops without reason
- tries to "overtake" tree (detects tree as car)
- playback ratio likely temperature dependent
- after emergency brake stops too long
- left turn doesn't give way to oncoming traffic
- recovery leads to oncoming traffic (left turn situation maybe doesn't recognize street?) 9 min
- temporary parked car with indicators on counts as normal overtake (does not check oncoming traffic)
- temporary parked car with indicators is the crux
- Despawn time of cars ? Cars despawn when stuck → over time limit ?
- Trajectory correctly generated, just too deep in the mistakes
78 changes: 78 additions & 0 deletions doc/research/paf24/general/rviz.md
@@ -0,0 +1,78 @@
# Research about RViz

**Summary:** This page contains information on how to use RViz and how it is integrated into the project.

- [General overview](#general-overview)
- [Displays panel](#displays-panel)
- [Display types](#display-types)
- [Camera](#camera)
- [Image](#image)
- [PointCloud(2)](#pointcloud2)
- [Path](#path)
- [RViz configuration](#rviz-configuration)
- [Sources](#sources)

## General overview

Description from the git repository: **"rviz is a 3D visualizer for the Robot Operating System (ROS) framework."**

It can be used to visualize the state of the car in real-time.

- The Visualizer always has a 3D View panel in the middle. This is where all 3D data, for example from lidar and radar sensors, is shown.
- The most important panel is *Displays*. It is used to configure what data is displayed.
- All other panels are available under *Panels* in the menu bar.
- Panels can be regrouped by dragging their title bar.
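
For quick experiments outside the project's launch setup, RViz can also be started by hand. A minimal sketch, assuming a sourced ROS 1 workspace and a running `roscore` (the config path is the project default described in the RViz configuration section below):

```bash
# Start RViz manually with the project's default configuration
# (run from the repository root; requires roscore and a sourced workspace).
rosrun rviz rviz -d code/agent/config/rviz_config.rviz
```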

## Displays panel

The *Displays* panel contains a list of all currently visualized data-displays and allows changing their settings and visibility.

The default configuration currently displays the center camera, lidar and radar point clouds, planned path and segmentation image.

Individual data-displays can be added and removed in the lower menu bar of the panel. Adding works in two ways:

- By type: The user then has to manually configure the settings, including the topic which the display visualizes.
- By topic: In the *By topic* tab, RViz lists all available ROS topics and allows adding them easily for visualization.

Do not forget to give the data-display a proper name when adding it (renaming with F2 is also possible).

### Display types

There are several display types that can be added.
Depending on the type, different settings are available for the display.
The *Topic* setting controls which ROS topic the display gets its data from.

The most important display types are:

#### Camera

Shows an image from a camera. Allows overlaying other data, such as *Path* and *PointCloud2*, on top of the image (*Visibility* setting).

Adding a camera also adds a new panel with the camera image.

#### Image

Shows an image. Also works for camera topics.

Adding an image also adds a new panel with the image.

#### PointCloud(2)

Shows a point cloud in the 3D View.

#### Path

Shows a path in the 3D View.

## RViz configuration

RViz can be fully configured with the GUI. The settings may then be saved with *File->Save Config*.
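
The saved file is plain YAML. As a rough, illustrative sketch only (field names differ between display types and RViz versions, and the topic names below are assumptions rather than the project's actual topics), a display entry looks roughly like this:

```yaml
Visualization Manager:
  Displays:
    - Class: rviz/Image          # image display, shown in its own panel
      Name: Center Camera
      Enabled: true
      Image Topic: /carla/hero/Center/image    # assumed topic name
    - Class: rviz/PointCloud2    # point cloud, rendered in the 3D View
      Name: Lidar
      Enabled: true
      Topic: /carla/hero/LIDAR                 # assumed topic name
```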

The default configuration file is located at [code/agent/config/rviz_config.rviz](../../../../code/agent/config/rviz_config.rviz)
and this path is defined in [code/agent/launch/agent.launch](../../../../code/agent/launch/agent.launch). It can be changed to use a different default config when running the leaderboard.
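
The exact node definition in `agent.launch` is not reproduced here; as a hedged sketch, a ROS 1 launch file typically points RViz at a config via the `-d` argument (the package name `agent` is an assumption):

```xml
<launch>
  <!-- Start RViz with a specific configuration file.
       Swapping the path here changes the default config used for leaderboard runs. -->
  <node pkg="rviz" type="rviz" name="rviz" output="screen"
        args="-d $(find agent)/config/rviz_config.rviz" />
</launch>
```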

## Sources

<https://github.com/ros-visualization/rviz>

<http://wiki.ros.org/rviz/UserGuide>
44 changes: 44 additions & 0 deletions doc/research/paf24/system/architecture_documentation.md
@@ -0,0 +1,44 @@
# Research about the existing architecture documentation

The repository already holds various documents about how the architecture was planned and how it should or currently does
look. As it is a crucial part of the project to understand the component interactions, especially in their up-to-date
form, this document gives a brief overview of the existing documentation and links to the up-to-date versions.

## Existing architecture documentation from the previous semester

The main architecture documentation can be found [here](/doc/general/architecture.md).
It contains information on most nodes and what they subscribe to and publish.

A Miro board with the existing architecture (perception, planning, acting) exists.
![Architecture overview](/doc/assets/overview.jpg)
The Miro board can be found [here](https://miro.com/welcomeonboard/a1F0d1dya2FneWNtbVk4cTBDU1NiN3RiZUIxdGhHNzJBdk5aS3N4VmdBM0R5c2Z1VXZIUUN4SkkwNHpuWlk2ZXwzNDU4NzY0NTMwNjYwNzAyODIzfDI=?share_link_id=785020837509).

This Miro board contains the main architecture details and information flows, but when comparing it to the
rosgraph of the nodes and topics, the diagram appears to be incomplete.

### Current Rosgraph of the nodes and topics of the project

[//]: # "![Up to date ros graph](/doc/assets/research_assets/rosgraph.svg)"
![Up to date ros graph](/doc/assets/research_assets/rosgraph_leaf_topics.svg)
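
A sketch of one common way to reproduce such a graph while the stack is running, using standard ROS 1 tooling (the image export goes through the rqt_graph GUI toolbar):

```bash
# With the agent stack running, open the live node/topic graph in a GUI;
# the graph can be saved as an image from the rqt_graph toolbar.
rosrun rqt_graph rqt_graph

# Quick text-based overview of the same information:
rosnode list    # all running nodes
rostopic list   # all active topics
```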

### Rewritten rosgraph divided into perception, planning and acting

![RosGraphDrawIO](/doc/assets/research_assets/node_path_ros.png)

The diagram shows the nodes with their in- and outputs: *In* represents the subscribed topics and *Out* the published topics.
This should now be extended with what logic happens in which node and how the nodes are connected.
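
To verify or extend the diagram, the publications and subscriptions of a single node can be inspected at runtime. A sketch using standard ROS 1 introspection (the node and topic names below are placeholders, pick real ones from `rosnode list` and `rostopic list`):

```bash
# Show the publications, subscriptions and services of one node.
rosnode info /ACC                    # placeholder node name

# Show the publishers and subscribers of one topic.
rostopic info /paf/hero/trajectory   # placeholder topic name
```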

## Perception architecture

- Extended information on how the perception works can be found [here](/doc/perception/README.md)

## Planning architecture

- Extended information on how the planning works can be found [here](/doc/planning/README.md)

## Acting architecture

- Extended information on how the acting works can be found [here](/doc/acting/architecture_documentation.md)
