This demo showcases a speech-to-speech interaction with an autonomous taxi using RAI in an AWSIM environment with Autoware. Users can specify destinations verbally, and the system will process the request, plan the route, and navigate the taxi accordingly.
Note: This README is a work in progress.
Before running this demo, ensure you have the following prerequisites installed:
- Autoware and AWSIM set up (see the Autoware and AWSIM setup documentation), and the speech-to-speech pipeline configured as described in the speech-to-speech documentation.
- Start AWSIM and Autoware:
- Run the taxi demo:

  ```bash
  source ./setup_shell.sh
  ros2 launch examples/taxi-demo.launch.py
  ```

- To interact with the taxi using speech, speak your destination into your microphone. The system will process your request and plan the route for the autonomous taxi.
The taxi demo utilizes several components:
- Speech recognition (ASR) to convert the user's spoken words into text.
- RAI agent to process the request and interact with Autoware for navigation.
- Text-to-speech (TTS) to convert the system's response back into speech.
- Autoware for autonomous driving capabilities.
- AWSIM for simulation of the urban environment.
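A rough sketch of how these components fit together during a single interaction turn is shown below. It is purely illustrative: the function names are hypothetical placeholders, not the actual RAI, Autoware, or ASR/TTS APIs used by the demo.

```python
# Conceptual sketch of one speech-to-speech turn: ASR -> RAI agent/Autoware -> TTS.
# All function names are illustrative placeholders, not the real demo API.

def recognize_speech(audio: bytes) -> str:
    # ASR: convert the user's recorded utterance into text.
    return "Take me to the central station."

def plan_with_agent(request: str) -> str:
    # RAI agent: interpret the request and ask Autoware to plan a route.
    # In the real demo this would involve sending a goal to Autoware.
    return f"Understood, planning a route for: '{request}'."

def synthesize_speech(text: str) -> bytes:
    # TTS: convert the agent's textual response back into audio.
    return text.encode("utf-8")  # placeholder for synthesized audio

def speech_to_speech_turn(audio_in: bytes) -> bytes:
    # One full interaction turn through the pipeline.
    text = recognize_speech(audio_in)
    reply = plan_with_agent(text)
    return synthesize_speech(reply)

if __name__ == "__main__":
    print(speech_to_speech_turn(b"<microphone audio>"))
```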
The main logic of the demo is implemented in the `TaxiDemo` class, which can be found in `examples/taxi-demo.py`.
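For orientation, a minimal, hypothetical outline of such a node is sketched below, assuming a ROS 2 (`rclpy`) setup. The topic names, placeholder coordinates, and overall structure are assumptions for illustration and may differ from the actual `TaxiDemo` implementation.

```python
# Hypothetical outline of a taxi demo node; see examples/taxi-demo.py for the real code.
# Topic names and goal handling below are assumptions, not the demo's actual API.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
from geometry_msgs.msg import PoseStamped


class TaxiDemo(Node):
    def __init__(self) -> None:
        super().__init__("taxi_demo")
        # Listen for the transcribed destination request produced by the ASR stage.
        self.create_subscription(String, "/from_human", self.on_request, 10)
        # Publish the resolved destination as a goal pose for Autoware.
        self.goal_pub = self.create_publisher(
            PoseStamped, "/planning/mission_planning/goal", 10
        )

    def on_request(self, msg: String) -> None:
        # In the real demo an agent resolves the spoken destination to map
        # coordinates; here we just log the request and send a fixed goal.
        self.get_logger().info(f"Received request: {msg.data}")
        goal = PoseStamped()
        goal.header.frame_id = "map"
        goal.pose.position.x = 100.0  # placeholder coordinates
        goal.pose.position.y = 50.0
        goal.pose.orientation.w = 1.0
        self.goal_pub.publish(goal)


def main() -> None:
    rclpy.init()
    rclpy.spin(TaxiDemo())


if __name__ == "__main__":
    main()
```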