diff --git a/images/main.png b/images/main.png
new file mode 100644
index 0000000..575bca9
Binary files /dev/null and b/images/main.png differ
diff --git a/images/main_update.png b/images/main_update.png
deleted file mode 100644
index 499732b..0000000
Binary files a/images/main_update.png and /dev/null differ
diff --git a/talk2drive.html b/talk2drive.html
index 6eababe..bceef59 100644
--- a/talk2drive.html
+++ b/talk2drive.html
@@ -47,7 +47,7 @@
-Large Language Models for Autonomous Driving: Real-World Experiments
+Personalized Autonomous Driving with Large Language Models: Field Experiments
@@ -77,16 +77,18 @@

Can Cui, Zichong Yang, Yupeng Zhou, Yunsheng Ma, Juanwu Lu, Lingxi Li, Yaobi
-Talk2Drive framework
+Talk2Drive framework architecture.
-A human's spoken instructions are processed by cloud-based LLMs,
+A human's spoken instructions I are processed by cloud-based LLMs,
 which synthesize contextual data C from weather, traffic conditions, and local traffic rules
-information.
-The LLMs generate executable codes P that are communicated to the vehicle's Electronic Control
+information, the predefined system messages S and the history interaction H.
+The LLMs generate executable language model programs (LMPs) P that are communicated to the
+vehicle's Electronic Control
 Unit (ECU).
-These codes operate the actuation of vehicle controls, ensuring that the human's intent is
+These LMPs operate the actuation of vehicle controls, ensuring that the human's intent is
 translated into safe
-and personalized driving actions. A memory module archives every command I, its resultant codes
+and personalized driving actions. A memory module archives every command I, its resultant
+programs P,
 and subsequent user feedback F, ensuring continuous refinement of the
 personalized driving experience.
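The revised caption describes a closed loop: system messages S, contextual data C, interaction history H, and the spoken instruction I are composed into an LLM prompt; the generated program P is sent to the ECU; and the (I, P, F) triple is archived in memory. As a minimal sketch of that data flow — all class, function, and parameter names here are hypothetical, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    instruction: str   # I: the human's transcribed spoken command
    program: str       # P: the executable language model program (LMP)
    feedback: str      # F: subsequent user feedback

@dataclass
class Talk2DriveLoop:
    system_message: str                              # S: predefined system message
    memory: list = field(default_factory=list)       # H: archived (I, P, F) records

    def build_prompt(self, instruction: str, context: str) -> str:
        """Assemble S, C, H, and I into a single prompt for the cloud LLM."""
        history = "\n".join(
            f"cmd: {r.instruction} -> program: {r.program} (feedback: {r.feedback})"
            for r in self.memory
        )
        return (
            f"{self.system_message}\n"
            f"Context C: {context}\n"
            f"History H:\n{history}\n"
            f"Instruction I: {instruction}\n"
            "Generate an executable program P for the vehicle ECU."
        )

    def step(self, instruction, context, llm, ecu, get_feedback):
        """One interaction: prompt the LLM, execute P on the ECU, archive I, P, F."""
        prompt = self.build_prompt(instruction, context)
        program = llm(prompt)       # cloud-based LLM synthesizes the LMP P
        ecu(program)                # ECU actuates vehicle controls from P
        feedback = get_feedback()   # F: user feedback after execution
        self.memory.append(MemoryRecord(instruction, program, feedback))
        return program
```

Because the memory grows with each step and is folded back into the next prompt, later programs can be conditioned on earlier feedback, which is the "continuous refinement" the caption refers to.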