diff --git a/_publications/neurad/critical_1.mp4 b/_publications/neurad/critical_1.mp4
new file mode 100644
index 0000000..8f75b13
Binary files /dev/null and b/_publications/neurad/critical_1.mp4 differ
diff --git a/_publications/neurad/critical_2.mp4 b/_publications/neurad/critical_2.mp4
new file mode 100644
index 0000000..a0a6371
Binary files /dev/null and b/_publications/neurad/critical_2.mp4 differ
diff --git a/_publications/neurad/critical_3.mp4 b/_publications/neurad/critical_3.mp4
new file mode 100644
index 0000000..57fba2d
Binary files /dev/null and b/_publications/neurad/critical_3.mp4 differ
diff --git a/_publications/neurad/neurad.md b/_publications/neurad/neurad.md
index b9c0178..e150c89 100644
--- a/_publications/neurad/neurad.md
+++ b/_publications/neurad/neurad.md
@@ -93,4 +93,30 @@ An autonomous vehicle typically records the scene using multiple cameras, which
 Note that this is inspired by recent work on NeRFs in the wild, with two key differences. First, we only learn a single embedding per sensor, not per image, improving generalization to novel views. Second, we apply the sensor embedding after the volume rendering, reducing the computational overhead. This is not possible in the original work, as they directly render RGB, whereas we render high-level features.
+# Closed-loop simulation
+
+With NeuRAD, we can take non-eventful, previously collected sensor data,
+
+
+
+
+
+
+and turn it into safety-critical scenarios for training and testing our AD systems,
+
+
+
+
+
+
+with different variations.
+
+
+
+
+
+
 ---
\ No newline at end of file
diff --git a/_publications/neurad/original.mp4 b/_publications/neurad/original.mp4
new file mode 100644
index 0000000..24cad39
Binary files /dev/null and b/_publications/neurad/original.mp4 differ
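
The sensor-embedding design mentioned in the `neurad.md` hunk above (a single learned embedding per sensor, applied to the rendered features *after* volume rendering rather than to every sample along each ray) can be sketched as follows. This is a minimal illustration, not NeuRAD's actual implementation; all names, dimensions, and the linear decoder are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not NeuRAD's real dimensions).
NUM_SENSORS, FEAT_DIM, EMBED_DIM = 5, 32, 16

# One learned embedding per physical sensor -- not per image.
sensor_embeddings = rng.normal(size=(NUM_SENSORS, EMBED_DIM))
# Toy decoder weights mapping (features ++ embedding) -> RGB.
W = rng.normal(size=(FEAT_DIM + EMBED_DIM, 3))

def decode(rendered_features, sensor_id):
    """Decode volume-rendered per-ray features to RGB, conditioned on the sensor.

    Because the embedding is concatenated after volume rendering, it is applied
    once per ray, not once per sample along the ray -- the overhead reduction
    described in the text.
    """
    emb = np.broadcast_to(sensor_embeddings[sensor_id],
                          (rendered_features.shape[0], EMBED_DIM))
    return np.concatenate([rendered_features, emb], axis=-1) @ W

# 1024 rays' worth of rendered features from (hypothetical) sensor 2.
rgb = decode(rng.normal(size=(1024, FEAT_DIM)), sensor_id=2)
print(rgb.shape)  # (1024, 3)
```

In a per-image variant (as in the NeRF-in-the-wild line of work), the embedding table would be indexed by image rather than by sensor, which is what the text argues hurts generalization to novel views.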