Commit 16f01b9

Merge pull request #33 from JulienStanguennec-Leddartech/feat/wod-dataparser

feat/wod dataparser

2 parents 5101fee + 2bfa873

15 files changed: +1120 −20

.github/workflows/core_code_checks.yml (+3 −2)

```diff
@@ -15,17 +15,18 @@ jobs:
     steps:
       - uses: actions/checkout@v3
-      - name: Set up Python 3.8.13
+      - name: Set up Python 3.10.14
         uses: actions/setup-python@v4
         with:
-          python-version: '3.8.13'
+          python-version: '3.10.14'
       - uses: actions/cache@v2
         with:
           path: ${{ env.pythonLocation }}
           key: ${{ env.pythonLocation }}-${{ hashFiles('pyproject.toml') }}
       - name: Install dependencies
         run: |
           pip install --upgrade --upgrade-strategy eager -e .[dev]
+          pip install waymo-open-dataset-tf-2-11-0==1.6.1
       - name: Check notebook cell metadata
         run: |
           python ./nerfstudio/scripts/docs/add_nb_tags.py --check
```

Dockerfile (+3)

```diff
@@ -138,6 +138,9 @@ RUN git clone --recursive https://github.com/cvg/pixel-perfect-sfm.git && \
     python3.10 -m pip install --no-cache-dir -e . && \
     cd ..

+# Install waymo-open-dataset
+RUN python3.10 -m pip install --no-cache-dir waymo-open-dataset-tf-2-11-0==1.6.1
+
 # Copy nerfstudio folder.
 ADD . /nerfstudio
```
README.md (+9 −3)

````diff
@@ -82,10 +82,10 @@ Our installation steps largely follow Nerfstudio, with some added dataset-specific

 ### Create environment

-NeuRAD requires `python >= 3.8`. We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/miniconda.html) before proceeding.
+NeuRAD requires `python >= 3.10`. We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/miniconda.html) before proceeding.

 ```bash
-conda create --name neurad -y python=3.8
+conda create --name neurad -y python=3.10
 conda activate neurad
 pip install --upgrade pip
 ```
@@ -109,6 +109,11 @@ pip install --upgrade pip "setuptools<70.0"
 pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
 ```

+For Waymo Open Dataset v2 support (requires Python 3.10; the package pins its dependencies very strictly, so it cannot be added to pyproject.toml and must be installed first):
+```bash
+pip install waymo-open-dataset-tf-2-11-0==1.6.1
+```
+
 We refer to [Nerfstudio](https://github.com/nerfstudio-project/nerfstudio/blob/v1.0.3/docs/quickstart/installation.md) for more installation support.

 ### Installing NeuRAD
@@ -227,8 +232,9 @@ To add a dataset, create `nerfstudio/data/dataparsers/mydataset.py` containing o
 | 🚗 [Argoverse 2](https://www.argoverse.org/av2.html) | 7 ring cameras + 2 stereo cameras | 2 x 32-beam lidars |
 | 🚗 [PandaSet](https://pandaset.org/) ([huggingface download](https://huggingface.co/datasets/georghess/pandaset)) | 6 cameras | 64-beam lidar |
 | 🚗 [KITTIMOT](https://www.cvlibs.net/datasets/kitti/eval_tracking.php) ([Timestamps](https://www.cvlibs.net/datasets/kitti/raw_data.php)) | 2 stereo cameras | 64-beam lidar |
+| 🚗 [Waymo v2](https://waymo.com/open/) | 5 cameras | 64-beam lidar |

-
+A brief introduction to the Waymo dataparser for NeuRAD can be found in [waymo_dataparser.md](./nerfstudio/data/dataparsers/waymo_dataparser.md).

 ## Adding Methods
````

3 binary image files added (1.77 MB, 2.08 MB, 1.87 MB); previews not shown.

nerfstudio/cameras/cameras.py (+13 −3)

```diff
@@ -921,12 +921,22 @@ def _compute_rays_for_vr180(

         if self.metadata and "rolling_shutter_offsets" in self.metadata and "velocities" in self.metadata:
             cam_idx = camera_indices.squeeze(-1)
-            heights, rows = self.height[cam_idx], coords[..., 0:1]
-            duration = self.metadata["rolling_shutter_offsets"][cam_idx].diff()
-            time_offsets = rows / heights * duration + self.metadata["rolling_shutter_offsets"][cam_idx][..., 0:1]
+            offsets = self.metadata["rolling_shutter_offsets"][cam_idx]
+            duration = offsets.diff()
+            if "rs_direction" in metadata and metadata["rs_direction"] == "Horizontal":
+                # wod (LEFT_TO_RIGHT or RIGHT_TO_LEFT)
+                width, cols = self.width[cam_idx], coords[..., 1:2]
+                time_offsets = cols / width * duration + offsets[..., 0:1]
+            else:
+                # pandaset (TOP_TO_BOTTOM)
+                heights, rows = self.height[cam_idx], coords[..., 0:1]
+                time_offsets = rows / heights * duration + offsets[..., 0:1]
+
             origins = origins + self.metadata["velocities"][cam_idx] * time_offsets
             times = times + time_offsets
             del metadata["rolling_shutter_offsets"]  # it has served its purpose
+            if "rs_direction" in metadata:
+                del metadata["rs_direction"]  # it has served its purpose

         return RayBundle(
             origins=origins,
```
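The branch added above maps each pixel to a capture-time offset, walking either along columns (Waymo's horizontal shutter) or rows (Pandaset's vertical one). A plain-Python sketch of that arithmetic, stripped of nerfstudio's tensor machinery (the function name and scalar arguments are illustrative, not part of the diff):

```python
def rolling_shutter_time_offset(row, col, height, width, duration, start_offset,
                                rs_direction="Vertical"):
    """Capture-time offset of one pixel relative to the frame timestamp.

    duration: full-frame readout time in seconds (offsets.diff() in the diff).
    start_offset: time of the first readout line (offsets[..., 0:1] in the diff).
    """
    if rs_direction == "Horizontal":
        # Waymo: columns are exposed sequentially (LEFT_TO_RIGHT or RIGHT_TO_LEFT).
        fraction = col / width
    else:
        # Pandaset: rows are exposed sequentially (TOP_TO_BOTTOM).
        fraction = row / height
    return fraction * duration + start_offset

# Mid-frame pixel of a 1280x1920 camera, 40 ms readout centered on the timestamp:
offset = rolling_shutter_time_offset(640, 960, 1280, 1920, 0.04, -0.02)  # → 0.0
```

The offset then shifts both the ray origin (by `velocity * time_offset`) and the per-ray time, which is what the surrounding code in `cameras.py` does with the result.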

nerfstudio/cameras/lidars.py (+3)

```diff
@@ -35,6 +35,7 @@
 from nerfstudio.utils.misc import strtobool, torch_compile
 from nerfstudio.utils.tensor_dataclass import TensorDataclass

+# torch._dynamo.config.suppress_errors = True
 TORCH_DEVICE = Union[torch.device, str]  # pylint: disable=invalid-name

 HORIZONTAL_BEAM_DIVERGENCE = 3.0e-3  # radians, or meters at a distance of 1m
@@ -50,6 +51,7 @@ class LidarType(Enum):
     VELODYNE64E = auto()
     VELODYNE128 = auto()
     PANDAR64 = auto()
+    WOD64 = auto()


 LIDAR_MODEL_TO_TYPE = {
@@ -59,6 +61,7 @@ class LidarType(Enum):
     "VELODYNE64E": LidarType.VELODYNE64E,
     "VELODYNE128": LidarType.VELODYNE128,
     "PANDAR64": LidarType.PANDAR64,
+    "WOD64": LidarType.WOD64,
 }
```
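Adding a sensor here means touching both the `LidarType` enum and the `LIDAR_MODEL_TO_TYPE` dict. A small self-contained sketch (stand-in classes, not the actual nerfstudio code) of how the string→type mapping could be derived from the enum so the two cannot drift apart:

```python
from enum import Enum, auto

class LidarType(Enum):
    # Stand-in mirroring the diff: existing sensors plus the new Waymo entry.
    VELODYNE128 = auto()
    PANDAR64 = auto()
    WOD64 = auto()

# The real file keeps a hand-written dict; deriving it from the Enum keeps the
# mapping in sync automatically whenever a new sensor (like WOD64) is added.
LIDAR_MODEL_TO_TYPE = {t.name: t for t in LidarType}
```

With this pattern, `LIDAR_MODEL_TO_TYPE["WOD64"]` resolves to `LidarType.WOD64` without a second edit.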

nerfstudio/configs/dataparser_configs.py (+2)

```diff
@@ -26,6 +26,7 @@
 from nerfstudio.data.dataparsers.kittimot_dataparser import KittiMotDataParserConfig
 from nerfstudio.data.dataparsers.nuscenes_dataparser import NuScenesDataParserConfig
 from nerfstudio.data.dataparsers.pandaset_dataparser import PandaSetDataParserConfig
+from nerfstudio.data.dataparsers.wod_dataparser import WoDParserConfig
 from nerfstudio.data.dataparsers.zod_dataparser import ZodDataParserConfig
 from nerfstudio.plugins.registry_dataparser import discover_dataparsers

@@ -35,6 +36,7 @@
     "argoverse2-data": Argoverse2DataParserConfig(),
     "zod-data": ZodDataParserConfig(),
     "pandaset-data": PandaSetDataParserConfig(),
+    "wod-data": WoDParserConfig(),
 }

 external_dataparsers, _ = discover_dataparsers()
```
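The registration above is a plain key→config mapping: each CLI key resolves to a dataparser config instance. A minimal sketch of that lookup pattern (the config class here is a simplified stand-in, not nerfstudio's real `DataParserConfig`):

```python
from dataclasses import dataclass

@dataclass
class DataParserConfig:
    """Simplified stand-in for nerfstudio's dataparser config classes."""
    name: str

# Mirrors the dataparsers dict in dataparser_configs.py: CLI key -> config instance.
dataparsers = {
    "pandaset-data": DataParserConfig(name="pandaset"),
    "wod-data": DataParserConfig(name="wod"),
}

def resolve_dataparser(key: str) -> DataParserConfig:
    # Unknown keys fail fast, listing what is actually registered.
    if key not in dataparsers:
        raise KeyError(f"unknown dataparser {key!r}; available: {sorted(dataparsers)}")
    return dataparsers[key]

config = resolve_dataparser("wod-data")  # → DataParserConfig(name='wod')
```

Registering `"wod-data"` in this dict is what makes the new parser selectable from the command line alongside the existing datasets.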
nerfstudio/data/dataparsers/waymo_dataparser.md (new file, +52)

# NeuRAD on Waymo open dataset

## About
Thanks to the excellent work of NeuRAD, we reproduce some results on the Waymo open dataset.

Our goal in reproducing and open-sourcing this Waymo dataparser for NeuRAD is to provide a basic reference for the self-driving community and to inspire more work.

In the same folder, [wod_dataparser.py](./wod_dataparser.py) follows the [README - Adding Datasets](https://github.com/georghess/neurad-studio?tab=readme-ov-file#adding-datasets) suggestions. We also added [wod_utils.py](./wod_utils.py), which does the main work of converting/exporting the Waymo dataset.

We have also added rolling shutter support for the Waymo dataset, as its rolling shutter direction is horizontal, unlike the vertical one in Pandaset. Here are some comparison results (on sequence 10588):
![](./../../../docs/_static/imgs/NeuRAD-RS-Waymo-Front.png)
![](./../../../docs/_static/imgs/NeuRAD-RS-Waymo-Left.png)
![](./../../../docs/_static/imgs/NeuRAD-RS-Waymo-Right.png)

### Benchmark between Pandaset & Waymo
| Dataset | Sequence | Frames | Cameras | PSNR | SSIM | LPIPS |
|--- |--- |--- |--- |--- |--- |--- |
| Pandaset | 006 | 80 | FC | 25.1562 | 0.8044 | 0.1575 |
| Pandaset | 011 | 80 | 360 | 26.3919 | 0.8057 | 0.2029 |
| Waymo | 10588771936253546636 | 50 | FC | 27.5555 | 0.8547 | 0.121 |
| Waymo | 473735159277431842 | 150 | FC | 29.1758 | 0.8717 | 0.1592 |
| Waymo | 4468278022208380281 | ALL | FC | 30.5247 | 0.8787 | 0.1701 |

Note: All results above were obtained with the same hyperparameters and configurations as in the NeuRAD paper (**Appendix A**).

### Results
#### Waymo RGB rendering - Sequence 10588 - 3 cameras (FC_LEFT, FC, FC_RIGHT)
[![Sequence 10588 - 3 cameras](http://img.youtube.com/vi/eR1bHeh7p8A/0.jpg)](https://www.youtube.com/watch?v=eR1bHeh7p8A)
> Top is ground truth, bottom is rendered.

#### Actor removal - Sequence 20946 - FC camera
[![Sequence 20946](http://img.youtube.com/vi/mkMdzAvTez4/0.jpg)](https://www.youtube.com/watch?v=mkMdzAvTez4)
> Left is ground truth, right is rendered.

#### Novel view synthesis - Sequence 20946 - Ego vehicle 1m up
[![Ego vehicle 1m up](http://img.youtube.com/vi/U8VRboWLj_c/0.jpg)](https://www.youtube.com/watch?v=U8VRboWLj_c)
> Left is ground truth, right is rendered.

#### Novel view synthesis - Sequence 20946 - Ego vehicle 1m left
[![Ego vehicle 1m left](http://img.youtube.com/vi/q_HFmc6JPzQ/0.jpg)](https://www.youtube.com/watch?v=q_HFmc6JPzQ)
> Left is ground truth, right is rendered.

## Links

Results were obtained with the Waymo open dataset [v2.0.0, gcloud link](https://console.cloud.google.com/storage/browser/waymo_open_dataset_v_2_0_0).

## Contributors

- Lei Lei, Leddartech
- Julien Stanguennec, Leddartech
- Pierre Merriaux, Leddartech
