.7z Calibration Sequences \u00b6 Handheld Sequences \u00b6 Picture Sequence Features Preview handheld_grass00 Textureless preview handheld_room00 Dynamic preview handheld_room01 Dynamic preview handheld_escalator00 Non-inertial preview handheld_escalator01 Non-inertial preview handheld_underground00 Structureless preview Legged Robot Sequences \u00b6 Picture Sequence Features Preview legged_grass00 Structureless, Deformable preview legged_grass01 Structureless, Deformable preview legged_room00 Dynamic preview legged_transition00 Illumination, GNSS-Denied preview legged_underground00 Structureless preview UGV Sequences \u00b6 Picture Sequence Features Preview ugv_parking00 Structureless preview ugv_parking01 Structureless preview ugv_parking02 Structureless preview ugv_parking03 Structureless preview ugv_campus00 Large-Scale preview ugv_campus01 Fast Motion preview ugv_transition00 GNSS-Denied preview ugv_transition01 GNSS-Denied preview Vehicle Sequences \u00b6 Picture Sequence Features Preview vehicle_campus00 Large-Scale preview vehicle_campus01 Large-Scale preview vehicle_street00 Large-Scale, Dynamic preview vehicle_tunnel00 Low Texture and Structure preview vehicle_downhill00 Illumination preview vehicle_highway00 Structureless preview vehicle_highway01 Structureless preview vehicle_multilayer00 Perceptual Aliasing preview Some High-Resolution GT Maps \u00b6 Environment Preview UGV Campus Experiments \u00b6 Calibration \u00b6 Projected Point Cloud with Camera-LiDAR Calibration ( LCE-Calib ) Localization Evaluation \u00b6 Running FAST-LIO2 : handheld_room00, legged_grass00, ugv_campus00, vehicle_highway00 Mapping Evaluation \u00b6 Monocular Depth Estimation \u00b6 Tools \u00b6 The development tools can be accessed by clicking the button below Development Tools Issues \u00b6 If you have any issues with the dataset, please report them on the repository: Report Issues Related Works \u00b6 The FusionPortable releases have been used in the following papers. Please check out these works if you are interested. (Please contact us if you would like your work mentioned here). LiDAR Only Neural Representations for Real-Time SLAM , IEEE RAL 2023 Publications \u00b6 FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments Hexiang Wei*, Jianhao Jiao*, Xiangcheng Hu, Jingwen Yu, Xupeng Xie, Jin Wu, Yilong Zhu, Yuxuan Liu, et al. Under Review [Arxiv] Contact \u00b6 Dr. Jianhao Jiao (jiaojh1994 at gmail dot com): General problems of the dataset Mr. Hexiang Wei (cranefly88 at gmail dot com): Problems related to hardware Contributors \u00b6","title":"FusionPortableV2"},{"location":"dataset/fusionportable_v2/#news","text":"(20240408) Initial development tools have been released. 
(20240407) Data of FusionPortable are also stored in Google Drive .","title":"News"},{"location":"dataset/fusionportable_v2/#overview","text":"","title":"Overview"},{"location":"dataset/fusionportable_v2/#sensors","text":"Handheld Sensor : 128-beam Ouster LiDAR (OS1, 120m range) Handheld Sensor : Stereo FLIR BFS-U3-31S4C cameras Handheld Sensor : Stereo DAVIS346 cameras Handheld Sensor : STIM300 IMU Handheld Sensor : 3DM-GQ7-GNSS/INS UGV Sensor : Omron E6B2-CWZ6C wheel encoder Legged Robot Sensor : Built-in joint encoders, contact sensors, and IMU of the Unitree A1","title":"Sensors"},{"location":"dataset/fusionportable_v2/#various-platforms","text":"","title":"Various Platforms"},{"location":"dataset/fusionportable_v2/#ground-truth-devices","text":"","title":"Ground-Truth Devices"},{"location":"dataset/fusionportable_v2/#third-view-of-data-collection","text":"Environment Platform Preview Escalator Handheld Corridor Handheld Underground Parking Lot Legged Robot Campus UGV Outdoor Parking Lot UGV","title":"Third-View of Data Collection"},{"location":"dataset/fusionportable_v2/#details","text":"","title":"Details"},{"location":"dataset/fusionportable_v2/#organization","text":"","title":"Organization"},{"location":"dataset/fusionportable_v2/#trajectories-of-sequences","text":"","title":"Trajectories of Sequences"},{"location":"dataset/fusionportable_v2/#download-sequence","text":"","title":"Download Sequence"},{"location":"dataset/fusionportable_v2/#calibration-sequences","text":"","title":"Calibration Sequences"},{"location":"dataset/fusionportable_v2/#hanheld-sequences","text":"Picture Sequence Features Preview handheld_grass00 Textureless preview handheld_room00 Dynamic preview handheld_room01 Dynamic preview handheld_escalator00 Non-inertial preview handheld_escalator01 Non-inertial preview handheld_underground00 Structureless preview","title":"Handheld Sequences"},{"location":"dataset/fusionportable_v2/#legged-robot-sequences","text":"Picture Sequence Features Preview legged_grass00 Structureless, Deformable preview legged_grass01 Structureless, Deformable preview legged_room00 Dynamic preview legged_transition00 Illumination, GNSS-Denied preview legged_underground00 Structureless preview","title":"Legged Robot Sequences"},{"location":"dataset/fusionportable_v2/#ugv-sequences","text":"Picture Sequence Features Preview ugv_parking00 Structureless preview ugv_parking01 Structureless preview ugv_parking02 Structureless preview ugv_parking03 Structureless preview ugv_campus00 Large-Scale preview ugv_campus01 Fast Motion preview ugv_transition00 GNSS-Denied preview ugv_transition01 GNSS-Denied preview","title":"UGV Sequences"},{"location":"dataset/fusionportable_v2/#vehicle-sequences","text":"Picture Sequence Features Preview vehicle_campus00 Large-Scale preview vehicle_campus01 Large-Scale preview vehicle_street00 Large-Scale, Dynamic preview vehicle_tunnel00 Low Texture and Structure preview vehicle_downhill00 Illumination preview vehicle_highway00 Structureless preview vehicle_highway01 Structureless preview vehicle_multilayer00 Perceptual Aliasing preview","title":"Vehicle Sequences"},{"location":"dataset/fusionportable_v2/#some-high-resolution-gt-maps","text":"Environment Preview UGV Campus","title":"Some High-Resolution GT Maps"},{"location":"dataset/fusionportable_v2/#experiments","text":"","title":"Experiments"},{"location":"dataset/fusionportable_v2/#calibration","text":"Projected Point Cloud with Camera-LiDAR Calibration ( LCE-Calib 
)","title":"Calibration"},{"location":"dataset/fusionportable_v2/#localization-evaluation","text":"Running FAST-LIO2 : handheld_room00, legged_grass00, ugv_campus00, vehicle_highway00","title":"Localization Evaluation"},{"location":"dataset/fusionportable_v2/#mapping-evaluation","text":"","title":"Mapping Evaluation"},{"location":"dataset/fusionportable_v2/#monocular-depth-estimation","text":"","title":"Monocular Depth Estimation"},{"location":"dataset/fusionportable_v2/#tools","text":"The development tools can be accessed by clicking the button below Development Tools","title":"Tools"},{"location":"dataset/fusionportable_v2/#issues","text":"If you have any issues with the dataset, please report them on the repository: Report Issues","title":"Issues"},{"location":"dataset/fusionportable_v2/#related-works","text":"The FusionPortable releases have been used in the following papers. Please check out these works if you are interested. (Please contact us if you would like your work mentioned here). LiDAR Only Neural Representations for Real-Time SLAM , IEEE RAL 2023","title":"Related Works"},{"location":"dataset/fusionportable_v2/#publications","text":"FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments Hexiang Wei*, Jianhao Jiao*, Xiangcheng Hu, Jingwen Yu, Xupeng Xie, Jin Wu, Yilong Zhu, Yuxuan Liu, et al. Under Review [Arxiv]","title":"Publications"},{"location":"dataset/fusionportable_v2/#contact","text":"Dr. Jianhao Jiao (jiaojh1994 at gmail dot com): General problems of the dataset Mr. Hexiang Wei (cranefly88 at gmail dot com): Problems related to hardware","title":"Contact"},{"location":"dataset/fusionportable_v2/#contributors","text":"","title":"Contributors"},{"location":"perception/tbd/","text":"","title":"Tbd"},{"location":"slam/fl2sam/","text":"FL2SAM","title":"Fl2sam"}]}
\ No newline at end of file
+{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"FusionPortable Research Dashboard Overview of Our Contributions: \u00b6 Calibration \u00b6 PBACalib: Targetless LiDAR-Camera Extrinsic Calibration , IEEE RAL 2023 LCECalib: Target-Based LiDAR-Frame/Event Camera Extrinsic Calibration , IEEE TMech 2023 SLAM \u00b6 PALoc: Advancing SLAM Benchmarking with Prior-Assisted 6-DoF Trajectory Generation and Uncertainty Estimation , IEEE TMech 2024 Perception \u00b6 FSNet: Full-Scale Unsupervised Monocular Depth Prediction , IEEE TASE 2023 Mapping \u00b6 Cobra: Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments Datasets \u00b6 FusionPortable: Multi-Sensor Campus-Scene Dataset on Diverse Platforms , IEEE IROS 2022 FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments , Under Review Contact \u00b6 Dr. Jianhao Jiao (jiaojh1994 at gmail dot com) License \u00b6 Cinder is licensed under the MIT license .","title":"Home"},{"location":"#overview-of-our-contributions","text":"","title":"Overview of Our Contributions:"},{"location":"#calibration","text":"PBACalib: Targetless LiDAR-Camera Extrinsic Calibration , IEEE RAL 2023 LCECalib: Target-Based LiDAR-Frame/Event Camera Extrinsic Calibration , IEEE TMech 2023","title":"Calibration"},{"location":"#slam","text":"PALoc: Advancing SLAM Benchmarking with Prior-Assisted 6-DoF Trajectory Generation and Uncertainty Estimation , IEEE TMech 2024","title":"SLAM"},{"location":"#perception","text":"FSNet: Full-Scale Unsupervised Monocular Depth Prediction , IEEE TASE 2023","title":"Perception"},{"location":"#mapping","text":"Cobra: Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments","title":"Mapping"},{"location":"#datasets","text":"FusionPortable: Multi-Sensor Campus-Scene Dataset on Diverse Platforms , IEEE IROS 2022 FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments , Under Review","title":"Datasets"},{"location":"#contact","text":"Dr. Jianhao Jiao (jiaojh1994 at gmail dot com)","title":"Contact"},{"location":"#license","text":"Cinder is licensed under the MIT license .","title":"License"},{"location":"index2/","text":"Cinder Theme for MkDocs About \u00b6 Cinder is a clean, responsive theme for static documentation sites that are generated with MkDocs . It's built on the Bootstrap 3 framework and includes pre-packaged: highlight.js v9.18.0 syntax highlighting with support for 185 languages (over 30 by default) and over 90 styles FontAwesome v5.12.0 icon support smashingly legible type scheme to get your message out to your users You are viewing the theme in action and can see a selection of the theme elements on the Specimen page . Install \u00b6 Required : Python 3.4+ Install MkDocs & Create a New Project \u00b6 If you haven't installed MkDocs yet, use the following command to install it: $ pip install mkdocs Next, navigate to a clean directory and create a new MkDocs project with the following command: $ mkdocs new [projectname] Replace [projectname] with the name of your project (without the brackets). Then navigate to the root of your project directory: $ cd [projectname] Install the Cinder Theme \u00b6 Download the Cinder theme archive by clicking the button below. 
Download Cinder Unpack the contents of the archive into a directory named cinder at the top level of your MkDocs project directory. Your project directory should now look like this: . \u251c\u2500\u2500 mkdocs.yml \u251c\u2500\u2500 cinder \u2502 \u251c\u2500\u2500 css \u2502 \u251c\u2500\u2500 img \u2502 \u251c\u2500\u2500 js \u2502 \u251c\u2500\u2500 base.html \u2502 \u251c\u2500\u2500 content.html \u2502 \u251c\u2500\u2500 404.html \u2502 \u251c\u2500\u2500 nav-sub.html \u2502 \u251c\u2500\u2500 nav.html \u2502 \u2514\u2500\u2500 toc.html \u2514\u2500\u2500 docs \u2514\u2500\u2500 index.md MkDocs projects use a YAML settings file called mkdocs.yml . This is located in the root of your project directory after you use the mkdocs new command. Open the file in a text editor and modify it to include the theme settings as follows: site_name: [YOURPROJECT] theme: name: null custom_dir: 'cinder' nav: - Home: index.md See the MkDocs documentation for additional details. Updates, the Manual Approach If you choose the manual install approach, you can update your Cinder theme by downloading the new cinder.zip release archive and including it in your project. Then re-build your static site files (see instructions below). Test with a Local Site Server \u00b6 Use the following command to establish a local server for your site: $ mkdocs serve Then open your site in any browser at the URL http://localhost:8000 . Create Your Site \u00b6 Add Content with Markdown Syntax \u00b6 Get to work on your site home page by opening the docs/index.md file and editing it in Markdown syntax. The HTML automatically updates in the browser when you save the Markdown file if you use the MkDocs server (see command above). Add New Pages \u00b6 Add new pages to your site by creating a new Markdown file in your docs directory, then linking to the new page in the mkdocs.yml file. This uses a Page Name : Markdown file syntax. For example, to add an About page using a Markdown file that is located on the path docs/about.md , you would format the mkdocs.yml file as follows: site_name: [YOURPROJECT] theme: name: null custom_dir: 'cinder' nav: - Home: index.md - About: about.md Add additional pages to your site by repeating the above series of steps. Build Your Site \u00b6 Build your site files with the command: $ mkdocs build Your site files are built in the site directory and are ready to use. Deploy the contents of the site directory to your web server. Important Configuration Issues \u00b6 Please review these issues before you push your site into a production setting! 1. Set the site_url configuration field \u00b6 You must set the site_url field in your mkdocs.yml file to the appropriate production URL in order to generate a valid sitemap.xml file ( issue #80 ). Here is an example from the Cinder project mkdocs.yml file : site_name: Cinder site_url: https://sourcefoundry.org/cinder/ site_author: Christopher Simpkins site_description: \"A clean, responsive theme for static documentation websites that are generated with MkDocs\" repo_url: \"https://github.com/chrissimpkins/cinder\" copyright: \"Cinder is licensed under the MIT license\" theme: name: null custom_dir: cinder colorscheme: github highlightjs: true hljs_languages: - yaml nav: - Home: index.md - Specimen: specimen.md markdown_extensions: - admonition The sitemap.xml file will be located at [SITE_URL]/sitemap.xml when you push your site into the production environment. During development the sitemap.xml file can be found at http://127.0.0.1:8000/sitemap.xml . 
Site Customization \u00b6 The following are a few common customizations that you might be interested in. For much more detail about the configuration of your site, check out the MkDocs Configuration documentation . Syntax Highlighting Color Scheme \u00b6 Cinder supports the 90+ highlightjs color schemes . The github color scheme that you see on this page is the default and will be used if you do not specify otherwise. To change to a different scheme, include the colorscheme field under the theme field in your mkdocs.yml file and enter the color scheme name. For example, to switch to the Dracula theme , enter the following: theme: name: null custom_dir: cinder colorscheme: dracula and then rebuild your site. The color scheme name should match the base name of the highlightjs CSS file. See the src/styles directory of the highlightjs repository for a complete list of these CSS paths. Syntax Highlighting Language Support \u00b6 By default, Cinder supports the ~30 syntaxes listed under common in the documentation . You can broaden support to any of the over 130 highlightjs languages using definitions in your mkdocs.yml file. To add a new language, create a list of additional languages as a hljs_languages sub-field under the theme field in the mkdocs.yml file. The definitions are formatted as a newline-delimited list with - characters. For example, to add support for the Julia and Perl languages, format your configuration file like this: theme: name: null custom_dir: cinder hljs_languages: - julia - perl Use the base file name of the JavaScript files located in the CDN for your syntax definitions. Site Favicon \u00b6 Create an img subdirectory in your docs directory and add a custom favicon.ico file. See the MkDocs documentation for additional details. Add Your Own CSS Stylesheets \u00b6 Create a css directory inside your docs directory and add your CSS files. You can overwrite any of the Cinder styles in your CSS files. Then include your CSS files in the mkdocs.yml file with the extra_css field: site_name: [YOURPROJECT] theme: cinder extra_css: - \"css/mystyle.css\" - \"css/myotherstyle.css\" nav: - Home: index.md - About: about.md Your CSS styles fall at the end of the cascade and will override all styles included in the theme (including Bootstrap and default Cinder styles). You can find the Cinder and Bootstrap CSS files on the paths cinder/css/cinder.css and cinder/css/bootstrap.min.css , respectively. Add Your Own JavaScript \u00b6 Create a js directory inside your docs directory and add your JS files. Then include your JS files in the mkdocs.yml file with the extra_javascript field: site_name: [YOURPROJECT] theme: cinder extra_javascript: - \"js/myscript.js\" - \"js/myotherscript.js\" nav: - Home: index.md - About: about.md Keyboard shortcuts \u00b6 Place the following in your mkdocs.yml file to enable keyboard shortcuts: shortcuts: help: 191 # ? next: 39 # right arrow previous: 37 # left arrow search: 83 # s The numbers correspond to the key that you would like to use for that shortcut. You can use https://keycode.info/ to find the keycode you want. Extending Cinder \u00b6 Create a new directory within your project (e.g., cinder-theme-ext/ ) and create main.html . Add the following line at the top of the HTML file. {% extends \"base.html\" %} Instead of using theme_dir: cinder in mkdocs.yml , use: theme: name: cinder custom_dir: [custom dir] Refer to MkDocs Documentation - Using the theme custom_dir for more information. Use the following examples as reference. 
You can put your own Jinja2 within the blocks. More information can be found in MkDocs Documentation - Overriding Template Blocks . Adding extra HTML to the head tag \u00b6 Append to main.html : {% block extrahead %} {% endblock %} Replacing footer \u00b6 Append to main.html : {% block footer %}
{% if config.copyright %} {{ config.copyright }}
{% endif %} Documentation built with MkDocs. {% if page.meta.revision_date %}
Updated {{ page.meta.revision_date }} {% endif %}
{% endblock %} page.meta.revision_date can be set by using meta-data (front-matter) at the beginning of your Markdown document or using mkdocs-git-revision-date-plugin . Github or Bitbucket Repository Link \u00b6 Include the repo_url field and define it with your repository URL: site_name: [YOURPROJECT] theme: cinder repo_url: \"https://github.com/chrissimpkins/cinder\" nav: - Home: index.md - About: about.md The link appears at the upper right hand corner of your site. License Declaration and Link \u00b6 The Cinder theme displays your license declaration in the footer if you include a copyright field and define it with the text (and optionally the HTML link) that you would like to display: site_name: [YOURPROJECT] theme: cinder copyright: \"Cinder is licensed under the MIT license \" nav: - Home: index.md - About: about.md Disabling Theme Features \u00b6 The Cinder theme can turn off some theme features entirely in mkdocs.yml , for situations where you don't need these features. If this is all the customization required, it saves overriding theme files. For example: theme: name: cinder # Turn off Previous/Next navigation links in the navbar disable_nav_previous_next: true # Turn off Search in the navbar disable_nav_search: true # Turn off the site_name link in the navbar disable_nav_site_name: true # Turn off the footer entirely disable_footer: true # Turn off the default footer message, but display the page revision date if it's available disable_footer_except_revision: true Issues \u00b6 If you have any issues with the theme, please report them on the Cinder repository: Report Issue Active Issues License \u00b6 Cinder is licensed under the MIT license .","title":"Index2"},{"location":"index2/#about","text":"Cinder is a clean, responsive theme for static documentation sites that are generated with MkDocs . It's built on the Bootstrap 3 framework and includes pre-packaged: highlight.js v9.18.0 syntax highlighting with support for 185 languages (over 30 by default) and over 90 styles FontAwesome v5.12.0 icon support smashingly legible type scheme to get your message out to your users You are viewing the theme in action and can see a selection of the theme elements on the Specimen page .","title":"About"},{"location":"index2/#install","text":"Required : Python 3.4+","title":"Install"},{"location":"index2/#install-mkdocs-create-a-new-project","text":"If you haven't installed MkDocs yet, use the following command to install it: $ pip install mkdocs Next, navigate to a clean directory and create a new MkDocs project with the following command: $ mkdocs new [projectname] Replace [projectname] with the name of your project (without the brackets). Then navigate to the root of your project directory: $ cd [projectname]","title":"Install MkDocs & Create a New Project"},{"location":"index2/#install-the-cinder-theme","text":"Download the Cinder theme archive by clicking the button below. Download Cinder Unpack the contents of the archive into a directory named cinder at the top level of your MkDocs project directory. Your project directory should now look like this: . 
\u251c\u2500\u2500 mkdocs.yml \u251c\u2500\u2500 cinder \u2502 \u251c\u2500\u2500 css \u2502 \u251c\u2500\u2500 img \u2502 \u251c\u2500\u2500 js \u2502 \u251c\u2500\u2500 base.html \u2502 \u251c\u2500\u2500 content.html \u2502 \u251c\u2500\u2500 404.html \u2502 \u251c\u2500\u2500 nav-sub.html \u2502 \u251c\u2500\u2500 nav.html \u2502 \u2514\u2500\u2500 toc.html \u2514\u2500\u2500 docs \u2514\u2500\u2500 index.md MkDocs projects use a YAML settings file called mkdocs.yml . This is located in the root of your project directory after you use the mkdocs new command. Open the file in a text editor and modify it to include the theme settings as follows: site_name: [YOURPROJECT] theme: name: null custom_dir: 'cinder' nav: - Home: index.md See the MkDocs documentation for additional details.","title":"Install the Cinder Theme"},{"location":"index2/#test-with-a-local-site-server","text":"Use the following command to establish a local server for your site: $ mkdocs serve Then open your site in any browser at the URL http://localhost:8000 .","title":"Test with a Local Site Server"},{"location":"index2/#create-your-site","text":"","title":"Create Your Site"},{"location":"index2/#add-content-with-markdown-syntax","text":"Get to work on your site home page by opening the docs/index.md file and editing it in Markdown syntax. The HTML automatically updates in the browser when you save the Markdown file if you use the MkDocs server (see command above).","title":"Add Content with Markdown Syntax"},{"location":"index2/#add-new-pages","text":"Add new pages to your site by creating a new Markdown file in your docs directory, then linking to the new page in the mkdocs.yml file. This uses a Page Name : Markdown file syntax. For example, to add an About page using a Markdown file that is located on the path docs/about.md , you would format the mkdocs.yml file as follows: site_name: [YOURPROJECT] theme: name: null custom_dir: 'cinder' nav: - Home: index.md - About: about.md Add additional pages to your site by repeating the above series of steps.","title":"Add New Pages"},{"location":"index2/#build-your-site","text":"Build your site files with the command: $ mkdocs build Your site files are built in the site directory and are ready to use. Deploy the contents of the site directory to your web server.","title":"Build Your Site"},{"location":"index2/#important-configuration-issues","text":"","title":"Important Configuration Issues"},{"location":"index2/#1-set-the-site_url-configuration-field","text":"You must set the site_url field in your mkdocs.yml file to the appropriate production URL in order to generate a valid sitemap.xml file ( issue #80 ). Here is an example from the Cinder project mkdocs.yml file : site_name: Cinder site_url: https://sourcefoundry.org/cinder/ site_author: Christopher Simpkins site_description: \"A clean, responsive theme for static documentation websites that are generated with MkDocs\" repo_url: \"https://github.com/chrissimpkins/cinder\" copyright: \"Cinder is licensed under the MIT license\" theme: name: null custom_dir: cinder colorscheme: github highlightjs: true hljs_languages: - yaml nav: - Home: index.md - Specimen: specimen.md markdown_extensions: - admonition The sitemap.xml file will be located at [SITE_URL]/sitemap.xml when you push your site into the production environment. During development the sitemap.xml file can be found at http://127.0.0.1:8000/sitemap.xml .","title":"1. 
Set the site_url configuration field"},{"location":"index2/#site-customization","text":"The following are a few common customizations that you might be interested in. For much more detail about the configuration of your site, check out the MkDocs Configuration documentation .","title":"Site Customization"},{"location":"index2/#syntax-highlighting-color-scheme","text":"Cinder supports the 90+ highlightjs color schemes . The github color scheme that you see on this page is the default and will be used if you do not specify otherwise. To change to a different scheme, include the colorscheme field under the theme field in your mkdocs.yml file and enter the color scheme name. For example, to switch to the Dracula theme , enter the following: theme: name: null custom_dir: cinder colorscheme: dracula and then rebuild your site. The color scheme name should match the base name of the highlightjs CSS file. See the src/styles directory of the highlightjs repository for a complete list of these CSS paths.","title":"Syntax Highlighting Color Scheme"},{"location":"index2/#syntax-highlighting-language-support","text":"By default, Cinder supports the ~30 syntaxes listed under common in the documentation . You can broaden support to any of the over 130 highlightjs languages using definitions in your mkdocs.yml file. To add a new language, create a list of additional languages as a hljs_languages sub-field under the theme field in the mkdocs.yml file. The definitions are formatted as a newline-delimited list with - characters. For example, to add support for the Julia and Perl languages, format your configuration file like this: theme: name: null custom_dir: cinder hljs_languages: - julia - perl Use the base file name of the JavaScript files located in the CDN for your syntax definitions.","title":"Syntax Highlighting Language Support"},{"location":"index2/#site-favicon","text":"Create an img subdirectory in your docs directory and add a custom favicon.ico file. See the MkDocs documentation for additional details.","title":"Site Favicon"},{"location":"index2/#add-your-own-css-stylesheets","text":"Create a css directory inside your docs directory and add your CSS files. You can overwrite any of the Cinder styles in your CSS files. Then include your CSS files in the mkdocs.yml file with the extra_css field: site_name: [YOURPROJECT] theme: cinder extra_css: - \"css/mystyle.css\" - \"css/myotherstyle.css\" nav: - Home: index.md - About: about.md Your CSS styles fall at the end of the cascade and will override all styles included in the theme (including Bootstrap and default Cinder styles). You can find the Cinder and Bootstrap CSS files on the paths cinder/css/cinder.css and cinder/css/bootstrap.min.css , respectively.","title":"Add Your Own CSS Stylesheets"},{"location":"index2/#add-your-own-javascript","text":"Create a js directory inside your docs directory and add your JS files. Then include your JS files in the mkdocs.yml file with the extra_javascript field: site_name: [YOURPROJECT] theme: cinder extra_javascript: - \"js/myscript.js\" - \"js/myotherscript.js\" nav: - Home: index.md - About: about.md","title":"Add Your Own JavaScript"},{"location":"index2/#keyboard-shortcuts","text":"Place the following in your mkdocs.yml file to enable keyboard shortcuts: shortcuts: help: 191 # ? next: 39 # right arrow previous: 37 # left arrow search: 83 # s The numbers correspond to the key that you would like to use for that shortcut. 
You can use https://keycode.info/ to find the keycode you want.","title":"Keyboard shortcuts"},{"location":"index2/#extending-cinder","text":"Create a new directory within your project (e.g., cinder-theme-ext/ ) and create main.html . Add the following line at the top of the HTML file. {% extends \"base.html\" %} Instead of using theme_dir: cinder in mkdocs.yml , use: theme: name: cinder custom_dir: [custom dir] Refer to MkDocs Documentation - Using the theme custom_dir for more information. Use the following examples as reference. You can put your own Jinja2 within the blocks. More information can be found in MkDocs Documentation - Overriding Template Blocks .","title":"Extending Cinder"},{"location":"index2/#adding-extra-html-to-the-head-tag","text":"Append to main.html : {% block extrahead %} {% endblock %}","title":"Adding extra HTML to the head tag"},{"location":"index2/#replacing-footer","text":"Append to main.html : {% block footer %}
{% if config.copyright %} {{ config.copyright }}
{% endif %} Documentation built with MkDocs. {% if page.meta.revision_date %}
Updated {{ page.meta.revision_date }} {% endif %}
{% endblock %} page.meta.revision_date can be set by using meta-data (front-matter) at the beginning of your Markdown document or using mkdocs-git-revision-date-plugin .","title":"Replacing footer"},{"location":"index2/#github-or-bitbucket-repository-link","text":"Include the repo_url field and define it with your repository URL: site_name: [YOURPROJECT] theme: cinder repo_url: \"https://github.com/chrissimpkins/cinder\" nav: - Home: index.md - About: about.md The link appears at the upper right hand corner of your site.","title":"Github or Bitbucket Repository Link"},{"location":"index2/#license-declaration-and-link","text":"The Cinder theme displays your license declaration in the footer if you include a copyright field and define it with the text (and optionally the HTML link) that you would like to display: site_name: [YOURPROJECT] theme: cinder copyright: \"Cinder is licensed under the MIT license \" nav: - Home: index.md - About: about.md","title":"License Declaration and Link"},{"location":"index2/#disabling-theme-features","text":"The Cinder theme can turn off some theme features entirely in mkdocs.yml , for situations where you don't need these features. If this is all the customization required, it saves overriding theme files. For example: theme: name: cinder # Turn off Previous/Next navigation links in the navbar disable_nav_previous_next: true # Turn off Search in the navbar disable_nav_search: true # Turn off the site_name link in the navbar disable_nav_site_name: true # Turn off the footer entirely disable_footer: true # Turn off the default footer message, but display the page revision date if it's available disable_footer_except_revision: true","title":"Disabling Theme Features"},{"location":"index2/#issues","text":"If you have any issues with the theme, please report them on the Cinder repository: Report Issue Active Issues","title":"Issues"},{"location":"index2/#license","text":"Cinder is licensed under the MIT license .","title":"License"},{"location":"specimen/","text":"Cinder Theme Specimen Typography \u00b6 Typefaces \u00b6 Headers: Inter Body: Open Sans Code: Hack Body Copy \u00b6 You think water moves fast? You should see ice. It moves like it has a mind . Like it knows it killed the world once and got a taste for murder. After the avalanche, it took us a week to climb out . Now, I don't know exactly when we turned on each other, but I know that seven of us survived the slide... and only five made it out. Now we took an oath, that I'm breaking now. We said we'd say it was the snow that killed the other two, but it wasn't. Nature is lethal but it doesn't hold a candle to man. Like inline code? Here is a string for you 010101010 . Lead Body Copy \u00b6 Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. Duis mollis, est non commodo luctus. Headings \u00b6 All HTML headings, through , are available. .h1 through .h6 classes are also available, for when you want to match the font styling of a heading but still want your text to be displayed inline. h1. Heading h2. Heading h3. Heading h4. Heading h5. Heading h6. Heading h1. Heading Secondary text h2. Heading Secondary text h3. Heading Secondary text h4. Heading Secondary text h5. Heading Secondary text h6. Heading Secondary text Blockquotes \u00b6 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Someone famous in Source Title Lists \u00b6 mkdocs new [dir-name] - Create a new project. mkdocs serve - Start the live-reloading docs server. 
mkdocs build - Build the documentation site. mkdocs help - Print this help message. Horizontal Description Lists \u00b6 Something This is something SomethingElse This is something else Contextual Colors \u00b6 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Code \u00b6 Inline Code \u00b6 This is an example of inline code #import requests Preformatted Code Blocks with Syntax Highlighting def request(method, url, **kwargs): \"\"\"Constructs and sends a :class:`Request `. Usage:: >>> import requests >>> req = requests.request('GET', 'https://httpbin.org/get') \"\"\" # By using the 'with' statement we are sure the session is closed, thus we # avoid leaving sockets open which can trigger a ResourceWarning in some # cases, and look like a memory leak in others. with sessions.Session() as session: return session.request(method=method, url=url, **kwargs) def get(url, params=None, **kwargs): r\"\"\"Sends a GET request. :param url: URL for the new :class:`Request` object. :param params: (optional) Dictionary, list of tuples or bytes to send in the body of the :class:`Request`. :param \\*\\*kwargs: Optional arguments that ``request`` takes. :return: :class:`Response ` object :rtype: requests.Response \"\"\" kwargs.setdefault('allow_redirects', True) return request('get', url, params=params, **kwargs) (Source code sample from the Python requests library , Apache License, v2.0 ) Syntax highlighting support is available for and of the following languages listed on the highlightjs website . See the mkdocs \"styling your docs\" hljs_languages section for info on how to load languages dynamically. Note Include source code formatted in Github-flavored Markdown fenced code blocks with an info string that defines the supported programming language to automate syntax highlighting when your site is built. Tables \u00b6 Striped Table \u00b6 # Head 1 Head 2 Head 3 1 Box 1 Box 2 Box 3 2 Box 4 Box 5 Box 6 3 Box 7 Box 8 Box 9 Bordered Table \u00b6 # Head 1 Head 2 Head 3 1 Box 1 Box 2 Box 3 2 Box 4 Box 5 Box 6 3 Box 7 Box 8 Box 9 Buttons \u00b6 Default Buttons \u00b6 Link Button Styled Buttons \u00b6 Default Primary Success Info Warning Danger Button Sizes \u00b6 Large button Large button Default button Default button Small button Small button Extra small button Extra small button Block Level Buttons \u00b6 Block level button Block level button Alerts \u00b6 A simple primary alert\u2014check it out! A simple secondary alert\u2014check it out! A simple success alert\u2014check it out! A simple danger alert\u2014check it out! A simple warning alert\u2014check it out! A simple info alert\u2014check it out! A simple light alert\u2014check it out! A simple dark alert\u2014check it out! Callouts \u00b6 Default Callout This is a default callout. Primary Callout This is a primary callout. Success Callout This is a success callout. Info Callout This is an info callout. Warning Callout This is a warning callout. Danger Callout This is a danger callout. 
Admonitions \u00b6 The following admonitions are formatted like the callouts above but can be implemented in Markdown when you include the admonition Markdown extension in your mkdocs.yml file. Add the following setting to mkdocs.yml : markdown_extensions: - admonition and then follow the instructions in the extension documentation to author admonitions in your documentation. Cinder currently supports note , warning , and danger admonition types. Note Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is a note def func(arg) { # notable things are in here! return None } Warning Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is a warning def func(arg) { # be careful! return None } Danger Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is dangerous def func(arg) { # BOOM! return None }","title":"Specimen"},{"location":"specimen/#typography","text":"","title":"Typography"},{"location":"specimen/#typefaces","text":"Headers: Inter Body: Open Sans Code: Hack","title":"Typefaces"},{"location":"specimen/#body-copy","text":"You think water moves fast? You should see ice. It moves like it has a mind . Like it knows it killed the world once and got a taste for murder. After the avalanche, it took us a week to climb out . Now, I don't know exactly when we turned on each other, but I know that seven of us survived the slide... and only five made it out. Now we took an oath, that I'm breaking now. We said we'd say it was the snow that killed the other two, but it wasn't. Nature is lethal but it doesn't hold a candle to man. Like inline code? Here is a string for you 010101010 .","title":"Body Copy"},{"location":"specimen/#lead-body-copy","text":"Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. Duis mollis, est non commodo luctus.","title":"Lead Body Copy"},{"location":"specimen/#headings","text":"All HTML headings, through , are available. .h1 through .h6 classes are also available, for when you want to match the font styling of a heading but still want your text to be displayed inline.","title":"Headings"},{"location":"specimen/#blockquotes","text":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Someone famous in Source Title","title":"Blockquotes"},{"location":"specimen/#lists","text":"mkdocs new [dir-name] - Create a new project. mkdocs serve - Start the live-reloading docs server. mkdocs build - Build the documentation site. mkdocs help - Print this help message.","title":"Lists"},{"location":"specimen/#horizontal-description-lists","text":"Something This is something SomethingElse This is something else","title":"Horizontal Description Lists"},{"location":"specimen/#contextual-colors","text":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Integer posuere erat a ante. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer posuere erat a ante.","title":"Contextual Colors"},{"location":"specimen/#code","text":"","title":"Code"},{"location":"specimen/#inline-code","text":"This is an example of inline code #import requests","title":"Inline Code"},{"location":"specimen/#tables","text":"","title":"Tables"},{"location":"specimen/#striped-table","text":"# Head 1 Head 2 Head 3 1 Box 1 Box 2 Box 3 2 Box 4 Box 5 Box 6 3 Box 7 Box 8 Box 9","title":"Striped Table"},{"location":"specimen/#bordered-table","text":"# Head 1 Head 2 Head 3 1 Box 1 Box 2 Box 3 2 Box 4 Box 5 Box 6 3 Box 7 Box 8 Box 9","title":"Bordered Table"},{"location":"specimen/#buttons","text":"","title":"Buttons"},{"location":"specimen/#default-buttons","text":"Link Button","title":"Default Buttons"},{"location":"specimen/#styled-buttons","text":"Default Primary Success Info Warning Danger","title":"Styled Buttons"},{"location":"specimen/#button-sizes","text":"Large button Large button Default button Default button Small button Small button Extra small button Extra small button","title":"Button Sizes"},{"location":"specimen/#block-level-buttons","text":"Block level button Block level button","title":"Block Level Buttons"},{"location":"specimen/#alerts","text":"A simple primary alert\u2014check it out! A simple secondary alert\u2014check it out! A simple success alert\u2014check it out! A simple danger alert\u2014check it out! A simple warning alert\u2014check it out! A simple info alert\u2014check it out! A simple light alert\u2014check it out! A simple dark alert\u2014check it out!","title":"Alerts"},{"location":"specimen/#callouts","text":"","title":"Callouts"},{"location":"specimen/#admonitions","text":"The following admonitions are formatted like the callouts above but can be implemented in Markdown when you include the admonition Markdown extension in your mkdocs.yml file. Add the following setting to mkdocs.yml : markdown_extensions: - admonition and then follow the instructions in the extension documentation to author admonitions in your documentation. Cinder currently supports note , warning , and danger admonition types. Note Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is a note def func(arg) { # notable things are in here! return None } Warning Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is a warning def func(arg) { # be careful! return None } Danger Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa. # this is dangerous def func(arg) { # BOOM! 
return None }","title":"Admonitions"},{"location":"submit/","text":"Submit Results \u00b6","title":"Submit Results"},{"location":"submit/#submit-results","text":"","title":"Submit Results"},{"location":"auth/login/","text":"Login to submit \u00b6 {{form.username(class_='form-control', placeholder='Username')}} {{form.password(class_='form-control', placeholder='Password')}} Login If you do not have an account, please first register an account here .","title":"Login to submit"},{"location":"auth/login/#login-to-submmit","text":"{{form.username(class_='form-control', placeholder='Username')}} {{form.password(class_='form-control', placeholder='Password')}} Login If you do not have an account, please first register an account here .","title":"Login to submit"},{"location":"auth/register/","text":"Register New User \u00b6 {{form.full_name(class_='form-control')}} {{form.email(class_='form-control')}} {{form.username(class_='form-control')}} {{form.password(class_='form-control')}} {{form.password_confirmation(class_='form-control')}} Register","title":"Register New User"},{"location":"auth/register/#register-new-user","text":"{{form.full_name(class_='form-control')}} {{form.email(class_='form-control')}} {{form.username(class_='form-control')}} {{form.password(class_='form-control')}} {{form.password_confirmation(class_='form-control')}} Register","title":"Register New User"},{"location":"calibration/lcecalib/","text":"LCECalib Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution Introduction \u00b6 The combination of LiDARs, frame cameras, and event cameras has become a key factor in achieving robust perception for mobile robots. However, to jointly exploit these sensors, the challenging extrinsic calibration problem should be addressed. An automatic checkerboard-based extrinsic calibration approach is proposed. Four contributions are presented: An automatic feature extraction and checkerboard tracking method for LiDAR point clouds. Reconstruction of realistic frame images from event streams, to which traditional corner detectors can be applied. An initialization-refinement procedure to estimate extrinsics. A unified and globally optimal solution. Sensors \u00b6 Experiments \u00b6 Reconstructed images from events \u00b6 Pure Events Reconstructed Images Projecting points onto images with calibrated extrinsics \u00b6 Camera-VLP16 Calibration Camera-Ouster128 Calibration Code \u00b6 The source code can be accessed by clicking the button below LCECalib Publications \u00b6 LCECalib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution Jianhao Jiao, Feiyi Chen, Hexiang Wei, Jin Wu, Ming Liu IEEE/ASME Transactions on Mechatronics (T-MECH), 2023 [bibtex]","title":"LCECalib"},{"location":"calibration/lcecalib/#introduction","text":"The combination of LiDARs, frame cameras, and event cameras has become a key factor in achieving robust perception for mobile robots. However, to jointly exploit these sensors, the challenging extrinsic calibration problem should be addressed. An automatic checkerboard-based extrinsic calibration approach is proposed. Four contributions are presented: An automatic feature extraction and checkerboard tracking method for LiDAR point clouds. Reconstruction of realistic frame images from event streams, to which traditional corner detectors can be applied. An initialization-refinement procedure to estimate extrinsics. 
A unified and globally optimal solution.","title":"Introduction"},{"location":"calibration/lcecalib/#sensors","text":"","title":"Sensors"},{"location":"calibration/lcecalib/#experiments","text":"","title":"Experiments"},{"location":"calibration/lcecalib/#reconstructed-images-from-events","text":"Pure Events Reconstructed Images","title":"Reconstructed images from events"},{"location":"calibration/lcecalib/#projecting-points-onto-images-with-calibrated-extrinsics","text":"Camera-VLP16 Calibration Camera-Ouster128 Calibration","title":"Projecting points onto images with calibrated extrinsics"},{"location":"calibration/lcecalib/#code","text":"The source code can be accessed by clicking the button below LCECalib","title":"Code"},{"location":"calibration/lcecalib/#publications","text":"LCECalib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution Jianhao Jiao, Feiyi Chen, Hexiang Wei, Jin Wu, Ming Liu IEEE/ASME Transactions on Mechatronics (T-MECH), 2023 [bibtex]","title":"Publications"},{"location":"calibration/pbacalib/","text":"PBACalib Targetless Extrinsic Calibration for High-Resolution LiDAR-Camera System Based on Plane-Constrained Bundle Adjustment Introduction \u00b6 In the autonomous driving industry, the resolution of the LiDARs mounted on vehicles is growing rapidly with reduced cost and the release of new solid-state LiDARs (e.g., Livox). However, most existing works focus on mechanical LiDARs (e.g., Velodyne) and rely on prepared artificial targets, such as checkerboards, circles, and spheres, which are sometimes unavailable. Besides, it is challenging to apply some older methods to dense LiDARs because of their different data structures. For instance, the large number of bleeding points around depth-discontinuous edges in dense LiDAR degrades the performance of some edge extraction algorithms. Moreover, zero-valued and multi-valued mapping problems also make mutual-information-based methods unstable. On the other hand, the mounting position and orientation of the sensors depend on the actual needs, which causes some calibration methods to fail. Some other targetless methods resort to the depth-continuous edges of the environment; in this case, LiDARs need to be mounted upwards to observe enough building edges for the calibration, which is not practical in some cases. Considering the above challenges, we propose PBACalib, which captures several pairs of images and point clouds around a plane with arbitrary texture to calibrate the extrinsics between a dense LiDAR (Livox) and a camera . Our contributions are summarized as follows: A novel targetless extrinsic calibration method for high-resolution LiDAR and camera based on plane-constrained bundle adjustment. It only needs a common textured ground, wall, or other plane to accomplish the calibration. Validity analysis on the collected dataset. We theoretically analyze the distribution of the collected data and introduce a confidence factor to determine whether the input data are sufficient for calibration. Specific requirements are listed to guide users to stabilize the calibration result, which are: 1) we need at least four poses; 2) the target planes do not intersect at the same point; 3) at least three normal vectors are non-coplanar. Evaluated with various simulation, real-world, and comparison experiments, which reveal that the proposed method is accurate and robust. 
To benefit the community, we publicly release the source code on Github Sensors \u00b6 Experiments \u00b6 The calibration results compared with other methods \u00b6 The qualitative comparison using Yuan\u2019s approach and our method. The projected points are colorized by points\u2019 intensity value Projecting points onto images with calibrated extrinsics \u00b6 Four valid calibration scenes and the qualitative results in the real world. The projected points in the top two figures (a)(b) are colorized by points\u2019 intensity value. In the bottom two figures (c)(d), different colors represent different planes, which are iteratively extracted by plane RANSAC Code \u00b6 The source code can be accessed by clicking the button below PBACalib Publications \u00b6 PBACalib: Targetless Extrinsic Calibration for High-Resolution LiDAR-Camera System Based on Plane-Constrained Bundle Adjustment Feiyi Chen, Liang Li, Shuyang Zhang, Jin Wu and Lujia Wang IEEE Robotics and Automation Letters, 2022 [paper] [supplementary] [bibtex]","title":"PBACalib"},{"location":"calibration/pbacalib/#introduction","text":"In the autonomous driving industry, the resolution of the LiDARs mounted on vehicles is growing rapidly with reduced cost and the release of new solid-state LiDARs (e.g., Livox). However, most existing works focus on mechanical LiDARs (e.g., Velodyne) and rely on prepared artificial targets, such as checkerboards, circles, and spheres, which are sometimes unavailable. Besides, it is challenging to apply some older methods to dense LiDARs because of their different data structures. For instance, the large number of bleeding points around depth-discontinuous edges in dense LiDAR degrades the performance of some edge extraction algorithms. Moreover, zero-valued and multi-valued mapping problems also make mutual-information-based methods unstable. On the other hand, the mounting position and orientation of the sensors depend on the actual needs, which causes some calibration methods to fail. Some other targetless methods resort to the depth-continuous edges of the environment; in this case, LiDARs need to be mounted upwards to observe enough building edges for the calibration, which is not practical in some cases. Considering the above challenges, we propose PBACalib, which captures several pairs of images and point clouds around a plane with arbitrary texture to calibrate the extrinsics between a dense LiDAR (Livox) and a camera . Our contributions are summarized as follows: A novel targetless extrinsic calibration method for high-resolution LiDAR and camera based on plane-constrained bundle adjustment. It only needs a common textured ground, wall, or other plane to accomplish the calibration. Validity analysis on the collected dataset. We theoretically analyze the distribution of the collected data and introduce a confidence factor to determine whether the input data are sufficient for calibration. Specific requirements are listed to guide users to stabilize the calibration result, which are: 1) we need at least four poses; 2) the target planes do not intersect at the same point; 3) at least three normal vectors are non-coplanar. Evaluated with various simulation, real-world, and comparison experiments, which reveal that the proposed method is accurate and robust. 
To benefit the community, we publicly release the source code on Github","title":"Introduction"},{"location":"calibration/pbacalib/#sensors","text":"","title":"Sensors"},{"location":"calibration/pbacalib/#experiments","text":"","title":"Experiments"},{"location":"calibration/pbacalib/#the-calibration-results-compared-with-other-method","text":"The qualitative comparison using Yuan\u2019s approach and our method. The projected points are colorized by points\u2019 intensity value","title":"The calibration results compared with other methods"},{"location":"calibration/pbacalib/#projecting-points-onto-images-with-calibrated-extrinsics","text":"Four valid calibration scenes and the qualitative results in the real world. The projected points in the top two figures (a)(b) are colorized by points\u2019 intensity value. In the bottom two figures (c)(d), different colors represent different planes, which are iteratively extracted by plane RANSAC","title":"Projecting points onto images with calibrated extrinsics"},{"location":"calibration/pbacalib/#code","text":"The source code can be accessed by clicking the button below PBACalib","title":"Code"},{"location":"calibration/pbacalib/#publications","text":"PBACalib: Targetless Extrinsic Calibration for High-Resolution LiDAR-Camera System Based on Plane-Constrained Bundle Adjustment Feiyi Chen, Liang Li, Shuyang Zhang, Jin Wu and Lujia Wang IEEE Robotics and Automation Letters, 2022 [paper] [supplementary] [bibtex]","title":"Publications"},{"location":"challenge/prcv2022_vslam/","text":"PRCV2022 The FusionPortable-VSLAM Challenge \u23ec Dataset | \ud83e\udea7 Challenge | \ud83c\udfeb RAM-LAB | \ud83e\uddf1 VisDrone | \ud83d\udce7 Email | \ud83d\udcdd Docs | \ud83d\udcc3 Paper Introduction \u00b6 This visual SLAM benchmark is based on the FusionPortable dataset, which covers a variety of environments on The Hong Kong University of Science and Technology campus by utilizing multiple platforms for data collection. It provides a wide range of difficult scenarios for Simultaneous Localization and Mapping (SLAM). All these sequences are characterized by structure-less areas and varying illumination conditions to best represent real-world scenarios and pose great challenges to SLAM algorithms that were verified in confined lab environments. Sensor Characteristics 3D LiDAR ( not provided ) Ouster OS1-128, 128 channels, 120m range Frame Camera * 2 FLIR BFS-U3-31S4C, resolution: 1024 \u00d7 768 Event Camera * 2 DAVIS346, resolution: 346 \u00d7 240, 2 built-in IMUs IMU (body_imu) STIM300 GPS ZED-F9P RTK-GPS Ground Truth Leica BLK360 Imaging Laser Scanner Latest News \u00b6 [08.10]: the evaluation codes released! [08.09]: the ground truth of 20220216_garden_day released! [08.07]: calibration dataset released. [08.01]: challenge data sequences released. Evaluation \u00b6 Evaluation Method \u00b6 We provide the tools for the trajectory evaluation here . Submissions will be ranked based on the completeness and frequency of the trajectory as well as on the position accuracy (ATE) . The score is based on the ATE of individual points on the trajectory. Points with an error smaller than a distance threshold are added to your final score. This evaluation scheme is inspired by the HILTI Challenge . Output trajectories should be transformed into the body_imu frame. We will align the trajectory with the dense ground truth points using a rigid transformation. Then the Absolute Trajectory Error (ATE) of a set of discrete points is computed. 
At each ground truth point, extra penalty points are added to the final score depending on the amount of error at this point: Error Score (points) <= 5cm 10 <= 30cm 6 <= 50cm 3 <= 100cm 1 > 100cm 0 Each sequence will be evaluated over a maximum of 200 points, which leads to a maximum of $N\\times 200$ points being evaluated among $N$ sequences. Given an example: Leaderboard \u00b6 Sign up for an account and submit your results in the evaluation system, the live leaderboard will update your ranking. Submission Guidelines \u00b6 Trajectory Results Please upload a .zip file consisting of a list of text files named as the sequence name shown as follows: traj/20220215_canteen_night.txt traj/20220215_garden_night.txt traj/20220219_MCR_slow_00.txt traj/20220226_campus_road_day.txt .... These text files should put in a folder of \"traj\" , and then compress as a .zip file, such as \" traj.zip *\" The text files should have the following contents(TUM format): 1644928761.036623716 0.0 0.0 0.0 0.0 0.0 0.0 1.0 .... Each row contains timestamp_s tx ty tz qx qy qz qw . The timestamps are in the unit of second which are used to establish temporal correspondences with the groundtruth. The first pose should be no later than the starting time specified above, and only poses after the starting time will be used for evaluation. The poses should specify the poses of the body IMU in the world frame. If the estimated poses are in the frame of other sensors, one should transform these poses into the world frame of the body IMU as T_bodyw_body = T_body_sensor * T_sensorw_sensor * T_body_sensor^(-1); . Do not publicly release your trajectory estimates, as we might re-use some of the datasets for future competitions. A team can only register one account. Quota can only be obtained by joining the WeChat group . In order to prevent the problem of a team registering multiple accounts, this competition requires all members of the participating team to join the WeChat group . And the old account cannot be used, you need to re-register a new account . Download \u00b6 All data download addresses can be found in this directory \uff1a \ud83d\udcc1 We provide the compressed rosbag data, remember to execute the following command to decompress them. 
rosbag decompress 20220216_garden_day.bag Calibration Files \u00b6 Yaml Files Describtion Link body_imu extrinsics and intrinsics of the STIM300 body_imu.yaml event_cam00 extrinsics and intrinsics of the left event camera event_cam00.yaml event_cam00_imu extrinsics and intrinsics of the left event camera imu event_cam00_imu.yaml event_cam01 extrinsics and intrinsics of the right event camera event_cam01.yaml event_cam01_imu extrinsics and intrinsics of the right event camera imu event_cam01_imu.yaml frame_cam00 extrinsics and intrinsics of the left flir camera frame_cam00.yaml frame_cam01 extrinsics and intrinsics of the right flir camera frame_cam01.yaml Test Sequences \u00b6 Platform Sequence Compressed Bag Ground Truth Handheld 20220216_garden_day 20.4GB 20220216_garden_day.txt Calibration Sequences \u00b6 Platform Sequence Compressed Bag Handheld 20220209_StaticTarget_SmallCheckerBoard_9X12_30mm 6.7GB Handheld 20220215_DynamicTarget_BigCheckerBoard_7X10_68mm 2.3GB Handheld 20220209_Static_IMUs_3h20mins 894MB Challenge Sequences \u00b6 Platform Sequence Compressed Bag Handheld 20220216_canteen_night 15.9GB 20220216_canteen_day 17.0GB 20220215_garden_night 8.5GB 20220216_garden_day 20.4GB 20220216_corridor_day 27.4GB 20220216_escalator_day 31.7GB 20220225_building_day 37.5GB 20220216_MCR_slow 3.5GB 20220216_MCR_normal 2.2GB 20220216_MCR_fast 1.7GB Quadruped Robot 20220219_MCR_slow_00 9.7GB 20220219_MCR_slow_01 8.4GB 20220219_MCR_normal_00 7.1GB 20220219_MCR_normal_01 6.5GB 20220219_MCR_fast_00 7.6GB 20220219_MCR_fast_01 8.5GB Apollo Vehicle 20220226_campus_road 72.3GB FAQ \u00b6 How are the frames defined on the sensor setup? The picture below is a schematic illustration of the reference frames (red = x, green = y, blue = z): Is the ground truth available? We will provide some sample datasets along with their ground truth collected with the same sensor kit, but the ground truth for the challenge sequences is not available. However, you can submit your own results in the website evaluation system for evaluation. The ground truth for all challenge sequences will finally be announced at the PRCV WORKSHOP in October. Star History \u00b6 Publication \u00b6 When using this work in an academic context, please cite the following paper: FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms Jianhao Jiao*, Hexiang Wei*, Tianshuai Hu*, Xiangcheng Hu*, Yilong Zhu, Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia Wang, Ming Liu Presented at IROS 2022 [paper] [bibtex] Acknowledgement \u00b6 This challenge was supported by the Wireless Technology . We would like to thank the AISKYEYE Team at Lab of Machine Learning and Data Mining of Tianjin University, for hosting our challenge at the PRCV2022 workshop. Futher, this challenge would not have been possible without the assistance of Prof.Ming Liu, Prof.Lujia Wang, Prof.Pengfei Zhu, Prof.Dingwen Zhang, Dr.Zhijian He and Dr.Jianhao Jiao for the great support in organizing the challenge, verifying the data and providing the HILTI Challenge 2022 as template for this challenge. We would also like to thank Prof.Jack Chin Pang CHENG and his team for the support of dense mapping device. 
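For reference, the per-point penalty table from the Evaluation Method section above can be expressed as a small scoring helper. This is only an illustrative sketch of the published thresholds; how evaluation points are selected per sequence and how sequence scores are normalized is determined by the official evaluation system.

```python
def point_score(error_m):
    """Score one evaluation point from its ATE, using the published thresholds (in meters)."""
    if error_m <= 0.05:
        return 10
    if error_m <= 0.30:
        return 6
    if error_m <= 0.50:
        return 3
    if error_m <= 1.00:
        return 1
    return 0

def sequence_score(errors_m, max_points=200):
    """Accumulate scores over at most 200 evaluation points of one sequence (illustrative)."""
    return sum(point_score(e) for e in errors_m[:max_points])
```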
License \u00b6 All datasets and benchmarks on this page are copyright by us and published under the Creative Commons license (CC BY-NC-SA 3.0) , which is free for non-commercial use (including research).","title":"Prcv2022 vslam"},{"location":"challenge/prcv2022_vslam/#introduction","text":"This visual SLAM benchmark is based on the FusionPortable dataset, which covers a variety of environments in The Hong Kong University of Science and Technology campus by utilizing multiple platforms for data collection. It provides a large range of difficult scenarios for Simultaneous Localization and Mapping (SLAM). All these sequences are characterized by structure-less areas and varying illumination conditions to best represent the real-world scenarios and pose great challenges to the SLAM algorithms which were verified in confined lab environments. Sensor Characteristics 3D LiDAR ( not provided ) Ouster OS1-128, 128 channels, 120m range Frame Camera * 2 FILR BFS-U3-31S4C\uff0c resolution: 1024 \u00d7 768 Event Camera * 2 DAVIS346, resolution: 346 \u00d7 240\uff0c2 built-in imu IMU (body_imu) STIM300 GPS ZED-F9P RTK-GPS Ground Truth Leica BLK360 Imaging Laser Scanner","title":"Introduction"},{"location":"challenge/prcv2022_vslam/#latest-news","text":"[08.10]: the evaluation codes released! [08.09]: the ground thruth of 20220216_garden_day released! [08.07]: calibration dataset released. [08.01]: challenge data sequences released.","title":"Latest News"},{"location":"challenge/prcv2022_vslam/#evaluation","text":"","title":"Evaluation"},{"location":"challenge/prcv2022_vslam/#evaluation-method","text":"We provide the tools for the trajectory evaluation here . The submission will be ranked based on the completeness and frequency of the trajectory as well as on the position accuracy (ATE) . The score is based on the ATE of individual points on the trajectory. Points with the error smaller than a distance threshold are added to your final score. This evaluation scheme is inspired by HILTI Challenge . Output trajectories should be transformed into the body_imu frame, We will align the trajectory with the dense ground truth points using a rigid transformation. Then the Absolute Trajectory Error (ATE) of a set of discrete point is computed. At each ground truth point, extra penalty points are added to the final score depending on the amount of error at this point: Error Score (points) <= 5cm 10 <= 30cm 6 <= 50cm 3 <= 100cm 1 > 100cm 0 Each sequence will be evaluated over a maximum of 200 points, which leads to a maximum of $N\\times 200$ points being evaluated among $N$ sequences. Given an example:","title":"Evaluation Method"},{"location":"challenge/prcv2022_vslam/#leaderboard","text":"Sign up for an account and submit your results in the evaluation system, the live leaderboard will update your ranking.","title":"Leaderboard"},{"location":"challenge/prcv2022_vslam/#submission-guidelines","text":"Trajectory Results Please upload a .zip file consisting of a list of text files named as the sequence name shown as follows: traj/20220215_canteen_night.txt traj/20220215_garden_night.txt traj/20220219_MCR_slow_00.txt traj/20220226_campus_road_day.txt .... These text files should put in a folder of \"traj\" , and then compress as a .zip file, such as \" traj.zip *\" The text files should have the following contents(TUM format): 1644928761.036623716 0.0 0.0 0.0 0.0 0.0 0.0 1.0 .... Each row contains timestamp_s tx ty tz qx qy qz qw . 
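A minimal sketch of writing a result file in the required TUM format (timestamp_s tx ty tz qx qy qz qw), one text file per sequence inside the traj/ folder. The helper name and the example pose are illustrative; only the line format and folder layout follow the guidelines above.

```python
import os

def write_tum_trajectory(path, stamped_poses):
    """Write poses as 'timestamp_s tx ty tz qx qy qz qw' lines (TUM format).

    stamped_poses: iterable of (t, tx, ty, tz, qx, qy, qz, qw) tuples, with t in seconds.
    """
    directory = os.path.dirname(path)
    if directory:
        os.makedirs(directory, exist_ok=True)
    with open(path, "w") as f:
        for t, tx, ty, tz, qx, qy, qz, qw in stamped_poses:
            f.write(f"{t:.9f} {tx:.6f} {ty:.6f} {tz:.6f} {qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}\n")

# Example: one pose written to a per-sequence file under the required "traj/" folder.
write_tum_trajectory("traj/20220215_canteen_night.txt",
                     [(1644928761.036623716, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)])
```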
The timestamps are in the unit of second which are used to establish temporal correspondences with the groundtruth. The first pose should be no later than the starting time specified above, and only poses after the starting time will be used for evaluation. The poses should specify the poses of the body IMU in the world frame. If the estimated poses are in the frame of other sensors, one should transform these poses into the world frame of the body IMU as T_bodyw_body = T_body_sensor * T_sensorw_sensor * T_body_sensor^(-1); . Do not publicly release your trajectory estimates, as we might re-use some of the datasets for future competitions. A team can only register one account. Quota can only be obtained by joining the WeChat group . In order to prevent the problem of a team registering multiple accounts, this competition requires all members of the participating team to join the WeChat group . And the old account cannot be used, you need to re-register a new account .","title":"Submission Guidelines"},{"location":"challenge/prcv2022_vslam/#download","text":"All data download addresses can be found in this directory \uff1a \ud83d\udcc1 We provide the compressed rosbag data, remember to execute the following command to decompress them. rosbag decompress 20220216_garden_day.bag","title":"Download"},{"location":"challenge/prcv2022_vslam/#calibration-files","text":"Yaml Files Describtion Link body_imu extrinsics and intrinsics of the STIM300 body_imu.yaml event_cam00 extrinsics and intrinsics of the left event camera event_cam00.yaml event_cam00_imu extrinsics and intrinsics of the left event camera imu event_cam00_imu.yaml event_cam01 extrinsics and intrinsics of the right event camera event_cam01.yaml event_cam01_imu extrinsics and intrinsics of the right event camera imu event_cam01_imu.yaml frame_cam00 extrinsics and intrinsics of the left flir camera frame_cam00.yaml frame_cam01 extrinsics and intrinsics of the right flir camera frame_cam01.yaml","title":"Calibration Files"},{"location":"challenge/prcv2022_vslam/#test-sequences","text":"Platform Sequence Compressed Bag Ground Truth Handheld 20220216_garden_day 20.4GB 20220216_garden_day.txt","title":"Test Sequences"},{"location":"challenge/prcv2022_vslam/#calibration-sequences","text":"Platform Sequence Compressed Bag Handheld 20220209_StaticTarget_SmallCheckerBoard_9X12_30mm 6.7GB Handheld 20220215_DynamicTarget_BigCheckerBoard_7X10_68mm 2.3GB Handheld 20220209_Static_IMUs_3h20mins 894MB","title":"Calibration Sequences"},{"location":"challenge/prcv2022_vslam/#challenge-sequences","text":"Platform Sequence Compressed Bag Handheld 20220216_canteen_night 15.9GB 20220216_canteen_day 17.0GB 20220215_garden_night 8.5GB 20220216_garden_day 20.4GB 20220216_corridor_day 27.4GB 20220216_escalator_day 31.7GB 20220225_building_day 37.5GB 20220216_MCR_slow 3.5GB 20220216_MCR_normal 2.2GB 20220216_MCR_fast 1.7GB Quadruped Robot 20220219_MCR_slow_00 9.7GB 20220219_MCR_slow_01 8.4GB 20220219_MCR_normal_00 7.1GB 20220219_MCR_normal_01 6.5GB 20220219_MCR_fast_00 7.6GB 20220219_MCR_fast_01 8.5GB Apollo Vehicle 20220226_campus_road 72.3GB","title":"Challenge Sequences"},{"location":"challenge/prcv2022_vslam/#faq","text":"How are the frames defined on the sensor setup? The picture below is a schematic illustration of the reference frames (red = x, green = y, blue = z): Is the ground truth available? 
We will provide some sample datasets along with their ground truth collected with the same sensor kit, but the ground truth for the challenge sequences is not available. However, you can submit your own results in the website evaluation system for evaluation. The ground truth for all challenge sequences will finally be announced at the PRCV WORKSHOP in October.","title":"FAQ"},{"location":"challenge/prcv2022_vslam/#star-history","text":"","title":"Star History"},{"location":"challenge/prcv2022_vslam/#publication","text":"When using this work in an academic context, please cite the following paper: FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms Jianhao Jiao*, Hexiang Wei*, Tianshuai Hu*, Xiangcheng Hu*, Yilong Zhu, Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia Wang, Ming Liu Presented at IROS 2022 [paper] [bibtex]","title":"Publication"},{"location":"challenge/prcv2022_vslam/#acknowledgement","text":"This challenge was supported by the Wireless Technology . We would like to thank the AISKYEYE Team at Lab of Machine Learning and Data Mining of Tianjin University, for hosting our challenge at the PRCV2022 workshop. Futher, this challenge would not have been possible without the assistance of Prof.Ming Liu, Prof.Lujia Wang, Prof.Pengfei Zhu, Prof.Dingwen Zhang, Dr.Zhijian He and Dr.Jianhao Jiao for the great support in organizing the challenge, verifying the data and providing the HILTI Challenge 2022 as template for this challenge. We would also like to thank Prof.Jack Chin Pang CHENG and his team for the support of dense mapping device.","title":"Acknowledgement"},{"location":"challenge/prcv2022_vslam/#license","text":"All datasets and benchmarks on this page are copyright by us and published under the Creative Commons license (CC BY-NC-SA 3.0) , which is free for non-commercial use (including research).","title":"License"},{"location":"challenge/prcv2022_vslam_evaluation/","text":"Overview \u00b6 This visual SLAM benchmark is based on the FusionPortable dataset, which covers a variety of environments in The Hong Kong University of Science and Technology campus by utilizing multiple platforms for data collection. It provides a large range of difficult scenarios for Simultaneous Localization and Mapping (SLAM). All these sequences are characterized by structure-less areas and varying illumination conditions to best represent the real-world scenarios and pose great challenges to the SLAM algorithms which were verified in confined lab environments. Accurate centimeter-level ground truth of each sequence is provided for algorithm verification. Sensor data contained in the dataset includes 10Hz LiDAR point clouds, 20Hz stereo frame images, high-rate and asynchronous events from stereo event cameras, 200Hz acceleration and angular velocity readings from an IMU, and 10Hz GPS signals in the outdoor environments. Sensors are spatially and temporally calibrated. For more information, we can visit the following websits: Github Repo for FusionPortable-VSLAM Challenge homepage of FusionPortable Dataset homepage of FusionPortable-VSLAM Challenge homepage of PRCV Aerial-Ground Intelligent Unmanned System Environment Perception Challenge Introduction of PRCV challenge on Wexin Official Accounts Platform # Hardware The sensors are mounted rigidly on an aluminium platform for handheld operation. An FPGA is utilized to generate an external signal trigger to synchronize clocks of all sensors. 
We install the sensor rig on various platforms to simulate distinguishable motions of different equipments, including a handheld device with a gimbal stabilizer, a quadruped robot, and an autonomous vehicle. Sensor Characteristics 3D LiDAR ( not provided ) Ouster OS1-128, 128 channels, 120m range Frame Camera * 2 FILR BFS-U3-31S4C\uff0c resolution: 1024 \u00d7 768 Event Camera * 2 DAVIS346, resolution: 346 \u00d7 240\uff0c2 built-in imu IMU (body_imu) STIM300 GPS ZED-F9P RTK-GPS Ground Truth Leica BLK 360 Calibration: The calibration file in yaml format can be downloaded here . We provide intrinsics & extrinsics of cameras as well as noise parameters of the IMU and also the raw calibration data. Intriniscs are calibrated using the MATLAB tool, and the extrinsics are calibrated using the Kalibr . Taking the frame_cam00.yaml as an example, parameters are provided in the form as follows: yaml image_width: 1024 image_height: 768 camera_name: stereo_left_flir_bfsu3 camera_matrix: !!opencv-matrix rows: 3 cols: 3 dt: f data: [ 6.05128601e+02, 0., 5.21453430e+02, 0., 6.04974060e+02, 3.94878479e+02, 0., 0., 1. ] ... # extrinsics from the sensor (reference) to bodyimu (target) quaternion_sensor_bodyimu: !!opencv-matrix rows: 1 cols: 4 dt: f data: [0.501677, 0.491365, -0.508060, 0.498754] # (qw, qx, qy, qz) translation_sensor_bodyimu: !!opencv-matrix rows: 1 cols: 3 dt: f data: [0.066447, -0.019381, -0.077907] timeshift_sensor_bodyimu: 0.03497752745342453 Rotational and translational calibration parameters from the camera (reference frame) to the IMU (target frame) are presented in the form of the Hamilton quaternion ( [qw, qx, qy, qz] ) and the translation vector ( [tx, ty, tz] ). The timeshift is obtained by the Kalibr. Evaluation \u00b6 The submission will be ranked based on the completeness and frequency of the trajectory as well as on the position accuracy (ATE) . The score is based on the ATE of individual points on the trajectory. Points with the error smaller than a distance threshold are added to your final score. This evaluation scheme is inspired by HILTI Challenge . Output trajectories should be transformed into the body_imu frame, We will align the trajectory with the dense ground truth points using a rigid transformation. Then the Absolute Trajectory Error (ATE) of a set of discrete point is computed. At each ground truth point, extra penalty points are added to the final score depending on the amount of error at this point: Error Score (points) <= 5cm 10 <= 30cm 6 <= 50cm 3 <= 100cm 1 > 100cm 0 Each sequence will be evaluated over a maximum of 200 points, which leads to a maximum of $N\\times 200$ points being evaluated among $N$ sequences. Given an example: Submission Guidelines \u00b6 Trajectory Results Please upload a .zip file consisting of a list of text files named as the sequence name shown as follows: 20220215_canteen_night.txt 20220215_garden_night.txt 20220219_MCR_slow_00.txt 20220226_campus_road_day.txt .... The text files should have the following contents: 1644928761.036623716 0.0 0.0 0.0 0.0 0.0 0.0 1.0 .... Each row contains timestamp_s tx ty tz qx qy qz qw . The timestamps are in the unit of second which are used to establish temporal correspondences with the groundtruth. The first pose should be no later than the starting time specified above, and only poses after the starting time will be used for evaluation. The poses should specify the poses of the body IMU in the world frame. 
If the estimated poses are in the frame of other sensors, one should transform these poses into the world frame of the body IMU as T_bodyw_body = T_body_sensor * T_sensorw_sensor * T_body_sensor^(-1); . Do not publicly release your trajectory estimates, as we might re-use some of the datasets for future competitions. Download \u00b6 We provide the compressed rosbag data, remember to execute the following command to decompress them. # example: 20220216_garden_day_ref_compressed rosbag decompress 20220216_garden_day.bag Calibration files \u00b6 Yaml Files Describtion Link body_imu extrinsics and intrinsics of STIM300 body_imu.yaml event_cam00 extrinsics and intrinsics of the left event camera event_cam00.yaml event_cam00_imu extrinsics and intrinsics of the left event camera imu event_cam00_imu.yaml event_cam01 extrinsics and intrinsics of the right event camera event_cam01.yaml event_cam01_imu extrinsics and intrinsics of the right event camera imu event_cam01_imu.yaml frame_cam00 extrinsics and intrinsics of the left flir camera frame_cam00.yaml frame_cam01 extrinsics and intrinsics of the right flir camera frame_cam01.yaml Test Sequences \u00b6 Platform Sequence Compressed Bag Handheld 20220216_garden_day 20.4GB Calibration Sequences \u00b6 Platform Sequence Compressed Bag Handheld comming soon!!!! Challenge Sequences \u00b6 Platform Sequence Compressed Bag Handheld 20220216_canteen_night 15.9GB 20220216_canteen_day 17.0GB 20220215_garden_night 8.5GB 20220216_garden_day 20.4GB 20220216_corridor_day 27.4GB 20220216_escalator_day 31.7GB 20220225_building_day 37.5GB 20220216_MCR_slow 3.5GB 20220216_MCR_normal 2.2GB 20220216_MCR_fast 1.7GB Quadruped Robot 20220219_MCR_slow_00 9.7GB 20220219_MCR_slow_01 8.4GB 20220219_MCR_normal_00 7.1GB 20220219_MCR_normal_01 6.5GB 20220219_MCR_fast_00 7.6GB 20220219_MCR_fast_01 8.5GB Apollo Vehicle 20220226_campus_road 72.3GB Detailed statistics are shown: Download link can be found here FAQ \u00b6 How are the frames defined on the sensor setup? The picture below is a schematic illustration of the reference frames (red = x, green = y, blue = z): How are the results scored? The results submitted by each team will be scored based on the completeness and ATE accuracy of the trajectories. All the results will be displayed in the live leaderboard. Each trajectory will be scored based on the standard evaluation points, the accumulation of the scores of all these evaluation points is normalized to 100 points to get the final score of the sequence. Each evaluation point can get 0-10 points according to its accuracy. Will the organizer provide the calibration datasets of the IMU and camera? Of course, we will provide the calibration data of IMU and cameras. Is the ground truth available? We will provide some sample datasets along with their ground truth collected with the same sensor kit, but the ground truth for the challenge sequences is not available. However, you can submit your own results in the website evaluation system for evaluation. Reference \u00b6 [1] Jianhao Jiao, Hexiang Wei, Tianshuai Hu, Xiangcheng Hu, etc., Lujia Wang, Ming Liu, FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, Kyoto, Japan. 
[2] HILTI Challenge .","title":"Prcv2022 vslam evaluation"},{"location":"challenge/prcv2022_vslam_evaluation/#overview","text":"This visual SLAM benchmark is based on the FusionPortable dataset, which covers a variety of environments in The Hong Kong University of Science and Technology campus by utilizing multiple platforms for data collection. It provides a large range of difficult scenarios for Simultaneous Localization and Mapping (SLAM). All these sequences are characterized by structure-less areas and varying illumination conditions to best represent the real-world scenarios and pose great challenges to the SLAM algorithms which were verified in confined lab environments. Accurate centimeter-level ground truth of each sequence is provided for algorithm verification. Sensor data contained in the dataset includes 10Hz LiDAR point clouds, 20Hz stereo frame images, high-rate and asynchronous events from stereo event cameras, 200Hz acceleration and angular velocity readings from an IMU, and 10Hz GPS signals in the outdoor environments. Sensors are spatially and temporally calibrated. For more information, we can visit the following websits: Github Repo for FusionPortable-VSLAM Challenge homepage of FusionPortable Dataset homepage of FusionPortable-VSLAM Challenge homepage of PRCV Aerial-Ground Intelligent Unmanned System Environment Perception Challenge Introduction of PRCV challenge on Wexin Official Accounts Platform # Hardware The sensors are mounted rigidly on an aluminium platform for handheld operation. An FPGA is utilized to generate an external signal trigger to synchronize clocks of all sensors. We install the sensor rig on various platforms to simulate distinguishable motions of different equipments, including a handheld device with a gimbal stabilizer, a quadruped robot, and an autonomous vehicle. Sensor Characteristics 3D LiDAR ( not provided ) Ouster OS1-128, 128 channels, 120m range Frame Camera * 2 FILR BFS-U3-31S4C\uff0c resolution: 1024 \u00d7 768 Event Camera * 2 DAVIS346, resolution: 346 \u00d7 240\uff0c2 built-in imu IMU (body_imu) STIM300 GPS ZED-F9P RTK-GPS Ground Truth Leica BLK 360 Calibration: The calibration file in yaml format can be downloaded here . We provide intrinsics & extrinsics of cameras as well as noise parameters of the IMU and also the raw calibration data. Intriniscs are calibrated using the MATLAB tool, and the extrinsics are calibrated using the Kalibr . Taking the frame_cam00.yaml as an example, parameters are provided in the form as follows: yaml image_width: 1024 image_height: 768 camera_name: stereo_left_flir_bfsu3 camera_matrix: !!opencv-matrix rows: 3 cols: 3 dt: f data: [ 6.05128601e+02, 0., 5.21453430e+02, 0., 6.04974060e+02, 3.94878479e+02, 0., 0., 1. ] ... # extrinsics from the sensor (reference) to bodyimu (target) quaternion_sensor_bodyimu: !!opencv-matrix rows: 1 cols: 4 dt: f data: [0.501677, 0.491365, -0.508060, 0.498754] # (qw, qx, qy, qz) translation_sensor_bodyimu: !!opencv-matrix rows: 1 cols: 3 dt: f data: [0.066447, -0.019381, -0.077907] timeshift_sensor_bodyimu: 0.03497752745342453 Rotational and translational calibration parameters from the camera (reference frame) to the IMU (target frame) are presented in the form of the Hamilton quaternion ( [qw, qx, qy, qz] ) and the translation vector ( [tx, ty, tz] ). 
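A minimal sketch of turning the quaternion and translation shown in the frame_cam00.yaml example above into a 4x4 homogeneous transform, assuming SciPy is available. Note that the yaml stores the quaternion in Hamilton (qw, qx, qy, qz) order while scipy.spatial.transform.Rotation.from_quat expects (qx, qy, qz, qw); the variable names are illustrative, and the direction in which the transform is applied should be checked against the dataset's stated convention (sensor = reference, body_imu = target).

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Values copied from the frame_cam00.yaml example above (sensor <-> body_imu extrinsics).
q_wxyz = [0.501677, 0.491365, -0.508060, 0.498754]   # (qw, qx, qy, qz), Hamilton convention
t_xyz = [0.066447, -0.019381, -0.077907]             # (tx, ty, tz), meters

# SciPy expects scalar-last quaternions, so reorder (qw, qx, qy, qz) -> (qx, qy, qz, qw).
R = Rotation.from_quat([q_wxyz[1], q_wxyz[2], q_wxyz[3], q_wxyz[0]]).as_matrix()

T = np.eye(4)          # 4x4 homogeneous extrinsic transform built from the yaml values
T[:3, :3] = R
T[:3, 3] = t_xyz

# Homogeneous points can now be mapped with T (or its inverse, depending on the
# direction your pipeline needs): p_out = T @ p_in.
p_in = np.array([0.0, 0.0, 1.0, 1.0])
p_out = T @ p_in
```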
The timeshift is obtained by the Kalibr.","title":"Overview"},{"location":"challenge/prcv2022_vslam_evaluation/#evaluation","text":"The submission will be ranked based on the completeness and frequency of the trajectory as well as on the position accuracy (ATE) . The score is based on the ATE of individual points on the trajectory. Points with the error smaller than a distance threshold are added to your final score. This evaluation scheme is inspired by HILTI Challenge . Output trajectories should be transformed into the body_imu frame, We will align the trajectory with the dense ground truth points using a rigid transformation. Then the Absolute Trajectory Error (ATE) of a set of discrete point is computed. At each ground truth point, extra penalty points are added to the final score depending on the amount of error at this point: Error Score (points) <= 5cm 10 <= 30cm 6 <= 50cm 3 <= 100cm 1 > 100cm 0 Each sequence will be evaluated over a maximum of 200 points, which leads to a maximum of $N\\times 200$ points being evaluated among $N$ sequences. Given an example:","title":"Evaluation"},{"location":"challenge/prcv2022_vslam_evaluation/#submission-guidelines","text":"Trajectory Results Please upload a .zip file consisting of a list of text files named as the sequence name shown as follows: 20220215_canteen_night.txt 20220215_garden_night.txt 20220219_MCR_slow_00.txt 20220226_campus_road_day.txt .... The text files should have the following contents: 1644928761.036623716 0.0 0.0 0.0 0.0 0.0 0.0 1.0 .... Each row contains timestamp_s tx ty tz qx qy qz qw . The timestamps are in the unit of second which are used to establish temporal correspondences with the groundtruth. The first pose should be no later than the starting time specified above, and only poses after the starting time will be used for evaluation. The poses should specify the poses of the body IMU in the world frame. If the estimated poses are in the frame of other sensors, one should transform these poses into the world frame of the body IMU as T_bodyw_body = T_body_sensor * T_sensorw_sensor * T_body_sensor^(-1); . Do not publicly release your trajectory estimates, as we might re-use some of the datasets for future competitions.","title":"Submission Guidelines"},{"location":"challenge/prcv2022_vslam_evaluation/#download","text":"We provide the compressed rosbag data, remember to execute the following command to decompress them. 
# example: 20220216_garden_day_ref_compressed rosbag decompress 20220216_garden_day.bag","title":"Download"},{"location":"challenge/prcv2022_vslam_evaluation/#calibration-files","text":"Yaml Files Describtion Link body_imu extrinsics and intrinsics of STIM300 body_imu.yaml event_cam00 extrinsics and intrinsics of the left event camera event_cam00.yaml event_cam00_imu extrinsics and intrinsics of the left event camera imu event_cam00_imu.yaml event_cam01 extrinsics and intrinsics of the right event camera event_cam01.yaml event_cam01_imu extrinsics and intrinsics of the right event camera imu event_cam01_imu.yaml frame_cam00 extrinsics and intrinsics of the left flir camera frame_cam00.yaml frame_cam01 extrinsics and intrinsics of the right flir camera frame_cam01.yaml","title":"Calibration files"},{"location":"challenge/prcv2022_vslam_evaluation/#test-sequences","text":"Platform Sequence Compressed Bag Handheld 20220216_garden_day 20.4GB","title":"Test Sequences"},{"location":"challenge/prcv2022_vslam_evaluation/#calibration-sequences","text":"Platform Sequence Compressed Bag Handheld comming soon!!!!","title":"Calibration Sequences"},{"location":"challenge/prcv2022_vslam_evaluation/#challenge-sequences","text":"Platform Sequence Compressed Bag Handheld 20220216_canteen_night 15.9GB 20220216_canteen_day 17.0GB 20220215_garden_night 8.5GB 20220216_garden_day 20.4GB 20220216_corridor_day 27.4GB 20220216_escalator_day 31.7GB 20220225_building_day 37.5GB 20220216_MCR_slow 3.5GB 20220216_MCR_normal 2.2GB 20220216_MCR_fast 1.7GB Quadruped Robot 20220219_MCR_slow_00 9.7GB 20220219_MCR_slow_01 8.4GB 20220219_MCR_normal_00 7.1GB 20220219_MCR_normal_01 6.5GB 20220219_MCR_fast_00 7.6GB 20220219_MCR_fast_01 8.5GB Apollo Vehicle 20220226_campus_road 72.3GB Detailed statistics are shown: Download link can be found here","title":"Challenge Sequences"},{"location":"challenge/prcv2022_vslam_evaluation/#faq","text":"How are the frames defined on the sensor setup? The picture below is a schematic illustration of the reference frames (red = x, green = y, blue = z): How are the results scored? The results submitted by each team will be scored based on the completeness and ATE accuracy of the trajectories. All the results will be displayed in the live leaderboard. Each trajectory will be scored based on the standard evaluation points, the accumulation of the scores of all these evaluation points is normalized to 100 points to get the final score of the sequence. Each evaluation point can get 0-10 points according to its accuracy. Will the organizer provide the calibration datasets of the IMU and camera? Of course, we will provide the calibration data of IMU and cameras. Is the ground truth available? We will provide some sample datasets along with their ground truth collected with the same sensor kit, but the ground truth for the challenge sequences is not available. However, you can submit your own results in the website evaluation system for evaluation.","title":"FAQ"},{"location":"challenge/prcv2022_vslam_evaluation/#reference","text":"[1] Jianhao Jiao, Hexiang Wei, Tianshuai Hu, Xiangcheng Hu, etc., Lujia Wang, Ming Liu, FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, Kyoto, Japan. 
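Once a bag has been decompressed with the command above, it can be inspected with the ROS1 rosbag Python API inside a sourced ROS environment. A minimal sketch follows; the IMU topic name is only an assumption for illustration, so list the topics first and use the names actually present in the bag.

```python
import rosbag  # ROS1 Python API

with rosbag.Bag("20220216_garden_day.bag") as bag:
    # Print what is actually inside the bag.
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print(topic, topic_info.msg_type, topic_info.message_count)

    # Example of reading messages from one (assumed) IMU topic.
    for topic, msg, t in bag.read_messages(topics=["/stim300/imu/data_raw"]):
        print(t.to_sec(), msg.angular_velocity.x)
        break
```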
[2] HILTI Challenge .","title":"Reference"},{"location":"dataset/bag_rename_fusionportable_v2/","text":"8 Sequences for Evaluation \u00b6 handheld 20230526_1241_starbucks -> starbucks00 20230520_1515_room_dynamic_pedestrain -> room00 mini_hercules 20230609_1245_parking_jerkymotion -> parking00 20230609_1253_campus -> campus00 quadrupedal_robot 20230802_1854_dynamic_pedestrain -> room02 20230802_1901_grass -> grass00 vehicle 20230620_1612_hongkong_campus_road -> campus00 20230621_1536_hongkong_highway_bridge -> highway00 N Sequences for Release \u00b6 handheld 20230520_1515_room_dynamic_pedestrain -> room00 20230520_1519_room_dynamic_pedestrain -> room01 20230526_1036_grass -> grass00 20230526_1241_starbucks -> starbucks00 20230526_1249_starbucks -> starbucks01 20230802_2224_tunnel -> tunnel00 **purple: unpublish** 20230518_2207_white_corridor -> unpublish_corridor00 20230520_1231_room_static -> unpublish_room02 20230526_1040_grass -> unpublish_grass01 20230526_1119_bridge -> unpublish_bridge00 20230526_1137_bridge -> unpublish_bridge01 mini_hercules 20230609_1213_parking -> parking00 20230609_1216_parking -> parking01 20230609_1245_parking_jerkymotion -> parking02 20230609_1247_parking_jerkymotion -> parking03 20230609_1253_campus -> campus00 20230610_0809_hybrid -> hybrid00 20230613_0826_hybrid -> hybrid01 20230614_1714_building_jerky_motion -> campus01 **purple: late publish** 20230608_0756_parking_wo_encoder -> unpublish_parking04 20230608_0801_parking_wo_encoder -> unpublish_parking05 20230608_0815_parking_wo_encoder -> unpublish_parking06 20230608_0830_parking_jerkymotion_wo_encoder -> unpublish_parking07 20230608_0906_campus_wo_encoder -> unpublish_campus02 20230609_1242_parking -> unpublish_parking08 20230614_1739_building_pedestrain_wo_encoder -> unpublish_campus03 20230614_1745_building_eight_shape -> unpublish_campus04 quadrupedal_robot 20230731_1742_hybrid -> hybrid00 20230802_1547_grass -> grass00 20230802_1854_dynamic_pedestrain -> room00 20230802_1901_grass -> grass01 20230802_2150_tunnel -> tunnel00 **purple: late publish** 20230802_1646_dynamic_pedestrain -> room01 20230731_1751_grass -> grass01 20230731_2224_tunnel -> tunnel01 20230731_2250_tunnel_poor_event_joint -> tunnel02 20230802_1633_dynamic_pedestrain -> room02 20230802_1646_dynamic_pedestrain_notcomplete -> room03 20230802_1917_grass -> grass02 vehicle 20230620_1612_hongkong_campus_road -> campus00 20230620_1622_hongkong_campus_road -> campus00 20230620_1634_hongkong_downhill_road -> downhill00 20230620_1738_hongkong_multilayer_parking -> multilayer_parking00 20230621_1251_hongkong_street -> street00 20230621_1536_hongkong_highway_bridge -> highway00 20230621_1819_hongkong_highway_congestion -> highway01 20230621_1825_hongkong_tunnel -> tunnel00 purple: late publish 20230620_1700_hongkong_street -> street01 20230620_1858_hongkong_street_wrong_gnss -> street02 20230620_1906_hongkong_street_wrong_gnss -> street03 20230620_1918_hongkong_street_poor_gnss -> street04 20230621_1221_hongkong_downhill_road -> downhill01 20230621_1242_downhill_road_day_wrong_gnss -> downhill01 20230621_1329_hongkong_street -> street05 20230621_1621_hongkong_highway_bridge -> highway02 20230621_1704_hongkong_highway_bridge -> highway03 20230621_1725_hongkong_highway_bridge_smallpc -> highway04 20230621_1809_hongkong_street_poor_gnss -> street06 20230621_1813_hongkong_street_congestion -> street07 20230621_1839_hongkong_tunnel -> tunnel01 20230621_1842_hongkong_tunnel -> tunnel02 20230621_1843_hongkong_street -> street08 
20230621_1933_hanghau_uphill_road_poor_gnss -> uphill00","title":"Bag rename fusionportable v2"},{"location":"dataset/bag_rename_fusionportable_v2/#8-sequences-for-evaluation","text":"handheld 20230526_1241_starbucks -> starbucks00 20230520_1515_room_dynamic_pedestrain -> room00 mini_hercules 20230609_1245_parking_jerkymotion -> parking00 20230609_1253_campus -> campus00 quadrupedal_robot 20230802_1854_dynamic_pedestrain -> room02 20230802_1901_grass -> grass00 vehicle 20230620_1612_hongkong_campus_road -> campus00 20230621_1536_hongkong_highway_bridge -> highway00","title":"8 Sequences for Evaluation"},{"location":"dataset/bag_rename_fusionportable_v2/#n-sequences-for-release","text":"handheld 20230520_1515_room_dynamic_pedestrain -> room00 20230520_1519_room_dynamic_pedestrain -> room01 20230526_1036_grass -> grass00 20230526_1241_starbucks -> starbucks00 20230526_1249_starbucks -> starbucks01 20230802_2224_tunnel -> tunnel00 **purple: unpublish** 20230518_2207_white_corridor -> unpublish_corridor00 20230520_1231_room_static -> unpublish_room02 20230526_1040_grass -> unpublish_grass01 20230526_1119_bridge -> unpublish_bridge00 20230526_1137_bridge -> unpublish_bridge01 mini_hercules 20230609_1213_parking -> parking00 20230609_1216_parking -> parking01 20230609_1245_parking_jerkymotion -> parking02 20230609_1247_parking_jerkymotion -> parking03 20230609_1253_campus -> campus00 20230610_0809_hybrid -> hybrid00 20230613_0826_hybrid -> hybrid01 20230614_1714_building_jerky_motion -> campus01 **purple: late publish** 20230608_0756_parking_wo_encoder -> unpublish_parking04 20230608_0801_parking_wo_encoder -> unpublish_parking05 20230608_0815_parking_wo_encoder -> unpublish_parking06 20230608_0830_parking_jerkymotion_wo_encoder -> unpublish_parking07 20230608_0906_campus_wo_encoder -> unpublish_campus02 20230609_1242_parking -> unpublish_parking08 20230614_1739_building_pedestrain_wo_encoder -> unpublish_campus03 20230614_1745_building_eight_shape -> unpublish_campus04 quadrupedal_robot 20230731_1742_hybrid -> hybrid00 20230802_1547_grass -> grass00 20230802_1854_dynamic_pedestrain -> room00 20230802_1901_grass -> grass01 20230802_2150_tunnel -> tunnel00 **purple: late publish** 20230802_1646_dynamic_pedestrain -> room01 20230731_1751_grass -> grass01 20230731_2224_tunnel -> tunnel01 20230731_2250_tunnel_poor_event_joint -> tunnel02 20230802_1633_dynamic_pedestrain -> room02 20230802_1646_dynamic_pedestrain_notcomplete -> room03 20230802_1917_grass -> grass02 vehicle 20230620_1612_hongkong_campus_road -> campus00 20230620_1622_hongkong_campus_road -> campus00 20230620_1634_hongkong_downhill_road -> downhill00 20230620_1738_hongkong_multilayer_parking -> multilayer_parking00 20230621_1251_hongkong_street -> street00 20230621_1536_hongkong_highway_bridge -> highway00 20230621_1819_hongkong_highway_congestion -> highway01 20230621_1825_hongkong_tunnel -> tunnel00 purple: late publish 20230620_1700_hongkong_street -> street01 20230620_1858_hongkong_street_wrong_gnss -> street02 20230620_1906_hongkong_street_wrong_gnss -> street03 20230620_1918_hongkong_street_poor_gnss -> street04 20230621_1221_hongkong_downhill_road -> downhill01 20230621_1242_downhill_road_day_wrong_gnss -> downhill01 20230621_1329_hongkong_street -> street05 20230621_1621_hongkong_highway_bridge -> highway02 20230621_1704_hongkong_highway_bridge -> highway03 20230621_1725_hongkong_highway_bridge_smallpc -> highway04 20230621_1809_hongkong_street_poor_gnss -> street06 20230621_1813_hongkong_street_congestion -> street07 
20230621_1839_hongkong_tunnel -> tunnel01 20230621_1842_hongkong_tunnel -> tunnel02 20230621_1843_hongkong_street -> street08 20230621_1933_hanghau_uphill_road_poor_gnss -> uphill00","title":"N Sequences for Release"},{"location":"dataset/fusionportable/","text":"FusionPortable A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms Introduction \u00b6 We consider that a desirable dataset should fulfill the following four requirements: Various sensors (LiDARs, cameras, IMU, etc.). Various robotic platforms with diverse motion patterns. Sequences cover from room-scale (meter-level) to large-scale (kilometer-level). Benchmarking for different tasks. We are motivated to propose the FusionPortable dataset , which is initially intended to support odometry, localization, mapping, and some perception tasks. We advance a self-contained, portable, and versatile multi-sensor suite. We construct a dataset that covers a variety of environments on the campus by exploiting multiple robot platforms for data collection. We also provide ground truth for the decouple localization and mapping performance evaluation. Data Collection Platforms \u00b6 Sensors \u00b6 128-beam Ouster LiDAR (OS1, 120m range) resolution: (128x2048) FILR BFS-U3-31S4C stereo cameras resolution: (1024x768) DAVIS346 stereo cameras resolution: (346\u00d7240) STIM300 IMU ZED-F9P RTK-GPS Various Platforms \u00b6 Third-View of Data Collection \u00b6 Environment Platform Preview Garden Handheld Motion Capture Room Quadrupled Robot Download \u00b6 Data Organization \u00b6 FusionPortable/ \u251c\u2500\u2500 calibration_files/ // Intrinsics & extrinsics of sensors \u2514\u2500\u2500 20220209_calib/ \u2514\u2500\u2500 .yaml // e.g., ouster00.yaml, frame_cam00.yaml \u251c\u2500\u2500 groundtruth/ \u2514\u2500\u2500 map/ // Ground-truth maps \u2514\u2500\u2500 / \u251c\u2500\u2500 scan/ \u2514\u2500\u2500 .pcd // Individual scan \u251c\u2500\u2500 merged_scan.pcd // Merged scan (resolution: 1cm) \u2514\u2500\u2500 transformation.yaml // Transformation of each scan \u2514\u2500\u2500 traj/ // Ground-truth trajectories \u2514\u2500\u2500 .txt // e.g., 20220215_canteen_night.txt \u2514\u2500\u2500 sensor_data/ \u2514\u2500\u2500 / // Platforms, e.g., handheld \u2514\u2500\u2500 // e.g., 20220215_canteen_night \u251c\u2500\u2500 .bag \u251c\u2500\u2500 .bag.7z \u251c\u2500\u2500 data/ \u2514\u2500\u2500 data_ref_kitti/ Note: 1. .bag raw rosbag. 2. .7z compressed rosbag. 3. data/ stores indivisual sensor data files with timestamps from timestamps.txt . 4. data_ref_kitti/ follows the KITTI format to store sensor data files from data/ . Download \u00b6 Please click these below links to download: Option 1 (recommended, long-term maintenance) : download data from Google Drive Please click this link to download all the data Or use this link: https://drive.google.com/drive/folders/17asiPqNyudKR-VCqCnjd0Z0v5sS0f7qI?usp=drive_link Option 2 : download data from the server in Hong Kong 1. sensor_data - pwd: fusionportable 2. ground-truth trajectories and maps - pwd: fusionportable 3. 
calibration_files - pwd: fusionportable Note: Extract the ROS bag from .7z files in the terminal: 7z l .7z Sequences \u00b6 Type Platform Picture Sequence Preview Calibration Handheld 20220209_StaticTarget_SmallCheckerBoard_9X12_30mm Calibration Handheld 20220215_DynamicTarget_BigCheckerBoard_7X10_68mm Calibration Handheld 20220209_Static_IMUs_3h20mins Handheld 20220216_canteen_night preview Handheld 20220216_canteen_day preview Handheld 20220215_garden_night preview Handheld 20220216_garden_day preview Handheld 20220216_corridor_day preview Handheld 20220216_escalator_day preview Handheld 20220225_building_day preview Handheld 20220216_MCR_slow preview Handheld 20220216_MCR_normal preview Handheld 20220216_MCR_fast preview Quadruped Robot 20220219_MCR_slow_00 preview Quadruped Robot 20220219_MCR_slow_01 preview Quadruped Robot 20220219_MCR_normal_00 preview Quadruped Robot 20220219_MCR_normal_01 preview Quadruped Robot 20220219_MCR_fast_00 preview Quadruped Robot 20220219_MCR_fast_01 preview Apollo Vehicle 20220226_campus_road preview Some High-Resolution GT Maps \u00b6 Environment Platform Garden Escalator Building Tools \u00b6 The development tool can be used by clicking the button below Development Tools Evaluation \u00b6 Evalaution of Trajectories \u00b6 Issues \u00b6 If you have any issues with the theme, please report them on the repository: Report Issues Publications \u00b6 FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms Jianhao Jiao*, Hexiang Wei*, Tianshuai Hu*, Xiangcheng Hu*, Yilong Zhu, Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia Wang, Ming Liu Presented at IROS 2022 [Arxiv] [bibtex]","title":"FusionPortable"},{"location":"dataset/fusionportable/#introduction","text":"We consider that a desirable dataset should fulfill the following four requirements: Various sensors (LiDARs, cameras, IMU, etc.). Various robotic platforms with diverse motion patterns. Sequences cover from room-scale (meter-level) to large-scale (kilometer-level). Benchmarking for different tasks. We are motivated to propose the FusionPortable dataset , which is initially intended to support odometry, localization, mapping, and some perception tasks. We advance a self-contained, portable, and versatile multi-sensor suite. We construct a dataset that covers a variety of environments on the campus by exploiting multiple robot platforms for data collection. 
We also provide ground truth for the decouple localization and mapping performance evaluation.","title":"Introduction"},{"location":"dataset/fusionportable/#data-collection-platforms","text":"","title":"Data Collection Platforms"},{"location":"dataset/fusionportable/#sensors","text":"128-beam Ouster LiDAR (OS1, 120m range) resolution: (128x2048) FILR BFS-U3-31S4C stereo cameras resolution: (1024x768) DAVIS346 stereo cameras resolution: (346\u00d7240) STIM300 IMU ZED-F9P RTK-GPS","title":"Sensors"},{"location":"dataset/fusionportable/#various-platforms","text":"","title":"Various Platforms"},{"location":"dataset/fusionportable/#third-view-of-data-collection","text":"Environment Platform Preview Garden Handheld Motion Capture Room Quadrupled Robot","title":"Third-View of Data Collection"},{"location":"dataset/fusionportable/#download","text":"","title":"Download"},{"location":"dataset/fusionportable/#data-organization","text":"FusionPortable/ \u251c\u2500\u2500 calibration_files/ // Intrinsics & extrinsics of sensors \u2514\u2500\u2500 20220209_calib/ \u2514\u2500\u2500 .yaml // e.g., ouster00.yaml, frame_cam00.yaml \u251c\u2500\u2500 groundtruth/ \u2514\u2500\u2500 map/ // Ground-truth maps \u2514\u2500\u2500 / \u251c\u2500\u2500 scan/ \u2514\u2500\u2500 .pcd // Individual scan \u251c\u2500\u2500 merged_scan.pcd // Merged scan (resolution: 1cm) \u2514\u2500\u2500 transformation.yaml // Transformation of each scan \u2514\u2500\u2500 traj/ // Ground-truth trajectories \u2514\u2500\u2500 .txt // e.g., 20220215_canteen_night.txt \u2514\u2500\u2500 sensor_data/ \u2514\u2500\u2500 / // Platforms, e.g., handheld \u2514\u2500\u2500 // e.g., 20220215_canteen_night \u251c\u2500\u2500 .bag \u251c\u2500\u2500 .bag.7z \u251c\u2500\u2500 data/ \u2514\u2500\u2500 data_ref_kitti/","title":"Data Organization"},{"location":"dataset/fusionportable/#download_1","text":"","title":"Download"},{"location":"dataset/fusionportable/#sequences","text":"Type Platform Picture Sequence Preview Calibration Handheld 20220209_StaticTarget_SmallCheckerBoard_9X12_30mm Calibration Handheld 20220215_DynamicTarget_BigCheckerBoard_7X10_68mm Calibration Handheld 20220209_Static_IMUs_3h20mins Handheld 20220216_canteen_night preview Handheld 20220216_canteen_day preview Handheld 20220215_garden_night preview Handheld 20220216_garden_day preview Handheld 20220216_corridor_day preview Handheld 20220216_escalator_day preview Handheld 20220225_building_day preview Handheld 20220216_MCR_slow preview Handheld 20220216_MCR_normal preview Handheld 20220216_MCR_fast preview Quadruped Robot 20220219_MCR_slow_00 preview Quadruped Robot 20220219_MCR_slow_01 preview Quadruped Robot 20220219_MCR_normal_00 preview Quadruped Robot 20220219_MCR_normal_01 preview Quadruped Robot 20220219_MCR_fast_00 preview Quadruped Robot 20220219_MCR_fast_01 preview Apollo Vehicle 20220226_campus_road preview","title":"Sequences"},{"location":"dataset/fusionportable/#some-high-resolution-gt-maps","text":"Environment Platform Garden Escalator Building","title":"Some High-Resolution GT Maps"},{"location":"dataset/fusionportable/#tools","text":"The development tool can be used by clicking the button below Development Tools","title":"Tools"},{"location":"dataset/fusionportable/#evaluation","text":"","title":"Evaluation"},{"location":"dataset/fusionportable/#evalaution-of-trajectories","text":"","title":"Evalaution of Trajectories"},{"location":"dataset/fusionportable/#issues","text":"If you have any issues with the theme, please report them on the repository: Report 
Issues","title":"Issues"},{"location":"dataset/fusionportable/#publications","text":"FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms Jianhao Jiao*, Hexiang Wei*, Tianshuai Hu*, Xiangcheng Hu*, Yilong Zhu, Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia Wang, Ming Liu Presented at IROS 2022 [Arxiv] [bibtex]","title":"Publications"},{"location":"dataset/fusionportable_v2/","text":"FusionPortable V2 From Campus to Highway: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments News \u00b6 (20240408) The development tool has been initially released. (20240407) Data of FusionPortable can be downloaed from Google Drive . Overview \u00b6 Sensors \u00b6 Handheld Sensor : 128-beam Ouster LiDAR (OS1, 120m range) Handheld Sensor : Stereo FILR BFS-U3-31S4C cameras Handheld Sensor : Stereo DAVIS346 cameras Handheld Sensor : STIM300 IMU Handheld Sensor : 3DM-GQ7-GNSS/INS UGV Sensor : Omron E6B2-CWZ6C wheel encoder Legged Robot Sensor : Built-in joint encoders, contact sensors, and IMU of the Unitree A1 Various Platforms and Scenarios \u00b6 Ground-Truth Devices \u00b6 Third-View of Data Collection \u00b6 Environment Platform Preview Escalator Handheld Corridor Handheld Underground Parking Lot Legged Robot Campus UGV Outdoor Parking Lot UGV Details \u00b6 Organization \u00b6 Note: 1. .yaml store intrinsics and extrinsics of a specific sensor 2. .pcd ground-truth map 3. .bag rosbag 4. .7z compressed rosbag Trajectories of Sequences \u00b6 Download Sequence \u00b6 Please click these below links to download: Option 1 (recommend, long-term maintenance): Google Drive Or copy the linke https://drive.google.com/drive/folders/1PYhnf3PlY5r0hbyzWDGTUTPxRMl6SYa-?usp=sharing Option 2 (unavailable now): Link to Baidu Pan (unavailabel now) Note: extract compressed ROSBag in the terminal: 7z l .7z Calibration Sequences \u00b6 Hanheld Sequences \u00b6 Picture Sequence Features Preview handheld_grass00 Textureless preview handheld_room00 Dynmaic preview handheld_room01 Dynmaic preview handheld_escalator00 Non-inertial preview handheld_escalator01 Non-inertial preview handheld_underground00 Structureless preview Legged Robot Sequences \u00b6 Picture Sequence Features Preview legged_grass00 Structureless, Deformable preview legged_grass01 Structureless, Deformable preview legged_room00 Dynamic preview legged_transition00 Illumination, GNSS-deined preview legged_underground00 Structureless preview UGV Sequences \u00b6 Picture Sequence Features Preview ugv_parking00 Structureless preview ugv_parking01 Structureless preview ugv_parking02 Structureless preview ugv_parking03 Structureless preview ugv_campus00 Large-Scale preview ugv_campus01 Fast Motion preview ugv_transition00 GNSS-Denied preview ugv_transition01 GNSS-Denied preview Vehicle Sequences \u00b6 Picture Sequence Features Preview vehicle_campus00 Large-Scale preview vehicle_campus01 Large-Scale preview vehicle_street00 Large-Scale, Dynmaic preview vehicle_tunnel00 Low Texture and Structure preview vehicle_downhill00 Illumination preview vehicle_highway00 Structureless preview vehicle_highway01 Structureless preview vehicle_multilayer00 Perceptual Aliasing preview Some High-Resolution GT Maps \u00b6 Environment Area Preview UGV Campus 0.36km^2 Underground Parking 0.037km^2 Experiments \u00b6 Calibration \u00b6 Projected Point Cloud with Camera-LiDAR Calibration ( LCE-Calib ) Localization Evaluation \u00b6 Running 
FAST-LIO2 : handheld_room00, legged_grass00, ugv_campus00, vehicle_highway00 Mapping Evaluation \u00b6 Monocular Depth Estimation \u00b6 Tools \u00b6 The development tool can be used by clicking the button below Development Tools Issues \u00b6 If you have any issues with the theme, please report them on the repository: Report Issues Related Works \u00b6 FusionPortable-release works were used in the following papers. Please checkout these workds if you are interested. (Please contact us if you would like your work mentioned here). LiDAR Only Neural Representations for Real-Time SLAM , IEEE RAL 2023 TBD Publications \u00b6 FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments Hexiang Wei*, Jianhao Jiao*, Xiangcheng Hu, Jingwen Yu, Xupeng Xie, Jin Wu, Yilong Zhu, Yuxuan Liu, etc. Under Review [Arxiv] Contact \u00b6 Dr. Jianhao Jiao (jiaojh1994 at gmail dot com): General problems of the dataset Mr. Hexiang Wei (cranefly88 at gmail dot com): Problems related to hardware Contributors \u00b6","title":"FusionPortableV2"},{"location":"dataset/fusionportable_v2/#news","text":"(20240408) The development tool has been initially released. (20240407) Data of FusionPortable can be downloaed from Google Drive .","title":"News"},{"location":"dataset/fusionportable_v2/#overview","text":"","title":"Overview"},{"location":"dataset/fusionportable_v2/#sensors","text":"Handheld Sensor : 128-beam Ouster LiDAR (OS1, 120m range) Handheld Sensor : Stereo FILR BFS-U3-31S4C cameras Handheld Sensor : Stereo DAVIS346 cameras Handheld Sensor : STIM300 IMU Handheld Sensor : 3DM-GQ7-GNSS/INS UGV Sensor : Omron E6B2-CWZ6C wheel encoder Legged Robot Sensor : Built-in joint encoders, contact sensors, and IMU of the Unitree A1","title":"Sensors"},{"location":"dataset/fusionportable_v2/#various-platforms-and-scenarios","text":"","title":"Various Platforms and Scenarios"},{"location":"dataset/fusionportable_v2/#ground-truth-devices","text":"","title":"Ground-Truth Devices"},{"location":"dataset/fusionportable_v2/#third-view-of-data-collection","text":"Environment Platform Preview Escalator Handheld Corridor Handheld Underground Parking Lot Legged Robot Campus UGV Outdoor Parking Lot UGV","title":"Third-View of Data Collection"},{"location":"dataset/fusionportable_v2/#details","text":"","title":"Details"},{"location":"dataset/fusionportable_v2/#organization","text":"","title":"Organization"},{"location":"dataset/fusionportable_v2/#trajectories-of-sequences","text":"","title":"Trajectories of Sequences"},{"location":"dataset/fusionportable_v2/#download-sequence","text":"","title":"Download Sequence"},{"location":"dataset/fusionportable_v2/#calibration-sequences","text":"","title":"Calibration Sequences"},{"location":"dataset/fusionportable_v2/#hanheld-sequences","text":"Picture Sequence Features Preview handheld_grass00 Textureless preview handheld_room00 Dynmaic preview handheld_room01 Dynmaic preview handheld_escalator00 Non-inertial preview handheld_escalator01 Non-inertial preview handheld_underground00 Structureless preview","title":"Hanheld Sequences"},{"location":"dataset/fusionportable_v2/#legged-robot-sequences","text":"Picture Sequence Features Preview legged_grass00 Structureless, Deformable preview legged_grass01 Structureless, Deformable preview legged_room00 Dynamic preview legged_transition00 Illumination, GNSS-deined preview legged_underground00 Structureless preview","title":"Legged Robot 
Sequences"},{"location":"dataset/fusionportable_v2/#ugv-sequences","text":"Picture Sequence Features Preview ugv_parking00 Structureless preview ugv_parking01 Structureless preview ugv_parking02 Structureless preview ugv_parking03 Structureless preview ugv_campus00 Large-Scale preview ugv_campus01 Fast Motion preview ugv_transition00 GNSS-Denied preview ugv_transition01 GNSS-Denied preview","title":"UGV Sequences"},{"location":"dataset/fusionportable_v2/#vehicle-sequences","text":"Picture Sequence Features Preview vehicle_campus00 Large-Scale preview vehicle_campus01 Large-Scale preview vehicle_street00 Large-Scale, Dynmaic preview vehicle_tunnel00 Low Texture and Structure preview vehicle_downhill00 Illumination preview vehicle_highway00 Structureless preview vehicle_highway01 Structureless preview vehicle_multilayer00 Perceptual Aliasing preview","title":"Vehicle Sequences"},{"location":"dataset/fusionportable_v2/#some-high-resolution-gt-maps","text":"Environment Area Preview UGV Campus 0.36km^2 Underground Parking 0.037km^2","title":"Some High-Resolution GT Maps"},{"location":"dataset/fusionportable_v2/#experiments","text":"","title":"Experiments"},{"location":"dataset/fusionportable_v2/#calibration","text":"Projected Point Cloud with Camera-LiDAR Calibration ( LCE-Calib )","title":"Calibration"},{"location":"dataset/fusionportable_v2/#localization-evaluation","text":"Running FAST-LIO2 : handheld_room00, legged_grass00, ugv_campus00, vehicle_highway00","title":"Localization Evaluation"},{"location":"dataset/fusionportable_v2/#mapping-evaluation","text":"","title":"Mapping Evaluation"},{"location":"dataset/fusionportable_v2/#monocular-depth-estimation","text":"","title":"Monocular Depth Estimation"},{"location":"dataset/fusionportable_v2/#tools","text":"The development tool can be used by clicking the button below Development Tools","title":"Tools"},{"location":"dataset/fusionportable_v2/#issues","text":"If you have any issues with the theme, please report them on the repository: Report Issues","title":"Issues"},{"location":"dataset/fusionportable_v2/#related-works","text":"FusionPortable-release works were used in the following papers. Please checkout these workds if you are interested. (Please contact us if you would like your work mentioned here). LiDAR Only Neural Representations for Real-Time SLAM , IEEE RAL 2023 TBD","title":"Related Works"},{"location":"dataset/fusionportable_v2/#publications","text":"FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments Hexiang Wei*, Jianhao Jiao*, Xiangcheng Hu, Jingwen Yu, Xupeng Xie, Jin Wu, Yilong Zhu, Yuxuan Liu, etc. Under Review [Arxiv]","title":"Publications"},{"location":"dataset/fusionportable_v2/#contact","text":"Dr. Jianhao Jiao (jiaojh1994 at gmail dot com): General problems of the dataset Mr. Hexiang Wei (cranefly88 at gmail dot com): Problems related to hardware","title":"Contact"},{"location":"dataset/fusionportable_v2/#contributors","text":"","title":"Contributors"},{"location":"perception/tbd/","text":"","title":"Tbd"},{"location":"slam/fl2sam/","text":"FL2SAM","title":"Fl2sam"}]}