<!DOCTYPE HTML>
<!-- Personal website! By Helen! -->
<html>
<head>
<meta charset="UTF-8">
<link href='https://fonts.googleapis.com/css?family=Open+Sans:300,400,400italic,600,700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" type="text/css" href="style.css" />
<title>Helen Oleynikova</title>
</head>
<body>
<div class="page">
<h1>Helen Oleynikova</h1>
<div class="intro">
<div class="portrait">
<img src="images/portrait_small.jpg">
</div>
<div class="intro_text">
<p>Hey, I'm Helen, and I'm currently a Senior Researcher at the <a href="http://www.asl.ethz.ch">Autonomous Systems Lab</a> at <a href="http://www.ethz.ch">ETH Zürich</a>.
<p>Previously I worked as a Senior Software Engineer on the Isaac 3D Perception team at <a href="https://developer.nvidia.com/isaac-ros">Nvidia</a>, and before that, as a Senior Scientist at the <a href="https://www.microsoft.com/en-us/research/lab/mixed-reality-ai-zurich/">Microsoft Mixed Reality and AI Lab</a> in Zürich, Switzerland.</p>
<p>I finished my PhD with the rotary wing team at the <a href="http://www.asl.ethz.ch">Autonomous Systems Lab</a> at <a href="http://www.ethz.ch">ETH Zürich</a> in February 2019 with Prof. Roland Siegwart, after completing a Master's with the same group.</p>
<p>Before that, I worked as a Software Engineer on StreetView at <a href="http://goo.gl/maps/UWCPy">Google</a> for 2 years, where I worked on automatic detection of street numbers and machine learning for improving imagery.</p>
<p>I finished my Bachelor's at <a href="http://www.olin.edu">Olin College of Engineering</a> in 2011, also studying robotics.</p>
<p>[<a href="mailto:[email protected]">[email protected]</a> | <a href="https://github.com/helenol">github</a> | <a href="https://scholar.google.com/citations?user=aeJGZxIAAAAJ">google scholar</a>]</p>
</div>
</div>
</div>
<div class="page">
<h1>Publications</h1>
<div class="content">
<h2>Thesis</h2>
<div class="pub_list">
<p>Helen Oleynikova, “<b>Mapping and Planning for Safe Collision Avoidance On-board Micro-Aerial Vehicles</b>”. <i>Doctoral Thesis, ETH Zurich</i>, 2019.<br>
[<a href="publications/thesis_2019_oleynikova.pdf">pdf</a> | <a href="https://www.research-collection.ethz.ch/handle/20.500.11850/335330">ETH research collection</a> | <a href="https://www.youtube.com/watch?v=xQTokwr1le0">video</a>]
</p>
</div>
<h2>Journal</h2>
<div class="pub_list">
<p>Alexander Millane, Helen Oleynikova, Christian Lanegger, Jeff Delmerico, Juan Nieto, Roland Siegwart, Marc Pollefeys, and Cesar Cadena, “<b>Freetures: Localization in Signed Distance Function Maps</b>”. <i>IEEE Robotics and Automation Letters</i>, 2021.<br>
[<a href="https://arxiv.org/pdf/2010.09378.pdf">pdf</a> | <a href="publications/ral_2021_freetures_bibtex.txt">bibtex</a>]
</p>
<p>Helen Oleynikova, Christian Lanegger, Zachary Taylor, Michael Pantic, Alexander Millane, Roland Siegwart, and Juan Nieto, “<b>An Open-Source System for Vision-Based Micro-Aerial Vehicle Mapping, Planning, and Flight in Cluttered Environments</b>”. <i>Journal of Field Robotics</i>, 2020.<br>
[<a href="publications/jfr_2020_system.pdf">pdf</a> | <a href="publications/jfr_2020_system_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=xQTokwr1le0">video</a> | <a href="https://arxiv.org/abs/1812.03892">arxiv</a> | <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21950">publisher link</a>]
</p>
<p>Victor Reijgwart, Alexander Millane, Helen Oleynikova, Roland Siegwart, Cesar Cadena, Juan Nieto, “<b>Voxgraph: Globally Consistent, Volumetric Mapping
using Signed Distance Function Submaps</b>”. In <i>IEEE Robotics and Automation Letters</i>, 2019.<br>
[<a href="publications/ral_2019_voxgraph.pdf">pdf</a> | <a href="publications/ral_2019_voxgraph_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=N9p1_Fkxxro">video</a> | <a href="https://arxiv.org/abs/2004.13154">arxiv</a>]
</p>
<p>Helen Oleynikova, Zachary Taylor, Roland Siegwart, and Juan Nieto, “<b>Safe Local Exploration for Replanning in Cluttered Unknown Environments for Micro-Aerial Vehicles</b>”. <i>IEEE Robotics and Automation Letters</i>, 2018.<br>
[<a href="publications/ral_2018_local_exploration.pdf">pdf</a> | <a href="publications/ral_2018_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=rAJwD2kr7c0">video</a> | <a href="https://arxiv.org/abs/1710.00604">arxiv</a>]
</p>
<p>Andreas Bircher, Mina Kamel, Kostas Alexis, Helen Oleynikova, and Roland Siegwart, “<b>Receding Horizon Path Planning for 3D Exploration and Surface Inspection</b>”. In <i>Autonomous Robots</i>, 2016.<br>
[<a href="http://link.springer.com/article/10.1007/s10514-016-9610-0">publisher page</a> | <a href="publications/ar_2016_exploration_bibtex.txt">bibtex</a>]</p>
</div>
<h2>Conference</h2>
<div class="pub_list">
<p>Patrick Pfreundschuh, Helen Oleynikova, Cesar Cadena, Roland Siegwart, Olov Andersson, “<b>COIN-LIO: Complementary Intensity-Augmented LiDAR Inertial Odometry</b>”. In submission to <i>IEEE Int. Conf. on Robotics and Automation (ICRA)</i>, 2024.<br>
[<a href="https://arxiv.org/pdf/2310.01235.pdf">pdf</a> | <a href="publications/arxiv_2023_coinlio_bibtex.txt">bibtex</a> | <a href="https://arxiv.org/abs/2310.01235">arxiv</a>]
</p>
<p>Balakumar Sundaralingam, Siva Kumar Sastry Hari, Adam Fishman, Caelan Garrett, Karl Van Wyk, Valts Blukis, Alexander Millane, Helen Oleynikova, Ankur Handa, Fabio Ramos, Nathan Ratliff, Dieter Fox. “<b>CuRobo: Parallelized Collision-Free Robot Motion Generation</b>”. In <i>IEEE Int. Conf. on Robotics and Automation (ICRA)</i>, May 2023.<br>
[<a href="https://arxiv.org/pdf/2310.17274.pdf">pdf</a> | <a href="publications/icra_2023_curobo_bibtex.txt">bibtex</a> | <a href="https://arxiv.org/abs/2310.17274">arxiv</a>]
</p>
<p>Alexander Millane, Helen Oleynikova, Juan Nieto, Roland Siegwart, Cesar Cadena. “<b>Free-Space Features: Global Localization in 2D Laser SLAM Using
Distance Function Maps</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, November 2019.<br>
[<a href="publications/iros_2019_freespace.pdf">pdf</a> | <a href="publications/iros_2019_freespace_bibtex.txt">bibtex</a> | <a href="https://arxiv.org/abs/1908.01863">arxiv</a>]
</p>
<p>Helen Oleynikova, Zachary Taylor, Roland Siegwart, and Juan Nieto, “<b>Sparse 3D Topological Graphs for Micro-Aerial Vehicle Planning</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, October 2018.<br>
[<a href="publications/iros_2018_skeleton.pdf">pdf</a> | <a href="publications/iros_2018_skeleton_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=U_6rk-SF0Nw">video</a> | <a href="https://arxiv.org/abs/1803.04345">arxiv</a>]
</p>
<p>Helen Oleynikova, Zachary Taylor, Marius Fehr, Roland Siegwart, and Juan Nieto, “<b>Voxblox: Incremental 3D Euclidean Signed Distance Fields for On-Board MAV Planning</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, October 2017.<br>
[<a href="publications/iros_2017_voxblox.pdf">pdf</a> | <a href="publications/iros_2017_voxblox_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=ZGvnGFnTVR8">video</a> | <a href="https://arxiv.org/abs/1611.03631">arxiv</a>]
</p>
<p>Helen Oleynikova, Michael Burri, Zachary Taylor, Juan Nieto, Roland Siegwart, and Enric Galceran, “<b>Continuous-Time Trajectory Optimization for Online UAV Replanning</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, October 2016.<br>
[<a href="publications/iros_2016_replanning.pdf">pdf</a> | <a href="publications/iros_2016_replanning_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=-cm-HkTI8vw">video</a>]
</p>
<p>Andreas Bircher, Mina Kamel, Kostas Alexis, Helen Oleynikova, and Roland Siegwart, “<b>Receding Horizon “Next-Best-View” Planner for 3D Exploration</b>”. In <i>IEEE Int. Conf. on Robotics and Automation (ICRA)</i>, May 2016.<br>
[<a href="http://www.kamel-mina.com/uploads/7/1/4/9/71498289/nbvp_icra2016.pdf">pdf</a> | <a href="publications/icra_2016_nbvp_bibtex.txt">bibtex</a>]
</p>
<a name="localization"></a>
<p>Helen Oleynikova, Michael Burri, Simon Lynen, and Roland Siegwart, “<b>Real-Time Visual-Inertial Localization for Aerial and Ground Robots</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, September 2015.<br>
[<a href="publications/iros_2015_localization.pdf">pdf</a> | <a href="publications/iros_2015_localization_bibtex.txt">bibtex</a>]
</p>
<p>Michael Burri, Helen Oleynikova, Markus Achtelik, and Roland Siegwart, “<b>Real-Time Visual-Inertial Mapping, Re-localization and Planning Onboard MAVs in Previously Unknown Environments</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, September 2015.<br>
[<a href="publications/iros_2015_mav.pdf">pdf</a> | <a href="publications/iros_2015_mav_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=tuUMwcTJx8s">video</a>]
</p>
<a name="reactive"></a>
<p>Helen Oleynikova, Dominik Honegger, and Marc Pollefeys. “<b>Reactive Avoidance Using Embedded Stereo Vision for MAV Flight</b>”. In <i>IEEE Int. Conf. on Robotics and Automation (ICRA)</i>, May 2015.<br>
[<a href="publications/icra_2015_reactive_avoidance.pdf">pdf</a> | <a href="publications/icra_2015_bibtex.txt">bibtex</a>]
</p>
<a name="fpga"></a>
<p>Dominik Honegger, Helen Oleynikova, and Marc Pollefeys. “<b>Real-time and Low Latency Embedded Computer Vision Hardware Based on a Combination of FPGA and Mobile CPU</b>”. In <i>IEEE Int. Conf. on Intelligent Robots and Systems (IROS)</i>, September 2014.<br>
[<a href="publications/iros_2014_realtime_lowlatency_hardware.pdf">pdf</a> | <a href="publications/iros_2014_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=pM9fyV4e6lw">video</a>]</p>
<a name="oceans"></a>
<p>Elena Oleynikova, Nicole Lee, Andrew J. Barry, Joseph Holler, David Barrett, “<b>Perimeter Patrol On Autonomous Surface Vehicles Using Marine Radar</b>”. In <i>Proceedings of IEEE Oceans 2010</i>, May 2010.<br>
[<a href="publications/oceans_2010_perimeter_patrol.pdf">pdf</a> | <a href="publications/oceans_2010_bibtex.txt">bibtex</a> | <a href="https://www.youtube.com/watch?v=pTDT7HkfwJM">video</a>]</p>
</div>
<h2>Workshop &amp; Magazine</h2>
<div class="pub_list">
<p>Jeffrey Delmerico, Roi Poranne, Federica Bogo, Helen Oleynikova, Eric Vollenweider, Stelian Coros, Juan Nieto, Marc Pollefeys, “<b>Spatial computing and intuitive interaction: Bringing mixed reality and robotics together</b>”. In <i>IEEE Robotics &amp; Automation Magazine</i>, 2022.<br>
[<a href="https://arxiv.org/pdf/2202.01493.pdf">pdf</a> | <a href="publications/ram_2022_spatial_bibtex.txt">bibtex</a> | <a href="https://arxiv.org/abs/2202.01493">arxiv</a>]</p>
<p>Helen Oleynikova, Alex Millane, Zachary Taylor, Enric Galceran, Juan Nieto, and Roland Siegwart, “<b>Signed Distance Fields: A Natural Representation for Both Mapping and Planning</b>”. In <i>Geometry and Beyond</i>, RSS Workshop, 2016.<br>
[<a href="publications/rss_2016_workshop.pdf">pdf</a> | <a href="publications/rss_2016_workshop_bibtex.txt">bibtex</a>]</p>
<p>Andrew J. Barry, Helen Oleynikova, Dominik Honegger, Marc Pollefeys, and Russ Tedrake. “<b>FPGA vs. pushbroom stereo vision for MAVs</b>”. In <i>Vision-based Control and Navigation of Small Lightweight UAVs</i>, IROS Workshop, 2015.<br>
[<a href="publications/iros_2015_workshop.pdf">pdf</a>]</p>
</div>
</div>
</div>
<!-- <div class="page">
<h1>Videos</h1>
<center>
<iframe width="560" height="315" src="https://www.youtube.com/embed/-cm-HkTI8vw" frameborder="0" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/tuUMwcTJx8s" frameborder="0" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/pM9fyV4e6lw" frameborder="0" allowfullscreen></iframe>
</center>
</div> -->
<div class="page">
<h1>Selected Projects</h1>
<h2>ETH Zürich</h2>
<div class="content">
<div class="project-left">
<img src="images/heli_pioneer_small.jpg">
<p class="title_row"><b class="title">Online Visual-Inertial Based Localization for Robots</b> [<a href="publications/iros_2015_localization.pdf">paper 1</a> | <a href="publications/iros_2015_mav.pdf">paper 2</a>]</p>
<p>I worked on integrating an embedded stereo camera and IMU combination, together with visual-inertial odometry, into a sparse mapping framework. The main goal of this work was to enable real-time on-board localization against a sparse map. I then investigated building dense volumetric maps (used for path planning) from keyframes in the sparse map. We showed this working on-board an MAV and a ground robot localizing against the same map, allowing the MAV to autonomously land on top of the ground robot using the robot's localized position.</p>
</div>
<div class="project-left">
<img src="images/fpga_vision_small.png">
<p class="title_row"><b class="title">High-Speed Vision for Quadrotors</b> [<a href="publications/iros_2014_realtime_lowlatency_hardware.pdf">paper 1</a> | <a href="publications/icra_2015_reactive_avoidance.pdf">paper 2</a>]</p>
<p>I integrated an FPGA-based high-speed vision system onto a quadrotor and designed a system for high-speed obstacle avoidance on a computationally constrained platform. I also wrote position estimators and position controllers to allow the quadrotor to operate indoors and outdoors without GPS or Vicon using the <a href="https://pixhawk.org/modules/px4flow">PX4 optical flow</a> sensor, and wrote calibration and processing for the Android-based mobile platform on the FPGA vision system.</p>
</div>
</div>
<h2>Google</h2>
<div class="content">
<div class="project-left">
<img src="images/sv_small.jpg">
<p class="title_row"><b class="title">Street Number Detection from Street View Imagery</b></p>
<p>I worked on the pipeline used to automatically detect and transcribe street numbers from imagery in order to improve maps data.</p>
</div>
<div class="project-left">
<img src="images/sv2_small.jpg">
<p class="title_row"><b class="title">Automatic Image Enhancement for Older Imagery</b></p>
<p>We created a system for automatically enhancing images using single-image HDR techniques, and learned ideal parameters for the algorithm by training a classifier on data from a user study of preferences.</p>
</div>
<div class="project-left">
<img src="images/sv3_small.jpg">
<p class="title_row"><b class="title">Classification of Landmark Imagery</b></p>
<p>Designed features and trained classifiers for automatically detecting several types of attractive or important landmark imagery. For example, collected data and chose features for classifying imagery that looks over water or shows a city skyline.</p>
</div>
</div>
<h2>Willow Garage</h2>
<div class="content">
<div class="project-left">
<img src="images/turtlebot_arm_small.png">
<p class="title_row"><b class="title">TurtleBot Arm - Calibration and Applications</b> [<a href="http://wiki.ros.org/turtlebot_arm?distro=electric">website</a>]</p>
<p>Worked on a low-cost robot arm for hobbyist robotics. Built a system for robust camera-to-arm calibration, created documentation and assembly instructions for hobbyists, and created tools and demos.</p>
</div>
<div class="project-left">
<img src="images/vslam_small.jpg">
<p class="title_row"><b class="title">Visual SLAM</b> [<a href="http://wiki.ros.org/vslam">website</a> | <a href="https://www.youtube.com/watch?v=TR8BMZj-Udc">video</a>]</p>
<p>Worked on implementation of Visual SLAM (Simultaneous Localization and Mapping) using stereo image data in ROS (Robot Operating System). Developed and refined API, wrote documentation, and implemented new features such as integration of pointcloud matches from laser-based sensors into Visual SLAM.</p>
</div>
</div>
<h2>Olin College</h2>
<div class="content">
<div class="project-left">
<img src="images/scope_small.jpg">
<p class="title_row"><b class="title">Ping-Pong Playing Robot</b> Senior Consulting Project for ADSYS [<a href="https://www.youtube.com/watch?v=sxqWVtp4Vgo">video</a>]</p>
<p>Technical lead on a team of 4 that designed, built, and programmed a robot capable of playing ping-pong against a human player. This was a technical demo for a high-speed vision system developed by a sponsor company. Was responsible for vision, detection, modeling, and state estimation of the ping-pong ball.</p>
</div>
<div class="project-left">
<img src="images/medea_small.jpg">
<p class="title_row"><b class="title">Autonomous Shore Navigation Using Marine Radar</b> [<a href="#oceans">paper</a> | <a href="https://www.youtube.com/watch?v=pTDT7HkfwJM">video</a>]</p>
<p>Worked on an autonomous marine surface vehicle based on a 12-foot catamaran. Developed the overall software architecture, and worked on integrating position and state data from various sensors, writing motor controllers, and waypoint following, in addition to fabricating and testing the vehicle's mechanical and electrical systems. Designed a method to autonomously navigate along a shore using marine radar.</p>
</div>
</div>
</div>
</div>
</body>
</html>