[{"authors":null,"categories":null,"content":"Welcome to my personal webpage! I am a third-year PhD student at the Robotics, Perception and Real Time (RoPeRT) group at the University of Zaragoza (Unizar) under the supervision of Prof. Juan D. Tardós. Previously I studied a Bachelors’s degree in Computer Science and a Master’s degree in Biomedical Engineering, both at Unizar where I started my career as a Computer Vision researcher.\nMy work revolves how computers perceive and understand their surroundings by means of Visual Simultaneous Localization and Mapping techniques (V-SLAM). I am especially interested and motivated by those challenging situations that impairs the use of these technologies like deformable V-SLAM which has direct application to many other fields like Minimal Intrusive Surgery.\nDownload my resumé.\n","date":1668816000,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1668816000,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"","publishdate":"0001-01-01T00:00:00Z","relpermalink":"","section":"authors","summary":"Welcome to my personal webpage! I am a third-year PhD student at the Robotics, Perception and Real Time (RoPeRT) group at the University of Zaragoza (Unizar) under the supervision of Prof.","tags":null,"title":"Juan José Gómez Rodríguez","type":"authors"},{"authors":["Julio A. Placed","Juan José Gómez Rodríguez","Juan D Tardós","José A. Castellanos","","","",""],"categories":null,"content":"","date":1668816000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1668816000,"objectID":"6a2570a7ee7b4a704386956231dd26bc","permalink":"https://jj-gomez.github.io/publication/explora/","publishdate":"2022-11-19T00:00:00Z","relpermalink":"/publication/explora/","section":"publication","summary":"Deploying autonomous robots capable of exploring unknown environments has long been a topic of great relevance to the robotics community. In this work, we take a further step in that direction by presenting an open-source active visual SLAM framework that leverages the accuracy of a state-of-the-art graph-SLAM system and takes advantage of the fast utility computation that exploiting the structure of the underlying pose-graph offers. We achieve fast decision making through careful estimation of a posteriori weighted pose-graphs and by employing a utility function that balances exploration and exploitation principles.","tags":["active SLAM"],"title":"ExplORB-SLAM: Active Visual SLAMExploiting the Pose-graph Topology","type":"publication"},{"authors":["Pablo Azagra","Carlos Sostres","Ángel Ferrandez","Luis Riazuelo","Clara Tomasini","Oscar León Barbed","Javier Morlana","David Recasens","Victor M Batlle","Juan José Gómez Rodríguez","Richard Elvira","Julia López","Cristina Oriol","Javier Civera","Juan D Tardós","Ana Cristina Murillo","Angel Lanas","José MM Montiel"],"categories":null,"content":"","date":1651363200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1651363200,"objectID":"78fffad3b1a17435d722972699f68635","permalink":"https://jj-gomez.github.io/publication/endomapper/","publishdate":"2022-05-01T00:00:00Z","relpermalink":"/publication/endomapper/","section":"publication","summary":"Computer-assisted systems are becoming broadly used in medicine. In endoscopy, most research focuses on automatic detection of polyps or other pathologies, but localization and navigation of the endoscope is completely performed manually by physicians. 
To broaden this research and bring spatial Artificial Intelligence to endoscopies, data from complete procedures are needed. This data will be used to build 3D mapping and localization systems that can perform special tasks such as detecting blind zones during exploration, providing automatic polyp measurements, guiding doctors to a polyp found in a previous exploration, and retrieving previous images of the same area, aligned for easy comparison. These systems will improve the quality and precision of the procedures while lowering the burden on physicians. This paper introduces the Endomapper dataset, the first collection of complete endoscopy sequences acquired during regular medical practice, including slow and careful screening explorations, making secondary use of medical data. Its original purpose is to facilitate the development and evaluation of VSLAM (Visual Simultaneous Localization and Mapping) methods on real endoscopy data. The first release of the dataset is composed of 59 sequences with more than 15 hours of video. It is also the first endoscopic dataset that includes both the computed geometric and photometric endoscope calibration and the original calibration videos. Meta-data and annotations associated with the dataset range from anatomical landmark labeling and procedure descriptions, to tool segmentation masks, COLMAP 3D reconstructions, simulated sequences with ground truth, and meta-data related to special cases, such as sequences from the same patient. This information will improve research in endoscopic VSLAM, as well as other research lines, and create new ones.","tags":["endoscopy","dataset"],"title":"EndoMapper dataset of complete calibrated endoscopy procedures","type":"publication"},{"authors":["Juan José Gómez Rodríguez","José MM Montiel","Juan D Tardós"],"categories":null,"content":"","date":1646092800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1646092800,"objectID":"e278c88545c5b9891de5ed98fcebbdc8","permalink":"https://jj-gomez.github.io/publication/bodytracker/","publishdate":"2022-03-01T00:00:00Z","relpermalink":"/publication/bodytracker/","section":"publication","summary":"Monocular SLAM in deformable scenes will open the way to multiple medical applications like computer-assisted navigation in endoscopy, automatic drug delivery or autonomous robotic surgery. In this paper we propose a novel method to simultaneously track the camera pose and the 3D scene deformation, without any assumption about environment topology or shape. The method uses an illumination-invariant photometric approach to track image features, and estimates camera motion and deformation by combining reprojection error with spatial and temporal regularization of deformations. Our results in simulated colonoscopies show the method's accuracy and robustness in complex scenes under increasing levels of deformation. Our qualitative results in human colonoscopies from the Endomapper dataset show that the method is able to successfully cope with the challenges of real endoscopies, like deformations, low texture and strong illumination changes. 
We also compare with previous tracking methods in simpler scenarios from the Hamlyn dataset, where we obtain competitive performance without needing any topological assumption.","tags":["endoscopy","camera tracking","monocular"],"title":"Tracking monocular camera pose and deformation for SLAM inside the human body","type":"publication"},{"authors":["Jose Lamarca","Juan José Gómez Rodríguez","José MM Montiel","Juan D Tardós"],"categories":null,"content":"","date":1631664000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1631664000,"objectID":"6be824f0faaa9c916ad8bb74313e2318","permalink":"https://jj-gomez.github.io/publication/dsdt/","publishdate":"2021-09-15T00:00:00Z","relpermalink":"/publication/dsdt/","section":"publication","summary":"Deformable Monocular SLAM algorithms recover the localization of a camera in an unknown deformable environment. Current approaches use template-based deformable tracking to recover the camera pose and the deformation of the map. These template-based methods rely on an underlying global deformation model. In this paper, we introduce a novel deformable camera tracking method with a local deformation model for each point. Each map point is defined as a single textured surfel that moves independently of the other map points. Thanks to a direct photometric error cost function, we can track the position and orientation of the surfel without an explicit global deformation model. In our experiments, we validate the proposed system and observe that our local deformation model estimates the targeted deformations of the map more accurately and robustly in both laboratory-controlled experiments and in-body scenarios undergoing non-isometric deformations, with changing topology or discontinuities.","tags":["endoscopy","camera tracking","monocular"],"title":"Direct and Sparse Deformable Tracking","type":"publication"},{"authors":["Juan José Gómez Rodríguez","Jose Lamarca","Javier Morlana","José MM Montiel","Juan D Tardós","Equal contribution","Equal contribution","","",""],"categories":null,"content":"","date":1602720000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1602720000,"objectID":"4d3e8bdf34640061b255946f5911e1d8","permalink":"https://jj-gomez.github.io/publication/sd/","publishdate":"2020-10-15T00:00:00Z","relpermalink":"/publication/sd/","section":"publication","summary":"Conventional SLAM techniques strongly rely on scene rigidity to solve data association, ignoring dynamic parts of the scene. In this work we present Semi-Direct DefSLAM (SD-DefSLAM), a novel monocular deformable SLAM method able to map highly deforming environments, built on top of DefSLAM. To robustly solve data association in challenging deforming scenes, SD-DefSLAM combines direct and indirect methods: an enhanced illumination-invariant Lucas-Kanade tracker for data association, geometric Bundle Adjustment for pose and deformable map estimation, and bag-of-words based on feature descriptors for camera relocalization. Dynamic objects are detected and segmented out using a CNN trained for the specific application domain. We thoroughly evaluate our system in two public datasets. The Mandala dataset is a SLAM benchmark with increasingly aggressive deformations. The Hamlyn dataset contains intracorporeal sequences that pose serious real-life challenges beyond deformation, like weak texture, specular reflections, surgical tools and occlusions. 
Our results show that SD-DefSLAM outperforms DefSLAM in point tracking, reconstruction accuracy and scale drift thanks to the improvements in all the data association steps, making it the first system able to robustly perform SLAM inside the human body.","tags":["endoscopy","camera tracking","monocular","optical flow"],"title":"SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes","type":"publication"},{"authors":["Carlos Campos","Richard Elvira","Juan José Gómez Rodríguez","José MM Montiel","Juan D Tardós","Equal contribution","Equal contribution","","",""],"categories":null,"content":"","date":1593561600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1593561600,"objectID":"1852f25a60ca34c8c985d6493175f33c","permalink":"https://jj-gomez.github.io/publication/orbslam/","publishdate":"2020-07-01T00:00:00Z","relpermalink":"/publication/orbslam/","section":"publication","summary":"This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a feature-based tightly-integrated visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU initialization phase. The result is a system that operates robustly in real time, in small and large, indoor and outdoor environments, and is 2 to 5 times more accurate than previous approaches. The second main novelty is a multiple-map system that relies on a new place recognition method with improved recall. Thanks to it, ORB-SLAM3 is able to survive long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting mapped areas. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse all previous information in all the algorithm stages. This allows bundle adjustment to include co-visible keyframes that provide high-parallax observations, boosting accuracy even if they are widely separated in time or come from a previous mapping session. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.6 cm on the EuRoC drone sequences and 9 mm under quick hand-held motions in the room sequences of the TUM-VI dataset, a setting representative of AR/VR scenarios. For the benefit of the community, we make the source code public.","tags":["visual SLAM","inertial SLAM","fisheye"],"title":"ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM","type":"publication"},{"authors":null,"categories":null,"content":"","date":1461715200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1461715200,"objectID":"e8f8d235e8e7f2efd912bfe865363fc3","permalink":"https://jj-gomez.github.io/project/example/","publishdate":"2016-04-27T00:00:00Z","relpermalink":"/project/example/","section":"project","summary":"Real-time mapping from endoscopic videos","tags":["Deep Learning"],"title":"EndoMapper","type":"project"}]