reproduce #8
Hi @zhangjd1029,

Thank you for your interest in my work. Due to the (in retrospect) quite complicated setup, I would, however, recommend doing the offline processing of your data. You could calculate a Visual SLAM trajectory using DSO and export it into a CSV file (see https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/IOWrapper/OutputWrapper/SampleOutputWrapper.h#L210 for my implementation). Then you can run the fusion algorithm within Blender using my own addon, described here:

The answer to your question, in my view, depends on what your final goal is. Are you more interested in how the integration into the DSO works, or do you want to develop your own fusion algorithm? I doubt that I can provide a one-size-fits-all solution for you, as I am not very satisfied with the work I did here. I drew a few very important conclusions from this work, for example that I first need to build a pipeline from data capture to final fusion algorithm, which I am still working on at the moment. So, depending on your personal goal, I might need to tell you that you should not waste your time with my code but instead develop your own solution.

So, please elaborate a bit more on what goal you want to achieve.

Kind regards,
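For reference, loading such an exported trajectory for offline processing could look roughly like this (a minimal Python sketch; the file name and the column layout of timestamp plus 3D position are assumptions here, the actual export format is defined in the SampleOutputWrapper linked above):

```python
import csv

# Assumed CSV layout: timestamp, x, y, z per row (see the linked
# SampleOutputWrapper.h for the actual export format).
def load_trajectory(path):
    with open(path, newline="") as fh:
        return [(float(t), float(x), float(y), float(z))
                for t, x, y, z in csv.reader(fh)]

trajectory = load_trajectory("dso_trajectory.csv")  # hypothetical file name
```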
Thank you for your detailed reply. I have benefited greatly.
Hi @zhangjd1029,

Thank you for detailing what you have done so far. Ok, about the online operation: in order to send the GPS location via UDP from the smartphone, I developed my own smartphone app in Java (in Android Studio). I noticed that I have not published the code of the app, so I wouldn't recommend using the online variant, as there is a lot of work involved in getting everything running. And given how many years have passed since I developed the code in this repo, this is just too complicated compared to a plug-and-play solution. Nowadays, I would simply use a ROS-integrated GPS receiver connected to a Jetson board and process the corresponding ROS topic within the DSO. No extra hardware and networking problems to solve :)

Therefore I would suggest you look into the offline version for now. Just open the .blend file, which should work with the most recent Blender version (tested today with Blender 3.5.0 Alpha). You will see the "Scripting" workspace appearing with many pre-imported scenarios for different datasets. It is not important how the datasets look; here we are only interested in the individual trajectories which we want to fuse. Scroll down to line 602 of the Python script: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/FusionLinearKalmanFilter/01_LinearKalmanFilter_allEvaluations.py#L602

Hope this helped you. Feel free to reach out if there is still something unclear.

Kind regards,
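To give a rough idea of the kind of fusion that script performs, here is a minimal linear Kalman filter sketch in Python (a simplified position-only random-walk model with assumed noise values, not the exact model used in the script):

```python
import numpy as np

F = np.eye(3)            # state transition: position random walk
Q = np.eye(3) * 0.01     # process noise (assumed value)
H = np.eye(3)            # the measurement is the position itself

def kf_step(x, P, z, accuracy_m):
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: measurement noise derived from the reported GPS accuracy.
    R = np.eye(3) * accuracy_m ** 2
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 1000.0          # large initial uncertainty
x, P = kf_step(x, P, np.array([1.0, 2.0, 0.5]), accuracy_m=5.0)
```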
Thank you very much. I have learned a lot from your reply.
You are very welcome, @zhangjd1029 :)

By the way, in case you are interested: I am currently doing a live stream series about the LDSO. Yesterday I uploaded the most recent recordings. In this part I show how I am implementing an importer for the LDSO trajectories to be used within Blender. As this is a video with explanation, you might be able to find something helpful for your project there - here is the link to the most recent video recording:

I will soon close your issue. Feel free to open a new one if you have any further questions or remarks. I am also happy about any criticism.

Kind regards,
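For an impression of what such an importer boils down to, a minimal sketch using Blender's Python API could look like this (illustrative only; the object name and the example points are made up, the actual importer is developed in the video):

```python
import bpy

# Import a trajectory as an animated empty: one keyframe per pose.
def import_trajectory(points):
    obj = bpy.data.objects.new("Trajectory", None)  # 'None' creates an empty
    bpy.context.collection.objects.link(obj)
    for frame, (x, y, z) in enumerate(points, start=1):
        obj.location = (x, y, z)
        obj.keyframe_insert(data_path="location", frame=frame)

import_trajectory([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.1, 0.0)])
```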
That is great! Recently, I have also been researching LDSO, and there were some areas in the code that I couldn't understand. I will watch your video carefully! I am writing code that runs LDSO in real time with Pangolin, but the received image is gray with stripes, which is obviously wrong. Have you conducted any research in this area? Can you give me some suggestions? Thank you!
Hi @zhangjd1029,

Sounds interesting. Do you have any online resources on your project? And yes, the LDSO does have a lot of areas which are not easy to understand. In fact, I think some are even impossible for other developers to understand. But I am trying to make the code easier to understand, and I am therefore highly interested in which code parts you couldn't understand. Please feel free to comment on my videos in case anything in the code should be explained a bit more. Maybe we can clarify that code part together? ;)

Kind regards,
Sorry, I don't have any open-source resources online. If you want to see it, I can send it to you by email. Would you be willing to take a look?
Hi @zhangjd1029,
Ok, I see. No worries, I would also just try fixing the byte order. Good luck with your project. Kind regards,
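In case it helps, here is one quick way to check for a byte-order mismatch (a Python sketch with assumptions: the frame arrives as a raw 16-bit grayscale buffer of known size, and the noise heuristic is only illustrative):

```python
import numpy as np

def decode_frame(raw: bytes, width: int, height: int) -> np.ndarray:
    """Try both byte orders; the wrongly decoded variant typically shows
    the gray, striped pattern described above."""
    little = np.frombuffer(raw, dtype="<u2").reshape(height, width)
    big = np.frombuffer(raw, dtype=">u2").reshape(height, width)
    # Heuristic: the correct interpretation usually has much less
    # high-frequency noise between neighboring pixels.
    noise = lambda img: np.abs(np.diff(img.astype(np.int32))).mean()
    return little if noise(little) <= noise(big) else big
```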
Hello @GSORF
I think so, but I don't know if it's right. I can write a ROS program that receives GPS sentences like $GPGGA,092725.00,4717.11399,N,00833.91590,E,1,08,1.01,499.6,M,48.0,M,*5B, converts them into GPS#49.4144637#11.1298899#20.903, and finally creates a UDP socket to send the converted GPS signal. So, can dso_ros directly receive GPS signals? I don't know if I have expressed myself clearly. Looking forward to your reply. Thank you!
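For illustration, such a conversion and UDP sender could be sketched in Python like this (assumptions: the receiver expects the GPS#&lt;lat&gt;#&lt;lon&gt;#&lt;accuracy&gt; format on port 2016 as described in this thread, and the accuracy estimate from HDOP is a rough placeholder, since a GGA sentence does not report accuracy in meters directly):

```python
import socket

def gpgga_to_udp_message(sentence: str) -> str:
    """Convert a $GPGGA sentence into the 'GPS#lat#lon#accuracy' format."""
    f = sentence.split(",")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmmm -> degrees
    if f[3] == "S":
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmmm -> degrees
    if f[5] == "W":
        lon = -lon
    # Purely illustrative: derive a rough accuracy [m] from the HDOP field.
    accuracy = float(f[8]) * 5.0
    return f"GPS#{lat:.7f}#{lon:.7f}#{accuracy:.3f}"

msg = gpgga_to_udp_message(
    "$GPGGA,092725.00,4717.11399,N,00833.91590,E,1,08,1.01,499.6,M,48.0,M,*5B")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode(), ("127.0.0.1", 2016))  # port 2016 as mentioned above
```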
Hi @zhangjd1029,

Wow, I am really happy that you have managed to get my very old code running - congratulations! The results look quite interesting, and it makes me very happy to see that. About your questions:
Yes.
Yes, but keep in mind, the
Yes, but keep in mind that the actual GPS signal is received and processed in the FullSystem, see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L901
Keep in mind that
Yes, unfortunately it is. As you can see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L848 once the system is "lost", it will not recover from that state anymore. I do not want to blame the original DSO, but this is how they have implemented it. I was too busy with my work of implementing a Kalman filter that "sort of" works and didn't have the time to implement a proper re-localization. So there is a lot more work to do here.

For starters, you could click on "Reset" in the user interface, which would then reset the FullSystem. Then the fusion should restart (but I think I am not deleting the history in the Kalman filter, so you would get jumps if you move too far away from your last position). This is definitely a point where you may want to continue the work. I am happy to help you with this, in case you are writing a research paper and want to cite my work. Actually, I am still interested in the fusion myself, of course :)

Hope this helps, and thank you for trying out my implementation. I am glad you made it this far, given that I have not looked into my code for many years.

Kind regards,
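The fix for the jumps would essentially be to clear the filter history on reset, so that the next GPS fix re-initializes the state. A minimal sketch of that idea (Python, with an intentionally simple static-position filter; the repository's actual filter lives in the C++ code linked above):

```python
import numpy as np

class GpsFusion:
    def __init__(self):
        self.x = None  # position estimate
        self.P = None  # covariance

    def reset(self):
        # Call this together with the FullSystem reset, so the filter
        # forgets its history instead of producing a jump.
        self.x, self.P = None, None

    def update(self, z, accuracy_m):
        z = np.asarray(z, dtype=float)
        R = np.eye(3) * accuracy_m ** 2
        if self.x is None:
            # (Re-)initialize directly from the first measurement.
            self.x, self.P = z, R.copy()
            return self.x
        K = self.P @ np.linalg.inv(self.P + R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```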
I am really happy to hear that. Thank you for your encouragement. I suddenly realized a mistake I made: I mistakenly treated the last item of the received GPS signal as the altitude, but it actually refers to the accuracy. I was really careless. May I ask whether it represents the horizontal accuracy?

Also, I would like to confirm again whether the output I mentioned earlier is correct, because I didn't obtain the GPS signal through a mobile phone, but received it through a serial port and then sent it to the computer's local IP on port 2016 via UDP through ROS. So I cannot be sure whether "VGPSSLAM_FindVGPSServer", "VGPSSLAM_VGPSServerIsHere", and "GPS" (https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L238) have values.

I am studying your project with the intention of using it on a drone, to make the drone's trajectory more accurate at high altitude. If possible, I would like to port your project to LDSO. I would like to know if you agree with this and whether you have attempted such an integration before.

Kind regards,
Hi @zhangjd1029, Thanks for the update. To your questions:
Yes, it is the horizontal accuracy. This strongly depends on what your GNSS receiver / the actual smartphone API provides. In my case it was the horizontal accuracy given in meters, which I used to set the measurement noise covariance matrix in the Kalman filter; see the Android docs here: https://developer.android.com/reference/android/location/Location#getAccuracy()
This is hard to comment on from my side without seeing your code, but there is an easy way to check: just print out the received values in this function, after the line that tells you that there is a new GPS measurement (the line that was successfully printed in your terminal): https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L100

Keep one thing in mind, please: my Kalman filter implementation might not be to your liking. In retrospect, I think it would be a good idea to include the altitude measurement in the conversion from geodetic to Cartesian coordinates (using the WGS84 ECEF ellipsoid model). I also merely estimate the 3D position - no orientation! Given your scenario with the drone, you might need to heavily improve that Kalman filter. I remember that I also visualized the Kalman filter trajectory using a green color in the 3D view. I couldn't see that in your screenshots - is it displayed? And please check that you are initializing the Kalman filter using the GPS measurement by setting the corresponding option.

But all in all, I love your idea of porting this code to the LDSO! Very good plan, and I sincerely encourage you to do that! I only tried the implementation in a car, which is very well constrained to 2D planar motion (even though I did not enforce that in my implementation). I did not have a drone able to carry the additional payload of Jetson + camera + battery at that time. Nowadays things are different. However, I concentrated my research very much on monocular Visual SLAM and only touched upon the data fusion aspects. I am trying to get my VSLAMs running without any additional sensor data, which is what I personally find much more interesting. To this end I am implementing my own VSLAM architectures. In the long run, I still think that a combination of all techniques is the way to go.

Ok, the end was a bit more on the philosophical side of things. I hope I have answered your questions and would love to stay updated on your progress. Maybe you could tell me a bit more about your project; specifically, I would like to know whether this is a student project or a commercial product?

Feel free to ask if anything is still unclear. I am happy to help.

Kind regards,
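The geodetic-to-Cartesian conversion mentioned above is standard; including the altitude would look roughly like this (a Python sketch using the standard WGS84 constants; only the example coordinates are made up):

```python
import math

A = 6378137.0                 # WGS84 semi-major axis [m]
F = 1.0 / 298.257223563       # WGS84 flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic coordinates (WGS84) to Cartesian ECEF [m]."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(49.4144637, 11.1298899, 300.0))
```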
Hey zhang,

Thanks for letting me know about this issue. Indeed, the command-line option "udp=1" is useless - I have commented out the check for this flag in the no-ROS version, see https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/main_dso_pangolin.cpp#L486

About the visualization of the Kalman filter: I first had to find where I implemented it - the commit can be found here: f9d561f

So, TL;DR: please remove the option "initKalmanFilter=1" from your command line. Then you should see the green Kalman filter trajectory in the 3D view once you get GPS coordinates.

I know, this code is a mess. ;( There is a lot of room for improvement, besides it not being coded very user-friendly. You can get inspiration from my code, but as I wrote at the beginning of this issue, I would strongly suggest using the offline version within Blender. That's where I put most of my effort, and the results from that have also been published in my paper:

Good luck with your graduation project! Please keep me updated on your progress.

Kind regards,
Hello, I admire you for doing such great work. I am very interested in it.
I want to reproduce it, but I don't have a ZED camera; I only have a monocular camera and a GPS module. Can I reproduce your work in real time with this setup? If so, what should I do?