
reproduce #8

Open
zhangjd1029 opened this issue May 18, 2023 · 16 comments
@zhangjd1029

Hello, I admire your ability to do such a great job. I am very interested in your work.
I want to reproduce it, but I don't have a ZED camera; I only have a monocular camera and a GPS module. Can I reproduce your work in real time with this setup? If so, what should I do?

@GSORF
Owner

GSORF commented May 19, 2023

Hi @zhangjd1029,

Thank you for your interest in my work.
I was actually also using a monocular camera (since DSO works with mono only). However, the camera needs to publish its images via ROS (Robot Operating System), and the GPS is published from a smartphone (via UDP port 2016, see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h). Would this work with your setup?

Due to the (in retrospect) quite complicated setup, I would, however, recommend doing offline processing of your data. You could calculate a Visual SLAM trajectory using DSO and export it into a CSV file (see https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/IOWrapper/OutputWrapper/SampleOutputWrapper.h#L210 for my implementation). Then you can run the fusion algorithm within Blender using my own addon, described here:
https://github.com/GSORF/Visual-GPS-SLAM/tree/master#how-to-use-the-b-slam-sim-blender-addon

The answer to your question, in my view, depends on your final goal. Are you more interested in how the integration into DSO works, or do you want to develop your own fusion algorithm? I doubt that I can provide a one-size-fits-all solution, as I am not very satisfied with the work I did here. I drew a few very important conclusions from it, for example that I first need to build a pipeline from data capture to final fusion algorithm, which I am still working on at the moment. So, depending on your personal goal, I might need to tell you not to spend your time on my code but instead develop your own solution. Please elaborate a bit more on the goal you want to achieve.

Kind regards,
Adam

@zhangjd1029
Author

Thank you for your detailed reply. I have benefited greatly.
I would like to ask whether online operation can be achieved by running your modified dso_ros node with the command "rosrun dso_ros dso_live rosTopic:=xxx udp=xxx calib=xxx mode=1". If so, how can I obtain the UDP signal?
I am also prepared to try the offline operation you suggested.
After reading the https://github.com/GSORF/Visual-GPS-SLAM/tree/master#how-to-use-the-b-slam-sim-blender-addon page you mentioned, I have deployed Blender 2.8, but I am not sure how to use it offline with DSO. Can you explain it to me?
Thank you again for your reply!

@GSORF
Owner

GSORF commented May 21, 2023

Hi @zhangjd1029,

Thank you for detailing what you have done so far.

Ok, about the online operation:
You need to set up a UDP client that will broadcast the following string to the DSO endpoint on port 2016 on the network:
GPS#49.4144637#11.1298899#20.903
As you can see, the string contains four parts: (1) "GPS", (2) latitude, (3) longitude, (4) accuracy. For more information, here is the corresponding line in my modified DSO version:
https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L189
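As an illustration (not code from the repository, where the parsing is done in C++ inside UDPServer.h), the wire format above can be parsed with a few lines of Python:

```python
def parse_vgps_message(msg: str):
    """Split a 'GPS#<latitude>#<longitude>#<accuracy>' string into floats.

    Illustrative sketch only; the actual parsing in this project happens
    in C++ inside UDPServer.h.
    """
    parts = msg.split("#")
    if len(parts) != 4 or parts[0] != "GPS":
        raise ValueError("not a VGPS message: %r" % msg)
    latitude, longitude, accuracy = map(float, parts[1:])
    return latitude, longitude, accuracy


lat, lon, acc = parse_vgps_message("GPS#49.4144637#11.1298899#20.903")
```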

In order to send the GPS location via UDP from the smartphone, I developed my own smartphone app in Java (in Android Studio). I noticed that I have not published the app's code, so I wouldn't recommend using the online variant; there is a lot of work involved in getting everything running. And given how many years have passed since I developed the code in this repo, this was just too complicated compared to a plug-and-play solution. Nowadays, I would just use a ROS-integrated GPS receiver connected to a Jetson board and process the corresponding ROS topic within DSO. No extra hardware and networking problems to solve :)

Therefore I would suggest you look into the offline version for now.
This is much easier; I have provided the .blend file for you here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/FusionLinearKalmanFilter/01_LinearKalmanFilter_allEvaluations.blend

Just open the .blend file, which should work with the most recent Blender version (tested today with Blender 3.5.0 Alpha). You will see the "Scripting" workspace with many pre-imported scenarios for different datasets. How the datasets look is not important; here we are only interested in the individual trajectories which we want to fuse. Scroll down to line 602 of the Python script: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/FusionLinearKalmanFilter/01_LinearKalmanFilter_allEvaluations.py#L602
This is the entry point of the main function. I tried to document what I am doing. From line 624 onwards you will see which specific trajectories we will fuse. We begin with fusing a correctly scaled (but drifting) DSO trajectory with a 1 m accurate GPS measurement. The cool part is: you can use whatever trajectories you like here. They do not have to be the output of DSO; they can also come from your own VSLAM algorithm. Simply try out what I have prepared in this .blend file and get comfortable with it.

Before you press the "Run script" play button below the text editor, however, make sure to comment out the remaining lines from line 631 onwards (https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/FusionLinearKalmanFilter/01_LinearKalmanFilter_allEvaluations.py#L631), as the script will otherwise save the results of fusing every data source, which will most likely lead to an unresponsive Blender instance. The lines after line 631 are just different variants of the filter with different settings, so you can choose exactly how you would like to perform the sensor fusion. Of course there are many other methods and approaches to data fusion; this is only the simplest, yet powerful, one.
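To get a feel for what a linear Kalman filter fusion of this kind does before opening Blender, here is a minimal one-dimensional sketch (my own illustration, not the repository's code): relative, drifting VSLAM motion drives the prediction step, and absolute GPS fixes drive the update step.

```python
def fuse_trajectories(vslam_deltas, gps_positions, gps_sigma, process_noise=0.01):
    """Fuse relative VSLAM motion with absolute GPS fixes (1D linear Kalman filter).

    vslam_deltas:  per-step position increments from visual odometry (may drift)
    gps_positions: absolute GPS position at each step (one more than the deltas)
    gps_sigma:     GPS standard deviation; its square is the measurement noise R
    """
    x = gps_positions[0]      # initialize the state from the first GPS fix
    p = gps_sigma ** 2        # initial state covariance
    r = gps_sigma ** 2        # measurement noise covariance
    fused = [x]
    for dx, z in zip(vslam_deltas, gps_positions[1:]):
        # Predict: apply the VSLAM increment and inflate the uncertainty.
        x += dx
        p += process_noise
        # Update: blend in the GPS fix, weighted by the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        fused.append(x)
    return fused


# VSLAM overestimates each 1 m step by 10 %; GPS is unbiased.
estimate = fuse_trajectories([1.1] * 10, [float(i) for i in range(11)], gps_sigma=0.1)
```

Dead reckoning alone would end at 11.0 m instead of the true 10.0 m; the fused estimate stays close to the GPS track while still following the smooth visual motion.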

Hope this helped you. Feel free to reach out if there is still something unclear.

Kind regards,
Adam

@zhangjd1029
Author

Thank you very much. I have learned a lot from your reply.
Next, I will try running offline and study your method of integrating GPS.
Thank you!

@GSORF
Owner

GSORF commented May 24, 2023

You are very welcome, @zhangjd1029 :)

By the way, in case you are interested: I am currently doing a live stream series about LDSO. Yesterday I uploaded the most recent recordings. In this part I show how I implement an importer for LDSO trajectories to be used within Blender. As this is a video with explanations, you might find something helpful for your project there. Here is the link to the most recent recording:
https://www.youtube.com/watch?v=CPxTwsZqL4Q

I will close this issue soon. Feel free to open a new one if you have any further questions or remarks. I am also happy to receive any criticism.

Kind regards,
Adam

@zhangjd1029
Author

That is great! I have recently been researching LDSO as well, and there are some areas of the code that I couldn't understand. I will watch your video carefully!

I am writing code that runs LDSO in real time and can run Pangolin, but the received image is gray with stripes, which is obviously wrong.

Have you conducted any research in this area? Can you give me some suggestions?

Thank you

@GSORF
Owner

GSORF commented May 26, 2023

Hi @zhangjd1029,

Sounds interesting. Do you have any online resources on your project?
Given your description, I would assume that the byte order in the image data is wrong. I had a similar issue when reading a ROS image topic via raw Python and then trying to reconstruct the image from the byte data. The byte order was off in such a way that I got those (diagonal) stripes. My solution was not to bother with ROS in my case but instead save the images to disk. But of course, I do not know anything about your implementation, so your situation will probably be different.

And yes, LDSO does have a lot of areas which are not easy to understand. In fact, I think some are even impossible for other developers to understand. But I am trying to make the code easier to follow, and I am therefore highly interested in which code parts you couldn't understand. Please feel free to comment on my videos in case anything in the code should be explained a bit more. Maybe we can clarify that code part together? ;)

Kind regards,
Adam

@zhangjd1029
Author

Sorry, I don't have any open-source resources online. If you want to see the code, I can send it to your email. Would you be willing to help me check it?
I think your suggestion is correct. The pixels of the image may not have been extracted correctly, and I will try to fix it in that direction.
Thank you for the invitation you mentioned at the end, but my abilities are limited.
Thank you very much

@GSORF
Owner

GSORF commented May 27, 2023

Hi @zhangjd1029,

Sorry, I don't have any open-source resources online. If you want to see the code, I can send it to your email. Would you be willing to help me check it?

Ok, I see. No worries, I would also just try out fixing the byte order. Good luck with your project.

Kind regards,
Adam

@zhangjd1029
Author

Hello @GSORF
I am still trying to run your project in real time. I have two questions about this command (rosrun dso_ros dso_live topic=xxx calib=xxx sampleoutput=1 udp=1 http=1):

  1. Does 'topic=' receive image topics published by ROS?
  2. Does setting udp to 1 mean that the GPS signal will be fused?

That is my understanding, but I don't know if it's right. I can write a ROS program that receives GPS sentences like ($GPGGA,092725.00,4717.11399,N,00833.91590,E,1,08,1.01,499.6,M,48.0,M,*5B), converts them into GPS#49.4144637#11.1298899#20.903, and finally creates a UDP socket to send the converted GPS signal. So, can dso_ros directly receive GPS signals?
Finally, I would like to ask: UDPServer.h only sets the port number to 2016 and does not set an IP address. How does it determine the IP address? How should I set the IP address for UDP in the ROS program I mentioned above?
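The conversion described above can be sketched in Python. Note that the accuracy field below is approximated as HDOP times a nominal 5 m range error, since a GGA sentence carries no accuracy value directly; that factor is an assumption for illustration, not something from the repository.

```python
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA (d)ddmm.mmmm plus a hemisphere letter into signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])   # everything before the minutes digits
    minutes = float(value[dot - 2 :])   # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal


def gga_to_vgps(sentence: str) -> str:
    """Rewrite a $GPGGA sentence into the 'GPS#lat#lon#accuracy' wire format.

    The accuracy is approximated as HDOP (field 8) times a nominal 5 m user
    range error, because GGA does not report accuracy directly (assumption).
    """
    f = sentence.split(",")
    lat = nmea_to_decimal(f[2], f[3])
    lon = nmea_to_decimal(f[4], f[5])
    accuracy = float(f[8]) * 5.0
    return "GPS#%.7f#%.7f#%.3f" % (lat, lon, accuracy)
```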

I don't know if I have expressed myself clearly. Looking forward to your reply. Thank you!

@zhangjd1029
Author

Hello!
I think I have solved the above problem. That's great! I sincerely admire your project once again.
But I don't seem to have successfully run your modified dso_ros, because when I was turning, the trajectory drifted, as shown in the following figure:
[Screenshot: dso_gps trajectory drift]
When dso_ros is running normally, its output is as follows:
[Screenshot: terminal output, 2023-06-02 22:07]
When tracking is lost, "Received from UDP: GPS#40.00#116.35#52.00000" appears, but at this point DSO is no longer functioning properly.
[Screenshot: terminal output, 2023-06-02 22:08]
May I ask if this is normal? Looking forward to your reply.
Thank you!

@GSORF
Owner

GSORF commented Jun 3, 2023

Hi @zhangjd1029,

Wow, I am really happy that you have managed to get my very old code running. Congratulations, the results look quite interesting, and it makes me very happy to see that.

About your questions:

1.) Does 'topic=' receive image topics published by ROS?

Yes.

2.) Does setting udp to 1 mean that the GPS signal will be fused?

Yes, but keep in mind that the udp=1 flag only relates to receiving UDP packets via the network. The fusion itself is implemented based on whether GPS signals are received via UDP here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L885 (alternatively, the GPS data can be given as a .csv file exported using my Blender addon in this repository, but this is not related to your question!)

So, can dso_ros directly receive GPS signals?

Yes, but keep in mind that the actual GPS signal is received and processed in the FullSystem, see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L901

Finally, I would like to ask: UDPServer.h only sets the port number to 2016 and does not set an IP address. How does it determine the IP address? How should I set the IP address for UDP in the ROS program I mentioned above?

Keep in mind that UDPServer.h only reacts to a physical smartphone which is measuring the GPS coordinates. It usually only listens via UDP, and only in one specific case will it send a packet, as I will describe now: I am actually using a UDP broadcast from the smartphone (the physical GPS receiver) to the whole local network (address 0.0.0.0) to ask for the server IP. The broadcast message (using this message here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L238) is then received by the computer running DSO (a Jetson TX2 in my case), which sends its IP address (using this message here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L239) to the phone. This way the communication between smartphone and computer is established. Here is the actual implementation of the UDP broadcast reply: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L160
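The handshake described above can be sketched from the client side like this: a hypothetical Python stand-in for the smartphone app. Only the two message strings come from UDPServer.h; the function name and everything else is illustrative.

```python
import socket

# Message strings as used in UDPServer.h; the rest of this sketch is illustrative.
DISCOVERY_REQUEST = b"VGPSSLAM_FindVGPSServer"
DISCOVERY_REPLY = b"VGPSSLAM_VGPSServerIsHere"


def discover_vgps_server(port=2016, target="255.255.255.255", timeout=1.0):
    """Broadcast a discovery request and return the IP address of the first
    machine that answers with the expected reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    try:
        sock.sendto(DISCOVERY_REQUEST, (target, port))
        data, (ip, _) = sock.recvfrom(1024)
    finally:
        sock.close()
    if data != DISCOVERY_REPLY:
        raise RuntimeError("unexpected reply: %r" % data)
    return ip
```

Once the server IP is known, the client can unicast the GPS# messages to it on the same port.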

When the tracking is lost, it will appear “Received from UDP: GPS#40.00#116.35#52.00000",but at this point, the dso is no longer functioning properly. May I ask if this is normal?

Yes, unfortunately it is. As you can see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L848, once the system is "lost", it will not recover from that state anymore. I do not want to blame the original DSO, but this is how they implemented it. I was too busy with my work of implementing a Kalman filter that "sort of" works and didn't have the time to implement proper re-localization. So there is a lot more work to do here. For starters, you could click "Reset" in the user interface, which resets the FullSystem. Then the fusion should restart (but I think I am not clearing the history in the Kalman filter, so you would get jumps if you move too far away from your last position).

This is definitely a point where you may want to continue the work. I am happy to help you with this, in case you are writing a research paper and want to cite my work. Actually, I am still interested in the fusion myself, of course :)

Hope this helps, and thank you for trying out my implementation. I am glad you made it this far, given the many years since I last looked into my code.
Please keep me posted on your progress! Good luck with your project!

Kind regards,
Adam

@zhangjd1029
Author

I am really happy to hear your affirmation. Thank you for your encouragement.

I suddenly realized a mistake I made: I treated the last item of the received GPS signal as altitude, but it actually refers to accuracy. I was really careless. Does it represent horizontal accuracy?

Also, I would like to confirm again whether the output I mentioned earlier is correct. I didn't obtain the GPS signal through a mobile phone; instead, I received it through a serial port and then sent it via UDP from ROS to the computer's local IP on port 2016. So I cannot be sure whether "VGPSSLAM_FindVGPSServer", "VGPSSLAM_VGPSServerIsHere", and "GPS" (https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L238) are ever exchanged.

I am studying your project with the intention of using it on a drone to make the drone's trajectory more accurate at high altitude. If possible, I would like to port your project to LDSO. I would like to know whether you agree with this and whether you have attempted such an integration before.

Kind regards,
zhang

@GSORF
Owner

GSORF commented Jun 4, 2023

Hi @zhangjd1029,

Thanks for the update. To your questions:

I suddenly realized a mistake I made. I mistakenly treated the last item of the received GPS signal as altitude, but I just realized that it refers to accuracy. I was really careless. May I know if it represents horizontal accuracy?

Yes, it is the horizontal accuracy. What it means exactly depends strongly on what your GNSS receiver / the smartphone API provides. In my case it was the horizontal accuracy in meters, which I used to set the measurement noise covariance matrix in the Kalman filter; see the Android docs here: https://developer.android.com/reference/android/location/Location#getAccuracy()
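As a sketch of that idea (my own illustration, not the repository's exact matrix): the reported accuracy, squared, becomes the diagonal of the position measurement noise covariance. The vertical inflation factor below is an assumption, since the message carries no vertical accuracy.

```python
def measurement_noise_from_accuracy(accuracy_m, vertical_factor=4.0):
    """Build a diagonal 3x3 measurement noise covariance R from a reported
    horizontal 1-sigma accuracy in meters.

    vertical_factor inflates the vertical variance, since the GPS message
    only reports horizontal accuracy (an assumption, not from the repo).
    """
    var_h = accuracy_m ** 2
    return [
        [var_h, 0.0, 0.0],
        [0.0, var_h, 0.0],
        [0.0, 0.0, vertical_factor * var_h],
    ]
```

A more accurate fix then shrinks R, so the Kalman update trusts that measurement more.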

Also, I would like to confirm again whether the output I mentioned earlier is correct. I didn't obtain the GPS signal through a mobile phone; instead, I received it through a serial port and then sent it via UDP from ROS to the computer's local IP on port 2016. So I cannot be sure whether "VGPSSLAM_FindVGPSServer", "VGPSSLAM_VGPSServerIsHere", and "GPS" (https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L238) are ever exchanged.

This is hard to comment on without seeing your code. But there is an easy way to check: just print out the received values in this function, after the line that tells you there is a new GPS measurement (the line that was successfully printed in your terminal): https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L100
However, the first step would be to check whether you are actually receiving anything via UDP at all (specifically the UDP broadcast for IP address detection). I have already implemented a corresponding debug print line; check whether it is printed on your side: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L156
You should then be able to see whether the message is correctly asking for the VGPS server ;). You have already demonstrated that your communication seems to work flawlessly. If you change the altitude to the accuracy, you should be good to go.

Keep one thing in mind, please: my Kalman filter implementation might not be to your liking. In retrospect, I think it would be a good idea to include the altitude measurement in the conversion from geodetic to Cartesian coordinates (using the WGS84 ECEF ellipsoid model). I also merely estimate the 3D position, no orientation! Given your drone scenario, you might need to improve that Kalman filter considerably. I remember that I also visualized the Kalman filter trajectory in green in the 3D view. I couldn't see that in your screenshots. Is it displayed? And please check that you are initializing the Kalman filter with the GPS measurement by setting initKalmanFilter=1 via the command line:
https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso_ros/src/main.cpp#L143 (compare https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso_ros/runDSOOnline.sh)
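The geodetic-to-ECEF conversion mentioned above looks roughly like this. These are the standard WGS84 formulas, written here as an illustration rather than copied from the repository; the function accepts an altitude so you can experiment with including it.

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis in meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared


def geodetic_to_ecef(lat_deg, lon_deg, alt_m=0.0):
    """Convert geodetic latitude/longitude (degrees) and altitude (meters)
    to Cartesian ECEF coordinates in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```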

But all in all, I love your idea of porting this code to LDSO! A very good plan, and I sincerely encourage you to do it! I only tried the implementation in a car, which is very well constrained to 2D planar motion (even though I did not enforce that in my implementation). I did not have a drone able to carry the additional payload of Jetson + camera + battery at that time; nowadays things are different. However, I have concentrated my research very much on monocular Visual SLAM and only touched upon the data fusion aspects. I am trying to get my VSLAMs running without any additional sensor data, which is what I personally find much more interesting. To this end I am implementing my own VSLAM architectures. In the long run, I still think that a combination of all techniques is the way to go.

Ok, the end was a bit philosophical. I hope I have answered your questions, and I would love to stay updated on your progress. Maybe you could tell me a bit more about your project; specifically, I would like to know whether this is a student project or a commercial product. Feel free to ask if anything is still unclear. I am happy to help.

Kind regards,
Adam

@zhangjd1029
Author

Thank you very much for your detailed answer.
Let me answer your final question first: this is my graduation project, so it is academic in nature.

Today I tested again and changed the last field to accuracy. After adding initKalmanFilter=1 to the command, the results still showed drift, and there is no visualized Kalman filter trajectory.
[Screenshot: trajectory, no loop closure]

The command I entered is "rosrun dso_ros dso_live topic=/usb_cam/image_raw calib=xxx udp=1 initKalmanFilter=1". Is this correct? I also tried leaving 'udp=1' out of the command, but the output still shows that GPS was received via UDP. This is strange.

I don't know why this result occurred. Perhaps I didn't set something correctly. Next, I want to read your modified DSO code and learn about the GPS fusion method. I hope to gain some insights. Thank you very much!

Kind regards,
zhang

@GSORF
Owner

GSORF commented Jun 13, 2023

Hey zhang,

Thanks for letting me know about this issue.

Indeed, the command line option "udp=1" is useless; I have commented out the check for this flag in the no-ROS version, see https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/main_dso_pangolin.cpp#L486
Same with the ROS version, see https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso_ros/src/main.cpp#L291
Sorry for the confusion. It seems that during my tests I needed to comment that out and forgot about it when uploading the code.

About the visualization of the Kalman filter: I first had to find where I implemented it; the commit can be found here: f9d561f
The display depends on whether a "frame" object has a "measurement" (to display the GPS measurement points in 3D) or a "prediction" (to display the Kalman filter trajectory in 3D); see here for drawing the Kalman filter trajectory: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/IOWrapper/Pangolin/KeyFrameDisplay.cpp#L436
These flags are set for each frame (1) in the UDP server when a GPS measurement is received via UDP, see here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/util/UDPServer.h#L204
and (2) in the FullSystem (function "AddActiveFrame()") when the Kalman filter is successfully initialized, that is, when (a) "udp=1" is set via the command line and (b) a GPS coordinate is received and converted to ECEF. Note that the setting "initKalmanFilter=1" sets the start of the predicted trajectory to ECEF coordinates, so you will probably not be able to find or see the trajectory in the 3D view: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L929
The prediction flag itself is set here: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/03_Application/dso/src/FullSystem/FullSystem.cpp#L1082

So, TL;DR: please remove the option "initKalmanFilter=1" from your command line. Then you should see the green Kalman filter trajectory in the 3D view once you receive GPS coordinates.

I know, this code is a mess. ;( There is a lot of room for improvement, besides it not being coded very user-friendly. You can take inspiration from my code, but as I wrote at the beginning of this issue, I would strongly suggest using the offline version within Blender. That is where I put most of my effort, and the results from it have also been published in my paper:
https://www.scitepress.org/Papers/2019/73753/73753.pdf
For a more detailed explanation of my Kalman filter method (again, I want to stress that there are better ways!), you can read my master's thesis if you want: https://github.com/GSORF/Visual-GPS-SLAM/tree/master#master-thesis

Good luck with your graduation project! Please keep me updated on your progress.

Kind regards,
Adam
