How to launch qsr and Velocity Costmaps #3

Pei-Yachao opened this issue Oct 11, 2016 · 4 comments
@Pei-Yachao

Hi,
I read about the Qualitative Constraints for Human-aware Robot Navigation project and watched the video. I'm very interested in this project and want to test it on my own robot.
I've installed all the packages, including the modified navigation meta-package, but I don't know how to launch the project.
Could you tell me the steps to start it?

http://lcas.lincoln.ac.uk/cdondrup/software.html

@cdondrup
Member

Hi,

Always happy when someone reads the papers ;) In theory it should be as easy as launching the hrsi.launch file. You want to start it with:

roslaunch hrsi_launch hrsi.launch with_qsr_lib:=true

Currently the system relies on the output produced by our people perception module.

We have a deployment of our robot at the end of November where the system is supposed to be used outside the lab for the first time. For this we will collect more training data (currently only the two scenarios shown in the video are available) and also make it easier to use. However, if you just run the launch file above and use the tracker we provide, it should work, given that you are using the modified navigation stack.
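For completeness, a typical session could look something like this (two terminals; the people tracker launch file is the one from our perception_people packages):

# Terminal 1: start the HRSI components, including the QSR library
roslaunch hrsi_launch hrsi.launch with_qsr_lib:=true
# Terminal 2: start the people tracker that provides the human detections
roslaunch perception_people_launch people_tracker_robot.launch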

If you are interested, I can keep you in the loop regarding those upgrades that should hopefully make it more useful.

@Pei-Yachao
Author

Hi,
Thank you for responding.

I've made some progress. ^_^

My robot is a Kobuki-like robot. It only receives twists (cmd_vel) from the navigation stack and converts the data from its two wheel encoders to odom. The LRF is a Hokuyo UST-20 laser. The RGB-D camera is an Xtion (I didn't use it in Gazebo, but my real robot has one); the Xtion is used to detect objects.

I have read almost all of your lab's papers and am now focusing on the AAAI 2016 paper (Unsupervised Learning of Qualitative Motion Behaviours by a Mobile Robot). I'm just a little confused about how the learning model works in detail.
So I want to recreate the experiment first, then progressively understand how the whole project works.
But I'm stuck on how to get this project running on my robot.

I have three questions at the present stage:
1. Do I have to roslaunch mongodb .....? (The setup instructions say the system relies on mongodb.) I ask because my robot only works on a local network.
2. In that video the robot uses the Xtion, right?
3. I launched my robot in Gazebo together with the other two commands:
roslaunch hrsi_launch hrsi.launch with_qsr_lib:=true
roslaunch perception_people_launch people_tracker_robot.launch
but in rviz I cannot get the velocity costmap. I posted the screenshot below, and I don't know what is wrong.

[screenshot: problem]
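Is there a specific topic I should be checking for? I would try something like this (the /velocity_costmap topic name is only my guess):

# list topics that look costmap-related
rostopic list | grep -i costmap
# if the topic is there, check whether anything is actually being published on it
rostopic hz /velocity_costmap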

I'm looking forward to hearing from you.

@cdondrup
Member

Hi,

The problem with the velocity costmap is that it is only published when a human is detected and the interaction type has been classified.
Regarding the people perception: this launch file makes a few assumptions that are based on the robot we use in the project. For the upper body detector it computes a region of interest based on a predefined height, and it also assumes that there is a pan-tilt unit that publishes angles, so this will not work properly. In theory you can just change that height in a config file, and there is also a version that does not need a PTU, but given your robot it will most likely never see upper bodies anyway because of the height of the camera.
The leg detector on the other hand might work if you make sure that your laser scans are published on the /scan topic, or if you change the parameter for that topic in the perception_people launch file. But I am not sure how well that works in Gazebo and whether it has a good human model.
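If your laser publishes on a different topic, something along these lines should do (the launch file argument name below is only an illustration; check the launch file for the actual parameter):

# either relay your laser topic onto /scan ...
rosrun topic_tools relay /my_laser/scan /scan
# ... or pass the topic to the launch file if it exposes an argument for it
# ("scan_topic" and "/my_laser/scan" are placeholders, not the real names)
roslaunch perception_people_launch people_tracker_robot.launch scan_topic:=/my_laser/scan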

I will create a simulator set-up using our robot and simulator at some point rather soon-ish and make that available for testing as well.

Regarding mongodb: the people perception (given that you do not want to use the logging feature) and the velocity costmaps work without it. For most other parts of the system you will need it, though. It doesn't matter if it is just a local network. Mongodb is a database that we use to store all kinds of important information, like topological maps, and for data collection.
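If and when you do need it, starting the datacentre should look roughly like this (assuming the mongodb_store package from the STRANDS repositories; the database path is just an example):

# create a directory for the database files (example path)
mkdir -p /data/mongodb_store
# launch the message store / datacentre pointing at that directory
roslaunch mongodb_store mongodb_store.launch db_path:=/data/mongodb_store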

@Pei-Yachao
Author

Hi Dondrup,
Thank you for responding.

I'm still working on it right now.
I set up my PC yesterday. When I compiled this navigation package there was an error about not finding a rosjava lib... (I don't remember the details), but I fixed it.
Maybe this could help someone who wants to try this project, so I'll post my setup steps here:

step 1: Install ros-indigo-ros-base (Ubuntu 14.04).
step 2: Follow the instructions on setting up the STRANDS repositories (sudo apt-get install ros-indigo-strands-robot).
step 3: sudo apt-get install ros-indigo-hrsi-launch
step 4: Install this (modified navigation) from source.
step 5: Install the rviz, rqt, hector_mapping, gmapping, openni2, gazebo, hokuyo_node, urg_node and kobuki packages. (I use these basic packages in my project.)
step 6: Set up mongodb.

At least I could install and compile the navigation stack by following these steps; a rough shell version of them is sketched below.
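Roughly, as shell commands (the exact apt package names in the last install line are my best guesses and may differ for your setup):

# Ubuntu 14.04, ROS Indigo base
sudo apt-get install ros-indigo-ros-base
# STRANDS packages (after adding the STRANDS/L-CAS apt sources as per their instructions)
sudo apt-get install ros-indigo-strands-robot ros-indigo-hrsi-launch
# basic packages used on my robot (names are approximate)
sudo apt-get install ros-indigo-rviz ros-indigo-rqt ros-indigo-hector-mapping \
    ros-indigo-gmapping ros-indigo-openni2-launch ros-indigo-gazebo-ros \
    ros-indigo-hokuyo-node ros-indigo-urg-node ros-indigo-kobuki
# build the modified navigation stack from source in a catkin workspace
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
# (clone this modified navigation repository here)
cd ~/catkin_ws && catkin_make
# finally, set up mongodb following the STRANDS instructions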
