Can I use Kinect v2 for acquiring the data? #94
Comments
Hello @patmarion, #93

1. First, use `lcm-logger` to record your camera's RGB-D stream. If your stream is not 640x480, crop and resize it in `lcm_republisher.cpp`.
2. Second, change the intrinsics in `bot_frames.cfg` (I'm not sure whether this is necessary) like this, using your pre-calibrated intrinsics.
3. Third, write a `camera.cfg` with your new intrinsics (a sketch of the file is at the end of this comment) and make sure ElasticFusion is launched with it:

   ```python
   # call ElasticFusion
   cameraConfig_arg = ""
   if os.path.isfile(path + "/camera.cfg"):
       cameraConfig_arg += " -cal " + path + "/camera.cfg"
   print 'cameraConfig_arg', cameraConfig_arg
   os.system(path_to_ElasticFusion_executable + " -l ./" + lcmlog_filename + cameraConfig_arg)
   ```

4. Fourth, just run the pipeline following the author's routine. If you want to label the 3D bounding box like in the following picture, implement the projection operation in `rendertrainingimages.py` (a quick numeric check of this projection is at the end of this comment):

   ```python
   def getProjectionPixel(self, vertex3d):
       # collect the 3D vertices and arrange them as a 3xN array
       vertex3dList = []
       for i in range(vertex3d.GetNumberOfPoints()):
           point = vertex3d.GetPoint(i)
           vertex3dList.append(point)
       vertex3dArray = np.array(vertex3dList)
       vertex3dArray = vertex3dArray.transpose()
       # project with the intrinsics matrix and normalize by depth
       projectPixels = np.matmul(self.projectMatrix, vertex3dArray)
       projectPixels = projectPixels / projectPixels[-1, :]
       return projectPixels[:-1, :]

   def setProjectMatrix():
       principalX = 321.543
       principalY = 242.333
       fx = 406.340666
       fy = 406.3886
       return np.array([[fx, 0, principalX], [0, fy, principalY], [0, 0, 1]])
   ```
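A sketch of what the `camera.cfg` from step 3 could contain, assuming I remember ElasticFusion's `-cal` format correctly (a single line of `fx fy cx cy`); the numbers below are just the example intrinsics from step 4, so substitute your own calibration:

```
406.340666 406.3886 321.543 242.333
```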
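And, as a quick numeric check of the projection in step 4, a self-contained sketch of the same pinhole math (the 3D point is made up purely for illustration):

```python
import numpy as np

# intrinsics matrix K, same example values as setProjectMatrix() above
K = np.array([[406.340666, 0.0, 321.543],
              [0.0, 406.3886, 242.333],
              [0.0, 0.0, 1.0]])

# one 3D point in the camera frame (x right, y down, z forward), in metres
point = np.array([[0.10], [0.05], [1.50]])

# project and divide by depth to get pixel coordinates (u, v)
pixel = np.matmul(K, point)
pixel = pixel[:-1, :] / pixel[-1, :]
print(pixel)  # roughly [[348.6], [255.9]]
```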
Just leaving my comments for getting results with a ZED camera (HD720: 1280x720 or VGA: 672x376).
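Since neither ZED mode above is 640x480, here is a rough sketch of the crop-and-resize mentioned in step 1, written in Python/OpenCV only to illustrate the geometry (the actual change lives in the C++ `lcm_republisher.cpp`, and the function name here is made up):

```python
import cv2

def to_640x480(frame):
    # assumes the input is wider than 4:3 (true for Kinect v2 and ZED color streams)
    h, w = frame.shape[:2]           # e.g. 720, 1280 for the ZED in HD720
    crop_w = int(h * 4 / 3)          # 960 columns for a 720-row frame
    x0 = (w - crop_w) // 2           # left edge of the centered crop
    cropped = frame[:, x0:x0 + crop_w]
    return cv2.resize(cropped, (640, 480))
```

Cropping and resizing also change the intrinsics you feed to steps 2 and 3: roughly fx' = fx * 640 / crop_w, fy' = fy * 480 / h, cx' = (cx - x0) * 640 / crop_w, cy' = cy * 480 / h.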
I have gone through the steps of making LabelFusion work with Kinect v2 (aka Kinect One) using ianre657's nvidia-docker2 fork on Ubuntu 20.04 LTS.

By default, OpenNI2, which LabelFusion uses to capture the data, does not support Kinect v2. However, if you build and install libfreenect2, it creates drivers for OpenNI2 that do support Kinect v2. This means you need to install libfreenect2 and set the udev rules inside the Docker container. Here is someone who has already done most of the work for you: copy their Dockerfile and paste the relevant parts into the LabelFusion Dockerfile. If you have trouble with CUDA while building libfreenect2, add `-DENABLE_CUDA=OFF` to the CMake parameters to disable it. I also had to correct the `cp` path for the udev rules, since libfreenect2 is downloaded and built in the home directory (`/root`, confusingly named "root") instead of the root (`/`) directory.

Assuming your system is the same as mine, that was enough to run the Protonect sample from libfreenect2, but NiViewer2 from OpenNI2 still failed to find the Kinect v2 device. What I had to do was find the OpenNI.ini file (`/etc/openni2/OpenNI.ini`) and point the driver repository at the drivers provided by libfreenect2 (`Repository=~/freenect2/lib/OpenNI2/Drivers`); a sketch of what that section could look like is at the end of this comment. (For whatever reason, this step was not required when installing natively on my Ubuntu 20.04 system.) You can probably add this line after building everything else in the Dockerfile.

Finally, rebuild the LabelFusion image using `docker_build.sh`. I am attaching my Dockerfile for reference. Adjust the camera intrinsics and resolution as instructed above, then run your Docker container with the rebuilt image, mounting your data folder:
Inside the docker, run:
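Going back to the OpenNI.ini step mentioned above, here is a rough sketch of what the edited section could look like; the `[Drivers]` section name is taken from my copy of the file, and the path should be wherever libfreenect2 ends up in your image (the `~` expands to `/root` inside the container):

```ini
[Drivers]
; point OpenNI2 at the drivers built by libfreenect2 so it can open the Kinect v2
Repository=/root/freenect2/lib/OpenNI2/Drivers
```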
If yes, how so?
Thank you.