Fix compat with OpenCV 2.4.9.1 coming with Ubuntu 16.04
- allow to use cv::VideoCapture
fspindle committed Dec 13, 2024
1 parent 7d89de6 commit 4aa365f
Showing 54 changed files with 1,132 additions and 664 deletions.
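
The pattern applied throughout this commit is to pick the header that provides cv::VideoCapture according to the OpenCV version: with OpenCV 2.4.x (the 2.4.9.1 packaged by Ubuntu 16.04) the class comes from the highgui module, while from OpenCV 3.0 onwards it comes from videoio. The following is a condensed sketch of that guard, mirroring the hunks below; the wrapping main() is added here only for illustration.

#include <cstdlib>

#include <visp3/core/vpConfig.h>

#if (VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)
#include <opencv2/highgui/highgui.hpp> // OpenCV 2.x: cv::VideoCapture lives in highgui
#elif (VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio/videoio.hpp> // OpenCV >= 3.0: cv::VideoCapture moved to videoio
#endif

int main()
{
#if ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)) || \
    ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
  cv::VideoCapture grabber(0); // open the default camera
  if (!grabber.isOpened()) {   // check if we succeeded
    return EXIT_FAILURE;
  }
  cv::Mat frame;
  grabber >> frame;            // grab one image
#endif
  return EXIT_SUCCESS;
}
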
16 changes: 8 additions & 8 deletions doc/tutorial/tracking/tutorial-tracking-blob.dox
@@ -32,40 +32,40 @@ a v4l2 live camera that can be an usb camera, or a Raspberry Pi camera module.

\subsection live-firewire From a firewire live camera

The following code also available in tutorial-blob-tracker-live-firewire.cpp file provided in ViSP source code tree
The following code also available in tutorial-blob-tracker-live.cpp file provided in ViSP source code tree
allows to grab images from a firewire camera and track a blob. The initialisation is done with a user mouse click on
a pixel that belongs to the blob.

To acquire images from a firewire camera we use vp1394TwoGrabber class on unix-like systems or vp1394CMUGrabber class
under Windows. These classes are described in the \ref tutorial-grabber.

\include tutorial-blob-tracker-live-firewire.cpp
\include tutorial-blob-tracker-live.cpp

From now on, we assume that you have successfully followed the \ref tutorial-getting-started and the \ref tutorial-grabber.
Hereafter we explain the new lines that are introduced.

\snippet tutorial-blob-tracker-live-firewire.cpp Construction
\snippet tutorial-blob-tracker-live.cpp Construction

Then we modify some default settings to allow drawing, in overlay, the contour pixels and the position of the
center of gravity with a thickness of 2 pixels.
\snippet tutorial-blob-tracker-live-firewire.cpp Setting
\snippet tutorial-blob-tracker-live.cpp Setting

Then we wait for a user initialization through a mouse click event in the blob to track.
\snippet tutorial-blob-tracker-live-firewire.cpp Init
\snippet tutorial-blob-tracker-live.cpp Init

The tracker is now initialized. The tracking can be performed on new images:
\snippet tutorial-blob-tracker-live-firewire.cpp Track
\snippet tutorial-blob-tracker-live.cpp Track
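
Putting the construction, settings, initialization and tracking snippets together, a minimal sketch of the resulting program could look like the following. This is an illustrative condensation rather than a verbatim extract of tutorial-blob-tracker-live.cpp, and it assumes an X11 display and a firewire camera handled by vp1394TwoGrabber.
\code
#include <visp3/blob/vpDot2.h>
#include <visp3/core/vpImage.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/sensor/vp1394TwoGrabber.h>

int main()
{
  vpImage<unsigned char> I;

  vp1394TwoGrabber g; // firewire grabber
  g.open(I);
  g.acquire(I);

  vpDisplayX d(I, 0, 0, "Blob tracking");

  vpDot2 blob;
  blob.setGraphics(true);       // draw the contour pixels in overlay
  blob.setGraphicsThickness(2); // with a thickness of 2 pixels

  vpDisplay::display(I);
  vpDisplay::flush(I);
  blob.initTracking(I);         // wait for a mouse click inside the blob

  while (true) {
    g.acquire(I);               // acquire a new image
    vpDisplay::display(I);
    blob.track(I);              // track and draw the blob center of gravity
    vpDisplay::flush(I);
  }
  return 0;
}
\endcode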

\subsection live-v4l2 From a v4l2 live camera

The following code also available in tutorial-blob-tracker-live-v4l2.cpp file provided in ViSP source code tree allows
The following code also available in tutorial-blob-tracker-live.cpp file provided in ViSP source code tree allows
to grab images from a camera compatible with the Video for Linux Two driver (v4l2) and track a blob. Webcams or, more
generally, USB cameras, but also the Raspberry Pi Camera Module can be considered.

To acquire images from a v4l2 camera we use vpV4l2Grabber class on unix-like systems. This class is described in the
\ref tutorial-grabber.
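
As an illustrative sketch, assuming ViSP was built with v4l2 support and the camera shows up as /dev/video0, the acquisition part based on vpV4l2Grabber typically reduces to:
\code
#include <visp3/core/vpImage.h>
#include <visp3/sensor/vpV4l2Grabber.h>

int main()
{
  vpImage<unsigned char> I;

  vpV4l2Grabber g;
  g.setDevice("/dev/video0"); // assumed device node
  g.setScale(1);              // acquire full resolution images
  g.open(I);
  g.acquire(I);
  return 0;
}
\endcode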

\include tutorial-blob-tracker-live-v4l2.cpp
\include tutorial-blob-tracker-live.cpp

The code is the same as the one presented in the previous subsection, except that here we use the vpV4l2Grabber
class to grab images from USB cameras. Here we have also modified the while loop in order to catch an exception when
@@ -105,7 +105,7 @@ tutorials given in \ref tutorial_install_src.
Once built, to see the options that are available, just run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-webcam --help
Usage: ./tutorial-mb-generic-tracker-apriltag-webcam [--input <camera id>] [--cube_size <size in m>] [--tag-size <size in m>] [--quad-decimate <decimation>] [--nthreads <nb>] [--intrinsic <xml intrinsic file>] [--camera-name <camera name in xml file>] [--tag-family <0: TAG_36h11, 1: TAG_36h10, 2: TAG_36ARTOOLKIT, 3: TAG_25h9, 4: TAG_25h7, 5: TAG_16h5>] [--display-off] [--texture] [--projection_error <30 - 100>] [--help]
Usage: ./tutorial-mb-generic-tracker-apriltag-webcam [--input <camera id>] [--cube-size <size in m>] [--tag-size <size in m>] [--quad-decimate <decimation>] [--nthreads <nb>] [--intrinsic <xml intrinsic file>] [--camera-name <camera name in xml file>] [--tag-family <0: TAG_36h11, 1: TAG_36h10, 2: TAG_36ARTOOLKIT, 3: TAG_25h9, 4: TAG_25h7, 5: TAG_16h5>] [--display-off] [--texture] [--projection-error <30 - 100>] [--help]
\endcode

To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, and enable moving-edges and
@@ -126,7 +126,7 @@ $ ./tutorial-mb-generic-tracker-apriltag-webcam --input 1
\endcode
- The default size of the cube is 0.125 meter. To use a 0.20 meter large cube instead, run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-webcam --cube_size 0.20
$ ./tutorial-mb-generic-tracker-apriltag-webcam --cube-size 0.20
\endcode
- The AprilTag size is 0.08 by 0.08 meters. To change the tag size to let say 0.10 meter square, use:
\code
@@ -160,7 +160,7 @@ $ ./tutorial-mb-generic-tracker-apriltag-webcam --display_off
The default value of this threshold is set to 40 degrees. To decrease this threshold to 30 degrees (meaning that we
accept less projection error and thus trigger a new AprilTag detection more often) you may run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-webcam --projection_error 30
$ ./tutorial-mb-generic-tracker-apriltag-webcam --projection-error 30
\endcode
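
These options can be combined. For instance, a hypothetical run selecting camera 0, the default 12.5 cm cube and 8 cm tag, and a 30 degree projection error threshold would be:
\code
$ ./tutorial-mb-generic-tracker-apriltag-webcam --input 0 --cube-size 0.125 --tag-size 0.08 --projection-error 30
\endcode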

\subsection mb_generic_apriltag_webcam_result Expected results
@@ -202,7 +202,7 @@ provided in the tutorials available from \ref tutorial_install_src.
Once built, to see the options that are available, just run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-rs2 --help
Usage: ./tutorial-mb-generic-tracker-apriltag-rs2 [--cube_size <size in m>] [--tag-size <size in m>] [--quad-decimate <decimation>] [--nthreads <nb>] [--tag-family <0: TAG_36h11, 1: TAG_36h10, 2: TAG_36ARTOOLKIT, 3: TAG_25h9, 4: TAG_25h7, 5: TAG_16h5>] [--display-off] [--texture] [--depth] [--projection_error <30 - 100>] [--help]
Usage: ./tutorial-mb-generic-tracker-apriltag-rs2 [--cube-size <size in m>] [--tag-size <size in m>] [--quad-decimate <decimation>] [--nthreads <nb>] [--tag-family <0: TAG_36h11, 1: TAG_36h10, 2: TAG_36ARTOOLKIT, 3: TAG_25h9, 4: TAG_25h7, 5: TAG_16h5>] [--display-off] [--texture] [--depth] [--projection-error <30 - 100>] [--help]
\endcode

To test the tracker on a 12.5 cm wide cube that has an AprilTag of size 8 by 8 cm, and enable moving-edges, keypoints
@@ -224,7 +224,7 @@ $ ./tutorial-mb-generic-tracker-apriltag-rs2
By default, the following settings are used: <br>
- The default size of the cube is 0.125 meter. To use a 0.20 meter large cube instead, run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-rs2 --cube_size 0.20
$ ./tutorial-mb-generic-tracker-apriltag-rs2 --cube-size 0.20
\endcode
- The AprilTag size is 0.08 by 0.08 meters. To change the tag size to let say 0.10 meter square, use:
\code
@@ -251,7 +251,7 @@ $ ./tutorial-mb-generic-tracker-apriltag-rs2 --display_off
The default value of this threshold is set to 40 degrees. To decrease this threshold to 30 degrees (meaning that
we accept less projection error and thus trigger a new AprilTag detection more often) you may run:
\code
$ ./tutorial-mb-generic-tracker-apriltag-webcam --projection_error 30
$ ./tutorial-mb-generic-tracker-apriltag-webcam --projection-error 30
\endcode

\subsection mb_generic_apriltag_realsense_result Expected results
5 changes: 5 additions & 0 deletions example/calibration/calibrate-camera.cpp
@@ -44,6 +44,11 @@
#elif defined(HAVE_OPENCV_CALIB)
#include <opencv2/calib.hpp>
#endif

#if defined(HAVE_OPENCV_CONTRIB)
#include <opencv2/contrib/contrib.hpp> // Needed on Ubuntu 16.04 with OpenCV 2.4.9.1
#endif

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
26 changes: 19 additions & 7 deletions example/manual/ogre/HelloWorldOgre.cpp
@@ -36,21 +36,31 @@
\example HelloWorldOgre.cpp
\brief Example that shows how to exploit the vpAROgre class.
*/

#include <iostream>

#include <visp3/core/vpConfig.h>

//! [Undef grabber]
// Comment / uncomment following lines to use the specific 3rd party compatible with your camera
// #undef VISP_HAVE_V4L2
// #undef VISP_HAVE_DC1394
// #undef HAVE_OPENCV_HIGHGUI
// #undef HAVE_OPENCV_VIDEOIO
//! [Undef grabber]

#include <visp3/ar/vpAROgre.h>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpConfig.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/sensor/vp1394TwoGrabber.h>
#include <visp3/sensor/vpV4l2Grabber.h>

#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#if (VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)
#include <opencv2/highgui/highgui.hpp> // for cv::VideoCapture
#elif (VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio/videoio.hpp> // for cv::VideoCapture
#endif

int main()
@@ -61,7 +71,7 @@ int main()

try {
#if defined(VISP_HAVE_OGRE)
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394) || defined(HAVE_OPENCV_VIDEOIO)
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394) || \
((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)) || \
((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))

// Image to stock gathered data
// Here we acquire a color image. The consequence will be that
@@ -79,7 +91,7 @@ int main()
vp1394TwoGrabber grabber;
grabber.open(I);
grabber.acquire(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
#elif ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
// OpenCV to gather images
cv::VideoCapture grabber(0); // open the default camera
if (!grabber.isOpened()) { // check if we succeeded
@@ -142,7 +154,7 @@ int main()
// Acquire a new image
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394)
grabber.acquire(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
#elif ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
grabber >> frame;
vpImageConvert::convert(frame, I);
#endif
25 changes: 19 additions & 6 deletions example/manual/ogre/HelloWorldOgreAdvanced.cpp
@@ -41,16 +41,27 @@

#include <iostream>

#include <visp3/core/vpConfig.h>

//! [Undef grabber]
// Comment / uncomment following lines to use the specific 3rd party compatible with your camera
// #undef VISP_HAVE_V4L2
// #undef VISP_HAVE_DC1394
// #undef HAVE_OPENCV_HIGHGUI
// #undef HAVE_OPENCV_VIDEOIO
//! [Undef grabber]

#include <visp3/ar/vpAROgre.h>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpConfig.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/sensor/vp1394TwoGrabber.h>
#include <visp3/sensor/vpV4l2Grabber.h>

#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#if (VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)
#include <opencv2/highgui/highgui.hpp> // for cv::VideoCapture
#elif (VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio/videoio.hpp> // for cv::VideoCapture
#endif

#ifdef ENABLE_VISP_NAMESPACE
@@ -114,7 +125,7 @@ int main()
{
try {
#if defined(VISP_HAVE_OGRE)
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394) || defined(HAVE_OPENCV_VIDEOIO)
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394) || \
((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)) || \
((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))

// Image to store gathered data
// Here we acquire a grey level image. The consequence will be that
@@ -139,7 +152,7 @@ int main()
// the image size
grabber.open(I);
grabber.acquire(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
#elif ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
// OpenCV to gather images
cv::VideoCapture grabber(0); // open the default camera
if (!grabber.isOpened()) { // check if we succeeded
@@ -171,7 +184,7 @@ int main()
// Acquire a new image
#if defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_DC1394)
grabber.acquire(I);
#elif defined(HAVE_OPENCV_VIDEOIO)
#elif ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
grabber >> frame;
vpImageConvert::convert(frame, I);
#endif
64 changes: 37 additions & 27 deletions example/servo-pioneer/servoPioneerPoint2DDepth.cpp
@@ -36,6 +36,15 @@

#include <visp3/core/vpConfig.h>

//! [Undef grabber]
// Comment / uncomment following lines to use the specific 3rd party compatible with your camera
// #undef VISP_HAVE_V4L2
// #undef VISP_HAVE_DC1394
// #undef VISP_HAVE_CMU1394
// #undef HAVE_OPENCV_HIGHGUI
// #undef HAVE_OPENCV_VIDEOIO
//! [Undef grabber]

#include <visp3/blob/vpDot2.h>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
@@ -53,43 +62,44 @@
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#if (VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)
#include <opencv2/highgui/highgui.hpp> // for cv::VideoCapture
#elif (VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio/videoio.hpp> // for cv::VideoCapture
#endif

#if defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_CMU1394) || defined(HAVE_OPENCV_VIDEOIO)
#if defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_V4L2) || defined(VISP_HAVE_CMU1394) || \
((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI)) || \
((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
#if defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI)
#if defined(VISP_HAVE_PIONEER)
#define TEST_COULD_BE_ACHIEVED
#endif
#endif
#endif

#undef VISP_HAVE_OPENCV // To use a firewire camera
#undef VISP_HAVE_V4L2 // To use a firewire camera

/*!
  \example servoPioneerPoint2DDepth.cpp

  Example that shows how to control the Pioneer mobile robot by IBVS visual
  servoing with respect to a blob. The current visual features that are used
  are s = (x, log(Z/Z*)). The desired ones are s* = (x*, 0), with:
  - x the abscissa of the point corresponding to the blob center of gravity
    measured at each iteration,
  - x* the desired abscissa position of the point (x* = 0)
  - Z the depth of the point measured at each iteration
  - Z* the desired depth of the point equal to the initial one.

  The degrees of freedom that are controlled are (vx, wz), where wz is the
  rotational velocity and vx the translational velocity of the mobile platform
  at point M located at the middle between the two wheels.

  The feature x allows to control wz, while log(Z/Z*) allows to control vx.
  The value of x is measured thanks to a blob tracker.
  The value of Z is estimated from the surface of the blob that is
  proportional to the depth Z.
*/
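
// Illustrative sketch (not part of this file): how the two features described above are
// typically built and combined in a vpServo task with ViSP. Variable names, the depth
// values and the gain are assumptions used only for illustration.
//
//   vpDot2 dot;                    // blob tracker giving the center of gravity
//   vpCameraParameters cam;        // camera intrinsic parameters
//   double Z = 0.8, Zd = 0.8;      // current and desired depth in meter (assumed)
//
//   vpFeaturePoint s_x, s_xd;
//   vpFeatureBuilder::create(s_x, cam, dot); // current x from the tracked blob
//   s_xd.buildFrom(0, 0, Zd);                // desired abscissa x* = 0
//
//   vpFeatureDepth s_Z, s_Zd;
//   s_Z.buildFrom(s_x.get_x(), s_x.get_y(), Z, log(Z / Zd));
//   s_Zd.buildFrom(s_x.get_x(), s_x.get_y(), Zd, 0);        // desired log(Z/Z*) = 0
//
//   vpServo task;
//   task.setServo(vpServo::EYEINHAND_L_cVe_eJe);
//   task.setLambda(0.04);
//   task.addFeature(s_x, s_xd, vpFeaturePoint::selectX()); // x         -> controls wz
//   task.addFeature(s_Z, s_Zd);                             // log(Z/Z*) -> controls vx
//   vpColVector v = task.computeControlLaw();               // velocity (vx, wz)
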
#ifdef TEST_COULD_BE_ACHIEVED
int main(int argc, char **argv)
{
@@ -137,7 +147,7 @@ int main(int argc, char **argv)
vpCameraParameters cam;

// Create the camera framegrabber
#if defined(HAVE_OPENCV_VIDEOIO)
#if ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
int device = 1;
std::cout << "Use device: " << device << std::endl;
cv::VideoCapture g(device); // open the default camera
@@ -181,7 +191,7 @@ int main(int argc, char **argv)
#endif

// Acquire an image from the grabber
#if defined(HAVE_OPENCV_VIDEOIO)
#if ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
g >> frame; // get a new frame from camera
vpImageConvert::convert(frame, I);
#else
@@ -262,7 +272,7 @@ int main(int argc, char **argv)

while (1) {
// Acquire a new image
#if defined(HAVE_OPENCV_VIDEOIO)
#if ((VISP_HAVE_OPENCV_VERSION < 0x030000) && defined(HAVE_OPENCV_HIGHGUI))|| ((VISP_HAVE_OPENCV_VERSION >= 0x030000) && defined(HAVE_OPENCV_VIDEOIO))
g >> frame; // get a new frame from camera
vpImageConvert::convert(frame, I);
#else