diff --git a/doc/tutorial/calibration/tutorial-calibration-extrinsic.dox b/doc/tutorial/calibration/tutorial-calibration-extrinsic.dox
index 7f6bfeee73..af8ac9b803 100644
--- a/doc/tutorial/calibration/tutorial-calibration-extrinsic.dox
+++ b/doc/tutorial/calibration/tutorial-calibration-extrinsic.dox
@@ -31,10 +31,10 @@ The calibration process described in this tutorial consists in 3 steps:
 to estimate the \f$^e{\bf M}_c\f$ transformation.
 
 Note that all the material (source code) described in this tutorial is part of ViSP source code
-(in `tutorial/calibration` folder) and could be downloaded using the following command:
+(in the `apps/calibration` folder) and can be downloaded using the following command:
 
 \verbatim
-$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/calibration
+$ svn export https://github.com/lagadic/visp.git/trunk/apps/calibration
 \endverbatim
 
 \section calib_ext_recommendation Recommendations
@@ -102,7 +102,7 @@ If you want the parameters with distortion, you need to achieve a calibration as
 As an example, in ViSP source code you will find a dataset corresponding to data acquired with a real robot:
 
 \verbatim
-$ cd $VISP_WS/visp-build/tutorial/calibration
+$ cd $VISP_WS/visp-build/apps/calibration
 $ ls *.{xml,png,yaml}
 camera.xml   image-3.png  image-6.png  pose_fPe_1.yaml  pose_fPe_4.yaml  pose_fPe_7.yaml
 image-1.png  image-4.png  image-7.png  pose_fPe_2.yaml  pose_fPe_5.yaml  pose_fPe_8.yaml
@@ -286,10 +286,10 @@ vpRealSense2 class usage.
 Step 1: Acquire robot poses and images
 
 Connect the Realsense D435 camera to the computer, put the chessboard in the camera field of view, enter
-`tutorial/calibration` folder and run `visp-acquire-franka-calib-data` binary to acquire the images and the
+the `apps/calibration` folder and run the `visp-acquire-franka-calib-data` binary to acquire the images and the
 corresponding robot end-effector positions:
 
-    $ cd tutorial/calibration
+    $ cd apps/calibration
     $ ./visp-acquire-franka-calib-data
 
 By default the robot controller IP is `192.168.1.1`. If your Franka has another IP (say 10.0.0.2) use
@@ -392,11 +392,11 @@ vpRealSense2 class usage.
 
 Step 1: Acquire robot poses and images
 
 Connect the Realsense camera to the computer, put the chessboard in the camera field of view, enter
-`tutorial/calibration` folder and run `visp-acquire-universal-robots-calib-data` binary to acquire the images and
+the `apps/calibration` folder and run the `visp-acquire-universal-robots-calib-data` binary to acquire the images and
 the corresponding robot end-effector positions:
 
 \verbatim
-$ cd tutorial/calibration
+$ cd apps/calibration
 $ ./visp-acquire-universal-robots-calib-data
 \endverbatim
diff --git a/doc/tutorial/visual-servo/tutorial-franka-ibvs.dox b/doc/tutorial/visual-servo/tutorial-franka-ibvs.dox
index 1f67bc27ca..3e8014e71b 100644
--- a/doc/tutorial/visual-servo/tutorial-franka-ibvs.dox
+++ b/doc/tutorial/visual-servo/tutorial-franka-ibvs.dox
@@ -16,7 +16,7 @@ An example of image-based visual servoing using Panda robot equipped with a Real
 
 - Attach your Realsense camera to the robot end-effector
 - Put an Apriltag in the camera field of view
-- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$ the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `tutorial/calibration/eMc.yaml`.
+- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `apps/calibration/eMc.yaml`.
 
 Now enter the `example/servo-franka` folder and run the `servoFrankaIBVS` binary using `--eMc` to locate the file containing the \f$^e{\bf M}_c\f$ transformation. Other options are available. Using `--help` shows them:
 
@@ -26,7 +26,7 @@ Now enter in `example/servo-franka folder` and run `servoFrankaIBVS` binary usin
 
 Run the binary activating the plot and using a constant gain:
 
-    $ ./servoFrankaIBVS --eMc ../../tutorial/calibration/eMc.yaml --plot
+    $ ./servoFrankaIBVS --eMc ../../apps/calibration/eMc.yaml --plot
 
 Use the left mouse click to enable the robot controller, and the right click to quit the binary.
 
@@ -40,15 +40,15 @@ At this point the behaviour that you should observe is the following:
 
 You can also activate an adaptive gain that will make the convergence faster:
 
-    $ ./servoFrankaIBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --adaptive_gain
+    $ ./servoFrankaIBVS --eMc ../../apps/calibration/eMc.yaml --plot --adaptive_gain
 
 You can also start the robot with a zero velocity at the beginning by introducing the task sequencing option:
 
-    $ ./servoFrankaIBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --task_sequencing
+    $ ./servoFrankaIBVS --eMc ../../apps/calibration/eMc.yaml --plot --task_sequencing
 
 And finally you can activate the adaptive gain and task sequencing:
 
-    $ ./servoFrankaIBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --adaptive_gain --task_sequencing
+    $ ./servoFrankaIBVS --eMc ../../apps/calibration/eMc.yaml --plot --adaptive_gain --task_sequencing
 
 To learn more about adaptive gain and task sequencing see \ref tutorial-boost-vs.
 
@@ -61,6 +61,6 @@ from a real Franka robot, like in the next video, we recommend to make a tour on
 \endhtmlonly
 
-You can also follow \ref tutorial-ibvs that will give some hints on image-based visual servoing in simulation with a free flying camera. 
+You can also follow \ref tutorial-ibvs that will give some hints on image-based visual servoing in simulation with a free flying camera.
 
 */
diff --git a/doc/tutorial/visual-servo/tutorial-franka-pbvs.dox b/doc/tutorial/visual-servo/tutorial-franka-pbvs.dox
index 9d3a4c6e4f..40d2999005 100644
--- a/doc/tutorial/visual-servo/tutorial-franka-pbvs.dox
+++ b/doc/tutorial/visual-servo/tutorial-franka-pbvs.dox
@@ -258,7 +258,7 @@ An example of position-based visual servoing using Panda robot equipped with a R
 
 - Attach your Realsense camera to the robot end-effector
 - Put an Apriltag in the camera field of view
-- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$ the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `tutorial/calibration/eMc.yaml`.
+- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `apps/calibration/eMc.yaml`.
 
 Now enter the `example/servo-franka` folder and run the `servoFrankaPBVS` binary using `--eMc` to locate the file containing the \f$^e{\bf M}_c\f$ transformation. Other options are available. Using `--help` shows them:
 
@@ -268,7 +268,7 @@ Now enter in `example/servo-franka folder` and run `servoFrankaPBVS` binary usin
 
 Run the binary activating the plot and using a constant gain:
 
-    $ ./servoFrankaPBVS --eMc ../../tutorial/calibration/eMc.yaml --plot
+    $ ./servoFrankaPBVS --eMc ../../apps/calibration/eMc.yaml --plot
 
 \note If you encounter the following error message:
 \verbatim
@@ -287,15 +287,15 @@ Now you should see new window that shows the image from the camera like in the n
 
 You can also activate an adaptive gain that will make the convergence faster:
 
-    $ ./servoFrankaPBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --adaptive_gain
+    $ ./servoFrankaPBVS --eMc ../../apps/calibration/eMc.yaml --plot --adaptive_gain
 
 You can also start the robot with a zero velocity at the beginning by introducing the task sequencing option:
 
-    $ ./servoFrankaPBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --task_sequencing
+    $ ./servoFrankaPBVS --eMc ../../apps/calibration/eMc.yaml --plot --task_sequencing
 
 And finally you can activate the adaptive gain and task sequencing:
 
-    $ ./servoFrankaPBVS --eMc ../../tutorial/calibration/eMc.yaml --plot --adaptive_gain --task_sequencing
+    $ ./servoFrankaPBVS --eMc ../../apps/calibration/eMc.yaml --plot --adaptive_gain --task_sequencing
 
 To learn more about adaptive gain and task sequencing see \ref tutorial-boost-vs.
 
diff --git a/doc/tutorial/visual-servo/tutorial-universal-robot-ibvs.dox b/doc/tutorial/visual-servo/tutorial-universal-robot-ibvs.dox
index e07a847f87..715f2dad90 100644
--- a/doc/tutorial/visual-servo/tutorial-universal-robot-ibvs.dox
+++ b/doc/tutorial/visual-servo/tutorial-universal-robot-ibvs.dox
@@ -128,7 +128,7 @@ An example of image-based visual servoing using a robot from Universal Robots eq
 
 - Attach your Realsense camera to the robot end-effector. To this end, we provide a CAD model of a support that could be 3D printed. The FreeCAD model is available [here](https://github.com/lagadic/visp/tree/master/example/servo-universal-robots).
 - Put an Apriltag in the camera field of view
-- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `tutorial/calibration/ur_eMc.yaml`.
+- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `apps/calibration/ur_eMc.yaml`.
 
 Now enter the `example/servo-universal-robots` folder and run the `servoUniversalRobotsIBVS` binary using `--eMc` to locate the file containing the \f$^e{\bf M}_c\f$ transformation. Other options are available. Using `--help` shows them:
 
@@ -144,7 +144,7 @@ $ ./servoUniversalRobotsIBVS --help
 
 Run the binary activating the plot and using a constant gain:
 
 \verbatim
-$ ./servoUniversalRobotsIBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot
+$ ./servoUniversalRobotsIBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot
 \endverbatim
 
 Use the left mouse click to enable the robot controller, and the right click to quit the binary.
 
@@ -160,19 +160,19 @@ At this point the behaviour that you should observe is the following:
 
 You can also activate an adaptive gain that will make the convergence faster:
 
 \verbatim
-$ ./servoUniversalRobotsIBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --adaptive_gain
+$ ./servoUniversalRobotsIBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --adaptive_gain
 \endverbatim
 
 You can also start the robot with a zero velocity at the beginning by introducing the task sequencing option:
 
 \verbatim
-$ ./servoUniversalRobotsIBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --task_sequencing
+$ ./servoUniversalRobotsIBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --task_sequencing
 \endverbatim
 
 And finally you can activate the adaptive gain and task sequencing:
 
 \verbatim
-$ ./servoUniversalRobotsIBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --adaptive_gain --task_sequencing
+$ ./servoUniversalRobotsIBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --adaptive_gain --task_sequencing
 \endverbatim
 
 \section ur_ibvs_next Next tutorial
diff --git a/doc/tutorial/visual-servo/tutorial-universal-robot-pbvs.dox b/doc/tutorial/visual-servo/tutorial-universal-robot-pbvs.dox
index 673f369442..e388bdde50 100644
--- a/doc/tutorial/visual-servo/tutorial-universal-robot-pbvs.dox
+++ b/doc/tutorial/visual-servo/tutorial-universal-robot-pbvs.dox
@@ -11,7 +11,7 @@ An example of position-based visual servoing using a robot from Universal Robots
 
 - Attach your Realsense camera to the robot end-effector. To this end, we provide a CAD model of a support that could be 3D printed. The FreeCAD model is available [here](https://github.com/lagadic/visp/tree/master/example/servo-universal-robots).
 - Put an Apriltag in the camera field of view
-- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `tutorial/calibration/eMc.yaml`.
+- If not already done, follow \ref tutorial-calibration-extrinsic to estimate \f$^e{\bf M}_c\f$, the homogeneous transformation between robot end-effector and camera frame. We suppose here that the file is located in `apps/calibration/ur_eMc.yaml`.
 
 Now enter the `example/servo-universal-robots` folder and run the `servoUniversalRobotsPBVS` binary using `--eMc` to locate the file containing the \f$^e{\bf M}_c\f$ transformation. Other options are available. Using `--help` shows them:
 
@@ -27,7 +27,7 @@ $ ./servoUniversalRobotsPBVS --help
 
 Run the binary activating the plot and using a constant gain:
 
 \verbatim
-$ ./servoUniversalRobotsPBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot
+$ ./servoUniversalRobotsPBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot
 \endverbatim
 
 Use the left mouse click to enable the robot controller, and the right click to quit the binary.
 
@@ -43,19 +43,19 @@ At this point the behaviour that you should observe is the following:
 
 You can also activate an adaptive gain that will make the convergence faster:
 
 \verbatim
-$ ./servoUniversalRobotsPBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --adaptive_gain
+$ ./servoUniversalRobotsPBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --adaptive_gain
 \endverbatim
 
 You can also start the robot with a zero velocity at the beginning by introducing the task sequencing option:
 
 \verbatim
-$ ./servoUniversalRobotsPBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --task_sequencing
+$ ./servoUniversalRobotsPBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --task_sequencing
 \endverbatim
 
 And finally you can activate the adaptive gain and task sequencing:
 
 \verbatim
-$ ./servoUniversalRobotsPBVS --eMc ../../tutorial/calibration/ur_eMc.yaml --plot --adaptive_gain --task_sequencing
+$ ./servoUniversalRobotsPBVS --eMc ../../apps/calibration/ur_eMc.yaml --plot --adaptive_gain --task_sequencing
 \endverbatim
 
 \section ur_pbvs_next Next tutorial
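
For context, the extrinsic-calibration tutorial touched above ultimately estimates \f$^e{\bf M}_c\f$ from pairs of end-effector poses \f$^f{\bf M}_e\f$ (the `pose_fPe_*.yaml` files of the dataset) and chessboard poses \f$^c{\bf M}_o\f$ computed from the images. The following minimal sketch shows the core ViSP call, assuming the `vpHandEyeCalibration::calibrate()` interface of ViSP 3.x; the container names are placeholders and the vectors are left empty here, whereas the real app fills one entry per acquired image:

\code
#include <iostream>
#include <vector>

#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/vision/vpHandEyeCalibration.h>

int main()
{
  // One entry per acquired image: chessboard pose in the camera frame and
  // end-effector pose in the robot reference frame (placeholder vectors;
  // the calibration app fills them from image-*.png and pose_fPe_*.yaml).
  std::vector<vpHomogeneousMatrix> cMo_vec;
  std::vector<vpHomogeneousMatrix> fMe_vec;

  // Estimate the constant end-effector to camera transformation eMc that
  // best explains all the (cMo, fMe) pairs.
  vpHomogeneousMatrix eMc;
  int ret = vpHandEyeCalibration::calibrate(cMo_vec, fMe_vec, eMc);
  if (ret == 0) {
    std::cout << "Estimated eMc:\n" << eMc << std::endl;
  }
  else {
    std::cerr << "Hand-eye calibration failed (not enough poses?)" << std::endl;
  }
  return ret;
}
\endcode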
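
On the servoing side, the `--eMc` option of the four example binaries points at the YAML file written by the calibration step, and `--adaptive_gain` swaps the constant gain for an adaptive one. Here is a minimal sketch of both mechanisms, assuming the `vpPoseVector::loadYAML()` helper is available; the file path and the gain values (1.5, 0.4, 30) are illustrative only, not taken from the patched tutorials:

\code
#include <cstdlib>
#include <iostream>

#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpPoseVector.h>
#include <visp3/vs/vpAdaptiveGain.h>
#include <visp3/vs/vpServo.h>

int main()
{
  // Read the end-effector to camera pose (translation + theta-u rotation)
  // saved by the calibration app, then turn it into a 4x4 transformation.
  vpPoseVector ePc;
  if (!ePc.loadYAML("apps/calibration/eMc.yaml", ePc)) { // illustrative path
    std::cerr << "Cannot read eMc.yaml" << std::endl;
    return EXIT_FAILURE;
  }
  vpHomogeneousMatrix eMc(ePc);
  std::cout << "eMc:\n" << eMc << std::endl;

  // Eye-in-hand servo task; vpAdaptiveGain(gain at zero, gain at infinity,
  // slope at zero) raises the gain as the error gets small, which speeds up
  // the final convergence like the --adaptive_gain option of the examples.
  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  vpAdaptiveGain lambda(1.5, 0.4, 30.);
  task.setLambda(lambda);

  return EXIT_SUCCESS;
}
\endcode

Passing a plain double to `setLambda()` instead reproduces the constant-gain behaviour used when `--adaptive_gain` is omitted.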