Commit

Improve code quality by fixing all the warnings when -Wsuggest-destructor-override is used

- Added the override keyword where necessary and removed useless destructors
- Documentation improvements in the same commit
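The repo-wide fix this commit applies can be sketched as follows; the classes below are hypothetical (not taken from the ViSP sources) and only illustrate what -Wsuggest-destructor-override asks for: a destructor that overrides a virtual base destructor should say so explicitly.

```cpp
#include <cassert>

// Hypothetical classes (not from ViSP): g++/clang++ with
// -Wsuggest-destructor-override warn on ~Derived() unless it
// carries the override keyword, since ~Base() is virtual.
static int derived_dtor_calls = 0;

struct Base {
  virtual ~Base() { }       // virtual: deleting through Base* is safe
};

struct Derived : Base {
  ~Derived() override       // 'override' silences the warning and fails
  {                         // to compile if ~Base() ever stops being virtual
    ++derived_dtor_calls;
  }
};

int destroy_via_base_pointer()
{
  Base *b = new Derived();
  delete b;                 // dispatches to ~Derived(), then ~Base()
  return derived_dtor_calls;
}
```

Marking destructors override also documents the polymorphic cleanup path, which is why the warning is worth enabling project-wide.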
fspindle committed Oct 24, 2023
1 parent 4842e54 commit 44ec8e9
Showing 209 changed files with 10,114 additions and 10,585 deletions.
210 changes: 105 additions & 105 deletions 3rdparty/pthreads4w/ChangeLog


@@ -116,7 +116,7 @@ stating that this tracker uses both edge (see the vpMe class) and KLT (see vpKlt
The next important definition is:
\snippet realsense-color-and-depth.json.example Transformation
Describing the transformation between this camera and the reference camera. It can also be given as a vpPoseVector JSON representation.
-If the current camera is the reference, then "camTref" may be ommitted or set as the identity transformation.
+If the current camera is the reference, then "camTref" may be omitted or set as the identity transformation.

Next, we must define the camera intrinsics (see vpCameraParameters):
\snippet realsense-color-and-depth.json.example Camera
2 changes: 1 addition & 1 deletion doc/tutorial/visual-servo/tutorial-pixhawk-vs.dox
@@ -272,7 +272,7 @@ In order to do this part, make sure you add a camera to your drone. We added a i
Jetson through USB.

The code servoPixhawkDroneIBVS.cpp is an example that needs to be run on the Jetson and that allows to do visual servoing with the drone.
-This program establishes a rigid link between the drone (equiped with a camera) and an Apriltag.
+This program establishes a rigid link between the drone (equipped with a camera) and an Apriltag.
Depending on where the camera is placed, the matrices expressing the transformation between the FLU body frame of the drone and the
camera frame need to be modified. Here is a picture of the drone showing where the D405 camera was attached.

2 changes: 1 addition & 1 deletion example/math/BSpline.cpp
@@ -29,7 +29,7 @@
* WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*
* Description:
-* Exemple of a B-Spline curve.
+* Example of a B-Spline curve.
*
*****************************************************************************/
/*!
4 changes: 2 additions & 2 deletions example/servo-afma6/servoAfma6FourPoints2DArtVelocity.cpp
@@ -120,7 +120,7 @@ int main()

try {
// Define the square CAD model
-// Square dimention
+// Square dimension
#define L 0.075
// Distance between the camera and the square at the desired
// position after visual servoing convergence
@@ -154,7 +154,7 @@ int main()
std::cout << " Test program for vpServo " << std::endl;
std::cout << " Eye-in-hand task control, velocity computed in the joint space" << std::endl;
std::cout << " Use of the Afma6 robot " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

@@ -203,7 +203,7 @@ int main()
std::cout << " Eye-in-hand task control, velocity computed in the camera frame" << std::endl;
std::cout << " Use of the Afma6 robot " << std::endl;
std::cout << " Interaction matrix computed with the current features " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

@@ -153,7 +153,7 @@ int main()
std::cout << " Use of the Afma6 robot " << std::endl;
std::cout << " Interaction matrix computed with the desired features " << std::endl;

-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

@@ -188,7 +188,7 @@ int main()
std::cout << " Test program for vpServo " << std::endl;
std::cout << " Eye-in-hand task control, velocity computed in the joint space" << std::endl;
std::cout << " Use of the Afma6 robot " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

@@ -108,7 +108,7 @@ int main()

try {
// Define the square CAD model
-// Square dimention
+// Square dimension
// #define L 0.075
#define L 0.05
// Distance between the camera and the square at the desired
@@ -150,7 +150,7 @@ int main()
std::cout << " Test program for vpServo " << std::endl;
std::cout << " Eye-in-hand task control, velocity computed in the joint space" << std::endl;
std::cout << " Use of the Afma6 robot " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

@@ -189,7 +189,7 @@ int main()
std::cout << " Test program for vpServo " << std::endl;
std::cout << " Eye-in-hand task control, velocity computed in the camera space" << std::endl;
std::cout << " Use of the Viper850 robot " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

2 changes: 1 addition & 1 deletion example/servo-viper850/servoViper850FourPointsKinect.cpp
@@ -200,7 +200,7 @@ int main()
std::cout << " Test program for vpServo " << std::endl;
std::cout << " Eye-in-hand task control, velocity computed in the camera space" << std::endl;
std::cout << " Use of the Viper850 robot " << std::endl;
-std::cout << " task : servo 4 points on a square with dimention " << L << " meters" << std::endl;
+std::cout << " task : servo 4 points on a square with dimension " << L << " meters" << std::endl;
std::cout << "-------------------------------------------------------" << std::endl;
std::cout << std::endl;

2 changes: 1 addition & 1 deletion modules/ar/CMakeLists.txt
@@ -173,7 +173,7 @@ if(USE_COIN3D)
endif()
endif()

-vp_add_module(ar visp_core OPTIONAL visp_io)
+vp_add_module(ar visp_core)
vp_glob_module_sources()

if(USE_OGRE)
130 changes: 65 additions & 65 deletions modules/core/include/visp3/core/vpCameraParameters.h
@@ -341,7 +341,7 @@ class VISP_EXPORT vpCameraParameters
\return True if the fov has been computed, False otherwise.
*/
-inline bool isFovComputed() const { return isFov; }
+inline bool isFovComputed() const { return m_isFov; }

void computeFov(const unsigned int &w, const unsigned int &h);

@@ -354,7 +354,7 @@ class VISP_EXPORT vpCameraParameters
*/
inline double getHorizontalFovAngle() const
{
-if (!isFov) {
+if (!m_isFov) {
vpTRACE("Warning: The FOV is not computed, getHorizontalFovAngle() "
"won't be significant.");
}
@@ -370,7 +370,7 @@
*/
inline double getVerticalFovAngle() const
{
-if (!isFov) {
+if (!m_isFov) {
vpTRACE("Warning: The FOV is not computed, getVerticalFovAngle() won't "
"be significant.");
}
@@ -391,24 +391,24 @@
*/
inline std::vector<vpColVector> getFovNormals() const
{
-if (!isFov) {
+if (!m_isFov) {
vpTRACE("Warning: The FOV is not computed, getFovNormals() won't be "
"significant.");
}
-return fovNormals;
+return m_fovNormals;
}

-inline double get_px() const { return px; }
-inline double get_px_inverse() const { return inv_px; }
-inline double get_py_inverse() const { return inv_py; }
-inline double get_py() const { return py; }
-inline double get_u0() const { return u0; }
-inline double get_v0() const { return v0; }
-inline double get_kud() const { return kud; }
-inline double get_kdu() const { return kdu; }
+inline double get_px() const { return m_px; }
+inline double get_px_inverse() const { return m_inv_px; }
+inline double get_py_inverse() const { return m_inv_py; }
+inline double get_py() const { return m_py; }
+inline double get_u0() const { return m_u0; }
+inline double get_v0() const { return m_v0; }
+inline double get_kud() const { return m_kud; }
+inline double get_kdu() const { return m_kdu; }
inline std::vector<double> getKannalaBrandtDistortionCoefficients() const { return m_dist_coefs; }

-inline vpCameraParametersProjType get_projModel() const { return projModel; }
+inline vpCameraParametersProjType get_projModel() const { return m_projModel; }

vpMatrix get_K() const;
vpMatrix get_K_inverse() const;
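As a side note to the FOV accessors in the hunks above: assuming the usual pinhole relation (this is a sketch of the underlying geometry, not the body of vpCameraParameters::computeFov()), the horizontal angle follows from the focal length in pixels and the image width.

```cpp
#include <cassert>
#include <cmath>

// Pinhole-model sketch (assumption, not the ViSP implementation): the
// horizontal field of view spanned by an image of `width` pixels with a
// focal length of `px` pixels is 2 * atan(width / (2 * px)).
double horizontal_fov_rad(double px, unsigned int width)
{
  return 2.0 * std::atan(static_cast<double>(width) / (2.0 * px));
}
```

The vertical angle follows the same relation with `py` and the image height, which is why computeFov() takes the image dimensions as arguments.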
@@ -425,22 +425,22 @@
static const double DEFAULT_KDU_PARAMETER;
static const vpCameraParametersProjType DEFAULT_PROJ_TYPE;

-double px, py; //!< Pixel size
-double u0, v0; //!< Principal point
-double kud; //!< Radial distortion (from undistorted to distorted)
-double kdu; //!< Radial distortion (from distorted to undistorted)
-std::vector<double> m_dist_coefs; //!< Coefficients for Kannala-Brandt distortion model
+double m_px, m_py; //!< Pixel size
+double m_u0, m_v0; //!< Principal point
+double m_kud; //!< Radial distortion (from undistorted to distorted)
+double m_kdu; //!< Radial distortion (from distorted to undistorted)
+std::vector<double> m_dist_coefs; //!< Coefficients for Kannala-Brandt distortion model

-unsigned int width; //!< Width of the image used for the fov computation
-unsigned int height; //!< Height of the image used for the fov computation
-bool isFov; //!< Boolean to specify if the fov has been computed
-double m_hFovAngle; //!< Field of view horizontal angle
-double m_vFovAngle; //!< Field of view vertical angle
-std::vector<vpColVector> fovNormals; //!< Normals of the planes describing the fov
+unsigned int m_width; //!< Width of the image used for the fov computation
+unsigned int m_height; //!< Height of the image used for the fov computation
+bool m_isFov; //!< Boolean to specify if the fov has been computed
+double m_hFovAngle; //!< Field of view horizontal angle
+double m_vFovAngle; //!< Field of view vertical angle
+std::vector<vpColVector> m_fovNormals; //!< Normals of the planes describing the fov

-double inv_px, inv_py;
+double m_inv_px, m_inv_py;

-vpCameraParametersProjType projModel; //!< used projection model
+vpCameraParametersProjType m_projModel; //!< used projection model
#ifdef VISP_HAVE_NLOHMANN_JSON
friend void to_json(nlohmann::json &j, const vpCameraParameters &cam);
friend void from_json(const nlohmann::json &j, vpCameraParameters &cam);
@@ -454,26 +454,26 @@ NLOHMANN_JSON_SERIALIZE_ENUM(vpCameraParameters::vpCameraParametersProjType, {
{vpCameraParameters::perspectiveProjWithDistortion, "perspectiveWithDistortion"},
{vpCameraParameters::ProjWithKannalaBrandtDistortion, "kannalaBrandtDistortion"}
});

/**
* \brief Converts camera parameters into a JSON representation.
-* \sa from_json for more information on the content
-* \param j the resulting JSON object
-* \param cam the camera to serialize
+*
+* \sa from_json() for more information on the content.
+* \param j The resulting JSON object.
+* \param cam The camera to serialize.
*/
inline void to_json(nlohmann::json &j, const vpCameraParameters &cam)
{
-j["px"] = cam.px;
-j["py"] = cam.py;
-j["u0"] = cam.u0;
-j["v0"] = cam.v0;
-j["model"] = cam.projModel;
+j["px"] = cam.m_px;
+j["py"] = cam.m_py;
+j["u0"] = cam.m_u0;
+j["v0"] = cam.m_v0;
+j["model"] = cam.m_projModel;

-switch (cam.projModel) {
+switch (cam.m_projModel) {
case vpCameraParameters::perspectiveProjWithDistortion:
{
-j["kud"] = cam.kud;
-j["kdu"] = cam.kdu;
+j["kud"] = cam.m_kud;
+j["kdu"] = cam.m_kdu;
break;
}
case vpCameraParameters::ProjWithKannalaBrandtDistortion:
@@ -487,33 +487,33 @@ inline void to_json(nlohmann::json &j, const vpCameraParameters &cam)
break;
}
}
-/*!
-\brief Deserialize a JSON object into camera parameters.
-The minimal required properties are:
-- Pixel size: px, py
-- Principal point: u0, v0
-If a projection model (\ref vpCameraParameters::vpCameraParametersProjType) is supplied, then other parameters may be expected:
-- In the case of perspective projection with distortion, ku, and kud must be supplied.
-- In the case of Kannala-Brandt distortion, the list of coefficients must be supplied.
-An example of a JSON object representing a camera is:
-\code{.json}
-{
-"px": 300.0,
-"py": 300.0,
-"u0": 120.5,
-"v0": 115.0,
-"model": "perspectiveWithDistortion", // one of ["perspectiveWithoutDistortion", "perspectiveWithDistortion", "kannalaBrandtDistortion"]. If ommitted, camera is assumed to have no distortion
-"kud": 0.5, // required since "model" == perspectiveWithDistortion
-"kdu": 0.5
-}
-\endcode
-\param j The json object to deserialize.
-\param cam The modified camera.
-
-*/
+/*!
+* \brief Deserialize a JSON object into camera parameters.
+* The minimal required properties are:
+* - Pixel size: px, py
+* - Principal point: u0, v0
+*
+* If a projection model (\ref vpCameraParameters::vpCameraParametersProjType) is supplied, then other parameters may be expected:
+* - In the case of perspective projection with distortion, ku, and kud must be supplied.
+* - In the case of Kannala-Brandt distortion, the list of coefficients must be supplied.
+*
+* An example of a JSON object representing a camera is:
+* \code{.json}
+* {
+* "px": 300.0,
+* "py": 300.0,
+* "u0": 120.5,
+* "v0": 115.0,
+* "model": "perspectiveWithDistortion", // one of ["perspectiveWithoutDistortion", "perspectiveWithDistortion", "kannalaBrandtDistortion"]. If omitted, camera is assumed to have no distortion
+* "kud": 0.5, // required since "model" == perspectiveWithDistortion
+* "kdu": 0.5
+* }
+* \endcode
+*
+* \param j The json object to deserialize.
+* \param cam The modified camera.
+*/
inline void from_json(const nlohmann::json &j, vpCameraParameters &cam)
{
const double px = j.at("px").get<double>();
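The parsing rules stated in the reworked doc comment (px, py, u0, v0 required; "model" optional, defaulting to a distortion-free projection) can be sketched as below; CamSketch and parse_cam are hypothetical stand-ins using std::map instead of nlohmann::json, not the actual from_json() implementation.

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the documented rules (not ViSP code); the real
// from_json() reads fields with j.at("px").get<double>() and friends.
struct CamSketch {
  double px, py, u0, v0;
  std::string model;
};

CamSketch parse_cam(const std::map<std::string, double> &j,
                    const std::string &model = "")
{
  // px, py, u0, v0 are the minimal required properties
  for (const char *key : { "px", "py", "u0", "v0" }) {
    if (j.find(key) == j.end())
      throw std::invalid_argument(std::string("missing required key: ") + key);
  }
  CamSketch c;
  c.px = j.at("px");
  c.py = j.at("py");
  c.u0 = j.at("u0");
  c.v0 = j.at("v0");
  // when "model" is omitted, the camera is assumed to have no distortion
  c.model = model.empty() ? "perspectiveWithoutDistortion" : model;
  return c;
}
```

This mirrors the documented default: a JSON object carrying only the four intrinsics deserializes as a perspective camera without distortion.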
