Added verbosity in case of self-calibration issues (a specific error code will be added in a future minor update).
Removed some verbose GNSS logging.
Bug fixes
Fixed the initial position scale of Positional Tracking GEN_2.
Fixed SVO recording regression leading to oversized file.
4.2.0
SDK
Added a new InitParameters::async_image_retrieval parameter that enables the ZED SDK to stream or record SVO2 files at a different framerate than that of the depth computation.
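A minimal sketch of how this parameter might be used (the parameter name comes from the entry above; the surrounding calls are the standard ZED capture and recording API):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.camera_fps = 60;
    // New in 4.2.0: retrieve/record images asynchronously from depth computation.
    init_params.async_image_retrieval = true;

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    // Record an SVO2 file at the capture framerate, even if depth runs slower.
    sl::RecordingParameters rec_params("output.svo2", sl::SVO_COMPRESSION_MODE::H264);
    zed.enableRecording(rec_params);

    for (int i = 0; i < 100; ++i)
        zed.grab(); // depth may be computed at a lower rate than the recording

    zed.disableRecording();
    zed.close();
    return 0;
}
```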
Added ZED One compatibility with ZED SDK. ZED One can now be created with sl::CameraOne objects and the API is the same as with other ZED cameras. The available modules are Capture, Recording, and Streaming. Samples are available too.
Added support for HDR modes for ZED X One 4k cameras for two resolutions: 1290x1200 and 3200x1800. You can enable HDR with the boolean sl::InitParametersOne::enable_hdr, or within ZED Media Server.
Added a Health Check module: the status of the camera can now be retrieved with sl::Camera::getHealthStatus. It detects and reports issues such as a camera being down or an image looking occluded or corrupted, depending on the level set in sl::InitParameters::enable_image_validity_check.
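A hypothetical usage sketch (the exact shape of the returned status structure is not described here; check the API reference):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    // Assumption: higher values enable more extensive image validity checks.
    init_params.enable_image_validity_check = 1;

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // New in 4.2.0: query the camera health status after each grab.
        auto health = zed.getHealthStatus();
        (void)health; // inspect its members (e.g. image quality flags) as needed
    }
    zed.close();
    return 0;
}
```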
Improved the speed of the NEURAL depth mode, especially when running on several cameras at the same time, by reducing internal data copy and improving computation parallelism.
Added a new custom ONNX Object Detection model input for YOLO models. This lets users provide an ONNX file directly to the ZED SDK, with no additional coding required; the SDK runs the inference through an optimized TensorRT workflow. The custom object detection box input option remains available for users who need more flexibility.
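A hedged sketch of feeding a YOLO ONNX file to the detector (the enum and field names below are assumptions based on this entry; verify them against the 4.2 API reference):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open(sl::InitParameters()) != sl::ERROR_CODE::SUCCESS)
        return 1;

    zed.enablePositionalTracking();

    sl::ObjectDetectionParameters od_params;
    // Assumed names: a "custom YOLO-like" model type and an ONNX file path field.
    od_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS;
    od_params.custom_onnx_file = "yolov8n.onnx"; // TensorRT engine is built by the SDK
    zed.enableObjectDetection(od_params);

    sl::Objects objects;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS)
        zed.retrieveObjects(objects); // inference runs inside the SDK

    zed.close();
    return 0;
}
```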
Improved initial gravity estimation.
Fixed cameras being mixed up when using multiple ZED X cameras with the unified driver (on JetPack 6.0).
Improved ZED X One stability.
Added a new way of serializing ZED SDK parameters using JSON, for easy saving and loading.
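For illustration, assuming the existing InitParameters save/load helpers are what now use this JSON format (a sketch, not a verified description of the mechanism):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::InitParameters init_params;
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL;
    init_params.coordinate_units = sl::UNIT::METER;

    // Serialize the parameters to disk (JSON, per this release).
    init_params.save("init_params");

    // Later, or in another process: restore the same configuration.
    sl::InitParameters restored;
    restored.load("init_params");
    return 0;
}
```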
Added a semantic mask input to the Object Detection module, similar to the bounding box input, using the Camera::ingestCustomMaskObjects function. When an instance mask is provided it is used to compute the object's 3D position; when it is not available, the previous bounding-box-based method is used.
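A hypothetical sketch of the mask ingestion path (the container type name and enum value are assumptions; only Camera::ingestCustomMaskObjects comes from the entry above):

```cpp
#include <sl/Camera.hpp>
#include <vector>

int main() {
    sl::Camera zed;
    if (zed.open(sl::InitParameters()) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::ObjectDetectionParameters od_params;
    od_params.detection_model = sl::OBJECT_DETECTION_MODEL::CUSTOM_BOX_OBJECTS;
    zed.enableObjectDetection(od_params);

    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // Run your own instance-segmentation model on the left image here,
        // then hand its 2D boxes and masks to the SDK for 3D localization.
        std::vector<sl::CustomMaskObjectData> detections; // assumed type name
        zed.ingestCustomMaskObjects(detections);
    }
    zed.close();
    return 0;
}
```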
Improved the Positional Tracking GEN2 initialization with the IMU data.
Improved sl::Mat memory handling safety by switching to smart pointers.
Fusion
The Fusion module is now compatible with the Object Detection module. It can be enabled with Fusion::enableObjectDetection, and objects are retrieved with Fusion::retrieveObjects. A fused_objects_group_name can be set at the sender level to group the objects from different detection models.
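A rough sketch of the new Fusion-side object detection flow (the parameter types are assumptions and sender subscription is omitted; only the two Fusion methods and fused_objects_group_name come from the entry above):

```cpp
#include <sl/Fusion.hpp>

int main() {
    sl::Fusion fusion;
    sl::InitFusionParameters init_fusion;
    fusion.init(init_fusion);
    // ... subscribe senders here; on each sender, the fused_objects_group_name
    // groups outputs coming from different detection models ...

    // New in 4.2.0: run object detection at the Fusion level.
    fusion.enableObjectDetection(sl::ObjectDetectionFusionParameters()); // assumed type

    sl::Objects fused_objects;
    while (true) {
        if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS)
            fusion.retrieveObjects(fused_objects); // fused detections across senders
    }
    return 0;
}
```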
Improved the Fusion data synchronization quality when the sender has low or irregular framerates.
Fixed incorrect application of Regions of interest within the Fusion module.
Fixed the retrieved position in rare cases where the IMU orientation data is corrupted.
Added a FusedPositionalTrackingStatus to the Fusion module when retrieving the position. This new object contains status information for each module contributing to the fused positional tracking.
Updated the FUSION_ERROR_CODE to fit the ZED SDK standard: negative values are warnings and positive values are errors.
Tools
Added ZED One compatibility with ZED Explorer.
Added ZED One compatibility with ZED Sensor Viewer.
Fixed IMU recording at full frame rate for ZED Sensor Viewer.
Improved ZED Depth Viewer opening reliability.
Added accelerometer bias calibration for Sensor Viewer, see --help.
Wrappers
Added ZED One compatibility with Python.
Fixed the Fusion implementation of the C# wrapper.