
Rust

Module reorganization

We reorganized the module tree, so import paths are not the same as before. The main difference is that everything should be imported via the root path zenoh::. Here are some examples, but you can look into zenoh/src/lib.rs for the complete list of changes.

// common use
 use zenoh::config::*;
 use zenoh::{Config, Error, Result};
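
A few more paths that moved under the zenoh:: root are shown below. This list is illustrative rather than exhaustive and reflects our reading of the 1.0 layout; refer to zenoh/src/lib.rs when in doubt.

// key expressions
use zenoh::key_expr::KeyExpr;

// samples and payloads
use zenoh::bytes::ZBytes;
use zenoh::sample::Sample;

// pub/sub and queries
use zenoh::pubsub::{Publisher, Subscriber};
use zenoh::query::{Query, Reply};

// QoS
use zenoh::qos::{CongestionControl, Priority};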
 
 
  • Zenoh 1.0.0
let session: Session = zenoh::open(config).await.unwrap();
// If the `timestamping` configuration is disabled, this call will return `None`.
let timestamp: Option<Timestamp> = session.new_timestamp();

This will affect user-created plugins and applications that need to generate timestamps.
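
Building on the snippet above, a minimal sketch of how such code could handle the returned Option (names and handling are illustrative only):

// Sketch: handle the `Option` returned by `new_timestamp()`.
match session.new_timestamp() {
    Some(timestamp) => {
        // Use the timestamp, e.g. attach it to data the plugin or application produces.
        println!("generated timestamp: {timestamp}");
    }
    None => {
        // `timestamping` is disabled in the configuration: no timestamp available.
        eprintln!("enable `timestamping` in the configuration to generate timestamps");
    }
}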

Feature Flags

Removed:

  • complete_n: removed as part of a legacy code cleanup

Storage

Required option: timestamping enabled

Zenoh 1.0.0 introduced the possibility for Zenoh nodes configured in a mode other than router to load plugins.

An implicit assumption that dictated the behaviour of storages is that the Zenoh node loading them has to add a timestamp to any received publication that does not already carry one. This behaviour is controlled by the timestamping configuration option.

Until Zenoh 1.0.0 this assumption held true, as only a router could load storages and the default configuration of a router enables timestamping. In Zenoh 1.0.0, however, nodes configured in client and peer mode can also load storages, and their default configuration disables timestamping.

⚠️ The storage-manager will fail to launch if the timestamping configuration option is disabled.
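
Concretely, a node running in client or peer mode that loads storages must have timestamping enabled in its configuration. The fragment below is a minimal sketch of the relevant setting:

{
  "timestamping": {
    // Enable timestamping for every mode that may load the storage-manager.
    "enabled": { "router": true, "peer": true, "client": true }
  }
}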

Rewrite of the Replication

We have completely rewritten the Replication functionality in Zenoh 1.0.0. The core of the algorithm did not change, hence if you are interested in its inner workings, our blog post unveiling this functionality still provides an accurate overview.

This rewrite was an opportunity to integrate many of the improvements we introduced in Zenoh since this feature was first developed. In particular, the older version was not leveraging Queryables as, at the time, they did not allow carrying attachments or payloads.

We also used this rewrite to slightly rework the configuration. Thus, if you were using this functionality before Zenoh 1.0.0, you will have to update the configuration of all your replicated Storage. The following configuration summarises the changes:

"plugins": {
+  "storage_manager": {
+    "storages": {
+      "replication-test": {
+        "volume": "memory",
+        "key_expr": "test/replication/*",
+        
+        // ⚠️ This field must be identical for all Replicated Storage.
+        "strip_prefix": "test/replication",
+
+        // ⚠️ This field was previously called `replica_config`.
+        "replication": {
+
+          // ⚠️ This field was previously called `publication_interval`.
+          "interval": 10,
+
+          // ⚠️ This field replaces `delta`.
+          "sub_intervals": 5,
+
+          // This field did not change.
+          "propagation_delay": 250,
+
+          // ⚠️ These fields are new.
+          "hot": 6,
+          "warm": 30,
+        }
+      }
+    }
+  }
+}
+

The new hot and warm fields expose parts of the Replication algorithm. They express how many intervals are included in the Hot and Warm Eras respectively. These values control how much information is included in the Replication Digest: the higher these values are, the more information is included in the Digest (consuming more bandwidth for each Digest) but, at the same time, a misalignment will be detected and resolved faster (consuming less bandwidth when aligning).

Finally, in 1.0.0, only Replicas configured with exactly the same parameters will interact. This is to avoid burdening the network for no reason: if Replicas with different configurations were to interact, they would always assume they are misaligned since, because of their different configurations, they would compute different Digests even when they are, in fact, aligned.

The parameters that must be identical are: key_expr, strip_prefix and all the fields in replication (i.e. interval, sub_intervals, propagation_delay, hot and warm).

Note that configuring Replicas differently is equivalent to creating Replication groups: only Replicas with exactly the same configuration belong to the same group.
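
As an illustration (with hypothetical values), the two replication sections below differ only in interval, so the corresponding Storage would belong to two distinct Replication groups and would never align with each other:

// Replica A
"replication": { "interval": 10, "sub_intervals": 5, "propagation_delay": 250, "hot": 6, "warm": 30 }

// Replica B: identical except for `interval`, hence a different Replication group.
"replication": { "interval": 20, "sub_intervals": 5, "propagation_delay": 250, "hot": 6, "warm": 30 }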

Shared Memory

The Shared Memory subsystem has been heavily reworked and improved. The key functionality changes are:

  • Buffer reference counting is now robust across abnormal process termination
  • Support for plugging in user-defined SHM implementations
  • Dynamic SHM transport negotiation: Sessions are interoperable with any combination of SHM configuration and physical location
  • Support for aligned allocations
  • Manual buffer invalidation
  • Buffer write access
  • Rich buffer allocation interface

⚠️ Please note that the SHM API is still unstable and will be improved in the future.

SharedMemoryManager → ShmProvider + ShmProviderBackend

  • Zenoh 0.11.x
let id = session.zid().to_string();
 let shmem_size = 1024*1024;
 let mut manager = SharedMemoryManager::make(id, shmem_size).unwrap();
 
  • Zenoh 1.0.0
// Create an SHM backend...