raw RFC topic sharding: alternative approaches and update requests #174

Open · Tracked by #154
kaiserd opened this issue Jan 30, 2023 · 9 comments
kaiserd commented Jan 30, 2023

This issue is part of #154 and tracks alternative approaches to decisions taken in 51/WAKU2-RELAY-SHARDING
and 52/WAKU2-RELAY-STATIC-SHARD-ALLOC.

Previous discussion and comments may be found in this PR.
This issue carries over open points.

Note: The focus is on the remainder of tasks for the "Simple scaling to 1 mio users" milestone in #154.
We will only adjust 51/WAKU2-RELAY-SHARDING and 52/WAKU2-RELAY-STATIC-SHARD-ALLOC
if there is consensus and strong arguments to do so.

Index Allocation

Currently, index allocation is done via the informational RFC 52/WAKU2-RELAY-STATIC-SHARD-ALLOC.
Index allocation could be managed in another (publicly available) document, or be handled in an un-managed way.
Inspiration for the chosen method is the IANA port allocation.

Levels of Network Segregation

We could mention various levels of network segregation in this RFC (or in another document).
In the current version of the RFC, apps have segregated shard clusters.

Levels of segregation include (non-comprehensive, off the cuff):

  • a single shard cluster (similar to Ethereum) shared by all apps.
    This takes less space in ENRs and makes static shard discovery (a bit) simpler.
    However, it limits the growth of Waku: at some point, possibly soon, apps will no longer find free dedicated static shards.
    Automatic sharding can be used, but I assume some apps still want to manage shard mappings themselves,
    and some of them want segregated shards.

  • separate segregated Waku networks.
    Apps can run their own segregated Waku networks.
    This means they completely segregate the gossipsub network (with the current version of the RFC, certain control messages are shared beyond shard boundaries).
    This would also make the above solution of having only a single shard cluster scale,
    but comes at the (potential) disadvantage of not sharing control messages at all (a negative effect on robustness).

  • To get fully segregated Waku networks: segregate the discv5 discovery network (at the cost of connectivity).
    Imo, at least after introducing efficient capability discovery, this does not make sense.
    Still, apps that really want this can achieve it by using a different protocol ID.

Imo, having a single Waku network offering the three sharding types specified in the current version of 51/WAKU2-RELAY-SHARDING is the most promising solution.
It allows app protocols and apps to operate in joint shards (increasing k-anonymity) and to have access to segregated shard clusters (managing their own shard mapping), while they can still share control messages to improve resilience and robustness.
Especially with future advances in incentivization, apps can help each other, making the whole Waku ecosystem more resilient.
If this is not fitting for an app, despite the wide variety on offer, apps can still opt to set up their own segregated Waku network.
(In any case, this segregation is not enforced. (Limited) enforcement can happen on the app layer. Waku, at its core, is permissionless.)

@alrevuelta has an alternative solution to what is currently in the RFC (see the comments below).

@alrevuelta commented:

Not an alternative approach per se, but some thoughts:

  • Really like the automatic sharding approach + consistent hashing and I think it's the way to go (a rough sketch follows this list).
  • However, I think we should limit sharding to just one type: automatic sharding. More types of sharding involve more design, more implementation, more dogfooding, more issues, and more complexity in general. And I don't see the benefit of having them all. We should aim for a solution that is simple to understand, where all the magic happens on the protocol level and is abstracted away.
  • Automatic sharding also implies abstracting gossipsub topics away from operators/devs building on top, which is great. Content topic is the perfect abstraction, and we get this with automatic sharding but not with the other types.
  • In my opinion, named sharding and static sharding don't help with scaling. If tons of content topics are broadcast on a given gossipsub topic (static shard), the network can't do anything to balance these content topics over multiple shards.
  • Having automatic sharding allows us to enforce some traffic constraints equally on all shards. For example, we can say that each shard should have a max traffic of 10Mbps and rate limit what exceeds it. With 64 shards we would have 64*10Mbps of total bandwidth, and each node would have that enforced on a protocol level. This allows us to set the requirements of the nodes, something very important imho. Ofc if you subscribe to multiple content topics, you may end up subscribed to multiple shards, but imho that's fair since you will be using the network more, so you must give something in return. With the birthday paradox we can calculate the number of content topics you would need to subscribe to in order to end up subscribed to all gossipsub topics.
  • I also don't see (perhaps I'm missing something) a use case for named and static sharding. Let's say I choose a pubsub topic that no one uses (static-rshard/0/2). If so, no one is relaying traffic on that topic, so I'm not getting anything from the network. So why use Waku then?
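
(A minimal Python sketch of the automatic-sharding idea above, assuming SHA-256 and a hypothetical shard count. It uses plain hash-mod bucketing rather than a full consistent-hashing ring, and includes the occupancy estimate behind the birthday-paradox point:)

```python
import hashlib

NUM_SHARDS = 64  # hypothetical bucket count (see the suggestion below)

def auto_shard(content_topic: str) -> int:
    """Deterministically map a content topic to a shard (hash bucket)."""
    digest = hashlib.sha256(content_topic.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def expected_shards_covered(num_topics: int, num_shards: int = NUM_SHARDS) -> float:
    """Expected number of distinct shards hit when subscribing to
    num_topics content topics hashed uniformly over num_shards buckets."""
    return num_shards * (1 - (1 - 1 / num_shards) ** num_topics)

print(auto_shard("/myapp/1/chat/proto"))  # stable shard index for this topic
print(expected_shards_covered(64))        # ~40.6 of 64 shards on average
```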

Some of the details that I think we should revisit:

  • Use a lower number of shards, order of magnitude around 64, 128 or so. I think it's better to have fewer shards used by multiple operators and nodes, so that if you use them, you get some privacy guarantees and leverage the existing nodes gossiping in them. With this we can define a max network throughput that is split across all shards, which we can measure and enforce (in the DoS protection track).


kaiserd commented Feb 2, 2023

@alrevuelta

Thank you for your input :).

Generally, we can reevaluate when automatic sharding has been rolled out.

Future advances we make in automatic sharding might make the other two ways obsolete,
but as currently planned, I expect named and static sharding will still be useful.

I suggest automatic sharding as the default.
But app protocols that want more fine-grained control
have this option with named and static sharding.

> Having automatic sharding allows us to enforce some traffic constraints equally on all shards.

I'd not enforce this for all shards.
This would be another reason why some app protocols want their separately managed shard cluster.

A separate shard (cluster) can be seen as a segregated network managed by an app protocol.
The advantage compared to a completely segregated network is that
the nodes can still be connected to peers in other shards for the exchange of control messages
(I'll propose this as a basis for future discovery methods),
while they keep the property of not getting overloaded by message traffic beyond shard boundaries.

> I also don't see (perhaps I'm missing something) a use case for named and static sharding. Let's say I choose a pubsub topic that no one uses (static-rshard/0/2).

The app protocol

  • has the choice to do that; it does not have to.
  • should only choose a separate shard if it expects to accumulate enough users.
  • can also transition to a static shard once it has accumulated enough users.

> Use a lower number of shards, order of magnitude around 64, 128 or so. I think it's better to have fewer shards used by multiple operators and nodes, so that if you use them, you get some privacy guarantees and leverage the existing nodes gossiping.

Again, this ties into the point I made above.
Some apps (e.g. Status) want to have separate shards.
I agree, automatic sharding is better for (k-)anonymity.
But, at our current state of research, it comes with trade-offs.

In the future, we could transition to automatic sharding more and more, if we have

  • ways to avoid hot / cold spots in automatic sharding
  • incentivization techniques to make bandwidth consumption on shards shared between apps fair.

But even then, I'd expect there would be use cases where apps want dedicated shards.
Imo, the number of shards in itself is not a problem, because not all shard clusters will be instantiated.
Scaling is mainly limited by the total number of Waku nodes, and having more (potential) shards will help with that.

A problem is the current recommendation for indicating participation in many shards in the ENR. I'll explain my idea in a follow-up comment.


kaiserd commented Feb 2, 2023

Here is an update I'd suggest to the current version of 51/WAKU2-RELAY-SHARDING:

Background

The maximum ENR size is 300 bytes, and space should be saved.

Update Suggestion

Add the following recommendation:

Nodes SHOULD only advertise a single shard cluster in the ENR

This can either be the global shard cluster (index 0) or any app-assigned shard cluster.
(Two or three clusters should be fine, too, but imo we should explicitly mention saving space in the ENR.)
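
(A back-of-envelope sketch of the ENR budget, assuming a hypothetical encoding of a 2-byte cluster index plus 2 bytes per advertised shard index; the actual ENR field layout may differ:)

```python
ENR_MAX_BYTES = 300  # record size limit per EIP-778

def shard_field_size(num_clusters: int, shards_per_cluster: int) -> int:
    """Bytes used for shard advertisement under the assumed encoding:
    per cluster, a 2-byte cluster index plus 2 bytes per shard index."""
    return num_clusters * (2 + 2 * shards_per_cluster)

print(shard_field_size(1, 8))   # 18 bytes: comfortably within the budget
print(shard_field_size(12, 8))  # 216 bytes: crowds out other ENR entries
```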

Future
(not for the Status MVP)

For nodes that are part of many shard clusters (only relevant for a small subset of nodes, e.g. strong infrastructure nodes),
further shard clusters SHOULD be advertised via other methods,
for example a separate req/resp protocol, while the ENR only indicates
that the node supports this "shard discovery" protocol.

(I can elaborate more on this later.
@jm-clius mentioned we should avoid creating an implicit dependency between a libp2p req/resp protocol and the discovery mechanism.
I agree. This will be part of future research/discussion.)


kaiserd commented Feb 2, 2023

One more point for future discussion:
We could move the management of static shard indices in 52/WAKU2-RELAY-STATIC-SHARD-ALLOC to a DAO-like model.

kaiserd changed the title from "raw RFC topic sharding: alternative approaches" to "raw RFC topic sharding: alternative approaches and update requests" on Feb 15, 2023

kaiserd commented Feb 15, 2023

The shard cluster size should be increased from 64 to at least 256.
Reason: we need 100 shards at 10k users each to scale 1:1 chat to 1 mio users (see the arithmetic sketch below).

ENR size limit suggestion: change label rshard-x -> rsx
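
(The arithmetic behind the 256 figure, with the per-shard capacity assumed above:)

```python
USERS_PER_SHARD = 10_000  # assumed 1:1 chat capacity per shard
TARGET_USERS = 1_000_000

shards_needed = TARGET_USERS // USERS_PER_SHARD
print(shards_needed)  # 100: exceeds the current 64, while 256 leaves headroom
```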

@alrevuelta commented:

Pasting this here with my concerns.

I raised my concerns here and here, but as a tl;dr:

  • I think Waku should handle discovery only on Waku shards.
  • I don't think we need this hierarchy of cluster + index, just a shard index.
  • I think we should cover way fewer shards: quality, not quantity.
  • I don't think we need static sharding, just automatic. I wouldn't justify the need for static sharding with the MVP.
  • Imho static sharding would be Waku's dystopia: companies pushing their users to shards they control with no guarantees on decentralization, privacy, etc. All shards should be equal = managed by the protocol = auto sharding.
  • If an app wants static shards, fine, fork the network (like BSC, Gnosis, Ethereum Classic or even the various Ethereum networks). It wouldn't be difficult with a network-id in the ENR (which acts as a kind of shard cluster, but more generic).


kaiserd commented Apr 17, 2023

Thanks for your feedback and input.

Afaict, this is post-MVP, so I will prioritize Vac reorg / MVP work for now.

Here is a short version of my view (can go more into depth after MVP if necessary):

> I think Waku should handle discovery only on Waku shards.

I'm fine with both ways.
However, all shards sharing one discovery DHT (e.g. discv5) helps with resilience.
I would not see this as Waku handling the discovery; it is more like the Waku Network and apps running their own relay networks sharing a discovery network.

Regarding shard alloc, I consider all of these to be "valid" options:

  • do alloc like the RFC describes (like IANA port alloc)
  • wild west alloc: apps just choose their shard cluster
  • on-chain management (this could be a nice future option, integrated with incentivization strategies)

I do not have a strong opinion on this.

> I don't think we need this hierarchy of cluster + index, just a shard index.

Imo the hierarchy fits nicely to Waku, even if you want to leave cluster management outside of Waku.
It is, in a sense, similar to Ethereum network IDs.
The shard cluster hierarchy does not enforce cluster management, it allows for it.
(And again, Imo the joint discovery network is beneficial; so I would not completely separate networks.)

> I think we should cover way fewer shards: quality, not quantity.

Shard cluster index + shard index span a namespace.
It does not mean that each of these shards is occupied.
For instance, the Waku Network can have a single shard cluster of "high quality" shards.

The namespace is just large enough to scale (cf. IPv4 vs IPv6).

> I don't think we need static sharding, just automatic. I wouldn't justify the need for static sharding with the MVP.

Automatic sharding builds on static sharding, so static sharding is there anyway.
You could choose to not expose static sharding to apps.
I would allow apps to choose.

Also, automatic sharding is not available yet, and we need something to scale.
Imo, this is a natural transition.
You could choose to just not expose static sharding to apps post-MVP, i.e. not support it via Waku discv5.
(Again, I'd let apps choose.)

> Imho static sharding would be Waku's dystopia: companies pushing their users to shards they control with no guarantees on decentralization, privacy, etc. All shards should be equal = managed by the protocol = auto sharding.
> If an app wants static shards, fine, fork the network (like BSC, Gnosis, Ethereum Classic or even the various Ethereum networks). It wouldn't be difficult with a network-id in the ENR (which acts as a kind of shard cluster, but more generic).

The only thing that static sharding would do here is support a joint discovery network.
Imo this makes sense.
And, the Waku Network will not be fit for all apps.


Generally, there is no right or wrong for all these points.
These are (mainly) opinions, and, after the restructure, the Waku team will have the final decision here.
Vac acts in an advisory capacity.

(If there is no current blocker/time-critical issue, I'll continue on this discussion post Vac reorg and post-MVP.)

cc @jm-clius @fryorcraken

@alrevuelta commented:

@kaiserd thanks for the reply. Summarizing my responses on top of this:

  • I think autosharding for the MVP is a good idea. With limitations, but as a subset of the final solution, with work that won't be discarded after. Ofc autosharding work will continue post-MVP, but with the foundations set.
  • I think we should be careful with the "it's only for the MVP" statement. Some changes require time in spec, development, agreement, dissemination, deployment. This makes it likely that a "temporary solution ends up being a permanent one". Changing things takes time, so the fewer changes the better. Meaning that I would have auto sharding as the target.
  • I would focus on consistent hashing + auto shard discovery. Doable with some limitations for the MVP, but setting the foundations of what we want it to be.
  • I think autosharding will help Status, both with scaling and to validate whether it's ok to use wakuv2 aka gossipsub for their use case. It also makes deployment of new communities easier, since they are mapped to already existing topics.
  • Agree that all shards (and "network-id" if we had that concept) could share the same discovery DHT. But nodes won't connect to other network-id nodes.
  • The shard index + cluster hierarchy and my suggestion (network-id + shard) are similar, but the intention is different. My shards are handled by the protocol, and no such thing as an "app shard" exists.
  • Not sure I like the "high quality" shard concept. All shards should have the same quality, and the protocol should try to enforce it. If we want this, apps shouldn't choose shards. They choose content topics that are mapped into shards aka pubsub topics.
  • I think every shard should have shared traffic between the different apps using it, so again no "app shards" with said "app" having control of the shard. Ofc this will take some time until it's true, but it should be enforced by the protocol.
  • For the MVP, I would use a lower number of shards, so that we can ensure they are all well covered (with enough nodes). Why would I use shard 55982 if there are no nodes there? And if there are none, anyone can control that shard with a few nodes and trick users into using it.

> (If there is no current blocker/time-critical issue, I'll continue on this discussion post Vac reorg and post-MVP.)

Not a blocker, just some opinionated feedback. The sharding RFC is not part of my responsibilities, but I think I have fair points on some alternatives and why we should consider them.


kaiserd commented Apr 21, 2023

For some of the points I already shared my opinion in the comments above.
I'd be happy to see others' opinions on these points, too.

trying to clarify a misunderstanding

Before adding more to the points, I'll try to clarify a (potential) misunderstanding:
The address space spanned by cluster index and shard index is independent of the specific sharding method.
You can have static or dynamic sharding using it.
Static and dynamic refer to the mapping of content topics to pubsub topics (i.e. shards).

As said before: the fact that the address space is large does not mean each of the shard addresses points to an existing pubsub network.
The design rationale here was: large enough to not run out of addresses (to avoid an IPv4/IPv6 scenario),
but also not too large: index encodings should be feasibly small to fit in an ENR, for instance.

While this is generally comparable to Ethereum, there are major differences:

  • Waku will have to cope with higher traffic loads (when scaling to larger user numbers)
  • Waku traffic is not as predictable (makes DoS protection harder)

So, supporting more shards makes sense (again, supporting in the sense of "these shards are addressable" and the discovery mechanism offers means to discover nodes in them,
not "each of these shards must have peers").

shard addressing for Waku Network

The current version of RFC 51 assigns a large range of shard clusters to dynamic sharding:

> The index range 49152 - 65535 is reserved for automatic sharding. Each index can be seen as a hash bucket. Consistent hashing maps content topics to one of these buckets.

I'd use these for the Waku Network, which I sketched here.
The range of actually used shard clusters can be grown over time, starting with one.
(The RFC can be updated to reflect this, once we agree on the Waku Network concept.
We could also shrink that range, and make part of it reserved for now.)

Content topics are mapped dynamically onto these shards, and apps using the Waku Network just specify content topics.
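
(Reusing the hash sketch from the comment above, but mapped into the reserved index range; the bucket count is hypothetical and can be grown over time:)

```python
import hashlib

RANGE_START = 49152  # reserved for automatic sharding per 51/WAKU2-RELAY-SHARDING
NUM_BUCKETS = 8      # hypothetical number of buckets actually in use

def waku_network_shard(content_topic: str) -> int:
    """Map a content topic into the reserved automatic-sharding index range."""
    digest = hashlib.sha256(content_topic.encode()).digest()
    return RANGE_START + int.from_bytes(digest[:8], "big") % NUM_BUCKETS
```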

shard addressing for apps outside the Waku Network

(I know you disagree here; I'm listing this to explain that part of my idea, too, and to gather other opinions.)

Apps that do not want to use the Waku Network can use their own shards in separate shard clusters.
(I specified one way of assigning these in the RFC, and listed other options in previous comments in this issue. I do not have a strong opinion on the way of assigning.)
Apps can choose to use either static or dynamic sharding (or even both) to map content topics to their shards.

more on your points

> I think autosharding for the MVP is a good idea. With limitations, but as a subset of the final solution, with work that won't be discarded after. Ofc autosharding work will continue post-MVP, but with the foundations set.

Imo:

  • static sharding should not be discarded
  • automatic sharding would most likely not be usable in time for the MVP.
    (Even when only designing and implementing a very limited subset of automatic sharding; and if the subset gets limited far enough, it will be reduced to a form of static sharding.)
  • static sharding, in a sense, is a stepping stone towards automatic sharding

(Happy to see other opinions here.)

> I think we should be careful with the "it's only for the MVP" statement. Some changes require time in spec, development, agreement, dissemination, deployment. This makes it likely that a "temporary solution ends up being a permanent one". Changing things takes time, so the fewer changes the better. Meaning that I would have auto sharding as the target.

I generally agree here.

> I would focus on consistent hashing + auto shard discovery. Doable with some limitations for the MVP, but setting the foundations of what we want it to be.

See my answer to the first point.
Again, imo, static sharding is a foundation.

> I think autosharding will help Status, both with scaling and to validate whether it's ok to use wakuv2 aka gossipsub for their use case. It also makes deployment of new communities easier, since they are mapped to already existing topics.

It definitely could in the future. It depends on Status' requirements.
Status stated they did not want to share resources with other apps,
because others can leech.
We can fix the related issues in the future, but Status needs a solution now.

> Agree that all shards (and "network-id" if we had that concept) could share the same discovery DHT.

I thought one of your points was that the Waku Network should be completely separated from apps that do not want to use it.
With this, network-id + shard index and cluster index + shard index are effectively the same.
(At least in regards to Waku Relay and Waku discv5, which is what we are looking at.)

> But nodes won't connect to other network-id nodes.

Yes. When using cluster index addressing, they also would not connect to nodes in shards they are not interested in.

> The shard index + cluster hierarchy and my suggestion (network-id + shard) are similar, but the intention is different. My shards are handled by the protocol, and no such thing as an "app shard" exists.

There seems to be another misunderstanding (might be linked to the misunderstanding I tried to clarify above).
If you are fine with having a joint discovery DHT (see point above), supporting discovery (of nodes in) shards that you called "app shards" is basically included.

> Not sure I like the "high quality" shard concept. All shards should have the same quality, and the protocol should try to enforce it. If we want this, apps shouldn't choose shards. They choose content topics that are mapped into shards aka pubsub topics.

There is not really a "high quality shard" concept. I just used that term in answer to "I think we should cover way fewer shards: quality, not quantity."
But lower-quality shards (I assume low node density, maybe no DoS protection, etc.) could result from allowing apps to have their own shards.
Again, not all apps might agree with the trade-offs that the Waku Network commits to (see comment above).
Happy to see other opinions here.

> I think every shard should have shared traffic between the different apps using it, so again no "app shards" with said "app" having control of the shard. Ofc this will take some time until it's true, but it should be enforced by the protocol.

See above.

> For the MVP, I would use a lower number of shards, so that we can ensure they are all well covered (with enough nodes).

Yes. RFC 57 suggests a mapping.
And we can even use a lower number of shards.
The plan for the MVP is to only focus on community shards + 1:1 shards and leave owner-mapped shards out for now.
The large address space does not force us to have peers in each of these shards.
(Again, this seems to be linked to the misunderstanding that all addresses have to point to shards with peers in them.)

> Why would I use shard 55982 if there are no nodes there?

You wouldn't.
(Only if you want to bootstrap something new there.)

> And if there are none, anyone can control that shard with a few nodes and trick users into using it.

"Forking" the network would allow that, too.

An app that wants to use the Waku Network and protect its users from this could simply accept only whitelisted shards.
The Waku component that apps include could use this as the default.
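
(A minimal sketch of such a default, with a hypothetical allowlist of (cluster, shard) pairs; the names are illustrative, not an existing API:)

```python
# Hypothetical allowlist of (cluster index, shard index) pairs the app trusts.
ALLOWED_SHARDS = {(0, 1), (0, 2)}

def accept_shard(cluster: int, shard: int) -> bool:
    """Reject peers and traffic on shards outside the app's allowlist."""
    return (cluster, shard) in ALLOWED_SHARDS

assert accept_shard(0, 1)
assert not accept_shard(0, 55982)  # the empty shard from the example above
```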
