Performance issues #7
I presume you're talking about verifier time, not aggregation time? I'd expect aggregation time to be fine, although I never explored the question, and I doubt anyone cares much. There are no actual exponential-time algorithms in this crate or, as far as I know, in any of its dependencies. We have some code in https://github.com/w3f/bls/blob/master/src/verifiers.rs that technically worsens asymptotic complexity, e.g. O(n log n) vs. O(n), but it should always improve verifier performance at realistic sizes. Any of the verifier code might have bugs of course, since we never really polished this stuff, and our trait hierarchy might fail to be zero-cost somehow. You can swap those verifiers to compare their performance. We could roughly double performance with good multiscalar multiplication à la Pippenger, of course. You can also point me to code that shows the problem, or better yet send me a pull request with benchmarks à la https://github.com/w3f/schnorrkel/tree/master/benches ;) Insert all the usual caveats about release builds, etc. And 500 signatures might exceed the CPU cache.
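For concreteness, a minimal Criterion-style benchmark skeleton one could adapt; `Aggregate` and `make_aggregate` are hypothetical stand-ins, not this crate's API, so wire in the real keygen/sign/aggregate/verify calls:

```rust
// Hypothetical benchmark skeleton. `cargo bench` runs release builds,
// so the usual debug-build caveats don't apply here.
use criterion::{criterion_group, criterion_main, Criterion};

// Stand-ins for real BLS types; replace with the crate's actual
// keygen/sign/aggregate machinery.
struct Aggregate {
    n: usize,
}

fn make_aggregate(n: usize) -> Aggregate {
    // Real code: generate n keypairs, sign, and aggregate here.
    Aggregate { n }
}

impl Aggregate {
    fn verify(&self) -> bool {
        // Real code: run the aggregate verifier here.
        self.n > 0
    }
}

fn bench_aggregate_verify(c: &mut Criterion) {
    // Benchmark several sizes to see whether verification
    // scales linearly or worse.
    for &n in &[500usize, 1_000, 5_000] {
        let agg = make_aggregate(n);
        c.bench_function(&format!("verify_aggregate/{}", n), |b| {
            b.iter(|| agg.verify())
        });
    }
}

criterion_group!(benches, bench_aggregate_verify);
criterion_main!(benches);
```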
Thank you so much for the details; I'll post everything soon in case we can't fix it yet. To be sure about what you mean: with BLS (or your crate specifically), if we verify a signature aggregated from 500 signatures, and then verify one aggregated from 5000, the verification time should be roughly 10 times higher, not exponentially higher, right?
It all depends on your problem, but roughly yes. If you aggregate distinct messages from distinct signers, then aggregation buys you little, so you should consider not using BLS. In this case, all my fancy verifiers would technically degrade your asymptotic performance from O(n) to O(n log n), but only on very fast code paths, so you should never notice this next to the O(n) but slow crypto. If you aggregate one message from distinct signers, then aggregation helps much more, since the public keys can be combined before pairing. At some point we'll improve performance and security by upgrading to zcash's refactored crypto crates, but we're not rushing that.
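To make the one-message, distinct-signers case concrete, here is the standard BLS aggregation algebra in my own notation (signatures in G1, public keys in G2); this is textbook BLS, not a quote from this crate's docs:

```latex
% n signers, one message m; signer i holds sk_i, pk_i = sk_i g_2,
% and produces sigma_i = sk_i H(m).
\sigma = \sum_{i=1}^{n} \sigma_i,
\qquad \mathrm{apk} = \sum_{i=1}^{n} \mathrm{pk}_i,
\qquad e(\sigma, g_2) \stackrel{?}{=} e(H(m), \mathrm{apk}).
% Two pairings total, independent of n. With n distinct messages the
% check becomes e(\sigma, g_2) = \prod_i e(H(m_i), \mathrm{pk}_i),
% i.e. one Miller loop per message.
```

So with one shared message, verification is dominated by n cheap group additions plus two pairings, which is why that case scales so much better than distinct messages.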
Thank you. From what I understood from your comments, if we don't make each message distinct (hash + pubkey), then we are subject to the rogue key attack and BLS no longer offers any security. My company is an insurance company; about 5000 computers will all sign "facts and documents (as a hash)" every 100 ms or so, and that number could grow. We are signing the same hash on every machine, each with its own public key, and then verifying the aggregate.
However, with this scheme it looks like going from 5000 to 50,000 signatures makes verification about 5000x slower, when it should be only 10x slower.
No. The rogue key attack would normally be defeated either by delinearizing the public keys or by previously checked proofs-of-possession, à la https://github.com/w3f/bls/blob/master/src/bit.rs (see the sketch below). You can do distinct-message aggregation of course, but you only really save disk space and bandwidth: you still have one miller_loop iteration for every signature, and those run very slowly. It's possible you've hit upon some bug in zcash's implementation of the miller_loop optimization; they only ever batch three pairings, not 5000. It's also possible you're simply experiencing just how slow pairing-based crypto runs, perhaps compounded by CPU cache effects. It's also possible one should not use the miller_loop optimization beyond some input size that exceeds the CPU cache, i.e. your pessimization comes entirely from repeated fetches from main memory. In any case, aggregation means you share the aggregated signature over the network somehow.
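For reference, the two defenses mentioned above, sketched in standard notation (my summary of the usual constructions, not necessarily exactly what bit.rs implements):

```latex
% (1) Delinearization: weight each key by a scalar derived from all keys,
%     t_i = H'(pk_i, \{pk_1, \dots, pk_n\}),
% so a rogue key cannot cancel the honest ones:
\tilde{\sigma} = \sum_i t_i \, \sigma_i,
\qquad \widetilde{\mathrm{apk}} = \sum_i t_i \, \mathrm{pk}_i,
\qquad e(\tilde{\sigma}, g_2) \stackrel{?}{=} e(H(m), \widetilde{\mathrm{apk}}).
% (2) Proof-of-possession: signer i publishes \pi_i = sk_i H''(pk_i) once,
%     and verifiers check e(\pi_i, g_2) = e(H''(pk_i), pk_i) before ever
%     admitting pk_i into an aggregate; plain key sums are safe afterwards.
```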
Amazing explanation, I learned a lot! However, the whole point of BLS for my company is to save bandwidth and storage cost (which was 110 TB of signatures with Ed25519); BLS reduces this cost so much. What we do exactly: each machine signs the hash, then we aggregate the signatures, and if someone connects to this API, we give them the BLS aggregated signature. Is our way the best way? We need the signature to be as light as possible, but with "decent" verification speed.
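For what it's worth, the 110 TB figure is consistent with a quick back-of-envelope count (my assumptions: 64-byte Ed25519 signatures, 5000 signers, one round every 100 ms):

```latex
5000 \times 64\,\mathrm{B} \times 10\,\mathrm{s}^{-1}
  = 3.2\,\mathrm{MB/s}
  \approx 276\,\mathrm{GB/day}
  \approx 100\,\mathrm{TB/year}.
% A single aggregated BLS signature per round is 48 or 96 bytes
% (depending on which group holds signatures), independent of n.
```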
Hi,
After aggregating 500 signatures into one, verification is dramatically slow, and I noticed it gets exponentially slower with every signature I add. Is there a way to improve it without changing crates? I'd like to keep something BLS-like.
Basically, I'm looking to decrease the time it takes to verify aggregated signatures.
Thank you so much for publishing this and for your work, btw!