Preliminary introduction of RowSet with supporting changes #427
Conversation
Left a few minor comments and suggestions. Overall, I really like the changes so far in this PR. I think the changes relating to decoupling providing (ref #428) and refactoring `PutBlock` into `PutData` (ref #196) could both be isolated into their own PRs. Alternatively, I would also be ok with moving the `RowSet`-related changes into a different PR until we discuss further in #423 and the upcoming ADR that maps out the implementation changes required to propagate block data.

Really looking forward to these already-implemented changes. Great work and good hustle, @Wondertan!
```go
func ProvideData(
	ctx context.Context,
	dah *DataAvailabilityHeader,
	croute routing.ContentRouting,
	logger log.Logger) error {
```
This change is awesome, and a natural extension of #375. I really, really like it. As discussed in our sync call, I think we should decouple as many of the features of this PR as possible, and this one can be cherry-picked out. ref #428
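To make the separation concrete, here is a self-contained toy sketch (all names are stand-ins; only the idea of splitting storage from providing comes from the PR):

```go
package main

import (
	"context"
	"fmt"
)

// toyDAH stands in for DataAvailabilityHeader; the real types live in
// lazyledger-core and go-libp2p.
type toyDAH struct{ root string }

// putData persists shares and returns the commitment; no networking involved.
func putData(shares []string) *toyDAH { return &toyDAH{root: "row+col roots"} }

// provideData only announces availability; it can fail, be retried, or be
// disabled without ever affecting what was stored.
func provideData(ctx context.Context, dah *toyDAH) error {
	fmt.Println("announcing", dah.root, "on the DHT")
	return nil
}

func main() {
	dah := putData([]string{"share1", "share2"})
	if err := provideData(context.Background(), dah); err != nil {
		fmt.Println("providing failed, but storage is unaffected:", err)
	}
}
```

Because providing is a standalone call, it can be cherry-picked into its own PR and later moved off the consensus hot path.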
```go
// compute roots
eds.ColumnRoots()
```
Why not also compute the row roots? Shouldn't that be an issue, considering that we only use the row roots to retrieve data?
If we look at the code, we will see that calling either `ColumnRoots` or `RowRoots` will compute the roots for all. Semantically, I should've used `RowRoots` instead, as we mostly rely on rows everywhere, but technically it does not matter.
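To illustrate the point, here is a minimal, runnable toy model (not the actual celestia/rsmt2d code) of how a single lazy computation can back both accessors, so calling either one computes all roots:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// eds is a toy stand-in for the real extended data square type; only the
// lazy-roots behaviour discussed above is modeled, and the "roots" here are
// plain hashes rather than NMT roots.
type eds struct {
	square   [][][]byte // square[row][col] = share bytes
	once     sync.Once
	rowRoots [][]byte
	colRoots [][]byte
}

// computeRoots fills BOTH caches in a single pass; this is why calling
// either accessor computes roots for all rows and columns.
func (e *eds) computeRoots() {
	e.once.Do(func() {
		n := len(e.square)
		for i := 0; i < n; i++ {
			rh, ch := sha256.New(), sha256.New()
			for j := 0; j < n; j++ {
				rh.Write(e.square[i][j]) // row i, every column
				ch.Write(e.square[j][i]) // column i, every row
			}
			e.rowRoots = append(e.rowRoots, rh.Sum(nil))
			e.colRoots = append(e.colRoots, ch.Sum(nil))
		}
	})
}

func (e *eds) RowRoots() [][]byte    { e.computeRoots(); return e.rowRoots }
func (e *eds) ColumnRoots() [][]byte { e.computeRoots(); return e.colRoots }

func main() {
	s := &eds{square: [][][]byte{
		{[]byte("a"), []byte("b")},
		{[]byte("c"), []byte("d")},
	}}
	s.ColumnRoots() // first call also computes the row roots
	fmt.Printf("row root 0: %x\n", s.RowRoots()[0][:4])
}
```

Under this pattern the choice between `RowRoots` and `ColumnRoots` is purely semantic, which matches the comment above.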
```go
type DataAvailabilityHeader struct {
	// RowRoot_j = root((M_{j,1} || M_{j,2} || ... || M_{j,2k} ))
	RowsRoots NmtRoots `json:"row_roots"`
	// ColumnRoot_j = root((M_{1,j} || M_{2,j} || ... || M_{2k,j} ))
	ColumnRoots NmtRoots `json:"column_roots"`
	// cached result of Hash() not to be recomputed
	hash []byte
}
```
Even though moving the `DataAvailabilityHeader` outside of the `types` package breaks tendermint's pattern of keeping the core types inside that package, I think this makes a lot of sense.
Tendermint's pattern is monolithic, and we should steadily move away from it.
The changes under

> Updates to `p2p/ipld` API
> [...]
> Updates to `types` package

should be broken into separate PRs. We also need to update the ADR that deals with the `ipld` package.

> Start relying on RowSet, updates `p2p/ipld` API in `consensus/state`

These are the only changes directly related to switching gossiping from tendermint chunks to shares, and they are only the most basic preparation. It's hard to get a full picture of the required changes from this work. Similar to what Evan suggested yesterday, I'd suggest enumerating the places this will impact (in an ADR / issue / Google doc). From looking at this PR, it seems like this could be weeks of work (but I might be wrong about this).
Edit:

> Decouple providing from saving into new `ProvideData` func
> Calculate extended square only once, finally!

This is really dope, by the way 👌🏼
```go
func (b *Block) RowSet(ctx context.Context, adder format.NodeAdder) (*RowSet, error) {
	shares, dataLen := b.ComputeShares()
	eds, err := ipld.PutData(ctx, shares, adder)
```
Computing the RowSet now also writes to IPFS? Is that intentional?

I think the least invasive approach would be:

1. compute the RowSet where needed, without touching storage yet
2. use the RowSet (like the PartSet previously) during consensus and in the other places
3. as a first step, only store the block where `store.SaveBlock` is called (during consensus, this is during finalize-commit)

Later we can reconsider 3. and optimize to store and provide the data earlier; see the sketch below.
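A compilable skeleton of that ordering (RowSet and SaveBlock are the names from the discussion; everything else is a hypothetical stand-in):

```go
package main

type Block struct{}
type RowSet struct{}

// Step 1: pure in-memory computation; no storage side effects.
func computeRowSet(b *Block) *RowSet { return &RowSet{} }

// Step 2: gossip the RowSet exactly where the PartSet used to be gossiped.
func gossip(rs *RowSet) {}

// saveBlock stands in for store.SaveBlock.
func saveBlock(b *Block) {}

// Step 3: storage happens only at finalize-commit, the single place where
// store.SaveBlock is already called today.
func finalizeCommit(b *Block) {
	saveBlock(b)
}

func main() {
	b := &Block{}
	rs := computeRowSet(b) // 1: no IPFS writes here
	gossip(rs)             // 2: consensus uses the RowSet in memory
	finalizeCommit(b)      // 3: the only write point
}
```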
Force-pushed from 0262cd0 to 613d803.
Codecov Report

```
@@            Coverage Diff             @@
##           master     #427      +/-   ##
==========================================
- Coverage   61.82%   61.69%   -0.14%
==========================================
  Files         262      266       +4
  Lines       22981    23072      +91
==========================================
+ Hits        14208    14234      +26
- Misses       7270     7329      +59
- Partials     1503     1509       +6
```
I left a few more very minor questions.

I also appreciate the more incremental approach taken with this PR, which I think pairs extremely well with #441.
{"Tampered Data", func(blk *Block) { | ||
blk.Data.Txs[0] = Tx("something else") | ||
blk.DataHash = nil // clear hash or change wont be noticed | ||
}, true}, | ||
{"Tampered DataHash", func(blk *Block) { | ||
blk.DataHash = tmrand.Bytes(len(blk.DataHash)) | ||
}, true}, |
Why were these removed? Shouldn't they still fail?
Sooo, I forgot already what the issue was. Revisiting...
Because I removed DataHash validation from `Block.ValidateBasic`. Previously, it checked that the DAHeader matches the Header:

- This does not actually give us anything. An attacker could still send us a marshaled Block with `Header.DataHash` matching `Block.DAHeader`, and to properly verify the Block we would need to recompute whether all the data in the Block matches all the commitments. (I guess we must do this when we receive a serialized Block body, cc @liamsi)
- One of the TODOs mentions "Remove DAH from Block and its proto", so removing that validation is just a first step. The reason to remove the DAHeader from the body (or from the proto only) is that we need to recompute it anyway to verify integrity, so passing it around does not make sense.
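A small sketch of why the removed check was not binding (toy stand-ins; `commit` abstracts the real share-splitting, erasure coding, and DAHeader construction):

```go
package blockcheck

import (
	"bytes"
	"crypto/sha256"
)

// Block carries both the claimed commitment and the data it is supposed to
// commit to; both arrive together in the same untrusted message.
type Block struct {
	DataHash []byte // claimed commitment from the header
	DAHash   []byte // claimed DAHeader hash carried in the body
	Data     []byte
}

func commit(data []byte) []byte { h := sha256.Sum256(data); return h[:] }

// weakValidate mirrors the removed check: it compares two attacker-supplied
// fields against each other, so a forged block passes whenever they match.
func weakValidate(b *Block) bool {
	return bytes.Equal(b.DataHash, b.DAHash)
}

// strongValidate recomputes the commitment from the data itself; this is the
// only binding check, and it is also why carrying the DAHeader in the block
// body adds nothing: it must be recomputed anyway.
func strongValidate(b *Block) bool {
	return bytes.Equal(b.DataHash, commit(b.Data))
}
```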
Force-pushed from d0c602a to a74bc01.
Rebased on master.
Here is a general suggestion for how to proceed with this: we have good evidence that propagating the data in fixed-size chunks works well and is fast. Why don't we stick to that paradigm and still split the rows into parts of equal size? That way, we would not be changing a network parameter (the size of the gossiped chunks) without doing any experimentation / simulations up front. Instead, we would switch to gossiping the (original) data serialized and arranged into rows, but chunk that up into equally sized parts. With that, we have exactly the same confidence as before but can still get rid of the PartSetHeader (because the 64 KB chunks of rows can still be authenticated against the DAHeader instead of the PartSetHeader; you'd just need to send all the proof nodes that you can't recompute locally). The only change necessary would be that all fields besides `block.Data` need to be part of the proposal (or gossiped separately, too).
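A back-of-the-envelope sketch of that suggestion (all types hypothetical): keep the existing fixed-size gossip chunks, but cut them from the serialized rows, so each part can carry the NMT proof nodes that authenticate it against the DAHeader instead of a PartSetHeader:

```go
package parts

// partSize keeps the existing gossip chunk size, so no network parameter changes.
const partSize = 64 * 1024

// Part is a hypothetical row-backed replacement for tendermint's block part:
// same fixed-size bytes, but authenticated against the DAHeader's row roots.
type Part struct {
	Index uint32
	Bytes []byte
	// Proof nodes the receiver cannot recompute locally, linking Bytes to
	// the corresponding row root(s) in the DAHeader.
	Proofs [][]byte
}

// splitRows concatenates the serialized rows and cuts equally sized parts
// (the last part may be shorter).
func splitRows(rows [][]byte) []Part {
	var buf []byte
	for _, r := range rows {
		buf = append(buf, r...)
	}
	parts := make([]Part, 0, (len(buf)+partSize-1)/partSize)
	for i := 0; i < len(buf); i += partSize {
		end := i + partSize
		if end > len(buf) {
			end = len(buf)
		}
		parts = append(parts, Part{Index: uint32(i / partSize), Bytes: buf[i:end]})
	}
	return parts
}
```

Filling in the Proofs field would require walking the NMTs for the rows each part overlaps, which is exactly the "send all proof nodes that you can't recompute locally" step mentioned above.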
Let's close this for now. We won't continue this PR. At a later stage we might revisit this idea.
Background
Initially, I aimed to just close #423 with this branch, but due to the high coupling of the code, I added multiple supporting changes and improvements that are now separated from the block propagation logic itself.
Changes

- Updates to `p2p/ipld` API
  - `p2p/ipld` is now treated as a library
  - Dependency direction reversed: `p2p/ipld` -> `types` ==> `types` -> `p2p/ipld`
- Updates to `types` package
- Start relying on RowSet, updates `p2p/ipld` API in `consensus/state`
TODO

Some of those should be fixed within the PR; for some, we should file issues and discuss.

- `p2p/ipld` to `da` or other - TBD
- `p2p/ipld`: Share, Sample, Leaf, NamespacedShare, Data and other aliases are all about the same, but it is really confusing for any newcomer
- `p2p/ipld` …