
Bubble error instead of panicking when computing the hash of the block data #728

Closed
evan-forbes opened this issue Apr 19, 2022 · 2 comments
Labels
C:erasure-coding (Component: Erasure coding), good first issue (Good for newcomers), T:chore

Comments

@evan-forbes (Member) commented Apr 19, 2022

After #637, we added an instance where an error can occur when computing the hash of the block data, because the hash depends on the selected size of the square. Previously there was no option to select the square size that the block data is hashed over, so there was no reason to bubble up an error.

The code path that reaches this panic shouldn't occur in practice, but it's easy to hit by accident when using the code for other purposes. Instead of panicking, we should bubble this error up so that users/implementers can handle it in those cases.

celestia-core/types/block.go

Lines 1050 to 1056 in 8e54103

// compute the data availability header
// todo(evan): add the non redundant shares back into the header
shares, _, err := data.ComputeShares(data.OriginalSquareSize)
if err != nil {
// todo(evan): see if we can get rid of this panic
panic(err)
}
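
A minimal sketch of the proposed change, assuming the enclosing function is free to return an error. `ComputeShares` and `OriginalSquareSize` come from the snippet above; `Data`, `DataAvailabilityHeader`, and `buildDAHeader` are hypothetical stand-ins for the real declarations in block.go:

```go
// Sketch only: mirrors the snippet above, but bubbles the error up to the
// caller instead of panicking. Requires the enclosing function's signature
// to gain an error return, so every caller must be updated to check it.
func computeDAHeader(data *Data) (*DataAvailabilityHeader, error) {
	// compute the data availability header
	// todo(evan): add the non redundant shares back into the header
	shares, _, err := data.ComputeShares(data.OriginalSquareSize)
	if err != nil {
		// wrap with context instead of panicking (uses the standard fmt package)
		return nil, fmt.Errorf("computing shares for square size %d: %w",
			data.OriginalSquareSize, err)
	}
	return buildDAHeader(shares), nil // buildDAHeader is a hypothetical helper
}
```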

@HoytRen (Contributor) commented Sep 8, 2022

After reviewing this issue, I agree with Evan that simply returning nil would be a mistake. These functions should be pure functions, and the panic seems unnecessary. The problem is that the data size can't be easily checked up front with the current splitting logic: the block's capacity is limited, but the data is not. For example, the block is currently 4 MB, but data only slightly under 4 MB can still be a problem, because it may consist of many small messages, and the share-padding logic can waste enough space that the total ends up exceeding the limit (see the sketch below). It's similar to right-clicking a folder in Windows File Explorer: a file's "size on disk" is larger than its actual size, because the filesystem rounds up to whole allocation units the way shares do.

So the solution is not easy; it's even harder than the proposal I posted on Discord (https://discord.com/channels/638338779505229824/804370245778276374/1016565184745193543). Once that proposal is implemented, we could calculate the final share-space cost of a piece of data exactly, and then we may be able to remove the panic. Even then, I would still need a further check to ensure nothing else can trigger the panic. My question is: should I start on this now? It is not easy.
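
To make the padding concern concrete, here is a rough, self-contained sketch (not celestia-core API; the 256-byte share size and the rule that every message starts on a share boundary are illustrative assumptions) showing how many small messages can occupy far more share space than their raw byte size suggests:

```go
package main

import "fmt"

// shareSize is an illustrative share size in bytes; the real constant and
// padding rules live in the celestia-core share-splitting code.
const shareSize = 256

// sharesNeeded returns the number of shares an n-byte message occupies,
// assuming each message is padded out to whole shares.
func sharesNeeded(n int) int {
	return (n + shareSize - 1) / shareSize
}

func main() {
	// 10,000 messages of 1 byte each: only 10 KB of raw data...
	raw, padded := 0, 0
	for i := 0; i < 10000; i++ {
		raw += 1
		padded += sharesNeeded(1) * shareSize
	}
	fmt.Printf("raw bytes: %d, share bytes after padding: %d\n", raw, padded)
	// raw bytes: 10000, share bytes after padding: 2560000
	// Data well under a 4 MB block limit can still overflow the square,
	// which is why the size can't be validated from raw bytes alone.
}
```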

@rootulp (Collaborator) commented Sep 8, 2022

@HoytRen thanks for your interest in this issue! Based on #835 (comment) I think you shouldn't start this issue as it is currently stated.

There are several other issues we could use help on. Issues that immediately come to mind:

  1. Issues labeled good first issue (Good for newcomers)
  2. https://github.com/celestiaorg/celestia-app/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22

@rootulp closed this as not planned (won't fix, can't repro, duplicate, stale) on Sep 8, 2022
Repository owner moved this from TODO to Done in Celestia Node on Sep 8, 2022