After #637, computing the hash of the block data can fail, because the hash now depends on the selected square size. This was previously not the case: there was no option to select the square size for the block data to be hashed over, so there was no reason to bubble up an error.
The path to reaching this panic shouldn't occur in practice, but it's easy to hit by accident when using the code for other purposes. Instead of panicking, we should bubble the error up so that users/implementers can handle it in those cases.
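As a rough illustration of the proposed change, here is a minimal sketch of returning an error instead of panicking. The function and parameter names are hypothetical, not celestia-core's actual API:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashForSquareSize is a hypothetical sketch of hashing block data for a
// selected square size. Instead of panicking when the data does not fit
// the square, it bubbles the error up so callers can handle it.
func hashForSquareSize(data []byte, squareSize int) ([]byte, error) {
	capacity := squareSize * squareSize
	if len(data) > capacity {
		// Previously this condition would panic; returning an error
		// lets other users of the code decide what to do.
		return nil, fmt.Errorf("data size %d exceeds square capacity %d", len(data), capacity)
	}
	h := sha256.Sum256(data)
	return h[:], nil
}

func main() {
	if _, err := hashForSquareSize(make([]byte, 100), 4); err != nil {
		fmt.Println("error:", err)
	}
}
```

Callers in the consensus path can still treat the error as fatal, while external users of the library get a recoverable failure instead of a crash.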
After reviewing this issue, I agree with Evan that simply returning nil would be a mistake. These functions should be pure functions, and the panic does not seem necessary. The problem, however, is that the data size can't be easily checked with the current splitting logic: the capacity of a block is limited, but the data is not. For example, the block is currently 4MB, but data slightly under 4MB can still be a problem, because it could consist of many small messages, and the share padding logic can waste enough space that the total finally exceeds the capacity (similarly, if you right-click a folder in Windows File Explorer, a file's size on disk is larger than its actual size, because the filesystem pads allocations much like shares do). So the solution is not easy; it's even harder than the proposal I posted on Discord (https://discord.com/channels/638338779505229824/804370245778276374/1016565184745193543). Once we implement that proposal, we could simply calculate the final space cost of a given piece of data, and then we may be able to remove the panic. Even then, I would still need a further check to ensure there are no other problems that could cause the panic. My question is: should I start on this now? Because it is not easy.
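The padding effect described above can be sketched as follows. The share size and rounding rule here are illustrative assumptions, not celestia-core's exact splitting logic:

```go
package main

import "fmt"

// shareSize is an assumed per-share byte capacity for illustration only.
const shareSize = 256

// sharesNeeded counts how many shares a set of messages consumes when
// each message is padded up to a whole number of shares. This is why
// raw byte totals under the block capacity can still overflow it.
func sharesNeeded(msgSizes []int) int {
	total := 0
	for _, n := range msgSizes {
		// A 1-byte message still consumes a full share.
		total += (n + shareSize - 1) / shareSize
	}
	return total
}

func main() {
	// 100 messages of 200 bytes each: 20,000 raw bytes, but 100 full
	// shares (25,600 bytes of share space) once padding is counted.
	msgs := make([]int, 100)
	for i := range msgs {
		msgs[i] = 200
	}
	fmt.Println(sharesNeeded(msgs)) // 100
}
```

This is why checking `len(data)` against the block capacity up front is insufficient: the true cost depends on how the data splits into shares.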
celestia-core/types/block.go
Lines 1050 to 1056 in 8e54103