Copy Method update to avoid DoS via decompression bomb #88


Closed
wants to merge 4 commits

Conversation

mohanmanikanta2299
Contributor

Updated the code to use the io.CopyN method instead of io.Copy to mitigate the vulnerability.

This change copies the contents of the file in reasonably sized chunks by calling io.CopyN in a loop. Logic was added to limit the number of chunks copied in the loop, preventing an excessive number of iterations that could lead to denial of service.
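
A minimal sketch of the chunked-copy approach described above, assuming a 4 MB chunk size and a 100-chunk cap (illustrative constants inferred from the 400 MB ceiling discussed in review below; names and exact values are not taken from the PR):

```go
package boundedcopy

import (
	"errors"
	"io"
)

const (
	chunkSize = 4 * 1024 * 1024 // bytes copied per io.CopyN call (assumed)
	maxChunks = 100             // cap on loop iterations, bounding total output (assumed)
)

// copyBounded copies src to dst in fixed-size chunks and stops with an error
// once the chunk limit is reached, so a decompression bomb cannot expand
// without bound.
func copyBounded(dst io.Writer, src io.Reader) (int64, error) {
	var total int64
	for i := 0; i < maxChunks; i++ {
		n, err := io.CopyN(dst, src, chunkSize)
		total += n
		if errors.Is(err, io.EOF) {
			return total, nil // source exhausted within the limit
		}
		if err != nil {
			return total, err
		}
	}
	return total, errors.New("file exceeds maximum allowed size")
}
```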

@mohanmanikanta2299 mohanmanikanta2299 requested a review from a team as a code owner April 14, 2025 03:50
@mohanmanikanta2299 mohanmanikanta2299 changed the title io.Copy method update to avoid DoS via decompression bomb Method update to avoid DoS via decompression bomb Apr 14, 2025
@mohanmanikanta2299 mohanmanikanta2299 changed the title Method update to avoid DoS via decompression bomb Copy Method update to avoid DoS via decompression bomb Apr 14, 2025
Collaborator

@dduzgun-security dduzgun-security left a comment


Seems like the counter is incremented only after a full chunk is copied, which could allow going over the 400 MB max by 4 MB. Also, the sizing range seems a bit excessive for the files go-slug deals with; we could try 50 MB as a max.
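
One illustrative way to address the off-by-a-chunk concern above (not code from this PR): count the total bytes written rather than whole chunks, and never request more than the remaining budget, so the cap cannot be exceeded by a full chunk. The 50 MB cap matches the suggestion above; the 4 MB chunk size is an assumption:

```go
package boundedcopy

import (
	"errors"
	"io"
)

const (
	chunkSize = 4 * 1024 * 1024  // bytes requested per iteration (assumed)
	maxBytes  = 50 * 1024 * 1024 // overall cap on copied output (suggested above)
)

// copyWithByteLimit copies src to dst until EOF or until the byte cap is hit,
// counting bytes rather than chunks so the limit cannot be overshot.
func copyWithByteLimit(dst io.Writer, src io.Reader) (int64, error) {
	var total int64
	for {
		// Request at most one byte past the remaining budget, so an oversized
		// source is detected without writing a whole extra chunk.
		chunk := int64(chunkSize)
		if remaining := maxBytes - total + 1; remaining < chunk {
			chunk = remaining
		}
		n, err := io.CopyN(dst, src, chunk)
		total += n
		if total > maxBytes {
			return total, errors.New("file exceeds maximum allowed size")
		}
		if errors.Is(err, io.EOF) {
			return total, nil // source exhausted within the limit
		}
		if err != nil {
			return total, err
		}
	}
}
```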

@dduzgun-security
Collaborator

Also, was func CopyN(dst Writer, src Reader, n int64) (written int64, err error) not working?
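
For reference, a minimal sketch of the single-call pattern this question points at: ask io.CopyN for one byte more than the allowed maximum, then check how much was actually written. The maxBytes parameter is illustrative, not a value from go-slug:

```go
package boundedcopy

import (
	"errors"
	"io"
)

// copyWithCap copies src to dst but rejects sources larger than maxBytes.
func copyWithCap(dst io.Writer, src io.Reader, maxBytes int64) (int64, error) {
	// Request maxBytes+1: if the source fits, CopyN stops at EOF within the
	// limit; if it does not, written exceeds maxBytes and the file is rejected.
	written, err := io.CopyN(dst, src, maxBytes+1)
	if written > maxBytes {
		return written, errors.New("file exceeds maximum allowed size")
	}
	if errors.Is(err, io.EOF) {
		err = nil // reaching EOF within the limit is the success case
	}
	return written, err
}
```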

@eastebry
Contributor

Hey folks! Just a heads-up, I think we need to be pretty careful with this change. This code path is used for extracting slugs within Terraform, and it seems plausible that these will exceed 10MB.

Generally speaking, I'm guessing the risk here is fairly low because (as far as I know, I could be wrong) slugs are mostly unpacked in single-tenant environments (either in Terraform core, or within the sandbox environment of Terraform Cloud). Given this, I think we need to carefully consider whether this security improvement is worth the potential user impact/risk.

Before merging this, can we please do the following:

  • Talk with all consumers of go-slug (Terraform core, registry, cloud, and potentially more) to discuss what would be a reasonable limit, or whether any limit we impose could pose problems.
  • If we enforce a specific limit, ensure that it is explained in our product documentation and called out in any release notes.

@dduzgun-security
Collaborator

Considering all the implications here and the customer use cases, which based on feedback from the TF team can range from unpacking files of a few MB to several GB in size, I think we should close/dismiss this issue so as not to cause any incident. Setting a hard limit could cause problems in the future for specific customer use cases. I'll go ahead and close this alert.
