perf(pin_api): fetch chunks in parallel #4428
Conversation
sem := semaphore.NewWeighted(100)
var errTraverse error
var mtxErr sync.Mutex
var wg sync.WaitGroup
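For context, here is a minimal, self-contained sketch of the pattern this diff sets up: a weighted semaphore caps the number of in-flight fetches, a mutex-guarded error records the first failure, and a WaitGroup waits for outstanding goroutines. The `traverse` and `fetchChunk` callbacks are hypothetical stand-ins for the handler's actual traversal and retrieval calls, not the PR's real code.

```go
package pin

import (
	"context"
	"sync"

	"golang.org/x/sync/semaphore"
)

// fetchAll walks every address produced by traverse and fetches it, with at
// most 100 retrievals in flight at a time. traverse and fetchChunk are
// hypothetical stand-ins for the handler's real traversal and retrieval calls.
func fetchAll(
	ctx context.Context,
	traverse func(fn func(addr string) error) error,
	fetchChunk func(ctx context.Context, addr string) error,
) error {
	sem := semaphore.NewWeighted(100) // cap on concurrent fetches
	var (
		errTraverse error      // first fetch error, if any
		mtxErr      sync.Mutex // guards errTraverse
		wg          sync.WaitGroup
	)

	err := traverse(func(addr string) error {
		// Abort the traversal as soon as any fetch has failed.
		mtxErr.Lock()
		failed := errTraverse
		mtxErr.Unlock()
		if failed != nil {
			return failed
		}

		// Blocks once 100 fetches are already running.
		if err := sem.Acquire(ctx, 1); err != nil {
			return err
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer sem.Release(1)
			if err := fetchChunk(ctx, addr); err != nil {
				mtxErr.Lock()
				if errTraverse == nil {
					errTraverse = err
				}
				mtxErr.Unlock()
			}
		}()
		return nil
	})

	wg.Wait() // drain in-flight fetches before reporting
	if err != nil {
		return err
	}
	return errTraverse
}
```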
You can use "golang.org/x/sync/errgroup" instead and get rid of the mutex and the error variable.
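For illustration, a sketch of that suggestion under the same hypothetical `traverse` and `fetchChunk` callbacks as above: `SetLimit` replaces the weighted semaphore, and `Wait` returns the first error, so the explicit mutex and shared error variable are no longer needed.

```go
package pin

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// fetchAll is the same bounded fan-out expressed with errgroup. traverse and
// fetchChunk are the same hypothetical stand-ins as above.
func fetchAll(
	ctx context.Context,
	traverse func(fn func(addr string) error) error,
	fetchChunk func(ctx context.Context, addr string) error,
) error {
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(100) // at most 100 concurrent fetches

	err := traverse(func(addr string) error {
		// The group context is cancelled after the first failed fetch,
		// which stops the traversal from scheduling more work.
		if gctx.Err() != nil {
			return gctx.Err()
		}
		g.Go(func() error { // blocks if 100 fetches are already running
			return fetchChunk(gctx, addr)
		})
		return nil
	})

	if werr := g.Wait(); werr != nil {
		return werr
	}
	return err
}
```

One caveat of this variant: the traversal only learns about a failure through the cancelled group context, rather than by reading the error value directly.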
I thought about using that, but I still need access to the error inside the goroutine to stop the traversal.
I shudder to think what would happen to a bee node that attempts to pin a huge manifest like the OSM data set. It will kick off goroutine after goroutine until it likely runs out of memory and crashes. I believe there should be an upper limit on how many concurrent fetches are running, throttling the concurrency via a queue or something. Just take a sacrificial node on the mainnet and try to pin:
it's capped at 100 parallel retrieval requests
Ah, I see the weighted semaphore now! That's what I get for commenting before actually studying, rather than skimming, the code!
Checklist
Description
closes #4427
Open API Spec Version Changes (if applicable)
Motivation and Context (Optional)
Related Issue (Optional)
Screenshots (if appropriate):