sf data is unable to retrieve huge jobs #2799
Labels
investigating: We're actively investigating this issue
validated: Version information for this issue has been validated
Summary
When trying to retrieve a very large number of records using bulk data, the CLI just dies. I think the reason is that it relies entirely on loading the data into memory. In cases like this, one solution could be to rely on the filesystem to store the retrieved data instead; a sketch of that approach follows.
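As a rough illustration of the filesystem approach, here is a minimal TypeScript sketch that pipes the Bulk API 2.0 query results endpoint straight to a file with backpressure, instead of buffering everything in memory. The helper name, API version, and credential plumbing are assumptions for the example, not code from sf itself:

```ts
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Hypothetical helper: instanceUrl, accessToken, and jobId are placeholders,
// and v61.0 is just an example API version.
async function streamResultsToFile(
  instanceUrl: string,
  accessToken: string,
  jobId: string,
  outPath: string
): Promise<void> {
  const res = await fetch(
    `${instanceUrl}/services/data/v61.0/jobs/query/${jobId}/results`,
    { headers: { Authorization: `Bearer ${accessToken}`, Accept: 'text/csv' } }
  );
  if (!res.ok || !res.body) throw new Error(`Request failed: HTTP ${res.status}`);
  // pipeline applies backpressure, so memory use stays bounded by the streams'
  // highWaterMark no matter how many records the job produced.
  await pipeline(Readable.fromWeb(res.body as any), createWriteStream(outPath));
}
```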
Steps To Reproduce
Create a bulk query job that retrieves millions of records (5M+) across a lot of fields
Try to download the results using sf data query resume
Expected result
Retrieve all the data
Actual result
The process just dies
❯ sf data query resume --bulk-query-id 750Jz00000K1nZRIAZ --target-org rep-prod-plat --result-format csv > cases.csv
(node:37340) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unpipe listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)
(node:37340) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
(node:37340) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
(node:37340) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 finish listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
(node:37340) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 drain listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
System Information
zsh
Additional information
In various tests I have found that, when requesting big chunks, the org sometimes just stops sending the result after the query has finished processing; you can see this because the amount of data downloaded freezes. Whether this happens depends on the number of rows and fields you are retrieving; for us, chunks of around 1 million records have not caused any problems.
Some rough numbers: retrieving records 1M at a time, the process takes only 1-2 minutes.
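For anyone who needs the workaround today, the 1M-at-a-time retrieval can be scripted directly against the Bulk API 2.0 results endpoint, which pages via the maxRecords query parameter and the Sforce-Locator response header. This is only a sketch; the org URL, token, and job id are placeholder assumptions:

```ts
import { appendFileSync, writeFileSync } from 'node:fs';

// Hypothetical placeholders; substitute real values for your org and job.
const instanceUrl = 'https://example.my.salesforce.com';
const accessToken = '<access token>';
const jobId = '750xx0000000000';

async function downloadInChunks(outPath: string): Promise<void> {
  writeFileSync(outPath, ''); // start from an empty file
  let locator: string | undefined;
  do {
    const url = new URL(`${instanceUrl}/services/data/v61.0/jobs/query/${jobId}/results`);
    url.searchParams.set('maxRecords', '1000000'); // roughly the 1M chunk size that worked for us
    if (locator) url.searchParams.set('locator', locator);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}`, Accept: 'text/csv' },
    });
    if (!res.ok) throw new Error(`Chunk request failed: HTTP ${res.status}`);
    // Each chunk lands on disk instead of accumulating in a buffer. Note that
    // every page starts with the CSV header row; de-duplicating it is omitted here.
    appendFileSync(outPath, await res.text());
    // Sforce-Locator is the string "null" once the last page has been served.
    const next = res.headers.get('Sforce-Locator');
    locator = next && next !== 'null' ? next : undefined;
  } while (locator);
}
```

Each response is written to disk as soon as it arrives, so the process never holds more than one chunk in memory.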