Tested Python version: 2.7.5
Spark/Python will raise an error when dumping files larger than 2GB:
OverflowError: size does not fit in an int.
This occurs when using zlib.
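For reference, a minimal sketch that triggers the error outside Spark (it assumes a machine with enough free RAM to hold a byte string just over 2GB):

```python
import zlib

# Minimal reproduction sketch. On CPython < 2.7.13 the compress call below
# raises "OverflowError: size does not fit in an int" because the zlib
# bindings stored buffer lengths in a C int.
payload = b"\x00" * (2 * 1024 ** 3 + 1)  # just over 2 GiB of zero bytes
compressed = zlib.compress(payload)      # OverflowError on affected versions
```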
Related Works:
joblib issue #122
joblib issue #300
python issue #23306
python issue #27130
For Python 2.x, this bug has been fixed in Python 2.7.13 and higher (Release Notes).
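If upgrading is not immediately possible, a defensive check along these lines (a sketch, not part of any library) at least makes the failure mode explicit:

```python
import sys

# Guard sketch: refuse to run on interpreters that still carry the zlib
# 2GB bug (fixed in CPython 2.7.13, per the release notes above).
if sys.version_info < (2, 7, 13):
    raise RuntimeError(
        "zlib can raise OverflowError on buffers larger than 2GB "
        "before Python 2.7.13; please upgrade the interpreter."
    )
```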
The OverflowError problem was solved after upgrading Python to 2.7.13.
Closing the issue.
Also, Spark has another limitation: the object being dumped can't be larger than 2GB when it is framed with struct.pack (see issue #6).
So the right fix is to keep any single file below 2GB; that limitation has nothing to do with zlib's bug.
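As an illustration of that workaround, here is a sketch built around a hypothetical write_framed helper (not Spark's actual serializer): it splits a payload into sub-2GB frames so each length prefix still fits in the signed 32-bit int that struct.pack("!i", ...) can encode.

```python
import struct

MAX_FRAME = 2 ** 31 - 1  # struct.pack("!i", ...) cannot encode larger lengths

def write_framed(stream, payload, chunk_size=512 * 1024 ** 2):
    # Hypothetical helper: write the payload as a sequence of
    # length-prefixed frames, each well under the 2GB "!i" limit.
    assert chunk_size <= MAX_FRAME
    for start in range(0, len(payload), chunk_size):
        chunk = payload[start:start + chunk_size]
        stream.write(struct.pack("!i", len(chunk)))
        stream.write(chunk)
```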