Add parallel read #508
Conversation
Walkthrough

The pull request introduces changes to two main files: `executorlib/standalone/hdf.py` and `notebooks/2-hpc-submission.ipynb`.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (6)
executorlib/standalone/hdf.py (3)
`42-42`: **SWMR mode enabled correctly, consider adding error handling**

The addition of SWMR mode is appropriate for enabling parallel reads. However, consider adding error handling for cases where the file might still be in the process of being written.
```diff
-    with h5py.File(file_name, "r", swmr=True) as hdf:
+    try:
+        with h5py.File(file_name, "r", swmr=True) as hdf:
+            data_dict = {}
+            if "function" in hdf:
+                data_dict["fn"] = cloudpickle.loads(np.void(hdf["/function"]))
+            else:
+                raise TypeError("Function not found in HDF5 file.")
+            if "input_args" in hdf:
+                data_dict["args"] = cloudpickle.loads(np.void(hdf["/input_args"]))
+            else:
+                data_dict["args"] = ()
+            if "input_kwargs" in hdf:
+                data_dict["kwargs"] = cloudpickle.loads(np.void(hdf["/input_kwargs"]))
+            else:
+                data_dict["kwargs"] = {}
+            return data_dict
+    except OSError as e:
+        raise IOError(f"Error reading HDF5 file in SWMR mode: {e}")
```
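A lighter-weight alternative to wrapping the whole read body in a `try` block is a generic retry helper that re-attempts the read when the file is still mid-write. The sketch below is a hedged illustration in plain Python: `read_fn` stands in for any zero-argument read callable (e.g. a lambda closing over the HDF5 read), and the retry counts and delays are assumptions, not values from this PR.

```python
import time


def read_with_retry(read_fn, attempts=3, delay=0.1):
    """Call read_fn(), retrying on OSError with exponential backoff.

    read_fn is any zero-argument callable performing the read; the name
    and default parameters here are illustrative only.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return read_fn()
        except OSError as error:  # h5py raises OSError for incomplete files
            last_error = error
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
    raise OSError(
        f"File not readable after {attempts} attempts: {last_error}"
    )
```

This keeps the retry policy in one place, so every read path (function, args, kwargs, output, queue ID) can share it instead of duplicating `try`/`except` blocks.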
`77-77`: **Consider atomic operations for queue ID access**

While SWMR mode is correctly implemented, accessing queue IDs in a parallel environment might need additional synchronization mechanisms to ensure consistency, especially if these IDs are used for job coordination.
Consider implementing a more robust queuing system (e.g., using a dedicated queue service or distributed lock) if queue IDs are critical for job coordination in your parallel processing workflow.
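Short of a dedicated queue service, a minimal cross-process guard can be built from an `O_EXCL` lock file. The sketch below is an assumption-laden illustration, not part of this PR: `lock_path` is hypothetical (in the executorlib cache it could live next to the HDF5 file being coordinated), and the timeout and poll interval are arbitrary defaults.

```python
import os
import time
from contextlib import contextmanager


@contextmanager
def exclusive_lock(lock_path, timeout=10.0, poll=0.05):
    """Hold a cross-process lock backed by an O_EXCL lock file.

    os.O_CREAT | os.O_EXCL makes file creation atomic: exactly one
    process can create the file, and it holds the lock until it
    removes the file again.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # we created the file, so we hold the lock
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"Could not acquire lock {lock_path}")
            time.sleep(poll)
    try:
        yield
    finally:
        os.close(fd)
        os.unlink(lock_path)  # release the lock for other processes
```

Any read or update of the queue ID would then happen inside `with exclusive_lock(...)`, making the access effectively atomic across processes on the same file system.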
Line range hint `42-77`: **Document SWMR mode implications and requirements**

The implementation of SWMR mode across all read operations is consistent and aligns with the parallel read objective. However, consider adding documentation about:
- The HDF5 version requirements for SWMR
- Performance implications of SWMR mode
- Limitations and constraints (e.g., file system requirements)
- Error handling recommendations for concurrent access scenarios
notebooks/2-hpc-submission.ipynb (3)
Line range hint `69-69`: **Update code examples in documentation**

The markdown cells contain code examples still using `backend="slurm_submission"`, but the notebook has moved to using Flux. Please update these examples to use `backend="flux_submission"` for consistency.

Also applies to: 77-77, 83-83
Line range hint `2-5`: **LGTM! Consider adding performance comparison**

The examples effectively demonstrate the transition to the Flux backend. Consider adding a performance comparison section between SLURM and Flux for common operations to help users understand the benefits of each approach.

Also applies to: 73-77
Line range hint `1009-1009`: **Enhance cache cleanup robustness**

The cache cleanup implementation could be improved:
- Add specific error handling for different failure scenarios
- Add logging for cleanup operations
- Consider implementing a cleanup policy (e.g., based on age or size)
```diff
 if os.path.exists(cache_dir):
     print(os.listdir(cache_dir))
     try:
         shutil.rmtree(cache_dir)
-    except OSError:
-        pass
+    except PermissionError as e:
+        print(f"Permission error while cleaning cache: {e}")
+    except FileNotFoundError as e:
+        print(f"Cache directory not found: {e}")
+    except OSError as e:
+        print(f"Error cleaning cache: {e}")
+    else:
+        print(f"Successfully cleaned cache directory: {cache_dir}")
```
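The age-based cleanup policy mentioned above could be sketched as follows. This is a hypothetical helper, not code from the notebook: it assumes each cache entry sits directly under `cache_dir`, and the one-week default is an arbitrary choice.

```python
import os
import shutil
import time


def prune_cache(cache_dir, max_age_seconds=7 * 24 * 3600):
    """Remove cache entries whose mtime is older than max_age_seconds.

    Returns the names of removed entries; errors on individual entries
    are logged and skipped rather than aborting the whole cleanup.
    """
    if not os.path.isdir(cache_dir):
        return []
    removed = []
    cutoff = time.time() - max_age_seconds
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        try:
            if os.path.getmtime(path) < cutoff:
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
                removed.append(name)
        except OSError as error:
            print(f"Skipping {path}: {error}")  # log and continue
    return removed
```

Unlike a blanket `shutil.rmtree(cache_dir)`, this keeps recent entries available for reuse while still bounding cache growth.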
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- `executorlib/standalone/hdf.py` (2 hunks)
- `notebooks/2-hpc-submission.ipynb` (1 hunk)
🔇 Additional comments (2)
executorlib/standalone/hdf.py (1)
`69-69`: **Verify SWMR behavior with concurrent writes**
The SWMR mode is correctly implemented. However, please verify that the output reading behavior works as expected when the output dataset is being written concurrently.
notebooks/2-hpc-submission.ipynb (1)
`1-1`: **Verify Flux kernel setup requirements**
The notebook now requires the Flux kernel instead of the standard Python kernel. Please ensure:
- Documentation is updated to include Flux kernel setup instructions
- CI/CD pipelines are updated to support the Flux kernel
Summary by CodeRabbit
New Features
Bug Fixes