
Add comprehensive error handling across ML-CaPsule #1207

Open
wants to merge 2 commits into master

Conversation

Harsh-Bajpay

🛡️ Add Comprehensive Error Handling

Hey team! I've added some robust error handling across our ML-CaPsule project. This should make our code more reliable and easier to debug.

🔍 What I Changed

  1. Created a new error handling utility (a rough sketch follows this list)

    • Added custom exceptions for ML-specific errors
    • Set up a nice logging system that writes to both file and console
    • Made a handy decorator for consistent error handling
  2. Added error checks in our prediction models

    • Data validation (no more sneaky negative ages!)
    • Input shape verification
    • Missing value detection
    • Better error messages when things go wrong
  3. Made model evaluation more robust

    • Dimension checks for predictions
    • Better cross-validation error handling
    • More informative error messages
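
For easier review, here's a rough sketch of the shape the utility takes. This is a minimal illustration rather than the exact code in the PR: names like `MLCapsuleError`, `DataValidationError`, `handle_errors`, and `validate_patient_data` are assumed for the example, and the checks mirror the bullets above.

```python
import functools
import logging

import pandas as pd


def get_logger(name: str = "ml_capsule", log_file: str = "ml_capsule.log") -> logging.Logger:
    """Logger that writes to both a file and the console."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid adding duplicate handlers on repeated imports
        logger.setLevel(logging.INFO)
        fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        for handler in (logging.FileHandler(log_file), logging.StreamHandler()):
            handler.setFormatter(fmt)
            logger.addHandler(handler)
    return logger


logger = get_logger()


class MLCapsuleError(Exception):
    """Base class for ML-CaPsule-specific errors."""


class DataValidationError(MLCapsuleError):
    """Raised when input data fails a validation check."""


def handle_errors(func):
    """Decorator that logs any exception with consistent context, then re-raises it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            logger.error("%s failed: %s", func.__name__, exc)
            raise
    return wrapper


@handle_errors
def validate_patient_data(df: pd.DataFrame, feature_columns: list) -> pd.DataFrame:
    """Run the basic checks described above before data reaches a model."""
    missing_cols = [c for c in feature_columns if c not in df.columns]
    if missing_cols:
        raise DataValidationError(f"Missing expected columns: {missing_cols}")
    if df[feature_columns].isnull().any().any():
        raise DataValidationError("Input contains missing values")
    if "age" in feature_columns and (df["age"] < 0).any():
        raise DataValidationError("Found negative values in the 'age' column")
    return df
```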

🎯 Why These Changes?

I kept running into situations where our models would fail silently or give weird results. This should help us:

  • Catch problems early
  • Figure out what went wrong more easily
  • Keep our data clean
  • Make debugging less of a headache

📝 Testing

I've tested this locally with a few scenarios:

  • Missing data
  • Invalid inputs
  • Wrong file paths
  • Model training failures

Everything seems to work as expected, but would love another pair of eyes on this!
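
For instance, the invalid-input scenario boils down to something like this (reusing the assumed names from the sketch above, with a made-up two-row DataFrame):

```python
import pandas as pd

# Hypothetical invalid input: one row has a negative age, so validation should fail loudly
bad_rows = pd.DataFrame({"age": [34, -2], "glucose": [120, 145]})

try:
    validate_patient_data(bad_rows, feature_columns=["age", "glucose"])
except DataValidationError as exc:
    print(f"Rejected as expected: {exc}")
```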

📚 Notes

  • Added logging to ml_capsule.log
  • You might need to create the log file first
  • Let me know if the error messages are clear enough

Let me know what you think! Happy to make any adjustments needed. 😊


Thank you for submitting your pull request! 🙌 We'll review it as soon as possible. If there are any specific instructions or feedback regarding your PR, we'll provide them here. Thanks again for your contribution! 😊

@Niketkumardheeryan (Owner)

Hi @Harsh-Bajpay, try to add a .ipynb file instead of the .py, with proper documentation.

@Harsh-Bajpay (Author) left a comment


📓 Add Jupyter Notebook for Model Evaluation

Hi @Niketkumardheeryan ,

I've created a Jupyter Notebook (model_evaluation.ipynb) that replaces the previous utils/model_evaluation.py file. This notebook includes:

  • Structured Documentation: Each function is documented with clear explanations of parameters and return values.
  • Error Handling: The notebook retains the error handling functionality to ensure robustness.
  • Example Usage: An example section demonstrates how to use the evaluation and cross-validation functions with a sample dataset (a condensed sketch follows this list).
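
For reference, here's roughly what those two helpers look like. The function names, the RandomForestClassifier, and the breast-cancer sample dataset are illustrative choices for this comment, not necessarily what the notebook ships with:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score


def evaluate_model(y_true, y_pred) -> dict:
    """Check that predictions line up with the labels, then report accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    if y_true.shape != y_pred.shape:
        raise ValueError(f"Shape mismatch: y_true {y_true.shape} vs y_pred {y_pred.shape}")
    return {"accuracy": accuracy_score(y_true, y_pred)}


def cross_validate_model(model, X, y, cv: int = 5) -> dict:
    """Run cross-validation and re-raise failures with a clearer message."""
    try:
        scores = cross_val_score(model, X, y, cv=cv)
    except ValueError as exc:
        raise ValueError(f"Cross-validation failed (cv={cv}): {exc}") from exc
    return {"mean_score": scores.mean(), "std_score": scores.std()}


# Example usage with a small sample dataset
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=42)
model.fit(X, y)
print(evaluate_model(y, model.predict(X)))
print(cross_validate_model(model, X, y))
```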

This format should make it easier for users to understand and interact with the model evaluation process. Let me know if you have any feedback or further changes!

Thanks! 😊
