Introduction
LeakPro was created in response to growing concerns about privacy breaches in machine learning. Trained models can unintentionally expose sensitive information from their training data through attacks such as membership inference, data reconstruction, and model inversion. LeakPro addresses these challenges by offering a framework to stress-test models and mitigate the associated risks.
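To make the threat concrete, here is a minimal, self-contained sketch of the simplest membership inference attack: exploiting the fact that an overfit model achieves lower loss on its training points than on unseen points. This is an illustrative toy (the 1-nearest-neighbour "model", the data, and the threshold are all invented for this example), not LeakPro's actual API or attack suite.

```python
import random

random.seed(0)

# Toy data: members (the training set) and non-members drawn
# from the same distribution.
def sample(n):
    return [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(n)]

members = sample(50)
non_members = sample(50)

# A deliberately overfit "model": a 1-nearest-neighbour regressor
# memorises its training set, so its error on members is exactly zero.
def predict(x, train):
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]

def loss(point, train):
    x, y = point
    return (predict(x, train) - y) ** 2

# Threshold attack: a suspiciously low loss suggests "member".
threshold = 1e-6
guesses = [loss(p, members) < threshold for p in members + non_members]
truth = [True] * len(members) + [False] * len(non_members)
accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(f"attack accuracy: {accuracy:.2f}")
```

On this toy setup the attacker recovers membership almost perfectly, because memorisation leaves a measurable loss gap between training and non-training points; real attacks refine the same principle against models that overfit far more subtly.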
Key motivations include:
- Understanding how private data leaks from machine learning models.
- Developing tools to evaluate privacy threats.
- Enabling researchers and developers to improve model privacy and robustness.
LeakPro envisions a world where developers can proactively test their machine learning models against privacy risks before deploying them in real-world applications. By providing an open-source platform, LeakPro seeks to:
- Advance Research: Enable cutting-edge research in privacy-preserving AI.
- Improve Security: Support organizations in securing their ML pipelines.
- Promote Collaboration: Foster a community of developers, researchers, and industry partners.
LeakPro is developed through a collaborative effort involving key partners in academia, industry, and research organizations. Current partners include:
- AI Sweden: Driving innovation in AI research and implementation.
- RISE Research Institutes of Sweden: Leading research initiatives in technology.
- Sahlgrenska University Hospital: Providing healthcare-focused data privacy insights.
- Region Halland: Supporting public-sector healthcare services.
- AstraZeneca: Ensuring responsible AI adoption in the pharmaceutical industry.
- Syndata: Advancing synthetic data generation for privacy protection.
- Scaleout Systems: Specializing in federated learning and data sharing technologies.
Stay tuned for more updates as LeakPro evolves. For a deeper dive, explore the other sections of this Wiki!