Secure Machine Learning

A collection of related papers from top-tier security conferences. We mainly focus on the security problems, both privacy and integrity, that arise when machine learning models are trained or deployed on untrustworthy cloud platforms. These problems have become pressing with the rapid development of Machine Learning-as-a-Service (MLaaS).

Secure Training

Attacks

S&P

    2018 Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

CCS

    2018 Machine Learning with Membership Privacy using Adversarial Regularization

USENIX

    2018 With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning
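
The poisoning setting studied in the S&P 2018 paper above can be illustrated with a toy example: an attacker who controls a small fraction of the training set injects points with adversarial labels to shift the learned model. A minimal numpy sketch; the data, the injection strategy, and all constants here are illustrative, not the paper's optimization-based attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y = 2x + noise
X_clean = rng.uniform(0.0, 1.0, size=(100, 1))
y_clean = 2.0 * X_clean[:, 0] + rng.normal(0.0, 0.1, size=100)

# Attacker injects 10% poisoned points with labels chosen to tilt the fit
X_poison = np.full((10, 1), 1.0)                   # cluster at the domain edge
y_poison = np.full(10, -5.0)                       # adversarial labels

def fit_ols(X, y):
    """Ordinary least squares with a bias term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

print("clean fit (slope, bias):   ", fit_ols(X_clean, y_clean))
print("poisoned fit (slope, bias):", fit_ols(np.vstack([X_clean, X_poison]),
                                             np.concatenate([y_clean, y_poison])))
```

Even this crude injection moves the slope noticeably; the paper replaces the hand-picked poison points with an optimization over their placement.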

Detection and Defence

S&P

    2018 Locally Differentially Private Frequent Itemset Mining

    2017 Is Interaction Necessary for Distributed Private Learning?
    2017 Pyramid: Enhancing Selectivity in Big Data Protection with Count Featurization
    2017 SecureML: A System for Scalable Privacy-Preserving Machine Learning

    2016 Deep Learning with Differential Privacy

USENIX

    2016 Oblivious Multi-Party Machine Learning on Trusted Processors
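
The core mechanism behind Deep Learning with Differential Privacy (DP-SGD) is per-example gradient clipping followed by Gaussian noise. A minimal sketch on logistic regression, assuming a numpy-only setting; the paper's privacy accounting (the moments accountant) is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each per-example gradient to L2 norm <= `clip`,
    sum, add Gaussian noise with std `noise_mult * clip`, average, descend."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))              # predicted probability
        g = (p - y) * x                                # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)     # norm clipping
        grads.append(g)
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    g_noisy = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * g_noisy

# Toy usage: a few noisy steps on random data
X = rng.normal(size=(32, 3))
y = rng.integers(0, 2, size=32).astype(float)
w = np.zeros(3)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```

Clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee.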

Secure Deployment

Attacks

S&P

    2018 Stealing Hyperparameters in Machine Learning

    2017 Membership Inference Attacks Against Machine Learning Models

CCS

    2018 Model-Reuse Attacks on Deep Learning Systems
    2018 Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations

    2017 Evading Classifiers by Morphing in the Dark
    2017 Machine Learning Models that Remember Too Much
    2017 DolphinAttack: Inaudible Voice Commands
    2017 Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
    2017 Towards Evaluating the Robustness of Neural Networks

    2016 SFADiff: Automated Evasion Attacks and Fingerprinting Using Black-box Differential Automata Learning

USENIX

    2018 When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks [github](https://github.com/sdsatumd/fail)

    2016 Stealing Machine Learning Models via Prediction APIs
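
Stealing Machine Learning Models via Prediction APIs shows that the confidence scores an API returns can leak model parameters; for logistic regression the attack reduces to solving a linear system. A toy sketch of that equation-solving idea (the victim model and query budget are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# The victim: a black-box "prediction API" hiding a logistic regression model
w_secret = np.array([1.5, -2.0, 0.5])
def victim_api(X):
    return 1.0 / (1.0 + np.exp(-X @ w_secret))     # returns confidence scores

# Attacker: query with random inputs, invert the sigmoid on the returned
# confidences, and solve the resulting linear system for the parameters
X_queries = rng.normal(size=(500, 3))
conf = victim_api(X_queries)
logits = np.log(conf / (1.0 - conf))               # sigmoid^-1(p) = X @ w
w_stolen, *_ = np.linalg.lstsq(X_queries, logits, rcond=None)
print("secret:", w_secret)
print("stolen:", w_stolen)                         # recovered almost exactly
```

Returning full confidence scores rather than hard labels is what makes this exact recovery possible, which is why the paper discusses truncating API outputs as a mitigation.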

Detection and Defence

S&P

    2018 AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

    2016 Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

CCS

    2018 Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach

    2017 MagNet: A Two-Pronged Defense against Adversarial Examples
    2017 Oblivious Neural Network Predictions via MiniONN Transformations

USENIX

    2018 Formal Security Analysis of Neural Networks using Symbolic Intervals
    2018 Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
    2018 GAZELLE: A Low Latency Framework for Secure Neural Network Inference
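
MagNet, listed above, detects adversarial examples by flagging inputs that a model of the clean data manifold reconstructs poorly. A minimal sketch that substitutes a PCA projection for the paper's autoencoder; the component count and threshold percentile are illustrative:

```python
import numpy as np

def fit_detector(X_train, k=10, percentile=99):
    """Fit a detector on clean data: keep the top-k principal components
    and flag any input whose reconstruction error exceeds the given
    percentile of the clean errors."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    V = Vt[:k].T                                   # (d, k) projection basis

    def recon_error(x):
        z = (x - mu) @ V                           # project onto the manifold
        return np.linalg.norm((x - mu) - z @ V.T)  # distance back to input

    tau = np.percentile([recon_error(x) for x in X_train], percentile)
    return lambda x: recon_error(x) > tau          # True => flag as adversarial

# Toy usage: an off-manifold input is flagged, a clean-looking one is not
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 50))
is_adversarial = fit_detector(X_train)
print(is_adversarial(rng.normal(size=50)))         # likely False
print(is_adversarial(np.full(50, 10.0)))           # True: far off-manifold
```

The intuition carries over to the paper's autoencoder version: adversarial perturbations push inputs off the clean data manifold, so reconstruction error serves as a detection signal.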
