
Symbolic Interval Analysis Library

Introduction

Symbolic interval analysis is a formal analysis method for certifying the robustness of neural networks. A safety property of a neural network can be represented as a bounded input range, a target network, and a desired output behavior. Symbolic interval analysis relaxes the network to an interval-based version that can directly take an arbitrary input interval and return an output interval. The analysis is sound: it always over-approximates the ground-truth output range of the network over the given input range. To make the estimates tighter, the dependencies among the network inputs can be kept as symbolic intervals, which the interval-based network propagates layer by layer to produce an output symbolic interval. The resulting output interval (or symbolic interval) can then be used to verify the safety property. Symbolic interval analysis was first proposed in ReluVal and further improved in Neurify; simple examples can be found in the ReluVal paper.
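
To make the idea concrete, here is a minimal, self-contained sketch of naive interval propagation through a small network (illustrative only, not this library's API). Each bound uses the worst-case endpoint of the input interval; symbolic intervals tighten these bounds further by keeping the linear dependencies on the inputs.

import torch

# Push an input interval [l, u] through one affine layer: split the weights
# by sign so each output bound uses the worst-case input endpoint.
def affine_interval(l, u, W, b):
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    new_l = l @ W_pos.t() + u @ W_neg.t() + b
    new_u = u @ W_pos.t() + l @ W_neg.t() + b
    return new_l, new_u

torch.manual_seed(0)
W1, b1 = torch.randn(3, 2), torch.zeros(3)
W2, b2 = torch.randn(1, 3), torch.zeros(1)

# Input interval: x in [x0 - eps, x0 + eps]
x0, eps = torch.tensor([[0.5, -0.2]]), 0.1
l, u = x0 - eps, x0 + eps

l, u = affine_interval(l, u, W1, b1)
l, u = l.clamp(min=0), u.clamp(min=0)  # ReLU is monotonic, so clamp both bounds
l, u = affine_interval(l, u, W2, b2)
print(f"output interval: [{l.item():.4f}, {u.item():.4f}]")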

Applications of symbolic interval analysis

Symbolic interval analysis has a wide range of applications.

Formal verification of neural networks

Symbolic interval analysis can be combined with iterative input bisection. We have presented ReluVal (code available at https://github.com/tcwangshiqi-columbia/ReluVal), which is currently the state-of-the-art verifier for small networks such as those in the ACAS Xu benchmark.
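
As a rough illustration of that combination, the sketch below bisects the widest input dimension until every sub-interval is proved safe or a depth budget runs out. Here output_bounds (any sound over-approximation, such as symbolic interval analysis) and is_safe (a check of the desired output behavior) are hypothetical stand-ins, not ReluVal's actual interface.

def bisect_verify(l, u, output_bounds, is_safe, depth=0, max_depth=20):
    # l, u: lists of per-dimension lower/upper input bounds.
    out_l, out_u = output_bounds(l, u)  # sound over-approximation of the outputs
    if is_safe(out_l, out_u):           # over-approximation already satisfies the property
        return True
    if depth >= max_depth:              # inconclusive within the splitting budget
        return False
    # Bisect the widest input dimension and verify both halves.
    d = max(range(len(l)), key=lambda i: u[i] - l[i])
    mid = (l[d] + u[d]) / 2.0
    left_u = u[:d] + [mid] + u[d + 1:]
    right_l = l[:d] + [mid] + l[d + 1:]
    return (bisect_verify(l, left_u, output_bounds, is_safe, depth + 1, max_depth) and
            bisect_verify(right_l, u, output_bounds, is_safe, depth + 1, max_depth))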

Symbolic interval analysis is also a key component of Neurify (code available at https://github.com/tcwangshiqi-columbia/Neurify), one of the state-of-the-art verifiers for large convolutional networks (over 10,000 ReLUs) on various safety properties. Specifically, the analysis identifies the key nonlinear ReLUs, after which a linear solver can be called to verify large networks efficiently.
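
That refinement step can be illustrated as follows (a simplified sketch, not Neurify's actual code): a ReLU only loses precision when its pre-activation interval straddles zero, so those "unstable" neurons are the natural split candidates before handing the resulting linear subproblems to a solver.

import torch

def unstable_relus(pre_l, pre_u):
    # A ReLU whose pre-activation bounds straddle zero is neither provably
    # active nor provably inactive, so its relaxation is imprecise.
    return (pre_l < 0) & (pre_u > 0)

pre_l = torch.tensor([-0.3, 0.2, -1.5])
pre_u = torch.tensor([0.4, 0.9, -0.1])
print(unstable_relus(pre_l, pre_u))  # tensor([ True, False, False])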

Training verifiable robust networks

To make a network both robust and easy to verify, symbolic interval analysis can be incorporated into the training process. We present MixTrain for efficiently improving the verifiable robustness of trained networks; it is currently one of the state-of-the-art scalable certifiable training methods.
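
In spirit, such training mixes a standard loss with a verifiable-robustness loss derived from the interval bounds. The sketch below is a simplification of that idea; robust_logit_bounds is a hypothetical stand-in for an interval analysis that returns worst-case logits under an eps-ball, not MixTrain's actual interface.

import torch.nn.functional as F

def mixed_loss(model, x, y, eps, alpha, robust_logit_bounds):
    # Standard loss on clean inputs.
    clean_loss = F.cross_entropy(model(x), y)
    # Worst-case logits from the interval analysis give a sound upper bound
    # on the loss over the whole eps-ball around x.
    worst_logits = robust_logit_bounds(model, x, y, eps)
    robust_loss = F.cross_entropy(worst_logits, y)
    # alpha trades off clean accuracy against verifiable robustness.
    return (1 - alpha) * clean_loss + alpha * robust_loss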

Enhancing gradient-based attacks

Furthermore, we present the interval attack (code available at https://github.com/tcwangshiqi-columbia/Interval-Attack), which applies symbolic interval analysis to enhance state-of-the-art gradient-based attacks such as PGD and CW.

Usage

Prerequisites

The code is tested with Python 3 and PyTorch v1.0, with and without CUDA.

git clone https://github.com/tcwangshiqi-columbia/symbolic_interval

Examples

One can run the test file with symbolic interval analysis:

cd symbolic_interval
python test.py --method sym

To compare against existing methods, one can run:

cd symbolic_interval
git clone https://github.com/locuslab/convex_adversarial
python test.py --compare_all

Compared to the previous state-of-the-art ConvexDual method, symbolic interval analysis is at least twice as fast on both CPU and GPU.

APIs

from symbolic_interval.symbolic_network import Interval_network
from symbolic_interval.symbolic_network import sym_interval_analyze
from symbolic_interval.symbolic_network import naive_interval_analyze
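
A hypothetical usage sketch follows; the exact signatures may differ, so see test.py in this repository for authoritative examples. Here net, X, y, and epsilon are placeholders for your model, a batch of inputs and labels, and the L-infinity perturbation radius.

import torch
from symbolic_interval.symbolic_network import sym_interval_analyze

net = torch.nn.Sequential(
    torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
X = torch.randn(8, 784)
y = torch.randint(0, 10, (8,))
epsilon = 0.01

# Assumed call pattern: analyze the network under an eps-ball around X and
# obtain a verifiable loss and error rate (argument order is an assumption).
loss, err = sym_interval_analyze(net, epsilon, X, y, False)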

Reporting bugs

If you find any issues with the code or have any questions about symbolic interval analysis, please contact Shiqi Wang ([email protected]).

Citing symbolic interval analysis library

@inproceedings{shiqi2018reluval,
	author = {Shiqi Wang and Kexin Pei and Justin Whitehouse and Junfeng Yang and Suman Jana},
	title = {Formal Security Analysis of Neural Networks using Symbolic Intervals},
	booktitle = {27th {USENIX} Security Symposium ({USENIX} Security 18)},
	year = {2018},
	address = {Baltimore, MD},
	url = {https://www.usenix.org/conference/usenixsecurity18/presentation/wang-shiqi},
	publisher = {{USENIX} Association},
}

@inproceedings{wang2018efficient,
	author = {Wang, Shiqi and Pei, Kexin and Whitehouse, Justin and Yang, Junfeng and Jana, Suman},
	title = {Efficient Formal Safety Analysis of Neural Networks},
	booktitle = {Advances in Neural Information Processing Systems},
	pages = {6367--6377},
	year = {2018},
}

Contributors

License

Copyright (C) 2018-2019 by its authors and contributors and their institutional affiliations, under the terms of the modified BSD license.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
