
Cannot execute verification algorithms but PGD attack #84

Open
ytsao opened this issue Dec 17, 2024 · 0 comments
ytsao commented Dec 17, 2024

Hi,

I am trying to use alpha-beta-CROWN to verify a simple fully-connected neural network.
The network structure is defined below: a normalization layer followed by several linear and ReLU layers.
However, when I used alpha-beta-CROWN to verify it on the MNIST dataset, I got this error:

    alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/base.py", line 446, in broadcast_backward
        if shape[i] == 1 and A.shape[i + 1] != 1:
    IndexError: tuple index out of range

Afterwards, I checked the code in auto_LiRPA/operators/base.py and found a comment saying:

    # Skip the batch dimension.
    # FIXME (05/11/2022): the following condition is not always correct.
    # We should not rely on checking dimension is "1" or not.

So, if the problem is with the batch dimension, how can I address it so that my network can be verified? Thank you!
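For context, here is a minimal pure-Python sketch of how this kind of IndexError arises when a shape tuple has fewer dimensions than the bound matrix it is compared against. The shapes below are hypothetical illustrations, not taken from the actual auto_LiRPA internals:

```python
# Hypothetical shapes: a 1-D parameter shape vs. a 5-D bound matrix A.
shape = (10,)                 # e.g. the shape of a bias-like buffer
A_shape = (9, 1, 10, 28, 28)  # e.g. (spec, batch, *node_shape)

def broadcast_backward_check(shape, A_shape):
    """Mimics the failing loop: i runs over A's non-batch dimensions,
    but `shape` may be shorter than that range."""
    for i in range(len(A_shape) - 1):
        if shape[i] == 1 and A_shape[i + 1] != 1:  # raises once i >= len(shape)
            pass

try:
    broadcast_backward_check(shape, A_shape)
except IndexError as e:
    print("IndexError:", e)  # tuple index out of range
```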

import numpy as np
import torch
import torch.nn as nn

class Normalization(torch.nn.Module):
    def __init__(self, device, mean=0.1307, sigma=0.3081):
        super(Normalization, self).__init__()
        # Accept either a scalar or a per-channel list of statistics.
        mean = np.array(mean) if isinstance(mean, list) else np.array([mean])
        sigma = np.array(sigma) if isinstance(sigma, list) else np.array([sigma])

        # Registered as non-trainable parameters (requires_grad=False).
        self.mean = torch.nn.Parameter(torch.FloatTensor(
            mean).view((1, -1, 1, 1)).to(device), False)
        self.sigma = torch.nn.Parameter(torch.FloatTensor(
            sigma).view((1, -1, 1, 1)).to(device), False)

    def forward(self, x):
        return (x - self.mean) / self.sigma

class ORCA_mnist_5x100_DiffAI(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.normalization = Normalization(device="cpu", mean=0.1307, sigma=0.3081)
        self.layers = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(28*28, 100),
            torch.nn.ReLU(),
            torch.nn.Linear(100, 100),
            torch.nn.ReLU(),
            torch.nn.Linear(100, 100),
            torch.nn.ReLU(),
            torch.nn.Linear(100, 100),
            torch.nn.ReLU(),
            torch.nn.Linear(100, 100),
            torch.nn.ReLU(),
            torch.nn.Linear(100, 10)
        )
    
    def forward(self, x):
        x = self.normalization(x)
        logits = self.layers(x)
        return logits
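For reference, the (1, -1, 1, 1) view in Normalization makes the per-channel statistics broadcast across the batch and spatial dimensions of a 4-D input. A small numpy sketch of the same broadcasting (numpy and torch follow the same rules here):

```python
import numpy as np

# Scalar MNIST statistics reshaped to (1, C, 1, 1), as in the model above.
mean = np.array([0.1307]).reshape(1, -1, 1, 1)
sigma = np.array([0.3081]).reshape(1, -1, 1, 1)

x = np.random.rand(2, 1, 28, 28)   # (batch, channel, height, width)
y = (x - mean) / sigma             # broadcasts over batch, H, W

assert y.shape == x.shape          # output keeps the input's 4-D shape
```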

The following is my configuration for this example.

general:
  device: cpu  # Select device to run verifier, cpu or cuda (GPU).
  complete_verifier: bab 
  enable_incomplete_verification: true 
  buffer_has_batchdim: false
model:
  name: ORCA_mnist_5x100_DiffAI  # Model name. Will be evaluated as a python statement.
  path: models/orca/fc_5x100.pth  # Load pretrained model from this specified path.
  input_shape: (-1, 1, 28, 28)
data:
  start: 0  # Start from the i-th property in specified dataset.
  end: 20  # End with the (i-1)-th property in the dataset.
  mean: 0.0  # Mean vector used in data preprocessing.
  std: 1.0  # Std vector used in data preprocessing.
  dataset: MNIST  # Dataset name (only if not using specifications from a .csv file). Dataset must be defined in utils.py. For customized data, checkout custom/custom_model_data.py.
  #data_idx_file: exp_configs/orca/mnist_0.txt  # A text file with a list of example IDs to run.
specification:
  type: lp  # Type of verification specification. "lp" = L_p norm, "box" = element-wise lower and upper bound provided by dataloader.
  robustness_type: verified-acc  # For robustness verification: verify against all labels ("verified-acc" mode), or just the runnerup labels ("runnerup" mode), or using a specified label in dataset ("specify-target" mode, only used for oval20). Not used when a VNNLIB spec is used.
  norm: .inf  # Lp-norm for epsilon perturbation in robustness verification (1, 2, inf).
  epsilon: 0.01  # Set perturbation size (Lp norm). If not set, a default value may be used based on dataset loader.
  epsilon_min: 0.0
attack:
  pgd_order: skip
  pgd_steps: 100
  pgd_restarts: 30
solver:
  batch_size: 2048 
  alpha-crown:
    iteration: 100
    lr_alpha: 0.1
  beta-crown:
    iteration: 20
    lr_beta: 0.05
    lr_alpha: 0.01
bab:
  timeout: 120 
  branching:
    reduceop: min 
    method: kfsb
    candidates: 3
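For reference, with norm: .inf and epsilon: 0.01 the specification perturbs every input coordinate independently within ±epsilon. A numpy sketch of the resulting element-wise input bounds; clipping to [0, 1] is an assumption about the pixel domain, not something the config states:

```python
import numpy as np

eps = 0.01                       # matches `epsilon` in the config above
x = np.random.rand(1, 1, 28, 28)

# Element-wise L_inf ball around x, clipped to the valid pixel range [0, 1].
lb = np.clip(x - eps, 0.0, 1.0)
ub = np.clip(x + eps, 0.0, 1.0)

assert np.all(lb <= x) and np.all(x <= ub)
assert np.all(ub - lb <= 2 * eps + 1e-12)
```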