Foolbox Attacks
Note
You can find a basic example notebook here.
Base Wrapper Class
The base class does not implement any attack. The Foolbox attack wrappers inherit from the BaseAttack class and have the same attributes.
class pepr.robustness.foolbox_wrapper.BaseAttack(attack_alias, epsilons, data, labels, attack_indices_per_target, target_models, foolbox_attack, pars_descriptors)
Base foolbox attack class implementing the logic for running the attack and generating a report.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- epsilons (iterable) – List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- attack_indices_per_target (numpy.ndarray) – Array of indices to attack per target model.
- target_models (iterable) – List of target models which should be tested.
- foolbox_attack (foolbox.attack.Attack) – The foolbox attack object which is wrapped in this class.
- pars_descriptors (dict) – Dictionary of attack parameters and their description shown in the attack report. Example: {“target”: “Contrast reduction target”} for the attribute named “target” of L2ContrastReductionAttack.
Attributes:
- attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) – Dictionary containing all needed parameters for the attack.
- data (numpy.ndarray) – Dataset with all training samples used in the given pentesting setting.
- labels (numpy.ndarray) – Array of all labels used in the given pentesting setting.
- target_models (iterable) – List of target models which should be tested.
- fmodels (iterable) – List of foolbox models converted from the target models.
- foolbox_attack (foolbox.attack.Attack) – The foolbox attack object which is wrapped in this class.
- attack_results (dict) – Dictionary storing the attack results:
  - raw (list): List of raw adversarial examples per target model.
  - clipped (list): The clipped adversarial examples. These are guaranteed to not be perturbed more than epsilon and thus are the actual adversarial examples you want to visualize. Note that some of them might not actually switch the class.
  - is_adv (list): Contains a boolean for each sample indicating whether it is truly adversarial, i.e., both misclassified and within the epsilon ball around the clean sample. For every target model, a tensorflow.Tensor of shape (epsilons, data).
  - success_rate (list): Percentage of misclassified adversarial examples per target model and epsilon.
  - l2_distance (list): Euclidean distance (L2 norm) between original and perturbed images for every single image per target model, epsilon and class (shape: (target_models, epsilons, classes, nb_records)).
  - avg_l2_distance (list): Average Euclidean distance (L2 norm) between original and perturbed images (epsilon is the upper bound) per target model and epsilon.
A minimal instantiation sketch follows below.
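The constructor signature above can be exercised directly by handing BaseAttack an arbitrary foolbox attack object. The following is only a sketch under stated assumptions: model_a, model_b, x_test and y_test are toy stand-ins for real trained target models and pentesting data, and the epsilon values are illustrative.

    import foolbox as fb
    import numpy as np
    import tensorflow as tf

    from pepr.robustness.foolbox_wrapper import BaseAttack

    # Toy stand-ins for trained target models and pentesting data (illustrative only).
    x_test = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y_test = np.random.randint(0, 10, size=100)
    model_a = tf.keras.Sequential(
        [tf.keras.layers.Flatten(input_shape=(32, 32, 3)), tf.keras.layers.Dense(10)]
    )
    model_b = tf.keras.models.clone_model(model_a)

    # Wrap a plain foolbox attack; pars_descriptors maps its parameters to
    # human-readable names for the report (descriptor example taken from above).
    base_attack = BaseAttack(
        attack_alias="contrast_reduction_baseline",
        epsilons=[0.1, 0.3, 1.0],
        data=x_test,
        labels=y_test,
        attack_indices_per_target=np.array([np.arange(50), np.arange(50)]),
        target_models=[model_a, model_b],
        foolbox_attack=fb.attacks.L2ContrastReductionAttack(target=0.5),
        pars_descriptors={"target": "Contrast reduction target"},
    )

In practice the concrete wrapper classes listed below are used instead; they derive from BaseAttack and are configured through an attack_pars dictionary.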
Foolbox Attack Wrappers
- L2ContrastReductionAttack – foolbox.attacks.L2ContrastReductionAttack wrapper class.
- VirtualAdversarialAttack – foolbox.attacks.VirtualAdversarialAttack wrapper class.
- DDNAttack – foolbox.attacks.DDNAttack wrapper class.
- L2ProjectedGradientDescentAttack – foolbox.attacks.L2ProjectedGradientDescentAttack wrapper class.
- LinfProjectedGradientDescentAttack – foolbox.attacks.LinfProjectedGradientDescentAttack wrapper class.
- L2BasicIterativeAttack – foolbox.attacks.L2BasicIterativeAttack wrapper class.
- LinfBasicIterativeAttack – foolbox.attacks.LinfBasicIterativeAttack wrapper class.
- L2FastGradientAttack – foolbox.attacks.L2FastGradientAttack wrapper class.
- LinfFastGradientAttack – foolbox.attacks.LinfFastGradientAttack wrapper class.
- L2AdditiveGaussianNoiseAttack – foolbox.attacks.L2AdditiveGaussianNoiseAttack wrapper class.
- L2AdditiveUniformNoiseAttack – foolbox.attacks.L2AdditiveUniformNoiseAttack wrapper class.
- L2ClippingAwareAdditiveGaussianNoiseAttack – foolbox.attacks.L2ClippingAwareAdditiveGaussianNoiseAttack wrapper class.
- L2ClippingAwareAdditiveUniformNoiseAttack – foolbox.attacks.L2ClippingAwareAdditiveUniformNoiseAttack wrapper class.
- LinfAdditiveUniformNoiseAttack – foolbox.attacks.LinfAdditiveUniformNoiseAttack wrapper class.
- L2RepeatedAdditiveGaussianNoiseAttack – foolbox.attacks.L2RepeatedAdditiveGaussianNoiseAttack wrapper class.
- L2RepeatedAdditiveUniformNoiseAttack – foolbox.attacks.L2RepeatedAdditiveUniformNoiseAttack wrapper class.
- L2ClippingAwareRepeatedAdditiveGaussianNoiseAttack – foolbox.attacks.L2ClippingAwareRepeatedAdditiveGaussianNoiseAttack wrapper class.
- L2ClippingAwareRepeatedAdditiveUniformNoiseAttack – foolbox.attacks.L2ClippingAwareRepeatedAdditiveUniformNoiseAttack wrapper class.
- LinfRepeatedAdditiveUniformNoiseAttack – foolbox.attacks.LinfRepeatedAdditiveUniformNoiseAttack wrapper class.
- InversionAttack – foolbox.attacks.InversionAttack wrapper class.
- BinarySearchContrastReductionAttack – foolbox.attacks.BinarySearchContrastReductionAttack wrapper class.
- LinearSearchContrastReductionAttack – foolbox.attacks.LinearSearchContrastReductionAttack wrapper class.
- L2CarliniWagnerAttack – foolbox.attacks.L2CarliniWagnerAttack wrapper class.
- NewtonFoolAttack – foolbox.attacks.NewtonFoolAttack wrapper class.
- EADAttack – foolbox.attacks.EADAttack wrapper class.
- GaussianBlurAttack – foolbox.attacks.GaussianBlurAttack wrapper class.
- L2DeepFoolAttack – foolbox.attacks.L2DeepFoolAttack wrapper class.
- LinfDeepFoolAttack – foolbox.attacks.LinfDeepFoolAttack wrapper class.
- SaltAndPepperNoiseAttack – foolbox.attacks.SaltAndPepperNoiseAttack wrapper class.
- LinearSearchBlendedUniformNoiseAttack – foolbox.attacks.LinearSearchBlendedUniformNoiseAttack wrapper class.
- BinarizationRefinementAttack – foolbox.attacks.BinarizationRefinementAttack wrapper class.
- BoundaryAttack – foolbox.attacks.BoundaryAttack wrapper class.
- L0BrendelBethgeAttack – foolbox.attacks.L0BrendelBethgeAttack wrapper class.
- L1BrendelBethgeAttack – foolbox.attacks.L1BrendelBethgeAttack wrapper class.
- L2BrendelBethgeAttack – foolbox.attacks.L2BrendelBethgeAttack wrapper class.
- LinfinityBrendelBethgeAttack – foolbox.attacks.LinfinityBrendelBethgeAttack wrapper class.
- FGM – alias of pepr.robustness.foolbox_wrapper.L2FastGradientAttack
- FGSM – alias of pepr.robustness.foolbox_wrapper.LinfFastGradientAttack
- L2PGD – alias of pepr.robustness.foolbox_wrapper.L2ProjectedGradientDescentAttack
- LinfPGD – alias of pepr.robustness.foolbox_wrapper.LinfProjectedGradientDescentAttack
- PGD – alias of pepr.robustness.foolbox_wrapper.LinfProjectedGradientDescentAttack
Note
The DatasetAttack is currently not supported by PePR.
class pepr.robustness.foolbox_wrapper.L2ContrastReductionAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ContrastReductionAttack wrapper class.
Attack description: Reduces the contrast of the input using a perturbation of the given size. A usage sketch follows the parameter list below.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- target (float): (optional) Target relative to the bounds from 0 (min) to 1 (max) towards which the contrast is reduced.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
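Reusing the toy data and target models from the BaseAttack sketch above, this wrapper could be configured as in the following sketch; the parameter values and the alias are illustrative.

    import numpy as np

    from pepr.robustness import foolbox_wrapper

    attack_pars = {
        "target": 0.5,                # optional contrast reduction target
        "epsilons": [0.1, 0.5, 1.0],  # the attack is evaluated for every epsilon
    }
    data_conf = {
        # attack the first 100 records of x_test on both target models
        "attack_indices_per_target": np.array([np.arange(100), np.arange(100)]),
    }

    attack = foolbox_wrapper.L2ContrastReductionAttack(
        "l2_contrast_reduction",  # attack_alias shown in the report
        attack_pars,
        x_test,
        y_test,
        data_conf,
        [model_a, model_b],
    )

    # Once the attack has been executed, the results described for BaseAttack
    # (e.g. attack.attack_results["success_rate"]) are available per target
    # model and epsilon.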
class pepr.robustness.foolbox_wrapper.VirtualAdversarialAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.VirtualAdversarialAttack wrapper class.
Attack description: Second-order gradient-based attack on the logits. The attack calculates an untargeted adversarial perturbation by performing an approximated second-order optimization step on the KL divergence between the unperturbed predictions and the predictions for the adversarial perturbation. This attack was originally introduced as the Virtual Adversarial Training method.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): Number of update steps.
- xi (float): (optional) L2 distance between original image and first adversarial proposal.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.DDNAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.DDNAttack wrapper class.
Attack description: The Decoupled Direction and Norm L2 adversarial attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_epsilon (float): (optional) Initial value for the norm/epsilon ball.
- steps (int): (optional) Number of steps for the optimization.
- gamma (float): (optional) Factor by which the norm will be modified: new_norm = norm * (1 ± gamma).
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2ProjectedGradientDescentAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ProjectedGradientDescentAttack wrapper class.
Attack description: L2 Projected Gradient Descent.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- rel_stepsize (float): (optional) Stepsize relative to epsilon.
- abs_stepsize (float): (optional) If given, it takes precedence over rel_stepsize.
- steps (int): (optional) Number of update steps to perform.
- random_start (bool): (optional) Whether the perturbation is initialized randomly or starts at zero.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfProjectedGradientDescentAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfProjectedGradientDescentAttack wrapper class.
Attack description: Linf Projected Gradient Descent. An illustrative attack_pars setup follows the parameter list below.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- rel_stepsize (float): (optional) Stepsize relative to epsilon.
- abs_stepsize (float): (optional) If given, it takes precedence over rel_stepsize.
- steps (int): (optional) Number of update steps to perform.
- random_start (bool): (optional) Whether the perturbation is initialized randomly or starts at zero.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
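A possible attack_pars dictionary for this wrapper is sketched below; every key except epsilons is optional and the concrete values are assumptions rather than library defaults.

    # Illustrative LinfProjectedGradientDescentAttack configuration.
    pgd_pars = {
        "rel_stepsize": 0.03,     # step size relative to epsilon
        "steps": 40,              # number of update steps
        "random_start": True,     # start at a random point in the epsilon ball
        "epsilons": [2 / 255, 8 / 255, 16 / 255],
    }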
class pepr.robustness.foolbox_wrapper.L2BasicIterativeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2BasicIterativeAttack wrapper class.
Attack description: L2 Basic Iterative Method.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- rel_stepsize (float): (optional) Stepsize relative to epsilon.
- abs_stepsize (float): (optional) If given, it takes precedence over rel_stepsize.
- steps (int): (optional) Number of update steps.
- random_start (bool): (optional) Controls whether to randomly start within allowed epsilon ball.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfBasicIterativeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfBasicIterativeAttack wrapper class.
Attack description: L-infinity Basic Iterative Method.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- rel_stepsize (float): (optional) Stepsize relative to epsilon.
- abs_stepsize (float): (optional) If given, it takes precedence over rel_stepsize.
- steps (int): (optional) Number of update steps.
- random_start (bool): (optional) Controls whether to randomly start within allowed epsilon ball.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2FastGradientAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2FastGradientAttack wrapper class.
Attack description: Fast Gradient Method (FGM).
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- random_start (bool): (optional) Controls whether to randomly start within allowed epsilon ball.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfFastGradientAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfFastGradientAttack wrapper class.
Attack description: Fast Gradient Sign Method (FGSM).
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- random_start (bool): (optional) Controls whether to randomly start within allowed epsilon ball.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2AdditiveGaussianNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2AdditiveGaussianNoiseAttack wrapper class.
Attack description: Samples Gaussian noise with a fixed L2 size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2AdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2AdditiveUniformNoiseAttack wrapper class.
Attack description: Samples uniform noise with a fixed L2 size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2ClippingAwareAdditiveGaussianNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ClippingAwareAdditiveGaussianNoiseAttack wrapper class.
Attack description: Samples Gaussian noise with a fixed L2 size after clipping.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2ClippingAwareAdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ClippingAwareAdditiveUniformNoiseAttack wrapper class.
Attack description: Samples uniform noise with a fixed L2 size after clipping.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfAdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfAdditiveUniformNoiseAttack wrapper class.
Attack description: Samples uniform noise with a fixed L-infinity size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2RepeatedAdditiveGaussianNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2RepeatedAdditiveGaussianNoiseAttack wrapper class.
Attack description: Repeatedly samples Gaussian noise with a fixed L2 size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- repeats (int): (optional) How often to sample random noise.
- check_trivial (bool): (optional) Check whether original sample is already adversarial.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2RepeatedAdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2RepeatedAdditiveUniformNoiseAttack wrapper class.
Attack description: Repeatedly samples uniform noise with a fixed L2 size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- repeats (int): (optional) How often to sample random noise.
- check_trivial (bool): (optional) Check whether original sample is already adversarial.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2ClippingAwareRepeatedAdditiveGaussianNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ClippingAwareRepeatedAdditiveGaussianNoiseAttack wrapper class.
Attack description: Repeatedly samples Gaussian noise with a fixed L2 size after clipping.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- repeats (int): (optional) How often to sample random noise.
- check_trivial (bool): (optional) Check whether original sample is already adversarial.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2ClippingAwareRepeatedAdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2ClippingAwareRepeatedAdditiveUniformNoiseAttack wrapper class.
Attack description: Repeatedly samples uniform noise with a fixed L2 size after clipping.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- repeats (int): (optional) How often to sample random noise.
- check_trivial (bool): (optional) Check whether original sample is already adversarial.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfRepeatedAdditiveUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfRepeatedAdditiveUniformNoiseAttack wrapper class.
Attack description: Repeatedly samples uniform noise with a fixed L-infinity size.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- repeats (int): (optional) How often to sample random noise.
- check_trivial (bool): (optional) Check whether original sample is already adversarial.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.InversionAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.InversionAttack wrapper class.
Attack description: Creates “negative images” by inverting the pixel values.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.BinarySearchContrastReductionAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.BinarySearchContrastReductionAttack wrapper class.
Attack description: Reduces the contrast of the input using a binary search to find the smallest adversarial perturbation.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- binary_search_steps (int): (optional) Number of iterations in the binary search. This controls the precision of the results.
- target (float): (optional) Target relative to the bounds from 0 (min) to 1 (max) towards which the contrast is reduced.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinearSearchContrastReductionAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinearSearchContrastReductionAttack wrapper class.
Attack description: Reduces the contrast of the input using a linear search to find the smallest adversarial perturbation.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- steps (int): (optional) Number of iterations in the linear search. This controls the precision of the results.
- target (float): (optional) Target relative to the bounds from 0 (min) to 1 (max) towards which the contrast is reduced.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2CarliniWagnerAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2CarliniWagnerAttack wrapper class.
Attack description: Implementation of the Carlini & Wagner L2 Attack. An illustrative attack_pars setup follows the parameter list below.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- binary_search_steps (int): (optional) Number of steps to perform in the binary search over the const c.
- steps (int): (optional) Number of optimization steps within each binary search step.
- stepsize (float): (optional) Stepsize to update the examples.
- confidence (float): (optional) Confidence required for an example to be marked as adversarial. Controls the gap between example and decision boundary.
- initial_const (float): (optional) Initial value of the const c with which the binary search starts.
- abort_early (bool): (optional) Stop inner search as soon as an adversarial example has been found. Does not affect the binary search over the const c.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
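An attack_pars dictionary for the Carlini & Wagner wrapper could be sketched as follows; all keys except epsilons are optional and the values shown are assumptions, not recommended defaults.

    # Illustrative L2CarliniWagnerAttack configuration.
    cw_pars = {
        "binary_search_steps": 9,  # binary search over the trade-off constant c
        "steps": 1000,             # optimization steps per binary search step
        "stepsize": 1e-2,          # update step size
        "confidence": 0.0,         # required margin to the decision boundary
        "initial_const": 1e-3,     # starting value of the constant c
        "abort_early": True,       # stop the inner search once an adversarial example is found
        "epsilons": [1.0],
    }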
class pepr.robustness.foolbox_wrapper.NewtonFoolAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.NewtonFoolAttack wrapper class.
Attack description: Implementation of the NewtonFool Attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): (optional) Number of update steps to perform.
- stepsize (float): (optional) Size of each update step.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.EADAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.EADAttack wrapper class.
Attack description: Implementation of the EAD Attack with EN Decision Rule.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- binary_search_steps (int): (optional) Number of steps to perform in the binary search over the const c.
- steps (int): (optional) Number of optimization steps within each binary search step.
- initial_stepsize (float): (optional) Initial stepsize to update the examples.
- confidence (float): (optional) Confidence required for an example to be marked as adversarial. Controls the gap between example and decision boundary.
- initial_const (float): (optional) Initial value of the const c with which the binary search starts.
- regularization (float): (optional) Controls the L1 regularization.
- decision_rule (“EN” or “L1”): (optional) Rule according to which the best adversarial examples are selected. They either minimize the L1 or ElasticNet distance.
- abort_early (bool): (optional) Stop inner search as soon as an adversarial example has been found. Does not affect the binary search over the const c.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.GaussianBlurAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.GaussianBlurAttack wrapper class.
Attack description: Blurs the inputs using a Gaussian filter with linearly increasing standard deviation. An illustrative attack_pars setup follows the parameter list below.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): (optional) Number of sigma values tested between 0 and max_sigma.
- channel_axis (int): (optional) Index of the channel axis in the input data.
- max_sigma (float): (optional) Maximally allowed sigma value of the Gaussian blur.
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
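Since the distance parameter expects a foolbox.distances.Distance instance, a configuration might look like the sketch below; using foolbox.distances.l2 as the distance measure is an assumption and the remaining values are illustrative.

    import foolbox as fb

    # Illustrative GaussianBlurAttack configuration.
    blur_pars = {
        "distance": fb.distances.l2,  # assumed distance measure for minimal adversarial examples
        "steps": 100,                 # sigma values tested between 0 and max_sigma
        "max_sigma": 2.0,             # maximum blur strength
        "epsilons": [1.0],
    }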
class pepr.robustness.foolbox_wrapper.L2DeepFoolAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2DeepFoolAttack wrapper class.
Attack description: A simple and fast gradient-based adversarial attack. Implements the DeepFool L2 attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): (optional) Maximum number of steps to perform.
- candidates (int): (optional) Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster.
- overshoot (float): (optional) How much to overshoot the boundary.
- loss (“crossentropy” or “logits”): (optional) Loss function to use inside the update function.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfDeepFoolAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfDeepFoolAttack wrapper class.
Attack description: A simple and fast gradient-based adversarial attack. Implements the DeepFool L-Infinity attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): (optional) Maximum number of steps to perform.
- candidates (int): (optional) Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster.
- overshoot (float): (optional) How much to overshoot the boundary.
- loss (“crossentropy” or “logits”): (optional) Loss function to use inside the update function.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.SaltAndPepperNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.SaltAndPepperNoiseAttack wrapper class.
Attack description: Increases the amount of salt and pepper noise until the input is misclassified.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- steps (int): (optional) The number of steps to run.
- across_channels (bool): Whether the noise should be the same across all channels.
- channel_axis (int): (optional) The axis across which the noise should be the same (if across_channels is True). If None, will be automatically inferred from the model if possible.
- epsilons (list): (optional) List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinearSearchBlendedUniformNoiseAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinearSearchBlendedUniformNoiseAttack wrapper class.
Attack description: Blends the input with a uniform noise input until it is misclassified.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- directions (int): (optional) Number of random directions in which the perturbation is searched.
- steps (int): (optional) Number of blending steps between the original image and the random directions.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.BinarizationRefinementAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.BinarizationRefinementAttack wrapper class.
Attack description: For models that preprocess their inputs by binarizing them, this attack can improve adversarial examples found by other attacks. It does this by utilizing information about the binarization and mapping values either to the corresponding value in the clean input or to the right side of the threshold.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- starting_points (list): Adversarial examples to improve.
- threshold (float): (optional) The threshold used by the model's binarization. If None, defaults to (model.bounds()[1] - model.bounds()[0]) / 2.
- included_in (“lower” or “upper”): (optional) Whether the threshold value itself belongs to the lower or upper interval.
- distance (foolbox.distances.Distance): Distance measure for which minimal adversarial examples are searched.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.BoundaryAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.BoundaryAttack wrapper class.
Attack description: A powerful adversarial attack that requires neither gradients nor probabilities. This is the reference implementation for the attack. An illustrative attack_pars setup follows the parameter list below.
Notes: Differences to the original reference implementation:
- We do not perform internal operations with float64
- The samples within a batch can currently influence each other a bit
- We don’t perform the additional convergence confirmation
- The success rate tracking changed a bit
- Some other changes due to batching and merged loops
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_attack (Optional[foolbox.attacks.base.MinimizationAttack]): (optional) Attack to use to find starting points. Defaults to LinearSearchBlendedUniformNoiseAttack. Only used if starting_points is None.
- steps (int): Maximum number of steps to run. Might converge and stop before that.
- spherical_step (float): (optional) Initial step size for the orthogonal (spherical) step.
- source_step (float): (optional) Initial step size for the step towards the target.
- source_step_convergence (float): (optional) Sets the threshold of the stop criterion: if source_step becomes smaller than this value during the attack, the attack has converged and will stop.
- step_adaptation (float): (optional) Factor by which the step sizes are multiplied or divided.
- tensorboard (Union[typing_extensions.Literal[False], None, str]): (optional) The log directory for TensorBoard summaries. If False, TensorBoard summaries will be disabled (default). If None, the logdir will be runs/CURRENT_DATETIME_HOSTNAME.
- update_stats_every_k (int): (optional)
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
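Because init_attack expects a foolbox attack instance, the configuration for this wrapper could be sketched as follows; passing a LinearSearchBlendedUniformNoiseAttack instance is an assumption based on the documented default, and all values are illustrative.

    import foolbox as fb

    # Illustrative BoundaryAttack configuration.
    boundary_pars = {
        "init_attack": fb.attacks.LinearSearchBlendedUniformNoiseAttack(steps=50),
        "steps": 5000,           # maximum number of boundary-walking steps
        "spherical_step": 0.01,  # initial orthogonal step size
        "source_step": 0.01,     # initial step size towards the clean input
        "epsilons": [1.0, 3.0],
    }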
class pepr.robustness.foolbox_wrapper.L0BrendelBethgeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L0BrendelBethgeAttack wrapper class.
Attack description: L0 variant of the Brendel & Bethge adversarial attack. This is a powerful gradient-based adversarial attack that follows the adversarial boundary (the boundary between the space of adversarial and non-adversarial images as defined by the adversarial criterion) to find the minimum distance to the clean image.
This is the reference implementation of the Brendel & Bethge attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_attack (Optional[foolbox.attacks.base.MinimizationAttack]): (optional) Attack to use to find starting points. Defaults to LinearSearchBlendedUniformNoiseAttack. Only used if starting_points is None.
- overshoot (float): (optional)
- steps (int): (optional) Maximum number of steps to run.
- lr (float): (optional)
- lr_decay (float): (optional)
- lr_num_decay (int): (optional)
- momentum (float): (optional)
- tensorboard (Union[typing_extensions.Literal[False], None, str]): (optional) The log directory for TensorBoard summaries. If False, TensorBoard summaries will be disabled (default). If None, the logdir will be runs/CURRENT_DATETIME_HOSTNAME.
- binary_search_steps (int): (optional) Number of iterations in the binary search. This controls the precision of the results.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L1BrendelBethgeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L1BrendelBethgeAttack wrapper class.
Attack description: L1 variant of the Brendel & Bethge adversarial attack. This is a powerful gradient-based adversarial attack that follows the adversarial boundary (the boundary between the space of adversarial and non-adversarial images as defined by the adversarial criterion) to find the minimum distance to the clean image.
This is the reference implementation of the Brendel & Bethge attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_attack (Optional[foolbox.attacks.base.MinimizationAttack]): (optional) Attack to use to find starting points. Defaults to LinearSearchBlendedUniformNoiseAttack. Only used if starting_points is None.
- overshoot (float): (optional)
- steps (int): (optional) Maximum number of steps to run.
- lr (float): (optional)
- lr_decay (float): (optional)
- lr_num_decay (int): (optional)
- momentum (float): (optional)
- tensorboard (Union[typing_extensions.Literal[False], None, str]): (optional) The log directory for TensorBoard summaries. If False, TensorBoard summaries will be disabled (default). If None, the logdir will be runs/CURRENT_DATETIME_HOSTNAME.
- binary_search_steps (int): (optional) Number of iterations in the binary search. This controls the precision of the results.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.L2BrendelBethgeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.L2BrendelBethgeAttack wrapper class.
Attack description: L2 variant of the Brendel & Bethge adversarial attack. This is a powerful gradient-based adversarial attack that follows the adversarial boundary (the boundary between the space of adversarial and non-adversarial images as defined by the adversarial criterion) to find the minimum distance to the clean image.
This is the reference implementation of the Brendel & Bethge attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_attack (Optional[foolbox.attacks.base.MinimizationAttack]): (optional) Attack to use to find starting points. Defaults to LinearSearchBlendedUniformNoiseAttack. Only used if starting_points is None.
- overshoot (float): (optional)
- steps (int): (optional) Maximum number of steps to run.
- lr (float): (optional)
- lr_decay (float): (optional)
- lr_num_decay (int): (optional)
- momentum (float): (optional)
- tensorboard (Union[typing_extensions.Literal[False], None, str]): (optional) The log directory for TensorBoard summaries. If False, TensorBoard summaries will be disabled (default). If None, the logdir will be runs/CURRENT_DATETIME_HOSTNAME.
- binary_search_steps (int): (optional) Number of iterations in the binary search. This controls the precision of the results.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
class pepr.robustness.foolbox_wrapper.LinfinityBrendelBethgeAttack(attack_alias, attack_pars, data, labels, data_conf, target_models)
foolbox.attacks.LinfinityBrendelBethgeAttack wrapper class.
Attack description: L-infinity variant of the Brendel & Bethge adversarial attack. This is a powerful gradient-based adversarial attack that follows the adversarial boundary (the boundary between the space of adversarial and non-adversarial images as defined by the adversarial criterion) to find the minimum distance to the clean image.
This is the reference implementation of the Brendel & Bethge attack.
Parameters: - attack_alias (str) – Alias for a specific instantiation of the class.
- attack_pars (dict) –
Dictionary containing all needed attack parameters:
- init_attack (Optional[foolbox.attacks.base.MinimizationAttack]): (optional) Attack to use to find starting points. Defaults to LinearSearchBlendedUniformNoiseAttack. Only used if starting_points is None.
- overshoot (float): (optional)
- steps (int): (optional) Maximum number of steps to run.
- lr (float): (optional)
- lr_decay (float): (optional)
- lr_num_decay (int): (optional)
- momentum (float): (optional)
- tensorboard (Union[typing_extensions.Literal[False], None, str]): (optional) The log directory for TensorBoard summaries. If False, TensorBoard summaries will be disabled (default). If None, the logdir will be runs/CURRENT_DATETIME_HOSTNAME.
- binary_search_steps (int): (optional) Number of iterations in the binary search. This controls the precision of the results.
- epsilons (list): List of one or more epsilons for the attack.
- data (numpy.ndarray) – Dataset with all input images used to attack the target models.
- labels (numpy.ndarray) – Array of all labels used to attack the target models.
- data_conf (dict) –
Dictionary describing for every target model which record-indices should be used for the attack.
- attack_indices_per_target (numpy.ndarray): Array of indices of images to attack per target model.
- target_models (iterable) – List of target models which should be tested.
pepr.robustness.foolbox_wrapper.FGM – alias of pepr.robustness.foolbox_wrapper.L2FastGradientAttack
pepr.robustness.foolbox_wrapper.FGSM – alias of pepr.robustness.foolbox_wrapper.LinfFastGradientAttack
pepr.robustness.foolbox_wrapper.L2PGD – alias of pepr.robustness.foolbox_wrapper.L2ProjectedGradientDescentAttack
pepr.robustness.foolbox_wrapper.LinfPGD – alias of pepr.robustness.foolbox_wrapper.LinfProjectedGradientDescentAttack
pepr.robustness.foolbox_wrapper.PGD – alias of pepr.robustness.foolbox_wrapper.LinfProjectedGradientDescentAttack
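The aliases are plain references to the corresponding wrapper classes, so both names can be used interchangeably:

    from pepr.robustness import foolbox_wrapper

    # Both names refer to the same class object.
    assert foolbox_wrapper.FGSM is foolbox_wrapper.LinfFastGradientAttack
    assert foolbox_wrapper.PGD is foolbox_wrapper.LinfProjectedGradientDescentAttack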