secml.adv.attacks.evasion
CAttackEvasion
class secml.adv.attacks.evasion.c_attack_evasion.CAttackEvasion(classifier, surrogate_classifier, surrogate_data=None, y_target=None)[source]
Bases: secml.adv.attacks.c_attack.CAttack
Interface for Evasion attacks.
- Parameters
- classifier : CClassifier
Target classifier.
- surrogate_classifier : CClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_data : CDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- Attributes
- attack_classes
- class_type : Defines class type.
- classifier : Returns classifier.
- discrete : Returns True if feature space is discrete, False if continuous.
- distance : Returns the distance norm used by the attack.
- dmax : Returns dmax.
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
- lb : Returns lb.
- logger : Logger for current object.
- n_dim
- solver_params
- solver_type
- surrogate_classifier : Returns surrogate classifier.
- surrogate_data : Returns surrogate data.
- ub : Returns ub.
- verbose : Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
- copy(self): Returns a shallow copy of current class.
- create([class_item]): This method creates an instance of a class with given type.
- deepcopy(self): Returns a deep copy of current class.
- get_class_from_type(class_type): Return the class associated with input type.
- get_params(self): Returns the dictionary of class parameters.
- get_subclasses(): Get all the subclasses of the calling class.
- is_attack_class(self, y): Returns True/False if the input class can be attacked.
- list_class_types(): This method lists all types of available subclasses of calling one.
- load(path): Loads class from pickle object.
- objective_function(self, x): Objective function.
- run(self, x, y[, ds_init]): Runs evasion on a dataset.
- save(self, path): Save class object using pickle.
- set(self, param_name, param_value[, copy]): Set a parameter that has a specific name to a specific value.
- set_params(self, params_dict[, copy]): Set all parameters passed as a dictionary {key: value}.
- timed([msg]): Timer decorator.
objective_function(self, x)[source]
Objective function.
- Parameters
- x : CArray
Array with points on which the objective function should be computed.
- Returns
- CArray
Value of the objective function on each point.
run(self, x, y, ds_init=None, *args, **kargs)[source]
Runs evasion on a dataset.
- Parameters
- x : CArray
Data points.
- y : CArray
True labels.
- ds_init : CDataset
Dataset for warm starts.
- Returns
- y_pred : CArray
Predicted labels for all ds samples by target classifier.
- scores : CArray
Scores for all ds samples by target classifier.
- adv_ds : CDataset
Dataset of manipulated samples.
- f_obj : float
Average value of the objective function computed on each data point.
CAttackEvasionPGD
class secml.adv.attacks.evasion.c_attack_evasion_pgd.CAttackEvasionPGD(classifier, surrogate_classifier, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=<no value>, y_target=None, attack_classes='all', solver_params=None)[source]
Bases: secml.adv.attacks.evasion.c_attack_evasion_pgd_ls.CAttackEvasionPGDLS
Evasion attacks using Projected Gradient Descent.
- This class implements the maximum-confidence evasion attacks proposed in:
https://arxiv.org/abs/1708.06939, ICCV W. ViPAR, 2017.
- This is the multi-class extension of our original work in:
https://arxiv.org/abs/1708.06131, ECML 2013, implemented using a standard projected gradient solver.
It can also be used on sparse, high-dimensional feature spaces, using an L1 constraint on the manipulation of samples to preserve sparsity, as we did for crafting adversarial Android malware in:
https://arxiv.org/abs/1704.08996, IEEE TDSC 2017.
- For more on evasion attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
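The projected-gradient loop described above can be sketched independently of secml as a minimal NumPy illustration. This is a hedged sketch, not the library's implementation: the names pgd_evasion, grad_f and eta, and the toy linear score, are all chosen for this example only. Each iteration takes a gradient step on the attacker's objective, then projects the perturbation back onto the distance constraint (here an L2 ball of radius dmax) and the [lb, ub] box bounds.

```python
import numpy as np

def pgd_evasion(x0, grad_f, dmax=1.0, lb=0.0, ub=1.0, eta=0.1, n_iter=100):
    """Minimal projected-gradient evasion loop (illustrative only):
    descend the attacker's objective, then project the perturbation
    onto the L2 ball of radius dmax and onto the [lb, ub] box."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - eta * grad_f(x)            # gradient step on the objective
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > dmax:                    # project onto the L2 constraint
            delta *= dmax / norm
        x = np.clip(x0 + delta, lb, ub)    # project onto the box bounds
    return x

# Toy linear "classifier" score for the true class: f(x) = w.x;
# lowering this score drives the sample toward misclassification.
w = np.array([1.0, -2.0])
x0 = np.array([0.5, 0.5])
x_adv = pgd_evasion(x0, grad_f=lambda x: w, dmax=0.3, lb=0.0, ub=1.0)
```

The perturbation ends up on the boundary of the L2 ball in the direction that decreases the true-class score fastest, while staying inside the feature bounds.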
- Parameters
- classifier : CClassifier
Target classifier.
- surrogate_classifier : CClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_data : CDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- distance : {'l1' or 'l2'}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default 'l1'.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes : 'all' or CArray, optional
Array with the classes that can be manipulated by the attacker, or 'all' (default) if all classes can be manipulated.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- Attributes
- class_type : 'e-pgd'
Defines class type.
Methods
- copy(self): Returns a shallow copy of current class.
- create([class_item]): This method creates an instance of a class with given type.
- deepcopy(self): Returns a deep copy of current class.
- get_class_from_type(class_type): Return the class associated with input type.
- get_params(self): Returns the dictionary of class parameters.
- get_subclasses(): Get all the subclasses of the calling class.
- is_attack_class(self, y): Returns True/False if the input class can be attacked.
- list_class_types(): This method lists all types of available subclasses of calling one.
- load(path): Loads class from pickle object.
- objective_function(self, x): Objective function.
- run(self, x, y[, ds_init]): Runs evasion on a dataset.
- save(self, path): Save class object using pickle.
- set(self, param_name, param_value[, copy]): Set a parameter that has a specific name to a specific value.
- set_params(self, params_dict[, copy]): Set all parameters passed as a dictionary {key: value}.
- timed([msg]): Timer decorator.
CAttackEvasionPGDLS
class secml.adv.attacks.evasion.c_attack_evasion_pgd_ls.CAttackEvasionPGDLS(classifier, surrogate_classifier, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_params=None)[source]
Bases: secml.adv.attacks.evasion.c_attack_evasion.CAttackEvasion
Evasion attacks using Projected Gradient Descent with Line Search.
- This class implements the maximum-confidence evasion attacks proposed in:
https://arxiv.org/abs/1708.06939, ICCV W. ViPAR, 2017.
- This is the multi-class extension of our original work in:
https://arxiv.org/abs/1708.06131, ECML 2013,
implemented using a custom projected gradient solver that uses line search in each iteration to save gradient computations and speed up the attack.
It can also be used on sparse, high-dimensional feature spaces, using an L1 constraint on the manipulation of samples to preserve sparsity, as we did for crafting adversarial Android malware in:
https://arxiv.org/abs/1704.08996, IEEE TDSC 2017.
- For more on evasion attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
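The line-search idea described above can be illustrated with a small, library-independent NumPy sketch (function and variable names here are hypothetical, not secml's): from a single gradient evaluation, the step size is doubled along the descent direction while the objective keeps decreasing, so several candidate points are explored at the cost of one gradient computation.

```python
import numpy as np

def line_search_step(x, f, g, eta=0.1, eta_max=10.0):
    """Illustrative exponential line search along the descent direction -g:
    keep doubling the step while the objective decreases, reusing the one
    gradient already computed (the saving that motivates PGD with line
    search over plain PGD)."""
    d = -g / (np.linalg.norm(g) + 1e-12)  # normalized descent direction
    best_x, best_f = x, f(x)
    step = eta
    while step <= eta_max:
        cand = x + step * d
        f_cand = f(cand)
        if f_cand >= best_f:              # objective stopped decreasing
            break
        best_x, best_f = cand, f_cand
        step *= 2.0
    return best_x

f = lambda x: np.sum((x - 3.0) ** 2)      # toy objective, minimum at x = 3
x = np.zeros(2)
g = 2 * (x - 3.0)                         # gradient of f at x
x_next = line_search_step(x, f, g)
```

Each outer attack iteration would then recompute the gradient at x_next and repeat, which typically needs far fewer gradient evaluations than taking one fixed-size step per gradient.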
- Parameters
- classifier : CClassifier
Target classifier.
- surrogate_classifier : CClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_data : CDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- distance : {'l1' or 'l2'}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default 'l1'.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- discrete : bool, optional
If True, the input space is considered discrete (integer-valued); otherwise continuous. Default False.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes : 'all' or CArray, optional
Array with the classes that can be manipulated by the attacker, or 'all' (default) if all classes can be manipulated.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- Attributes
- class_type : 'e-pgd-ls'
Defines class type.
Methods
- copy(self): Returns a shallow copy of current class.
- create([class_item]): This method creates an instance of a class with given type.
- deepcopy(self): Returns a deep copy of current class.
- get_class_from_type(class_type): Return the class associated with input type.
- get_params(self): Returns the dictionary of class parameters.
- get_subclasses(): Get all the subclasses of the calling class.
- is_attack_class(self, y): Returns True/False if the input class can be attacked.
- list_class_types(): This method lists all types of available subclasses of calling one.
- load(path): Loads class from pickle object.
- objective_function(self, x): Objective function.
- run(self, x, y[, ds_init]): Runs evasion on a dataset.
- save(self, path): Save class object using pickle.
- set(self, param_name, param_value[, copy]): Set a parameter that has a specific name to a specific value.
- set_params(self, params_dict[, copy]): Set all parameters passed as a dictionary {key: value}.
- timed([msg]): Timer decorator.
property y_target