secml.adv.attacks.poisoning¶
CAttackPoisoning¶
class secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning(classifier, training_data, val, distance='l2', dmax=0, lb=0, ub=1, y_target=None, solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶
Bases: secml.adv.attacks.c_attack_mixin.CAttackMixin
Interface for poisoning attacks.
- Parameters
- classifier : CClassifier
Target classifier.
- training_data : CDataset
Dataset on which the classifier has been trained.
- val : CDataset
Validation set.
- distance : {‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack is performed to have the samples misclassified as belonging to the y_target class.
- solver_type : str or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type : {‘random’, ‘loss_based’}, optional
Strategy used to choose the initial samples. Default ‘random’.
- random_seed : int or None, optional
If int, random_seed is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
class_type
Defines class type.
classifier
Returns classifier
distance
Norm used to compute the perturbation distance.
dmax
Returns dmax
f_eval
Returns the number of function evaluations made during the attack.
f_opt
Returns the value of the objective function evaluated on the optimal point found by the attack.
f_seq
Returns a CArray containing the values of the objective function evaluations made by the attack.
grad_eval
Returns the number of gradient evaluations made during the attack.
lb
Returns lb
logger
Logger for current object.
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
x0
Returns the attacker’s initial sample features
x_opt
Returns the optimal point found by the attack.
x_seq
Returns a CArray (number of iterations x number of features) containing the attack point path.
xc
Returns the attacker’s sample features
- y_target
yc
Returns the attacker’s sample label
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class hyperparameters.
get_state
(self, **kwargs)Returns the object state dictionary.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads object from file.
load_state
(self, path)Sets the object state from file.
objective_function
(self, xc[, acc]) Evaluate the objective function (average loss on the validation data) at the poisoning point xc.
objective_function_gradient
(self, xc[, …])Compute the loss derivative wrt the attack sample xc
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object to file.
save_state
(self, path, **kwargs)Store the object state to file.
set
(self, param_name, param_value[, copy])Set a parameter of the class.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
set_state
(self, state_dict[, copy])Sets the object state using input dictionary.
timed
([msg])Timer decorator.
property n_points¶
Returns the number of poisoning points.
objective_function(self, xc, acc=False)[source]¶
- Parameters
- xc : CArray
Poisoning point.
- Returns
- f_obj : value of the objective function (average hinge loss) at xc.
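For example (a hedged sketch: attack is an already-constructed CAttackPoisoning subclass instance and xc a candidate poisoning point, both hypothetical here):

# Probe the attack objective at a candidate poisoning point.
f_obj = attack.objective_function(xc)  # average (hinge) loss on the attacker's validation data
print('objective at xc:', f_obj)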
objective_function_gradient(self, xc, normalization=True)[source]¶
Compute the derivative of the loss w.r.t. the attack sample xc.
The derivative is decomposed as:
∂l/∂x = Σ_{c=1}^{n} (∂l/∂f_c) (∂f_c/∂x)
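As a toy illustration of this chain-rule decomposition (a minimal numpy sketch, not the library's internal implementation; all array values are made up):

import numpy as np

# dl/dx = sum_c (dl/df_c) * (df_c/dx), for a model with n_classes scores f_c.
dl_df = np.array([0.3, -0.7])                 # dl/df_c, shape (n_classes,)
df_dx = np.array([[1.0, 2.0],                 # df_c/dx, shape (n_classes, n_features)
                  [0.5, -1.0]])

dl_dx = dl_df @ df_dx                         # shape (n_features,)
print(dl_dx)                                  # [-0.05  1.3 ]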
property random_seed¶
Returns the random seed used by the attack.
run(self, x, y, ds_init=None, max_iter=1)[source]¶
Runs poisoning on multiple points. It reads n_points (previously set), initializes xc and yc at random, and then optimizes the poisoning points xc. A usage sketch is given after the Returns list below.
- Parameters
- x : CArray
Validation set for evaluating classifier performance. Note that this is not the validation data used by the attacker, which should instead be passed to the CAttackPoisoning constructor.
- y : CArray
Corresponding true labels for the samples in x.
- ds_init : CDataset or None, optional
Dataset for warm start.
- max_iter : int, optional
Number of iterations to re-optimize poisoning data. Default 1.
- Returns
- y_pred : CArray
Predicted labels for all samples in x by the targeted classifier.
- scores : CArray
Scores for all samples in x by the targeted classifier.
- adv_xc : CDataset
Manipulated poisoning points xc (for subsequent warm starts).
- f_opt : scalar
Final value of the objective function.
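A minimal sketch of the typical workflow (assuming a concrete subclass, here CAttackPoisoningSVM, plus a trained classifier clf, training set tr, attacker's validation set val, and evaluation set ts prepared beforehand; solver parameters are illustrative):

from secml.adv.attacks import CAttackPoisoningSVM

# Illustrative parameters for the default 'pgd-ls' solver.
solver_params = {'eta': 0.05, 'eta_min': 0.05, 'max_iter': 100, 'eps': 1e-6}

pois_attack = CAttackPoisoningSVM(
    classifier=clf, training_data=tr, val=val,
    lb=0, ub=1, solver_params=solver_params, random_seed=0)
pois_attack.n_points = 10  # number of poisoning points to optimize

# x, y are the evaluation data (here a held-out test set ts).
y_pred, scores, adv_xc, f_opt = pois_attack.run(ts.X, ts.Y)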
property training_data¶
Returns the training set used to learn the targeted classifier.
property val¶
Returns the attacker’s validation data.
property x0¶
Returns the attacker’s initial sample features.
property xc¶
Returns the attacker’s sample features.
property y_target¶
Returns the target class label for error-specific attacks (None for error-generic attacks).
property yc¶
Returns the attacker’s sample label.
CAttackPoisoningLogisticRegression¶
class secml.adv.attacks.poisoning.c_attack_poisoning_logistic_regression.CAttackPoisoningLogisticRegression(classifier, training_data, val, distance='l1', dmax=0, lb=0, ub=1, y_target=None, solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶
Bases: secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against logistic regression.
This is an implementation of the attack developed in Sect. 3.3 of https://www.usenix.org/conference/usenixsecurity19/presentation/demontis:
A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium. USENIX Association, 2019.
For more details on poisoning attacks, see also:
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/abs/1804.07933, ICML 2015
https://arxiv.org/pdf/1206.6389, ICML 2012
- Parameters
- classifier : CClassifierLogistic
Target classifier.
- training_data : CDataset
Dataset on which the classifier has been trained.
- val : CDataset
Validation set.
- distance : {‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l1’.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack is performed to have the samples misclassified as belonging to the y_target class.
- solver_type : str or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type : {‘random’, ‘loss_based’}, optional
Strategy used to choose the initial samples. Default ‘random’.
- random_seed : int or None, optional
If int, random_seed is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
class_type
Defines class type.
classifier
Returns classifier
distance
Norm used to compute the perturbation distance.
dmax
Returns dmax
f_eval
Returns the number of function evaluations made during the attack.
f_opt
Returns the value of the objective function evaluated on the optimal point found by the attack.
f_seq
Returns a CArray containing the values of the objective function evaluations made by the attack.
grad_eval
Returns the number of gradient evaluations made during the attack.
lb
Returns lb
logger
Logger for current object.
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
x0
Returns the attacker’s initial sample features
x_opt
Returns the optimal point found by the attack.
x_seq
Returns a CArray (number of iterations x number of features) containing the attack point path.
xc
Returns the attacker’s sample features
- y_target
yc
Returns the attacker’s sample label
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class hyperparameters.
get_state
(self, **kwargs)Returns the object state dictionary.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads object from file.
load_state
(self, path)Sets the object state from file.
objective_function
(self, xc[, acc]) Evaluate the objective function (average loss on the validation data) at the poisoning point xc.
objective_function_gradient
(self, xc[, …])Compute the loss derivative wrt the attack sample xc
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object to file.
save_state
(self, path, **kwargs)Store the object state to file.
set
(self, param_name, param_value[, copy])Set a parameter of the class.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
set_state
(self, state_dict[, copy])Sets the object state using input dictionary.
timed
([msg])Timer decorator.
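A hedged end-to-end sketch, following the pattern of the secml poisoning tutorials (dataset sizes, solver parameters, and bounds are illustrative choices, not prescribed values):

from secml.data.loader import CDLRandomBlobs
from secml.data.splitter import CTrainTestSplit
from secml.ml.classifiers import CClassifierLogistic
from secml.ml.peval.metrics import CMetricAccuracy
from secml.adv.attacks import CAttackPoisoningLogisticRegression

# Synthetic 2D data, split into training / attacker-validation / test sets.
ds = CDLRandomBlobs(n_features=2, centers=2, cluster_std=0.8,
                    n_samples=300, random_state=0).load()
tr, ts = CTrainTestSplit(train_size=0.6, random_state=0).split(ds)
tr, val = CTrainTestSplit(train_size=0.5, random_state=0).split(tr)

clf = CClassifierLogistic()
clf.fit(tr.X, tr.Y)  # older secml versions use clf.fit(tr)

attack = CAttackPoisoningLogisticRegression(
    classifier=clf, training_data=tr, val=val,
    lb=ds.X.min(), ub=ds.X.max(),
    solver_params={'eta': 0.05, 'max_iter': 100, 'eps': 1e-6},
    random_seed=0)
attack.n_points = 5  # poisoning points to inject

y_pred, scores, adv_ds, f_opt = attack.run(ts.X, ts.Y)
print('Accuracy under poisoning:',
      CMetricAccuracy().performance_score(ts.Y, y_pred))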
CAttackPoisoningRidge¶
class secml.adv.attacks.poisoning.c_attack_poisoning_ridge.CAttackPoisoningRidge(classifier, training_data, val, distance='l2', dmax=0, lb=0, ub=1, y_target=None, solver_type='pgd-ls', solver_params=None, init_type=None, random_seed=None)[source]¶
Bases: secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against ridge regression.
This is an implementation of the attack developed in https://arxiv.org/abs/1804.07933:
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli. Is feature selection secure against training data poisoning? In F. Bach and D. Blei, editors, JMLR W&CP, Proc. 32nd Int’l Conf. Mach. Learning (ICML), volume 37, pp. 1689-1698, 2015.
For more details on poisoning attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/pdf/1206.6389, ICML 2012
- Parameters
- classifier : CClassifierRidge
Target classifier.
- training_data : CDataset
Dataset on which the classifier has been trained.
- val : CDataset
Validation set.
- distance : {‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack is performed to have the samples misclassified as belonging to the y_target class.
- solver_type : str or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type : {‘random’, ‘loss_based’}, optional
Strategy used to choose the initial samples. Default ‘random’.
- random_seed : int or None, optional
If int, random_seed is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
class_type
Defines class type.
classifier
Returns classifier
distance
Norm used to compute the perturbation distance.
dmax
Returns dmax
f_eval
Returns the number of function evaluations made during the attack.
f_opt
Returns the value of the objective function evaluated on the optimal point found by the attack.
f_seq
Returns a CArray containing the values of the objective function evaluations made by the attack.
grad_eval
Returns the number of gradient evaluations made during the attack.
lb
Returns lb
logger
Logger for current object.
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
x0
Returns the attacker’s initial sample features
x_opt
Returns the optimal point found by the attack.
x_seq
Returns a CArray (number of iterations x number of features) containing the attack point path.
xc
Returns the attacker’s sample features
- y_target
yc
Returns the attacker’s sample label
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class hyperparameters.
get_state
(self, **kwargs)Returns the object state dictionary.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads object from file.
load_state
(self, path)Sets the object state from file.
objective_function
(self, xc[, acc]) Evaluate the objective function (average loss on the validation data) at the poisoning point xc.
objective_function_gradient
(self, xc[, …])Compute the loss derivative wrt the attack sample xc
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object to file.
save_state
(self, path, **kwargs)Store the object state to file.
set
(self, param_name, param_value[, copy])Set a parameter of the class.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
set_state
(self, state_dict[, copy])Sets the object state using input dictionary.
timed
([msg])Timer decorator.
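Usage mirrors CAttackPoisoningLogisticRegression above; a brief sketch showing per-feature bounds passed as CArray (clf, tr, val, ts assumed prepared as in the previous example; all values illustrative):

from secml.array import CArray
from secml.adv.attacks import CAttackPoisoningRidge

# Per-feature lower/upper bounds (illustrative, for a 2-feature dataset).
lb = CArray([-10.0, -10.0])
ub = CArray([10.0, 10.0])

attack = CAttackPoisoningRidge(
    classifier=clf,  # a trained CClassifierRidge
    training_data=tr, val=val, lb=lb, ub=ub,
    solver_params={'eta': 0.05, 'max_iter': 100}, random_seed=0)
attack.n_points = 5
y_pred, scores, adv_ds, f_opt = attack.run(ts.X, ts.Y)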
CAttackPoisoningSVM¶
class secml.adv.attacks.poisoning.c_attack_poisoning_svm.CAttackPoisoningSVM(classifier, training_data, val, distance='l1', dmax=0, lb=0, ub=1, y_target=None, solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶
Bases: secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against Support Vector Machines (SVMs).
This is an implementation of the attack in https://arxiv.org/pdf/1206.6389:
B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In J. Langford and J. Pineau, editors, 29th Int’l Conf. on Machine Learning, pages 1807-1814. Omnipress, 2012.
where the gradient is computed as described in Eq. (10) in https://www.usenix.org/conference/usenixsecurity19/presentation/demontis:
A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium. USENIX Association, 2019.
For more details on poisoning attacks, see also:
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/abs/1804.07933, ICML 2015
- Parameters
- classifier : CClassifierSVM
Target SVM, trained in the dual (i.e., with kernel not set to None).
- training_data : CDataset
Dataset on which the classifier has been trained.
- val : CDataset
Validation set.
- distance : {‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l1’.
- dmax : scalar, optional
Maximum value of the perturbation. Default 0.
- lb, ub : int or CArray, optional
Lower/upper bounds. If int, the same bound will be applied to all features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_target : int or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack is performed to have the samples misclassified as belonging to the y_target class.
- solver_type : str or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_params : dict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type : {‘random’, ‘loss_based’}, optional
Strategy used to choose the initial samples. Default ‘random’.
- random_seed : int or None, optional
If int, random_seed is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
class_type
Defines class type.
classifier
Returns classifier
distance
Norm used to compute the perturbation distance.
dmax
Returns dmax
f_eval
Returns the number of function evaluations made during the attack.
f_opt
Returns the value of the objective function evaluated on the optimal point found by the attack.
f_seq
Returns a CArray containing the values of the objective function evaluations made by the attack.
grad_eval
Returns the number of gradient evaluations made during the attack.
lb
Returns lb
logger
Logger for current object.
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
x0
Returns the attacker’s initial sample features
x_opt
Returns the optimal point found by the attack.
x_seq
Returns a CArray (number of iterations x number of features) containing the attack point path.
xc
Returns the attacker’s sample features
- y_target
yc
Returns the attacker’s sample label
Methods
alpha_xc
(self, xc) Returns the dual coefficient (alpha) assigned to the poisoning point xc after retraining the SVM.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class hyperparameters.
get_state
(self, **kwargs)Returns the object state dictionary.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads object from file.
load_state
(self, path)Sets the object state from file.
objective_function
(self, xc[, acc]) Evaluate the objective function (average loss on the validation data) at the poisoning point xc.
objective_function_gradient
(self, xc[, …])Compute the loss derivative wrt the attack sample xc
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object to file.
save_state
(self, path, **kwargs)Store the object state to file.
set
(self, param_name, param_value[, copy])Set a parameter of the class.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
set_state
(self, state_dict[, copy])Sets the object state using input dictionary.
timed
([msg])Timer decorator.
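A hedged end-to-end sketch against an RBF-kernel SVM (trained in the dual, as this attack requires); the kernel module path, dataset, and solver settings follow the secml tutorials but are illustrative:

from secml.data.loader import CDLRandomBlobs
from secml.data.splitter import CTrainTestSplit
from secml.ml.classifiers import CClassifierSVM
from secml.ml.kernels import CKernelRBF
from secml.ml.peval.metrics import CMetricAccuracy
from secml.adv.attacks import CAttackPoisoningSVM

ds = CDLRandomBlobs(n_features=2, centers=2, cluster_std=0.8,
                    n_samples=300, random_state=0).load()
tr, ts = CTrainTestSplit(train_size=0.6, random_state=0).split(ds)
tr, val = CTrainTestSplit(train_size=0.5, random_state=0).split(tr)

# Kernel SVM => trained in the dual, so the poisoning gradient is defined.
clf = CClassifierSVM(kernel=CKernelRBF(gamma=10), C=1)
clf.fit(tr.X, tr.Y)  # older secml versions use clf.fit(tr)

attack = CAttackPoisoningSVM(
    classifier=clf, training_data=tr, val=val,
    lb=ds.X.min(), ub=ds.X.max(),
    solver_params={'eta': 0.05, 'eta_min': 0.05, 'max_iter': 100, 'eps': 1e-6},
    random_seed=0)
attack.n_points = 15

y_pred, scores, adv_ds, f_opt = attack.run(ts.X, ts.Y)
print('Test accuracy on the poisoned classifier:',
      CMetricAccuracy().performance_score(ts.Y, y_pred))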