secml.ml.classifiers.gradients
secml.ml.classifiers.gradients.mixin_classifier_gradient module
class secml.ml.classifiers.gradients.mixin_classifier_gradient.CClassifierGradientMixin

Bases: object

Abstract mixin class that defines basic methods for classifier gradients.
Methods

grad_f_params(self, x, y)
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self, x, y, **kwargs)
    Derivative of the classifier decision function w.r.t. an input sample.
grad_loss_params(self, x, y[, loss])
    Derivative of a given loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective function w.r.t. the classifier parameters.
hessian_tr_params(self, x, y)
    Hessian of the training objective w.r.t. the classifier parameters.
grad_f_params(self, x, y)

Derivative of the decision function w.r.t. the classifier parameters.

Parameters
x : CArray
    Features of the dataset on which the training objective is computed.
y : int
    Index of the class w.r.t. which the gradient must be computed.
grad_f_x(self, x, y, **kwargs)

Derivative of the classifier decision function w.r.t. an input sample.

Parameters
x : CArray
    Features of the dataset on which the decision function is computed.
y : CArray
    Label of the class w.r.t. which the derivative should be computed.
kwargs
    Optional arguments for the gradient method. See the specific classifier for a full description.

Returns
gradient : CArray
    Gradient of the classifier’s output w.r.t. the input. Vector-like array.
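As a minimal illustrative sketch (plain NumPy, not secml code), for a linear decision function f(x) = w·x + b the gradient w.r.t. the input is simply the weight vector w; a finite-difference check confirms this:

```python
import numpy as np

# Illustrative sketch: for a linear decision function f(x) = w @ x + b,
# the gradient of f w.r.t. the input sample x is w.
w = np.array([0.5, -1.2, 2.0])
b = 0.3

def f(x):
    return w @ x + b

x = np.array([1.0, 2.0, -0.5])
grad = w  # analytic gradient of f w.r.t. x

# Central finite-difference check of the analytic gradient.
eps = 1e-6
num_grad = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(grad, num_grad))  # True
```

For a kernelized or otherwise non-linear classifier the analytic form differs, but the same finite-difference check applies.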
grad_loss_params(self, x, y, loss=None)

Derivative of a given loss w.r.t. the classifier parameters.

Parameters
x : CArray
    Features of the dataset on which the loss is computed.
y : CArray
    Dataset labels.
loss : None (default) or CLoss
    If loss is None (default), the classifier loss is used to compute the derivative.
secml.ml.classifiers.gradients.mixin_classifier_gradient_kde module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_kde.CClassifierGradientKDEMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient.CClassifierGradientMixin

Mixin class for CClassifierKDE gradients.

Methods

grad_f_params(self, x, y)
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self, x, y, **kwargs)
    Derivative of the classifier decision function w.r.t. an input sample.
grad_loss_params(self, x, y[, loss])
    Derivative of a given loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective function w.r.t. the classifier parameters.
hessian_tr_params(self, x, y)
    Hessian of the training objective w.r.t. the classifier parameters.
secml.ml.classifiers.gradients.mixin_classifier_gradient_linear module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient.CClassifierGradientMixin

Mixin class for CClassifierLinear gradients.

Methods

grad_f_params(self, x[, y])
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self[, x, y])
    Computes the gradient of the classifier’s output w.r.t. the input.
grad_loss_params(self, x, y[, loss])
    Derivative of the classifier loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective w.r.t. the classifier parameters.
hessian_tr_params(self, x, y)
    Hessian of the training objective w.r.t. the classifier parameters.
grad_f_params(self, x, y=1)

Derivative of the decision function w.r.t. the classifier parameters.

Parameters
x : CArray
    Features of the dataset on which the training objective is computed.
y : int
    Index of the class w.r.t. which the gradient must be computed.
grad_f_x(self, x=None, y=1, **kwargs)

Computes the gradient of the classifier’s output w.r.t. the input.

Parameters
x : CArray
    The gradient is computed in the neighborhood of x.
y : int, optional
    Index of the class w.r.t. which the gradient must be computed. Default 1.
**kwargs
    Optional parameters for the function that computes the gradient of the decision function. See each classifier’s description for the complete list of optional parameters.

Returns
gradient : CArray
    Gradient of the classifier’s output w.r.t. the input. Vector-like array.
grad_loss_params(self, x, y, loss=None)

Derivative of the classifier loss w.r.t. the classifier parameters, via the chain rule:

d_loss / d_params = (d_loss / d_f) * (d_f / d_params)

Parameters
x : CArray
    Features of the dataset on which the loss is computed.
y : CArray
    Dataset labels.
loss : None (default) or CLoss
    If loss is None (default), the classifier loss is used to compute the derivative.
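The chain rule above can be sketched in plain NumPy (an illustration, not secml code) for a linear model f(x) = w·x + b with the logistic loss L = log(1 + exp(-y f)), labels in {-1, +1}:

```python
import numpy as np

# Illustrative sketch of d_loss/d_params = (d_loss/d_f) * (d_f/d_params)
# for a linear model with logistic loss.
w = np.array([0.4, -0.7])
b = 0.1
x = np.array([1.5, -2.0])
y = 1  # label in {-1, +1}

f = w @ x + b
dL_df = -y / (1.0 + np.exp(y * f))  # d_loss / d_f
df_dw = x                           # d_f / d_params (w component)
dL_dw = dL_df * df_dw               # chain rule

# Central finite-difference check on w.
eps = 1e-6
loss = lambda w_: np.log1p(np.exp(-y * (w_ @ x + b)))
num = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
print(np.allclose(dL_dw, num))  # True
```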
grad_tr_params(self, x, y)

Derivative of the classifier training objective w.r.t. the classifier parameters:

d_train_obj / d_params = (d_loss / d_f) * (d_f / d_params) + d_reg / d_params

Parameters
x : CArray
    Features of the dataset on which the loss is computed.
y : CArray
    Dataset labels.
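The extra regularization term in the equation above can be illustrated in plain NumPy (not secml code), assuming a logistic loss and an L2 regularizer reg = alpha * ||w||²:

```python
import numpy as np

# Illustrative sketch: gradient of a regularized training objective
# obj = loss + reg, with logistic loss and L2 regularizer alpha * ||w||^2.
alpha = 0.01
w = np.array([0.4, -0.7])
b = 0.1
x = np.array([1.5, -2.0])
y = 1  # label in {-1, +1}

f = w @ x + b
dL_df = -y / (1.0 + np.exp(y * f))
grad_obj_w = dL_df * x + 2 * alpha * w  # chain rule + regularizer term

# Central finite-difference check of the full objective gradient.
eps = 1e-6
obj = lambda w_: np.log1p(np.exp(-y * (w_ @ x + b))) + alpha * (w_ @ w_)
num = np.array([
    (obj(w + eps * e) - obj(w - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
print(np.allclose(grad_obj_w, num))  # True
```

Concrete classifiers use their own loss and regularizer (e.g. hinge loss for SVM, log loss for logistic regression), but the decomposition is the same.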
secml.ml.classifiers.gradients.mixin_classifier_gradient_logistic module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_logistic.CClassifierGradientLogisticMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin

Mixin class for CClassifierLogistic gradients.

Methods

grad_f_params(self, x[, y])
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self[, x, y])
    Computes the gradient of the classifier’s output w.r.t. the input.
grad_loss_params(self, x, y[, loss])
    Derivative of the classifier loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective w.r.t. the classifier parameters.
hessian_tr_params(self, x, y)
    Hessian of the training objective w.r.t. the classifier parameters.
secml.ml.classifiers.gradients.mixin_classifier_gradient_ridge module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_ridge.CClassifierGradientRidgeMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin

Mixin class for CClassifierRidge gradients.

Methods

grad_f_params(self, x[, y])
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self[, x, y])
    Computes the gradient of the classifier’s output w.r.t. the input.
grad_loss_params(self, x, y[, loss])
    Derivative of the classifier loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective w.r.t. the classifier parameters.
hessian_tr_params(self, x[, y])
    Hessian of the training objective w.r.t. the classifier parameters.
secml.ml.classifiers.gradients.mixin_classifier_gradient_sgd module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_sgd.CClassifierGradientSGDMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin

Mixin class for CClassifierSGD gradients.

Methods

grad_f_params(self, x[, y])
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self[, x, y])
    Computes the gradient of the classifier’s output w.r.t. the input.
grad_loss_params(self, x, y[, loss])
    Derivative of the classifier loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective function w.r.t. the classifier parameters.
hessian_tr_params(self, x, y)
    Hessian of the training objective w.r.t. the classifier parameters.
secml.ml.classifiers.gradients.mixin_classifier_gradient_svm module

class secml.ml.classifiers.gradients.mixin_classifier_gradient_svm.CClassifierGradientSVMMixin

Bases: secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin

Mixin class for CClassifierSVM gradients.

Methods

grad_f_params(self, x[, y])
    Derivative of the decision function w.r.t. the classifier parameters.
grad_f_x(self[, x, y])
    Computes the gradient of the classifier’s output w.r.t. the input.
grad_loss_params(self, x, y[, loss])
    Derivative of the loss w.r.t. the classifier parameters.
grad_tr_params(self, x, y)
    Derivative of the classifier training objective w.r.t. the classifier parameters.
hessian_tr_params(self[, x, y])
    Hessian of the training objective w.r.t. the classifier parameters.
grad_f_params(self, x, y=1)

Derivative of the decision function w.r.t. the classifier parameters.

Parameters
x : CArray
    Features of the dataset on which the training objective is computed.
y : int
    Index of the class w.r.t. which the gradient must be computed.
grad_loss_params(self, x, y, loss=None)

Derivative of the loss w.r.t. the classifier parameters, via the chain rule:

dL / d_params = (dL / df) * (df / d_params)

Parameters
x : CArray
    Features of the dataset on which the loss is computed.
y : CArray
    Labels of the training samples.
loss : None (default) or CLoss
    If loss is None (default), the classifier loss is used to compute the derivative.
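For the SVM case, the chain rule above can be illustrated with plain NumPy (not secml code), using the hinge loss L = max(0, 1 - y f) of a linear SVM f(x) = w·x + b:

```python
import numpy as np

# Illustrative sketch of dL/d_params = (dL/df) * (df/d_params)
# for the hinge loss of a linear SVM.
w = np.array([0.3, 0.9])
b = -0.2
x = np.array([0.5, 0.5])
y = -1  # label in {-1, +1}

f = w @ x + b
dL_df = -y if y * f < 1 else 0.0  # subgradient of the hinge loss
dL_dw = dL_df * x                 # chain rule: df/dw = x

# Central finite-difference check on w (valid away from the hinge point).
eps = 1e-6
loss = lambda w_: max(0.0, 1 - y * (w_ @ x + b))
num_w = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
print(np.allclose(dL_dw, num_w))  # True
```

The hinge loss is non-differentiable at y f = 1; the subgradient convention shown here (zero on the margin boundary and beyond) is one common choice.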