secml.ml.peval.metrics

CMetric

class secml.ml.peval.metrics.c_metric.CMetric[source]

Bases: secml.core.c_creator.CCreator

Performance evaluation metrics.

Utility functions to measure classification performance. Some metrics might require probability estimates of the positive class, confidence values, or binary decision values.

Each metric can use y_true (ground-truth labels), y_pred (predicted labels), score (predicted scores), or other data as input. Check the documentation of each metric for more information.

Examples

>>> from secml.ml.peval.metrics import CMetric
>>> from secml.array import CArray
>>> peval = CMetric.create('accuracy')
>>> print(peval.performance_score(y_true=CArray([0, 1, 2, 3]), y_pred=CArray([0, 1, 1, 3])))
0.75
>>> peval = CMetric.create('tpr-at-fpr', fpr=0.1)
>>> print(peval.performance_score(y_true=CArray([0, 1, 0, 0]), score=CArray([1, 1, 0, 0])))
0.3
Attributes
best_value

Best metric value. This is commonly a scalar (0.0 or 1.0).

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = None
performance_score(self, y_true=None, y_pred=None, score=None, **kwargs)[source]

Compute the performance metric.

Each metric can use as input either:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

  • score (predicted scores)

  • or any other data

Check the documentation of each metric for more information.

If not all the required data is passed, a TypeError will be raised.
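A minimal sketch of this behavior, reusing the accuracy metric from the example above (the printed message is illustrative, not the library's error text):

>>> from secml.ml.peval.metrics import CMetric
>>> from secml.array import CArray
>>> metric = CMetric.create('accuracy')
>>> try:
...     metric.performance_score(y_true=CArray([0, 1, 1]))  # y_pred missing
... except TypeError:
...     print('accuracy also needs y_pred')
accuracy also needs y_pred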

CMetricAccuracy

class secml.ml.peval.metrics.c_metric_accuracy.CMetricAccuracy[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Accuracy.

The accuracy score is the fraction (in the [0, 1] range) of correctly predicted labels.

The metric uses:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

Examples

>>> from secml.ml.peval.metrics import CMetricAccuracy
>>> from secml.array import CArray
>>> peval = CMetricAccuracy()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.75
Attributes
class_type : 'accuracy'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricAUC

class secml.ml.peval.metrics.c_metric_auc.CMetricAUC[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Area Under (ROC) Curve.

AUC is computed using the trapezoidal rule.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Notes

This implementation is restricted to the binary classification task.

Examples

>>> from secml.ml.peval.metrics import CMetricAUC
>>> from secml.array import CArray
>>> peval = CMetricAUC()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
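A less degenerate sketch: scores that perfectly separate the two classes should yield the maximum AUC (values chosen for illustration):

>>> print(peval.performance_score(CArray([0, 0, 1, 1]), score=CArray([0.1, 0.2, 0.8, 0.9])))
1.0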
Attributes
class_type : 'auc'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricAUCWMW

class secml.ml.peval.metrics.c_metric_auc_wmw.CMetricAUCWMW[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Area Under (ROC) Curve with Wilcoxon-Mann-Whitney statistic.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Notes

This implementation is restricted to the binary classification task.

Examples

>>> from secml.ml.peval.metrics import CMetricAUCWMW
>>> from secml.array import CArray
>>> peval = CMetricAUCWMW()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
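The Wilcoxon-Mann-Whitney statistic can be reproduced by hand: it is the fraction of (positive, negative) score pairs in which the positive sample scores higher, with ties counting 0.5. A pure-Python sketch for the doctest above:

>>> pos = [0]          # scores of samples with y_true == 1
>>> neg = [0, 0, 0]    # scores of samples with y_true == 0
>>> wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
>>> print(wins / (len(pos) * len(neg)))
0.5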
Attributes
class_type : 'auc-wmw'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricConfusionMatrix

class secml.ml.peval.metrics.c_confusion_matrix.CMetricConfusionMatrix[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Attributes
best_value
class_type

Defines class type.

logger

Logger for current object.

verbose

Verbosity level of logger output.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.
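No summary is provided above for this metric, so here is a usage sketch: it assumes performance_score follows the same (y_true, y_pred) convention as the other label-based metrics and returns the confusion matrix as a CArray, where entry (i, j) counts the samples of true class i predicted as class j.

>>> from secml.ml.peval.metrics import CMetricConfusionMatrix
>>> from secml.array import CArray
>>> metric = CMetricConfusionMatrix()
>>> cm = metric.performance_score(y_true=CArray([0, 1, 2, 3]), y_pred=CArray([0, 1, 1, 3]))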

CMetricF1

class secml.ml.peval.metrics.c_metric_f1.CMetricF1[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: F1.

The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.

The relative contribution of precision and recall to the F1 score are equal. The formula for the F1 score is:

F1 = 2 * (precision * recall) / (precision + recall)
The metric uses:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

Examples

>>> from secml.ml.peval.metrics import CMetricF1
>>> from secml.array import CArray
>>> peval = CMetricF1()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.6666666666666666
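The doctest value can be recovered by averaging per-class F1 scores (a sketch; with one sample per class, macro and weighted averaging coincide):

>>> f1_per_class = [
...     1.0,    # class 0: precision 1, recall 1
...     2 / 3,  # class 1: precision 0.5 (one of two predictions correct), recall 1
...     0.0,    # class 2: never predicted
...     1.0,    # class 3: precision 1, recall 1
... ]
>>> print(sum(f1_per_class) / len(f1_per_class))
0.6666666666666666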
Attributes
class_type : 'f1'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricMAE

class secml.ml.peval.metrics.c_metric_mae.CMetricMAE[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Mean Absolute Error.

Regression loss of ground truth (correct labels) and the predicted regression score.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Examples

>>> from secml.ml.peval.metrics import CMetricMAE
>>> from secml.array import CArray
>>> peval = CMetricMAE()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.25
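A hand check of the doctest (sketch): MAE is the mean of the absolute differences between labels and scores:

>>> y_true, score = [0, 1, 0, 0], [0, 0, 0, 0]
>>> print(sum(abs(t - s) for t, s in zip(y_true, score)) / len(y_true))
0.25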
Attributes
class_type : 'mae'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 0.0

CMetricMSE

class secml.ml.peval.metrics.c_metric_mse.CMetricMSE[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Mean Squared Error.

Regression loss of ground truth (correct labels) and the predicted regression score.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Examples

>>> from secml.ml.peval.metrics import CMetricMSE
>>> from secml.array import CArray
>>> peval = CMetricMSE()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.25
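Analogously (sketch): MSE is the mean of the squared differences; here each error is 0 or 1, so the value matches the MAE example:

>>> y_true, score = [0, 1, 0, 0], [0, 0, 0, 0]
>>> print(sum((t - s) ** 2 for t, s in zip(y_true, score)) / len(y_true))
0.25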
Attributes
class_type : 'mse'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 0.0

CMetricPartialAUC

class secml.ml.peval.metrics.c_metric_pauc.CMetricPartialAUC(fpr=0.01, n_points=1000)[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Partial Area Under (ROC) Curve.

The ROC curve is only considered between 0 and the given fpr False Positive Rate value.

AUC is computed using the trapezoidal rule.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Notes

This implementation is restricted to the binary classification task.

Examples

>>> from secml.ml.peval.metrics import CMetricPartialAUC
>>> from secml.array import CArray
>>> peval = CMetricPartialAUC(fpr=0.5)
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.125
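A sketch of why this value arises: constant scores yield the diagonal ROC curve (tpr = fpr), so the partial area up to fpr = 0.5 is the triangle below the diagonal:

>>> fpr_max = 0.5
>>> print(0.5 * fpr_max ** 2)
0.125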
Attributes
class_type : 'pauc'

Defines class type.

fpr : float

Desired False Positive Rate in the interval [0, 1]. Default 0.01 (1%).

n_points : int

Number of points to be used when interpolating the partial ROC. More points give more accurate values but slower computation. Default 1000.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricPrecision

class secml.ml.peval.metrics.c_metric_precision.CMetricPrecision[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Precision.

The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.

The metric uses:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

Examples

>>> from secml.ml.peval.metrics import CMetricPrecision
>>> from secml.array import CArray
>>> peval = CMetricPrecision()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.625
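The doctest value can be recovered by averaging per-class precision (a sketch; with one sample per class, macro and weighted averaging coincide):

>>> precision_per_class = [
...     1.0,  # class 0: its single prediction is correct
...     0.5,  # class 1: predicted twice, correct once
...     0.0,  # class 2: never predicted
...     1.0,  # class 3: its single prediction is correct
... ]
>>> print(sum(precision_per_class) / len(precision_per_class))
0.625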
Attributes
class_type : 'precision'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CMetricRecall

class secml.ml.peval.metrics.c_metric_recall.CMetricRecall[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Recall (True Positive Rate).

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. This is equivalent to True Positive Rate.

The metric uses:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

Examples

>>> from secml.ml.peval.metrics import CMetricRecall
>>> from secml.array import CArray
>>> peval = CMetricRecall()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.75
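Analogously to the precision example (sketch), the doctest value is the average per-class recall; only the class-2 sample is missed:

>>> recall_per_class = [1.0, 1.0, 0.0, 1.0]
>>> print(sum(recall_per_class) / len(recall_per_class))
0.75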
Attributes
class_type : 'recall'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0

CRoc

class secml.ml.peval.metrics.c_roc.CBaseRoc[source]

Bases: object

Computes the receiver operating characteristic curve, or ROC curve.

This base class manages a single classifier output (a single repetition).

See also

CRoc

Class that fully supports ROC repetitions.

Attributes
fpr

False Positive Rates.

th

Thresholds.

tpr

True Positive Rates.

Methods

compute(self, y_true, score[, positive_label])

Compute TPR/FPR for classifier output.

reset(self)

Reset stored data.

compute(self, y_true, score, positive_label=None)[source]

Compute TPR/FPR for classifier output.

Parameters
y_true : CArray

Flat array with true binary labels in range {0, 1} for each pattern. If labels are not binary, positive_label should be explicitly given.

score : CArray

Flat array with target scores for each pattern; these can be either probability estimates of the positive class or confidence values.

positive_label : int, optional

Label to consider as positive (others are considered negative).

Returns
single_roc : CBaseRoc

Instance of the ROC curve (tpr, fpr, th).

property fpr

False Positive Rates.

Flat array with increasing False Positive Rates. Element i is the False Positive Rate of predictions with score >= thresholds[i].

reset(self)[source]

Reset stored data.

property th

Thresholds.

Flat array with decreasing thresholds on the decision function, used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.

property tpr

True Positive Rates.

Flat array with increasing True Positive Rates. Element i is the True Positive Rate of predictions with score >= thresholds[i].

class secml.ml.peval.metrics.c_roc.CRoc[source]

Bases: secml.ml.peval.metrics.c_roc.CBaseRoc

Computes the receiver operating characteristic curve, or ROC curve.

“A receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot which illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of True Positive Rates out of the Positives (TPR = True Positive Rate) vs. the fraction of False Positives out of the Negatives (FPR = False Positive Rate), at various threshold settings. TPR is also known as sensitivity, and FPR is one minus the specificity or true negative rate.”

This class manages different repetitions of the same classification output.

Attributes
fpr

False Positive Rates.

has_mean

True if average has been computed for all ROCs.

has_std_dev

True if standard deviation has been computed for all ROCs.

mean_fpr

Averaged False Positive Rates.

mean_tpr

Averaged True Positive Rates.

n_reps

Return the number of computed ROC curves.

std_dev_tpr

Standard deviation of True Positive Rates.

th

Thresholds.

tpr

True Positive Rates.

Methods

average(self[, n_points, return_std])

Compute the average of computed ROC curves.

compute(self, y_true, score[, positive_label])

Compute ROC curve using input True labels and Classification Scores.

reset(self)

Reset stored data.

average(self, n_points=1000, return_std=False)[source]

Compute the average of computed ROC curves.

The average ROC is reset each time compute is called.

Parameters
n_points : int, optional

Number of points to be used for interpolation. Default 1000.

return_std : bool, optional

If True, the standard deviation of the True Positive Rates will also be returned.

Returns
mean_fpr : CArray

Flat array with increasing False Positive Rates averaged over all available repetitions. Element i is the False Positive Rate of predictions with score >= thresholds[i].

mean_tpr : CArray

Flat array with increasing True Positive Rates averaged over all available repetitions. Element i is the True Positive Rate of predictions with score >= thresholds[i].

std_dev_tpr : CArray

Flat array with the standard deviation of True Positive Rates. Only returned if return_std is True.
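A short usage sketch, assuming roc is a CRoc instance on which compute has already been called one or more times:

>>> mean_fpr, mean_tpr = roc.average(n_points=500)
>>> mean_fpr, mean_tpr, std_dev_tpr = roc.average(n_points=500, return_std=True)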

compute(self, y_true, score, positive_label=None)[source]

Compute ROC curve using input True labels and Classification Scores.

For multiclass data, the label to be considered as positive should be specified.

If y_true and score are both lists (of the same length), one ROC curve for each pair is returned. If y_true is a single array, one ROC curve for each (y_true, score[i]) pair is returned.

Each time the function is called, the result is appended to the tpr, fpr, and th class attributes. The returned ROCs are only the ones associated with the LATEST input data.

Parameters
y_true : CArray, list

List of flat arrays with true binary labels in range {0, 1} for each pattern, or a single array. If a single array, one curve is returned for each (y_true, score[i]) pair. If labels are not binary, positive_label should be explicitly given.

score : CArray, list

List of flat arrays with target scores for each pattern; these can be either probability estimates of the positive class or confidence values. If y_true is a single array, one curve is returned for each (y_true, score[i]) pair.

positive_label : int, optional

Label to consider as positive (others are considered negative).

Returns
fpr : CArray or list

Flat array with increasing False Positive Rates, or a list with one array for each repetition. Element i is the False Positive Rate of predictions with score >= thresholds[i].

tpr : CArray or list

Flat array with increasing True Positive Rates, or a list with one array for each repetition. Element i is the True Positive Rate of predictions with score >= thresholds[i].

th : CArray or list

Flat array with decreasing thresholds on the decision function, used to compute fpr and tpr, or a list with one array for each repetition. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.
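A usage sketch with illustrative values (it assumes CRoc is exported from secml.ml.peval.metrics like the metric classes above):

>>> from secml.array import CArray
>>> from secml.ml.peval.metrics import CRoc
>>> roc = CRoc()
>>> y_true = CArray([0, 0, 1, 1])
>>> score = CArray([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, th = roc.compute(y_true, score)  # a single repetition
>>> # with lists, one curve per (y_true, score) pair is computed and
>>> # fpr, tpr, th are returned as lists of CArrays:
>>> fpr_l, tpr_l, th_l = roc.compute([y_true, y_true], [score, score])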

property fpr

False Positive Rates.

Flat array with increasing False Positive Rates, or a list with one array for each repetition. Element i is the False Positive Rate of predictions with score >= thresholds[i].

property has_mean

True if average has been computed for all ROCs.

property has_std_dev

True if standard deviation has been computed for all ROCs.

property mean_fpr

Averaged False Positive Rates.

Flat array with increasing False Positive Rates averaged over all available repetitions. Element i is the False Positive Rate of predictions with score >= thresholds[i].

property mean_tpr

Averaged True Positive Rates.

Flat array with increasing True Positive Rates averaged over all available repetitions. Element i is the True Positive Rate of predictions with score >= thresholds[i].

property n_reps

Return the number of computed ROC curves.

property std_dev_tpr

Standard deviation of True Positive Rates.

property th

Thresholds.

Flat array with decreasing thresholds on the decision function, used to compute fpr and tpr, or a list with one array for each repetition. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.

property tpr

True Positive Rates.

Flat array with increasing True Positive Rates, or a list with one array for each repetition. Element i is the True Positive Rate of predictions with score >= thresholds[i].

secml.ml.peval.metrics.c_roc.average(fpr, tpr, n_points=1000)[source]

Compute the average of the input tpr/fpr pairs.

Parameters
fpr, tpr : CArray or list of CArray

CArray or list of CArrays with False/True Positive Rates, as output by CRoc.

n_points : int, optional

Number of points to be used for interpolation. Default 1000.

Returns
mean_fpr : CArray

Flat array with increasing False Positive Rates averaged over all available repetitions. Element i is the False Positive Rate of predictions with score >= thresholds[i].

mean_tpr : CArray

Flat array with increasing True Positive Rates averaged over all available repetitions. Element i is the True Positive Rate of predictions with score >= thresholds[i].

std_dev_tpr : CArray

Flat array with standard deviation of True Positive Rates.

secml.ml.peval.metrics.c_roc.refine_roc(fpr, tpr, th)[source]

Function to ensure that a ROC curve has proper bounds.

The first and last points should be (0,0) and (1,1) respectively.

Parameters
fpr : CArray

False Positive Rates, as returned by CBaseRoc.compute().

tpr : CArray

True Positive Rates, as returned by CBaseRoc.compute().

th : CArray

Thresholds, as returned by CBaseRoc.compute().

CMetricTestError

class secml.ml.peval.metrics.c_metric_test_error.CMetricTestError[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: Test Error.

The Test Error score is the fraction (in the [0, 1] range) of wrongly predicted labels (the complement of accuracy).

The metric uses:
  • y_true (ground-truth labels)

  • y_pred (predicted labels)

Examples

>>> from secml.ml.peval.metrics import CMetricTestError
>>> from secml.array import CArray
>>> peval = CMetricTestError()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.25
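Here 3 of the 4 labels are correct, so the test error is 1 - 0.75 = 0.25, the complement of the accuracy doctest above.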
Attributes
class_type : 'test-error'

Defines class type.

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 0.0

CMetricTPRatFPR

class secml.ml.peval.metrics.c_metric_tpr_at_fpr.CMetricTPRatFPR(fpr=0.01)[source]

Bases: secml.ml.peval.metrics.c_metric.CMetric

Performance evaluation metric: True Positive Rate @ False Positive Rate.

The metric uses:
  • y_true (ground-truth labels)

  • score (estimated target values)

Notes

This implementation is restricted to the binary classification task.

Examples

>>> from secml.ml.peval.metrics import CMetricTPRatFPR
>>> from secml.array import CArray
>>> peval = CMetricTPRatFPR(fpr=0.5)
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
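A sketch of why this holds: constant scores yield the diagonal ROC curve (tpr = fpr after interpolation), so the True Positive Rate read off at the requested False Positive Rate of 0.5 is simply 0.5.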
Attributes
class_type : 'tpr-at-fpr'

Defines class type.

fpr : float

Desired False Positive Rate in the interval [0, 1]. Default 0.01 (1%).

Methods

copy(self)

Returns a shallow copy of current class.

create([class_item])

This method creates an instance of a class with given type.

deepcopy(self)

Returns a deep copy of current class.

get_class_from_type(class_type)

Return the class associated with input type.

get_params(self)

Returns the dictionary of class parameters.

get_state(self)

Returns the object state dictionary.

get_subclasses()

Get all the subclasses of the calling class.

list_class_types()

This method lists all types of available subclasses of the calling class.

load(path)

Loads object from file.

load_state(self, path)

Sets the object state from file.

performance_score(self[, y_true, y_pred, score])

Compute the performance metric.

save(self, path)

Save class object to file.

save_state(self, path)

Store the object state to file.

set(self, param_name, param_value[, copy])

Set a parameter of the class.

set_params(self, params_dict[, copy])

Set all parameters passed as a dictionary {key: value}.

set_state(self, state_dict[, copy])

Sets the object state using input dictionary.

timed([msg])

Timer decorator.

best_value = 1.0