SecML: A library for Secure and Explainable Machine Learning¶
SecML is an open-source Python library for the security evaluation of Machine Learning (ML) algorithms.
It comes with a set of powerful features:
Dense/Sparse data support. We provide full, transparent support for both dense (through the numpy library) and sparse (through the scipy library) data in a single data structure.
Wide range of supported ML algorithms. All supervised learning algorithms supported by scikit-learn are available, as well as Neural Networks (NNs) through the PyTorch deep learning platform (coming soon).
Built-in attack algorithms. Evasion and poisoning attacks based on a custom-developed fast solver.
Visualize your results. We provide a visualization and plotting framework based on the widely-known library matplotlib.
Explain your results. Explainable ML methods to interpret model decisions via influential features and prototypes. (coming soon)
Extensible. Easily create new wrappers for ML models or attack algorithms by extending our abstract interfaces.
Multi-processing. Do you want to save time further? We provide full compatibility with all the multi-processing features of scikit-learn and pytorch, along with built-in support of the joblib library.
SecML is currently in development.¶
If you encounter any bugs, please report them using the GitLab issue tracker.
Please see our ROADMAP for an overview
of the future development directions.
Installation Guide¶
We recommend installing SecML in a dedicated environment along with its dependencies.
Common frameworks to create and manage environments are virtualenv and conda. Both alternatives provide convenient user guides on how to properly set up the environments, so this guide will not cover the configuration procedure.
Operating System requirements¶
SecML can run under Python 2.7 and Python >= 3.5 with no configuration steps required, as all its dependencies are available as wheel packages for the main macOS versions and Linux distributions.
However, to support additional advanced features, more packages may be necessary depending on the Operating System used:
Linux (Ubuntu >= 16.04 or equivalent distribution):
python-tk (Python 2.7) or python3-tk (Python >= 3.5), for running Matplotlib Tk-based backends;
NVIDIA® CUDA® Toolkit, for running the tf-gpu extra component. See the TensorFlow Guide.
macOS (macOS >= 10.12 Sierra)
Installation process¶
Before starting the installation process, try to obtain the latest version of the pip manager by calling: pip install -U pip
The setup process is managed by the Python package setuptools. Be sure to obtain the latest version by calling: pip install -U setuptools
Once the environment is set up, SecML can be installed and run by multiple means:
Install from official PyPI repository:
pip install secml
Install from wheel/zip package (https://pypi.python.org/pypi/secml#files):
pip install <package-file>
In all cases, the setup process will try to install the correct dependencies.
In case something goes wrong during the install process, try to install
the dependencies first by calling: pip install -r requirements.txt
SecML should now be importable in Python via: import secml
To update a current installation using any of the previous methods, add the -U parameter after the pip install directive.
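For example, to update an installation obtained from the official PyPI repository:
pip install -U secml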
Extra Components¶
SecML comes with a set of extra components that can be installed if desired.
To specify the extra components to install, add the [extras] section while calling pip install. extras will be a comma-separated list of the components you want to install. Example:
pip install secml[extra1,extra2]
All the installation procedures via pip described above allow definition of the [extras] section.
Available extra components¶
None at the moment.
Coming soon¶
pytorch: Neural Networks (NNs) through the PyTorch deep learning platform. Will install: torch >= 0.4.*, torchvision >= 0.1.8
cleverhans: Wrapper of CleverHans, a Python library to benchmark the vulnerability of machine learning systems to adversarial examples. Will install: tensorflow >= 1.14.*, < 2, cleverhans
tf-gpu: Shortcut for installing the TensorFlow package with GPU support. Will install: tensorflow-gpu >= 1.14.*, < 2
Usage Guide¶
SecML is based on numpy, scipy, scikit-learn and pytorch, widely-used packages for scientific computing and machine learning with Python.
As a result, most of the interfaces of the library should be pretty familiar to frequent users of those packages.
The primary data class is secml.array.CArray, a multi-dimensional (currently limited to 2 dimensions) array structure which embeds both dense and sparse data, accepting as input numpy.ndarray and scipy.sparse.csr_matrix (more sparse formats will be supported soon). This structure is the standard input and output of all other classes in the library.
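For instance, a CArray can be created from either a dense or a sparse matrix; the following minimal sketch only assumes the constructor behavior described above:

import numpy as np
import scipy.sparse as sp
from secml.array import CArray

dense_array = CArray(np.array([[1., 0.], [0., 2.]]))  # wraps a numpy.ndarray (dense data)
sparse_array = CArray(sp.csr_matrix([[1., 0.], [0., 2.]]))  # wraps a scipy.sparse.csr_matrix (sparse data)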
The secml.ml package contains all the Machine Learning algorithms and support classes, including classifiers, loss and regularizer functions, kernels and performance evaluation functions.
The secml.adv package contains evasion and poisoning attacks based on a custom-developed solver, along with classes to easily perform security evaluation of Machine Learning algorithms.
The secml.figure package contains a visualization and plotting framework based on matplotlib.
(coming soon) The secml.explanation package contains a few different explainable Machine Learning methods that allow interpreting classifier decisions by analyzing the relevant components, such as features or training prototypes.
(coming soon) The secml.pytorch package contains support classes for the PyTorch deep learning platform. This package will be available only if the extra component pytorch has been specified during installation.
(coming soon) The secml.tf.clvhs package contains support classes for the CleverHans library for benchmarking machine learning systems’ vulnerability to adversarial examples. This package will be available only if the extra component cleverhans has been specified during installation.
Contributors¶
Your contribution is fundamental!
If you want to help the development of SecML, just set up the project locally by one of the following means:
(devs only) Install from local GitLab repository:
Clone the project repository in a directory of your choice
Run installation as:
pip install .
(devs only) Install from remote GitLab repository. In this case, given {repourl} in the format, e.g., gitlab.com/secml/secml:
pip install git+ssh://git@{repourl}.git[@branch]#egg=secml
A specific branch to install can be specified using the [@branch] parameter. If omitted, the default branch will be installed.
Contributions can be sent in the form of a merge request via our GitLab issue tracker.
SecML can also be added as a dependency for other libraries/projects.
Just add secml or the full repository path git+ssh://git@{repourl}.git[@branch]#egg=secml to the requirements.txt file.
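For example, a requirements.txt entry could look like either of the following lines (the {repourl} placeholder is left exactly as in the instructions above):

secml
git+ssh://git@{repourl}.git[@branch]#egg=secml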
Editable Installation (development mode)¶
For SecML developers, or users who want to use the latest dev version of the library, pip provides a convenient option called editable mode.
By calling pip install with the -e option, or python setup.py develop, only a reference to the project files is “installed” in the active environment. In this way, project files can be edited/updated and the new versions will be automatically executed by the Python interpreter.
Two common scenarios are listed below:
Editable install from a previously cloned local repository:
Navigate to the repository directory
Run python setup.py develop
Editable install from remote repository:
Run pip install -e git+ssh://git@{repourl}.git[@branch]#egg=secml
The project will be cloned automatically in <venv path>/src/secml
The new repository can then be updated using standard git commands
Editable installs are also available while using SecML as a dependency of other libraries/projects (see Installation Guide for more information).
Authors¶
This library is maintained by PRALab - Pattern Recognition and Applications Lab.
List of contributors:
Credits¶
numpy: Travis E. Oliphant. “A guide to NumPy”, USA: Trelgol Publishing, 2006.
scipy: Travis E. Oliphant. “Python for Scientific Computing”, Computing in Science & Engineering, 9, 10-20, 2007.
scikit-learn: Pedregosa et al., “Scikit-learn: Machine Learning in Python”, JMLR 12, pp. 2825-2830, 2011.
Copyright¶
SecML has been developed by PRALab - Pattern Recognition and Applications Lab and Pluribus One s.r.l. under Apache License 2.0. Copyright 2019.
Training of Classifiers and Visualization of Results¶
In this first tutorial we aim to show some basic functionality of SecML.
Creation and visualization of a simple 2D dataset¶
The first step is loading the dataset. We are going to use a simple toy dataset consisting of 3 clusters of points, normally distributed.
Each dataset of SecML is a CDataset object, consisting of dataset.X and dataset.Y, where the samples and the corresponding labels are stored, respectively.
[1]:
random_state = 999
n_features = 2 # Number of features
n_samples = 1250 # Number of samples
centers = [[-2, 0], [2, -2], [2, 2]] # Centers of the clusters
cluster_std = 0.8 # Standard deviation of the clusters
from secml.data.loader import CDLRandomBlobs
dataset = CDLRandomBlobs(n_features=n_features,
                         centers=centers,
                         cluster_std=cluster_std,
                         n_samples=n_samples,
                         random_state=random_state).load()
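As a quick look at the structure just created, we can print the shapes of the samples matrix and of the labels vector (a minimal sketch, using the same shape attribute employed later in these tutorials):

print("Samples matrix shape: ", dataset.X.shape)
print("Labels vector shape: ", dataset.Y.shape)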
The dataset will be split into training and test sets, and normalized in the standard interval [0, 1] with a min-max normalizer.
[2]:
n_tr = 1000 # Number of training set samples
n_ts = 250 # Number of test set samples
# Split in training and test
from secml.data.splitter import CTrainTestSplit
splitter = CTrainTestSplit(
train_size=n_tr, test_size=n_ts, random_state=random_state)
tr, ts = splitter.split(dataset)
# Normalize the data
from secml.ml.features import CNormalizerMinMax
nmz = CNormalizerMinMax()
tr.X = nmz.fit_transform(tr.X)
ts.X = nmz.transform(ts.X)
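As a quick sanity check of the normalization step, we can inspect the feature range of the training set (a minimal sketch, using the same min/max methods employed later in these tutorials):

print("Feature range: [{:}, {:}]".format(tr.X.min(), tr.X.max()))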
Let’s visualize the dataset in a 2D plane.
The three clusters are clearly separable and normalized as we required.
[4]:
from secml.figure import CFigure
fig = CFigure(width=5, height=5)
# Convenience function for plotting a dataset
fig.sp.plot_ds(tr)
fig.show()
[Figure: 2D scatter plot of the three-cluster training set]
Training of classifiers¶
Now we can train a non-linear one-vs-all Support Vector Machine (SVM), using a Radial Basis Function (RBF) kernel for embedding.
To this end, we use the CClassifierMulticlassOVA classifier, which extends any binary classifier, such as CClassifierSVM, to the multiclass case.
We will evaluate the best training parameters through a 3-Fold Cross-Validation procedure, using accuracy as the performance metric. Each classifier has an integrated routine, .estimate_parameters(), which estimates the best parameters on the given training set.
[5]:
# Creation of the multiclass classifier
from secml.ml.classifiers import CClassifierSVM
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA
from secml.ml.kernel import CKernelRBF
multiclass = CClassifierMulticlassOVA(CClassifierSVM, kernel=CKernelRBF())
# Parameters for the Cross-Validation procedure
xval_params = {'C': [0.1, 1, 10], 'kernel.gamma': [1, 10, 100]}
# Let's create a 3-Fold data splitter
from secml.data.splitter import CDataSplitterKFold
xval_splitter = CDataSplitterKFold(num_folds=3, random_state=random_state)
# Metric to use for training and performance evaluation
from secml.ml.peval.metrics import CMetricAccuracy
metric = CMetricAccuracy()
# Select and set the best training parameters for the classifier
print("Estimating the best training parameters...")
best_params = multiclass.estimate_parameters(
    dataset=tr,
    parameters=xval_params,
    splitter=xval_splitter,
    metric=metric,
    perf_evaluator='xval'
)
print("The best training parameters are: ", best_params)
# We can now fit the classifier
multiclass.fit(tr)
# Compute predictions on a test set
y_pred = multiclass.predict(ts.X)
# Evaluate the accuracy of the classifier
acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
print("Accuracy on test set: {:.2%}".format(acc))
Estimating the best training parameters...
The best training parameters are: {'kernel.gamma': 10, 'C': 1}
Accuracy on test set: 98.80%
Visualization of the decision regions of the classifiers¶
Once the classifier is trained, we can visualize the decision regions over the entire feature space.
[6]:
fig = CFigure(width=5, height=5)
# Convenience function for plotting the decision function of a classifier
fig.sp.plot_decision_regions(multiclass, n_grid_points=200)
fig.sp.plot_ds(ts)
fig.sp.grid(grid_on=False)
fig.sp.title("Classification regions")
fig.sp.text(0.01, 0.01, "Accuracy on test set: {:.2%}".format(acc),
bbox=dict(facecolor='white'))
fig.show()
[Figure: decision regions of the multiclass SVM with RBF kernel, with test samples overlaid]
Training other classifiers¶
Now we can repeat the above process for other classifiers available in SecML. We are going to use a namedtuple for easy storage of objects and parameters.
Please note that parameters estimation may take a while (up to a few minutes) depending on the machine the script is run on.
[7]:
from collections import namedtuple
CLF = namedtuple('CLF', 'clf_name clf xval_parameters')
# Binary classifiers
from secml.ml.classifiers import CClassifierSVM, CClassifierSGD
# Natively-multiclass classifiers
from secml.ml.classifiers import CClassifierKNN, CClassifierDecisionTree, CClassifierRandomForest
clf_list = [
    CLF(
        clf_name='SVM Linear',
        clf=CClassifierMulticlassOVA(CClassifierSVM, kernel='linear'),
        xval_parameters={'C': [0.1, 1, 10]}),
    CLF(clf_name='SVM RBF',
        clf=CClassifierMulticlassOVA(CClassifierSVM, kernel='rbf'),
        xval_parameters={'C': [0.1, 1, 10], 'kernel.gamma': [1, 10, 100]}),
    CLF(clf_name='Logistic (SGD)',
        clf=CClassifierMulticlassOVA(
            CClassifierSGD, regularizer='l2', loss='log',
            kernel='linear', random_state=random_state),
        xval_parameters={'alpha': [1e-7, 1e-6, 1e-5]}),
    CLF(clf_name='kNN',
        clf=CClassifierKNN(),
        xval_parameters={'n_neighbors': [5, 10, 20]}),
    CLF(clf_name='Decision Tree',
        clf=CClassifierDecisionTree(),
        xval_parameters={'max_depth': [1, 3, 5]}),
    CLF(clf_name='Random Forest',
        clf=CClassifierRandomForest(random_state=random_state),
        xval_parameters={'n_estimators': [10, 20, 30]}),
]
from secml.data.splitter import CDataSplitterKFold
xval_splitter = CDataSplitterKFold(num_folds=3, random_state=random_state)
fig = CFigure(width=5 * len(clf_list) / 2, height=5 * 2)
for i, test_case in enumerate(clf_list):
    clf = test_case.clf
    xval_params = test_case.xval_parameters
    print("\nEstimating the best training parameters of {:} ..."
          "".format(test_case.clf_name))
    best_params = clf.estimate_parameters(
        dataset=tr, parameters=xval_params, splitter=xval_splitter,
        metric='accuracy', perf_evaluator='xval')
    print("The best parameters for '{:}' are: "
          "".format(test_case.clf_name), best_params)
    print("Training of {:} ...".format(test_case.clf_name))
    clf.fit(tr)
    # Predictions on test set and performance evaluation
    y_pred = clf.predict(ts.X)
    acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
    print("Classifier: {:}\tAccuracy: {:.2%}".format(test_case.clf_name, acc))
    # Plot the decision function
    from math import ceil
    # Use `CFigure.subplot` to divide the figure in multiple subplots
    fig.subplot(2, int(ceil(len(clf_list) / 2)), i + 1)
    fig.sp.plot_decision_regions(clf, n_grid_points=200)
    fig.sp.plot_ds(ts)
    fig.sp.grid(grid_on=False)
    fig.sp.title(test_case.clf_name)
    fig.sp.text(0.01, 0.01, "Accuracy on test set: {:.2%}".format(acc),
                bbox=dict(facecolor='white'))
fig.show()
Estimating the best training parameters of SVM Linear ...
The best parameters for 'SVM Linear' are: {'C': 1}
Training of SVM Linear ...
Classifier: SVM Linear Accuracy: 99.20%
Estimating the best training parameters of SVM RBF ...
The best parameters for 'SVM RBF' are: {'kernel.gamma': 10, 'C': 1}
Training of SVM RBF ...
Classifier: SVM RBF Accuracy: 98.80%
Estimating the best training parameters of Logistic (SGD) ...
The best parameters for 'Logistic (SGD)' are: {'alpha': 1e-06}
Training of Logistic (SGD) ...
Classifier: Logistic (SGD) Accuracy: 98.80%
Estimating the best training parameters of kNN ...
The best parameters for 'kNN' are: {'n_neighbors': 10}
Training of kNN ...
Classifier: kNN Accuracy: 98.80%
Estimating the best training parameters of Decision Tree ...
The best parameters for 'Decision Tree' are: {'max_depth': 3}
Training of Decision Tree ...
Classifier: Decision Tree Accuracy: 99.20%
Estimating the best training parameters of Random Forest ...
The best parameters for 'Random Forest' are: {'n_estimators': 20}
Training of Random Forest ...
Classifier: Random Forest Accuracy: 98.80%
[Figure: decision regions of each trained classifier on the test set, with per-classifier accuracy]
Evasion Attacks against Machine Learning models¶
In this tutorial we will experiment with adversarial evasion attacks against a Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel.
Evasion attacks are performed at test time by perturbing a point with a carefully crafted noise so that the classifier predicts an unexpected label for it.
We will first create and train the classifier, evaluating its performance in the standard scenario, i.e. not under attack.
The following part replicates the procedure from the first tutorial.
[1]:
random_state = 999
n_features = 2 # Number of features
n_samples = 1100 # Number of samples
centers = [[-2, 0], [2, -2], [2, 2]] # Centers of the clusters
cluster_std = 0.8 # Standard deviation of the clusters
from secml.data.loader import CDLRandomBlobs
dataset = CDLRandomBlobs(n_features=n_features,
                         centers=centers,
                         cluster_std=cluster_std,
                         n_samples=n_samples,
                         random_state=random_state).load()
n_tr = 1000 # Number of training set samples
n_ts = 100 # Number of test set samples
# Split in training and test
from secml.data.splitter import CTrainTestSplit
splitter = CTrainTestSplit(
train_size=n_tr, test_size=n_ts, random_state=random_state)
tr, ts = splitter.split(dataset)
# Normalize the data
from secml.ml.features import CNormalizerMinMax
nmz = CNormalizerMinMax()
tr.X = nmz.fit_transform(tr.X)
ts.X = nmz.transform(ts.X)
# Metric to use for training and performance evaluation
from secml.ml.peval.metrics import CMetricAccuracy
metric = CMetricAccuracy()
# Creation of the multiclass classifier
from secml.ml.classifiers import CClassifierSVM
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA
from secml.ml.kernel import CKernelRBF
clf = CClassifierMulticlassOVA(CClassifierSVM, kernel=CKernelRBF())
# Parameters for the Cross-Validation procedure
xval_params = {'C': [1e-2, 0.1, 1], 'kernel.gamma': [10, 100, 1e3]}
# Let's create a 3-Fold data splitter
from secml.data.splitter import CDataSplitterKFold
xval_splitter = CDataSplitterKFold(num_folds=3, random_state=random_state)
# Select and set the best training parameters for the classifier
print("Estimating the best training parameters...")
best_params = clf.estimate_parameters(
    dataset=tr,
    parameters=xval_params,
    splitter=xval_splitter,
    metric='accuracy',
    perf_evaluator='xval'
)
print("The best training parameters are: ", best_params)
# We can now fit the classifier
clf.fit(tr)
# Compute predictions on a test set
y_pred = clf.predict(ts.X)
# Evaluate the accuracy of the classifier
acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
print("Accuracy on test set: {:.2%}".format(acc))
Estimating the best training parameters...
The best training parameters are: {'C': 0.1, 'kernel.gamma': 100}
Accuracy on test set: 99.00%
Generation of an Adversarial Example¶
We are going to generate an adversarial example against the SVM classifier using the gradient-based maximum-confidence algorithm for generating evasion attacks proposed in:
[biggio13-ecml] Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., Roli, F., 2013. Evasion Attacks against Machine Learning at Test Time. In ECML-PKDD 2013.
[melis17-vipar] Melis, M., Demontis, A., Biggio, B., Brown, G., Fumera, G. and Roli, F., 2017. Is deep learning safe for robot vision? adversarial examples against the icub humanoid. In Proceedings of IEEE ICCV 2017.
[demontis19-usenix] Demontis, A., Melis, M., Pintor, M., Jagielski, M., Biggio, B., Oprea, A., Nita-Rotaru, C. and Roli, F., 2019. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. In 28th Usenix Security Symposium, Santa Clara, California, USA.
which is implemented in SecML by the CAttackEvasionPGDLS class (e-pgd-ls).
Let’s define the attack parameters. Firstly, we choose to generate an l2 perturbation within a maximum ball of radius eps = 0.4 from the initial point. The maximum perturbation value is denoted as dmax in our implementation. Secondly, we also add a lower/upper bound, as our feature space is limited to [0, 1]. Lastly, as we are not interested in generating an adversarial example for a specific class, we perform an error-generic attack by setting y_target = None.
The attack internally uses a solver based on Projected Gradient Descent with Bisect Line Search, implemented by the COptimizerPGDLS class. The parameters of the solver can be specified when instantiating the attack and must be tuned depending on the specific optimization problem.
[2]:
x0, y0 = ts[5, :].X, ts[5, :].Y # Initial sample
noise_type = 'l2' # Type of perturbation 'l1' or 'l2'
dmax = 0.4 # Maximum perturbation
lb, ub = 0, 1 # Bounds of the attack space. Can be set to `None` for unbounded
y_target = None # None if `error-generic` or a class label for `error-specific`
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 0.3,
    'eta_min': 0.1,
    'eta_max': None,
    'max_iter': 100,
    'eps': 1e-4
}
from secml.adv.attacks.evasion import CAttackEvasionPGDLS
pgd_ls_attack = CAttackEvasionPGDLS(
    classifier=clf,
    surrogate_classifier=clf,
    surrogate_data=tr,
    distance=noise_type,
    dmax=dmax,
    lb=lb, ub=ub,
    solver_params=solver_params,
    y_target=y_target)
# Run the evasion attack on x0
y_pred_pgdls, _, adv_ds_pgdls, _ = pgd_ls_attack.run(x0, y0)
print("Original x0 label: ", y0.item())
print("Adversarial example label (PGD-LS): ", y_pred_pgdls.item())
print("Number of classifier gradient evaluations: {:}"
"".format(pgd_ls_attack.grad_eval))
Original x0 label: 1
Adversarial example label (PGD-LS): 2
Number of classifier gradient evaluations: 6
Let’s now test another attack algorithm, implemented by CAttackEvasionPGD, which leverages the standard Projected Gradient Descent solver (e-pgd). We keep the same attack parameters as before.
[3]:
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 0.3,
    'max_iter': 100,
    'eps': 1e-4
}
from secml.adv.attacks.evasion import CAttackEvasionPGD
pgd_attack = CAttackEvasionPGD(
    classifier=clf,
    surrogate_classifier=clf,
    surrogate_data=tr,
    distance=noise_type,
    dmax=dmax,
    lb=lb, ub=ub,
    solver_params=solver_params,
    y_target=y_target)
# Run the evasion attack on x0
y_pred_pgd, _, adv_ds_pgd, _ = pgd_attack.run(x0, y0)
print("Original x0 label: ", y0.item())
print("Adversarial example label (PGD): ", y_pred_pgd.item())
print("Number of classifier gradient evaluations: {:}"
"".format(pgd_attack.grad_eval))
Original x0 label: 1
Adversarial example label (PGD): 2
Number of classifier gradient evaluations: 39
We can see that the classifier has been successfully evaded in both cases. However, the pgd-ls solver with bisect line search queries the classifier gradient far fewer times, making the process of generating the adversarial examples much faster.
Let’s now visualize both attacks on a 2D plane. In the background, the value of the attack objective function is shown.
[5]:
from secml.figure import CFigure
fig = CFigure(width=16, height=6, markersize=12)
# Let's replicate the `l2` constraint used by the attack for visualization
from secml.optim.constraints import CConstraintL2
constraint = CConstraintL2(center=x0, radius=dmax)
for i, (attack, adv_ds) in enumerate(
        [(pgd_attack, adv_ds_pgd), (pgd_ls_attack, adv_ds_pgdls)]):
    fig.subplot(1, 2, i + 1)
    # Convenience function for plotting the attack objective function
    fig.sp.plot_fun(attack.objective_function, plot_levels=False,
                    multipoint=True, n_grid_points=200)
    # Let's also plot the decision boundaries of the classifier
    fig.sp.plot_decision_regions(clf, plot_background=False, n_grid_points=200)
    # Construct an array with the original point and the adversarial example
    adv_path = x0.append(adv_ds.X, axis=0)
    # Convenience function for plotting the optimization sequence
    fig.sp.plot_path(attack.x_seq)
    # Convenience function for plotting a constraint
    fig.sp.plot_constraint(constraint)
    fig.sp.title(attack.class_type)
    fig.sp.grid(grid_on=False)
fig.title(r"Error-generic evasion attack ($\varepsilon={:}$)".format(dmax))
fig.show()
[Figure: objective function and optimization path of the PGD (left) and PGD-LS (right) evasion attacks, with the l2 constraint]
We can see that the initial point x0 (red hexagon) has been perturbed in the feature space so that it is actually classified by the SVM as a point from another class. The final adversarial example is the green star. We also show the l2 constraint as a black circle, which has limited the maximum perturbation applicable to x0.
Security evaluation of a classifier¶
We could be interested in evaluating the robustness of a classifier against increasing values of the maximum perturbation eps.
SecML provides a way to easily produce a Security Evaluation Curve, by means of the CSecEval class.
The CSecEval instance will take a CAttack as input and will test the classifier using the desired perturbation levels.
Please note that the security evaluation process may take a while (up to a few minutes) depending on the machine the script is run on.
[6]:
# Perturbation levels to test
from secml.array import CArray
e_vals = CArray.arange(start=0, step=0.1, stop=1.1)
from secml.adv.seceval import CSecEval
sec_eval = CSecEval(
attack=pgd_ls_attack, param_name='dmax', param_values=e_vals)
# Run the security evaluation using the test set
print("Running security evaluation...")
sec_eval.run_sec_eval(ts)
from secml.figure import CFigure
fig = CFigure(height=5, width=5)
# Convenience function for plotting the Security Evaluation Curve
fig.sp.plot_sec_eval(
sec_eval.sec_eval_data, marker='o', label='SVM RBF', show_average=True)
Running security evaluation...
[Figure: security evaluation curve of the SVM RBF classifier for increasing perturbation levels]
We can see how the SVM classifier is highly vulnerable to adversarial attacks, and we are able to evade it with a perturbation as small as eps = 0.1.
For further reference about the security evaluation of machine-learning models under attack see:
[biggio13-tkde] Biggio, B., Fumera, G. and Roli, F., 2013. Security evaluation of pattern classifiers under attack. In IEEE transactions on knowledge and data engineering.
[biggio18-pr] Biggio, B. and Roli, F., 2018. Wild patterns: Ten years after the rise of adversarial machine learning. In Pattern Recognition.
Transferability of Evasion Attacks¶
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
In this tutorial we are going to test if an evasion attack generated against a Support Vector Machine (SVM), the surrogate classifier, will transfer to other classifiers, the targets, or not.
For more details about the transferability property of adversarial attacks please refer to:
[demontis19-usenix] Demontis, A., Melis, M., Pintor, M., Jagielski, M., Biggio, B., Oprea, A., Nita-Rotaru, C. and Roli, F., 2019. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. In 28th Usenix Security Symposium, Santa Clara, California, USA.
We will first create and train the surrogate and different target classifiers, evaluating their performance in the standard scenario, i.e. not under attack. The surrogate and the target classifiers will be trained on different training sets.
The following part partially replicates the procedure from the first tutorial.
[1]:
random_state = 999
n_features = 2 # Number of features
n_samples = 2250 # Number of samples
centers = [[-2, 0], [2, -2], [2, 2]] # Centers of the clusters
cluster_std = 0.8 # Standard deviation of the clusters
from secml.data.loader import CDLRandomBlobs
dataset = CDLRandomBlobs(n_features=n_features,
                         centers=centers,
                         cluster_std=cluster_std,
                         n_samples=n_samples,
                         random_state=random_state).load()
n_tr = 1000 # Number of training set samples
n_ts = 250 # Number of test set samples
# Split in training and test.
from secml.data.splitter import CTrainTestSplit
splitter = CTrainTestSplit(
train_size=2 * n_tr, test_size=n_ts, random_state=random_state)
tr, ts = splitter.split(dataset)
# Normalize the data
from secml.ml.features import CNormalizerMinMax
nmz = CNormalizerMinMax()
tr.X = nmz.fit_transform(tr.X)
ts.X = nmz.transform(ts.X)
# Generate 2 training datasets for surrogate and target classifiers
tr1 = tr[:n_tr, :] # Source classifier training set
tr2 = tr[n_tr:, :]  # Target classifiers training set
from collections import namedtuple
CLF = namedtuple('CLF', 'clf_name clf xval_parameters')
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA
# Binary classifiers
from secml.ml.classifiers import CClassifierSVM, CClassifierSGD
# Natively-multiclass classifiers
from secml.ml.classifiers import CClassifierKNN, CClassifierDecisionTree, CClassifierRandomForest
# Let's create a 3-Fold data splitter
from secml.data.splitter import CDataSplitterKFold
xval_splitter = CDataSplitterKFold(num_folds=3, random_state=random_state)
# Metric to use for training and performance evaluation
from secml.ml.peval.metrics import CMetricAccuracy
metric = CMetricAccuracy()
surr_clf = CLF(
    clf_name='SVM Linear',
    clf=CClassifierMulticlassOVA(CClassifierSVM, kernel='linear'),
    xval_parameters={'C': [1e-2, 0.1, 1]})
print("Estimating the best training parameters of the surrogate classifier...")
best_params = surr_clf.clf.estimate_parameters(
    dataset=tr1,
    parameters=surr_clf.xval_parameters,
    splitter=xval_splitter,
    metric=metric,
    perf_evaluator='xval'
)
print("The best training parameters "
"of the surrogate classifier are: ", best_params)
surr_clf.clf.fit(tr1)
y_pred = surr_clf.clf.predict(ts.X)
acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
print("Accuracy of the surrogate classifier on test set: {:.2%}".format(acc))
print("\nTraining the target classifiers...")
target_clf_list = [
    CLF(
        clf_name='SVM Linear',
        clf=CClassifierMulticlassOVA(CClassifierSVM, kernel='linear'),
        xval_parameters={'C': [1e-2, 0.1, 1]}),
    CLF(clf_name='SVM RBF',
        clf=CClassifierMulticlassOVA(CClassifierSVM, kernel='rbf'),
        xval_parameters={'C': [1e-2, 0.1, 1], 'kernel.gamma': [1, 10, 100]}),
    CLF(clf_name='Logistic (SGD)',
        clf=CClassifierMulticlassOVA(
            CClassifierSGD, regularizer='l2', loss='log',
            kernel='linear', random_state=random_state),
        xval_parameters={'alpha': [1e-6, 1e-5, 1e-4]}),
    CLF(clf_name='kNN',
        clf=CClassifierKNN(),
        xval_parameters={'n_neighbors': [30, 40, 50]}),
    CLF(clf_name='Decision Tree',
        clf=CClassifierDecisionTree(),
        xval_parameters={'max_depth': [1, 3, 5]}),
    CLF(clf_name='Random Forest',
        clf=CClassifierRandomForest(random_state=random_state),
        xval_parameters={'n_estimators': [20, 30, 40]}),
]
for i, test_case in enumerate(target_clf_list):
    clf = test_case.clf
    xval_params = test_case.xval_parameters
    print("\nEstimating the best training parameters of {:} ..."
          "".format(test_case.clf_name))
    best_params = clf.estimate_parameters(
        dataset=tr2, parameters=xval_params, splitter=xval_splitter,
        metric='accuracy', perf_evaluator='xval')
    print("The best parameters for '{:}' are: "
          "".format(test_case.clf_name), best_params)
    print("Training of {:} ...".format(test_case.clf_name))
    clf.fit(tr2)
    # Predictions on test set and performance evaluation
    y_pred = clf.predict(ts.X)
    acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
    print("Classifier: {:}\tAccuracy: {:.2%}".format(test_case.clf_name, acc))
Estimating the best training parameters of the surrogate classifier...
The best training parameters of the surrogate classifier are: {'C': 0.1}
Accuracy of the surrogate classifier on test set: 99.60%
Training the target classifiers...
Estimating the best training parameters of SVM Linear ...
The best parameters for 'SVM Linear' are: {'C': 0.1}
Training of SVM Linear ...
Classifier: SVM Linear Accuracy: 99.60%
Estimating the best training parameters of SVM RBF ...
The best parameters for 'SVM RBF' are: {'C': 0.1, 'kernel.gamma': 10}
Training of SVM RBF ...
Classifier: SVM RBF Accuracy: 99.20%
Estimating the best training parameters of Logistic (SGD) ...
The best parameters for 'Logistic (SGD)' are: {'alpha': 1e-05}
Training of Logistic (SGD) ...
Classifier: Logistic (SGD) Accuracy: 99.60%
Estimating the best training parameters of kNN ...
The best parameters for 'kNN' are: {'n_neighbors': 40}
Training of kNN ...
Classifier: kNN Accuracy: 99.20%
Estimating the best training parameters of Decision Tree ...
The best parameters for 'Decision Tree' are: {'max_depth': 3}
Training of Decision Tree ...
Classifier: Decision Tree Accuracy: 99.20%
Estimating the best training parameters of Random Forest ...
The best parameters for 'Random Forest' are: {'n_estimators': 30}
Training of Random Forest ...
Classifier: Random Forest Accuracy: 98.40%
Generation of the Adversarial Examples¶
As done in the second tutorial, we now craft the adversarial examples using the gradient-based maximum-confidence algorithm for generating evasion attacks, implemented by the CAttackEvasionPGDLS class (e-pgd-ls).
This time, we are going to generate an error-specific attack by setting y_target to one of the classes of the dataset. In this way, we force the solver to perturb the points so that the classifier will assign them the y_target label.
Please note that the attack on multiple samples may take a while (up to a few minutes) depending on the machine the script is run on.
[2]:
noise_type = 'l2' # Type of perturbation 'l1' or 'l2'
dmax = 0.4 # Maximum perturbation
lb, ub = 0, 1 # Bounds of the attack space. Can be set to `None` for unbounded
y_target = 2 # `error-specific` attack. None for `error-generic`
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 1e-1,
    'eta_min': 0.1,
    'eta_max': None,
    'max_iter': 100,
    'eps': 1e-4
}
from secml.adv.attacks.evasion import CAttackEvasionPGDLS
pgd_ls_attack = CAttackEvasionPGDLS(
    classifier=surr_clf.clf,
    surrogate_classifier=surr_clf.clf,
    surrogate_data=tr1,
    distance=noise_type,
    dmax=dmax,
    lb=lb, ub=ub,
    solver_params=solver_params,
    y_target=y_target)
# Run the evasion attack on x0
print("Attack started...")
y_pred, scores, adv_ds, f_obj = pgd_ls_attack.run(ts.X, ts.Y)
print("Attack complete!")
Attack started...
Attack complete!
Analysis of Transferability¶
Let’s now test if the previously generated examples transfer to other models.
Initially, we test the performance of each target classifier on the adversarial examples. Then, we plot a few of the samples on a 2D plane.
[4]:
# Metric to use for testing transferability
from secml.ml.peval.metrics import CMetricTestError
metric = CMetricTestError()
trans_error = []
transfer_rate = 0.0
for target_clf in target_clf_list:
    print("\nTesting transferability of {:}".format(target_clf.clf_name))
    origin_error = metric.performance_score(
        y_true=ts.Y, y_pred=target_clf.clf.predict(ts.X))
    print("Test error (no attack): {:.2%}".format(origin_error))
    trans_error_clf = metric.performance_score(
        y_true=ts.Y, y_pred=target_clf.clf.predict(adv_ds.X))
    trans_error.append(trans_error_clf)
    transfer_rate += trans_error_clf
# Computing the transfer rate
transfer_rate /= len(target_clf_list)
from secml.array import CArray
trans_acc = CArray(trans_error) * 100 # Show results in percentage
from secml.figure import CFigure
fig = CFigure(height=1)
a = fig.sp.imshow(trans_acc.reshape((1, 6)),
                  cmap='Oranges', interpolation='nearest',
                  alpha=.65, vmin=60, vmax=70)
fig.sp.xticks(CArray.arange((len(target_clf_list))))
fig.sp.xticklabels([c.clf_name for c in target_clf_list],
                   rotation=45, ha="right", rotation_mode="anchor")
fig.sp.yticks([0])
fig.sp.yticklabels([surr_clf.clf_name])
for i in range(len(target_clf_list)):
    fig.sp.text(i, 0, trans_acc[i].round(2).item(), va='center', ha='center')
fig.sp.title("Test error of target classifiers under attack (%)")
fig.show()
print("\nAverage transfer rate: {:.2%}".format(transfer_rate))
Testing transferability of SVM Linear
Test error (no attack): 0.40%
Testing transferability of SVM RBF
Test error (no attack): 0.80%
Testing transferability of Logistic (SGD)
Test error (no attack): 0.40%
Testing transferability of kNN
Test error (no attack): 0.80%
Testing transferability of Decision Tree
Test error (no attack): 0.80%
Testing transferability of Random Forest
Test error (no attack): 1.60%
[Figure: test error of the target classifiers under attack (%), per target classifier]
Average transfer rate: 66.13%
We can observe how the accuracy of the target classifiers on the adversarial points generated against the surrogate classifier is extremely low, which highlights how machine-learning models are vulnerable to transfer attacks.
[5]:
from secml.figure import CFigure
from secml.array import CArray
from math import ceil
fig = CFigure(width=4.5 * len(target_clf_list) / 2,
height=4 * 2, markersize=10)
for clf_idx in range(len(target_clf_list)):
    clf = target_clf_list[clf_idx].clf
    fig.subplot(2, int(ceil(len(target_clf_list) / 2)), clf_idx + 1)
    fig.sp.title(target_clf_list[clf_idx].clf_name)
    fig.sp.plot_decision_regions(clf, n_grid_points=200)
    fig.sp.grid(grid_on=False)
    s_idx = ts.Y.find(ts.Y != y_target)
    for pt in s_idx[:10]:  # Plot the translation of multiple adversarial samples
        pt_segment = CArray.append(ts.X[pt, :], adv_ds.X[pt, :], axis=0)
        fig.sp.plot_path(pt_segment)
    acc = metric.performance_score(
        y_true=ts[s_idx[:10], :].Y, y_pred=clf.predict(adv_ds[s_idx[:10], :].X))
    fig.sp.text(0.01, 0.01, "Transfer attack success: {:.1%}".format(acc),
                bbox=dict(facecolor='white'))
fig.show()
[Figure: decision regions of the target classifiers with the paths of the transferred adversarial examples]
These 2D plots clearly visualize the vulnerability of the target classifiers. The adversarial examples (green stars) which fall inside the y_target = 2 green decision region are successfully transferred.
Poisoning Attacks against Machine Learning models¶
In this tutorial we will experiment with adversarial poisoning attacks against a Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel.
Poisoning attacks are performed at train time by injecting carefully crafted samples that alter the classifier decision function so that its accuracy decreases.
As in the previous tutorials, we will first create and train the classifier, evaluating its performance in the standard scenario, i.e. not under attack. The poisoning attack will also need a validation set to verify the classifier performance during the attack, so we further split the training set in two.
[1]:
random_state = 999
n_features = 2 # Number of features
n_samples = 300 # Number of samples
centers = [[-1, -1], [+1, +1]] # Centers of the clusters
cluster_std = 0.9 # Standard deviation of the clusters
from secml.data.loader import CDLRandomBlobs
dataset = CDLRandomBlobs(n_features=n_features,
                         centers=centers,
                         cluster_std=cluster_std,
                         n_samples=n_samples,
                         random_state=random_state).load()
n_tr = 100 # Number of training set samples
n_val = 100 # Number of validation set samples
n_ts = 100 # Number of test set samples
# Split in training, validation and test
from secml.data.splitter import CTrainTestSplit
splitter = CTrainTestSplit(
train_size=n_tr + n_val, test_size=n_ts, random_state=random_state)
tr_val, ts = splitter.split(dataset)
splitter = CTrainTestSplit(
train_size=n_tr, test_size=n_val, random_state=random_state)
tr, val = splitter.split(dataset)
# Normalize the data
from secml.ml.features import CNormalizerMinMax
nmz = CNormalizerMinMax()
tr.X = nmz.fit_transform(tr.X)
val.X = nmz.transform(val.X)
ts.X = nmz.transform(ts.X)
# Metric to use for training and performance evaluation
from secml.ml.peval.metrics import CMetricAccuracy
metric = CMetricAccuracy()
# Creation of the multiclass classifier
from secml.ml.classifiers import CClassifierSVM
from secml.ml.kernel import CKernelRBF
clf = CClassifierSVM(kernel=CKernelRBF())
# Parameters for the Cross-Validation procedure
xval_params = {'C': [1, 10], 'kernel.gamma': [1, 10, 100]}
# Let's create a 3-Fold data splitter
from secml.data.splitter import CDataSplitterKFold
xval_splitter = CDataSplitterKFold(num_folds=3, random_state=random_state)
# Select and set the best training parameters for the classifier
print("Estimating the best training parameters...")
best_params = clf.estimate_parameters(
    dataset=tr,
    parameters=xval_params,
    splitter=xval_splitter,
    metric='accuracy',
    perf_evaluator='xval'
)
print("The best training parameters are: ", best_params)
# We can now fit the classifier
clf.fit(tr)
print("Training of classifier complete!")
# Compute predictions on a test set
y_pred = clf.predict(ts.X)
Estimating the best training parameters...
The best training parameters are: {'C': 1, 'kernel.gamma': 10}
Training of classifier complete!
Generation of Poisoning Samples¶
We are going to generate an adversarial example against the SVM classifier using the gradient-based algorithm for generating poisoning attacks proposed in:
[biggio12-icml] Biggio, B., Nelson, B. and Laskov, P., 2012. Poisoning attacks against support vector machines. In ICML 2012.
[biggio15-icml] Xiao, H., Biggio, B., Brown, G., Fumera, G., Eckert, C. and Roli, F., 2015. Is feature selection secure against training data poisoning?. In ICML 2015.
[demontis19-usenix] Demontis, A., Melis, M., Pintor, M., Jagielski, M., Biggio, B., Oprea, A., Nita-Rotaru, C. and Roli, F., 2019. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. In 28th Usenix Security Symposium, Santa Clara, California, USA.
which is implemented in SecML by different subclasses of CAttackPoisoning. For the purpose of attacking an SVM classifier we use the CAttackPoisoningSVM class.
As done for the evasion attacks, let’s specify the parameters first. We set the bounds of the attack space to the known feature space given by the validation dataset. Secondly, we set the desired number of adversarial points to generate, 20 in this example. Lastly, we choose the solver parameters for this specific optimization problem.
[2]:
lb, ub = val.X.min(), val.X.max() # Bounds of the attack space. Can be set to `None` for unbounded
n_poisoning_points = 20 # Number of poisoning points to generate
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 0.05,
    'eta_min': 0.05,
    'eta_max': None,
    'max_iter': 100,
    'eps': 1e-6
}
from secml.adv.attacks import CAttackPoisoningSVM
pois_attack = CAttackPoisoningSVM(classifier=clf,
                                  training_data=tr,
                                  surrogate_classifier=clf,
                                  surrogate_data=tr,
                                  val=val,
                                  lb=lb, ub=ub,
                                  solver_params=solver_params,
                                  random_seed=random_state)
pois_attack.n_points = n_poisoning_points
# Run the poisoning attack
print("Attack started...")
pois_y_pred, pois_scores, pois_ds, f_opt = pois_attack.run(ts.X, ts.Y)
print("Attack complete!")
# Evaluate the accuracy of the original classifier
acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
# Evaluate the accuracy after the poisoning attack
pois_acc = metric.performance_score(y_true=ts.Y, y_pred=pois_y_pred)
print("Original accuracy on test set: {:.2%}".format(acc))
print("Accuracy after attack on test set: {:.2%}".format(pois_acc))
Attack started...
Attack complete!
Original accuracy on test set: 94.00%
Accuracy after attack on test set: 90.00%
We can see that the classifier has been successfully attacked. To increase the attack power, more poisoning points can be crafted, at the expense of a much slower optimization process.
Let’s now visualize the attack on a 2D plane. We need to train a copy of the original classifier on the training set joined with the poisoning points.
[4]:
# Training of the poisoned classifier
pois_clf = clf.deepcopy()
pois_tr = tr.append(pois_ds) # Join the training set with the poisoning points
pois_clf.fit(pois_tr)
# Define common bounds for the subplots
min_limit = min(pois_tr.X.min(), ts.X.min())
max_limit = max(pois_tr.X.max(), ts.X.max())
grid_limits = [[min_limit, max_limit], [min_limit, max_limit]]
from secml.figure import CFigure
fig = CFigure(10, 10)
fig.subplot(2, 2, 1)
fig.sp.title("Original classifier (training set)")
fig.sp.plot_decision_regions(
clf, n_grid_points=200, grid_limits=grid_limits)
fig.sp.plot_ds(tr, markersize=5)
fig.sp.grid(grid_on=False)
fig.subplot(2, 2, 2)
fig.sp.title("Poisoned classifier (training set + poisoning points)")
fig.sp.plot_decision_regions(
pois_clf, n_grid_points=200, grid_limits=grid_limits)
fig.sp.plot_ds(tr, markersize=5)
fig.sp.plot_ds(pois_ds, markers=['*', '*'], markersize=12)
fig.sp.grid(grid_on=False)
fig.subplot(2, 2, 3)
fig.sp.title("Original classifier (test set)")
fig.sp.plot_decision_regions(
clf, n_grid_points=200, grid_limits=grid_limits)
fig.sp.plot_ds(ts, markersize=5)
fig.sp.text(0.05, -0.25, "Accuracy on test set: {:.2%}".format(acc),
bbox=dict(facecolor='white'))
fig.sp.grid(grid_on=False)
fig.subplot(2, 2, 4)
fig.sp.title("Poisoned classifier (test set)")
fig.sp.plot_decision_regions(
pois_clf, n_grid_points=200, grid_limits=grid_limits)
fig.sp.plot_ds(ts, markersize=5)
fig.sp.text(0.05, -0.25, "Accuracy on test set: {:.2%}".format(pois_acc),
bbox=dict(facecolor='white'))
fig.sp.grid(grid_on=False)
fig.show()
[Figure: decision regions of the original and poisoned classifiers on the training and test sets]
We can see how the SVM classifier decision function changes after injecting the adversarial poisoning points (blue and red stars).
For more details about poisoning adversarial attacks please refer to:
[biggio18-pr] Biggio, B. and Roli, F., 2018. Wild patterns: Ten years after the rise of adversarial machine learning. In Pattern Recognition.
Evasion and Poisoning Attacks on MNIST dataset¶
In this tutorial we show how to load the MNIST handwritten digits dataset and use it to train a Support Vector Machine (SVM).
Later we are going to perform Evasion and Poisoning attacks against the trained classifier, as previously described in tutorials 2 and 4.
Training of the classifier¶
First, we load the dataset and train the classifier. For this tutorial, we only consider 2 digits, the 5 (five) and the 9 (nine).
[3]:
random_state = 999
n_tr = 100 # Number of training set samples
n_val = 500 # Number of validation set samples
n_ts = 500 # Number of test set samples
digits = (5, 9)
from secml.data.loader import CDataLoaderMNIST
loader = CDataLoaderMNIST()
tr_val = loader.load('training', digits=digits, num_samples=n_tr + n_val)
ts = loader.load('testing', digits=digits, num_samples=n_ts)
# Split in training and validation set
tr = tr_val[:n_tr, :]
val = tr_val[n_tr:, :]
# Normalize the features in `[0, 1]`
tr.X /= 255
val.X /= 255
ts.X /= 255
from secml.ml.classifiers import CClassifierSVM
# Force storing of the dual space variables (alphas and support vectors)
# Will be used by the poisoning attack later
clf = CClassifierSVM(C=10, store_dual_vars=True)
print("Training of classifier...")
clf.fit(tr)
# Compute predictions on a test set
y_pred = clf.predict(ts.X)
# Metric to use for performance evaluation
from secml.ml.peval.metrics import CMetricAccuracy
metric = CMetricAccuracy()
# Evaluate the accuracy of the classifier
acc = metric.performance_score(y_true=ts.Y, y_pred=y_pred)
print("Accuracy on test set: {:.2%}".format(acc))
Training of classifier...
Accuracy on test set: 93.60%
Evasion attack with MNIST dataset¶
Let’s define the attack parameters. Firstly, we choose to generate an l2 perturbation within a maximum ball of radius eps = 2 from the initial points. Secondly, we also add a lower/upper bound, as our feature space is limited to [0, 1]. Lastly, as we are not interested in generating adversarial examples for a specific class, we perform an error-generic attack by setting y_target = None.
Please note that the attack using the MNIST dataset may take a while (up to a few minutes) depending on the machine the script is run on.
[4]:
# For simplicity, let's attack a subset of the test set
attack_ds = ts[:25, :]
noise_type = 'l2' # Type of perturbation 'l1' or 'l2'
dmax = 2. # Maximum perturbation
lb, ub = 0., 1. # Bounds of the attack space. Can be set to `None` for unbounded
y_target = None # None if `error-generic` or a class label for `error-specific`
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 0.1,
    'eta_min': 2.0,
    'eta_max': None,
    'max_iter': 100,
    'eps': 1e-6
}
from secml.adv.attacks import CAttackEvasionPGDLS
pgd_ls_attack = CAttackEvasionPGDLS(classifier=clf,
                                    surrogate_classifier=clf,
                                    surrogate_data=tr,
                                    distance=noise_type,
                                    dmax=dmax,
                                    solver_params=solver_params,
                                    y_target=y_target)
print("Attack started...")
eva_y_pred, _, eva_adv_ds, _ = pgd_ls_attack.run(
attack_ds.X, attack_ds.Y, double_init=True)
print("Attack complete!")
acc = metric.performance_score(
y_true=attack_ds.Y, y_pred=clf.predict(attack_ds.X))
acc_attack = metric.performance_score(
y_true=attack_ds.Y, y_pred=eva_y_pred)
print("Accuracy on reduced test set before attack: {:.2%}".format(acc))
print("Accuracy on reduced test set after attack: {:.2%}".format(acc_attack))
Attack started...
Attack complete!
Accuracy on reduced test set before attack: 100.00%
Accuracy on reduced test set after attack: 20.00%
We can observe how the classifier trained on the MNIST dataset has been successfully evaded by the adversarial examples generated by our attack.
Let’s now visualize a few of the adversarial examples. The first row shows the original samples and the second row the adversarial examples. Above each digit, the true label is shown along with the predicted label in parentheses.
[6]:
from secml.figure import CFigure
# Let's define a convenience function to easily plot the MNIST dataset
def show_digits(samples, preds, labels, digs, n_display=8):
    samples = samples.atleast_2d()
    n_display = min(n_display, samples.shape[0])
    fig = CFigure(width=n_display * 2, height=3)
    for idx in range(n_display):
        fig.subplot(2, n_display, idx + 1)
        fig.sp.xticks([])
        fig.sp.yticks([])
        fig.sp.imshow(samples[idx, :].reshape((28, 28)), cmap='gray')
        fig.sp.title("{} ({})".format(digs[labels[idx].item()], digs[preds[idx].item()]),
                     color=("green" if labels[idx].item() == preds[idx].item() else "red"))
    fig.show()

show_digits(attack_ds.X, clf.predict(attack_ds.X), attack_ds.Y, digits)
show_digits(eva_adv_ds.X, clf.predict(eva_adv_ds.X), eva_adv_ds.Y, digits)
[Figure: original MNIST test digits with true and predicted labels]
[Figure: adversarial examples generated against the MNIST classifier]
Poisoning attack with MNIST dataset¶
For poisoning attacks the parameters are much simpler. We set the bounds of the attack space and the number of adversarial points to generate, 10 in this example. Lastly, we choose the solver parameters for this specific optimization problem.
Please note that the attack using the MNIST dataset may take a while (up to a few minutes) depending on the machine the script is run on.
[7]:
lb, ub = 0., 1. # Bounds of the attack space. Can be set to `None` for unbounded
n_poisoning_points = 10 # Number of poisoning points to generate
# Should be chosen depending on the optimization problem
solver_params = {
    'eta': 0.1,
    'eta_min': 2.0,
    'eta_max': None,
    'max_iter': 100,
    'eps': 1e-6
}
from secml.adv.attacks import CAttackPoisoningSVM
pois_attack = CAttackPoisoningSVM(classifier=clf,
                                  training_data=tr,
                                  surrogate_classifier=clf,
                                  surrogate_data=tr,
                                  val=val,
                                  lb=lb, ub=ub,
                                  solver_params=solver_params,
                                  random_seed=random_state)
pois_attack.n_points = n_poisoning_points
# Run the poisoning attack
print("Attack started...")
pois_y_pred, _, pois_points_ds, _ = pois_attack.run(ts.X, ts.Y)
print("Attack complete!")
# Evaluate the accuracy of the original classifier
acc = metric.performance_score(y_true=ts.Y, y_pred=clf.predict(ts.X))
# Evaluate the accuracy after the poisoning attack
pois_acc = metric.performance_score(y_true=ts.Y, y_pred=pois_y_pred)
print("Original accuracy on test set: {:.2%}".format(acc))
print("Accuracy after attack on test set: {:.2%}".format(pois_acc))
# Training of the poisoned classifier for visualization purposes
pois_clf = clf.deepcopy()
pois_tr = tr.append(pois_points_ds) # Join the training set with the poisoning points
pois_clf.fit(pois_tr)
show_digits(pois_points_ds.X, pois_clf.predict(pois_points_ds.X),
pois_points_ds.Y, digits)
Attack started...
Attack complete!
Original accuracy on test set: 93.60%
Accuracy after attack on test set: 45.40%
[Figure: poisoning points generated against the MNIST classifier, with the labels predicted by the poisoned classifier]
We can see that the classifier trained on the MNIST dataset has been successfully poisoned. To increase the attack power, more poisoning points can be crafted, at the expense of a much slower optimization process.
Let’s note that the label of each adversarial example we show has been flipped by the attack with respect to the actual true label. Thus, the label predicted by the poisoned classifier (in parentheses) is displayed in green when it differs from the true label of the digit.
secml.core¶
CCreator¶
class secml.core.c_creator.CCreator¶
Bases: object
The magnificent global superclass.
Attributes
class_type : str
    Defines class type.
__super__ : str or None
    String with superclass name. Can be None to explicitly NOT support .create() and .load().
Methods
copy(self): Returns a shallow copy of current class.
create([class_item]): This method creates an instance of a class with given type.
deepcopy(self): Returns a deep copy of current class.
get_class_from_type(class_type): Return the class associated with input type.
get_params(self): Returns the dictionary of class parameters.
get_subclasses(): Get all the subclasses of the calling class.
list_class_types(): This method lists all types of available subclasses of calling one.
load(path): Loads class from pickle object.
save(self, path): Save class object using pickle.
set(self, param_name, param_value[, copy]): Set a parameter that has a specific name to a specific value.
set_params(self, params_dict[, copy]): Set all parameters passed as a dictionary {key: value}.
timed([msg]): Timer decorator.
property class_type¶
Defines class type.
copy(self)¶
Returns a shallow copy of current class.
A shallow copy creates a new instance of the current object and then inserts into the new object a reference (if possible) to each attribute of the original object.
classmethod create(class_item=None, *args, **kwargs)¶
This method creates an instance of a class with given type.
The list of subclasses of the calling superclass is looked for any class defining class_item = ‘value’. If found, the class type is listed.
Also a class instance can be passed as main argument. In this case the class instance is returned as is.
Parameters
class_item : str or class instance or None, optional
    Type of the class to instantiate. If a class instance of cls is passed, instead, it returns the instance directly. If this is None, an instance of the calling superclass is created.
args, kwargs : optional arguments
    Any other argument for the class to create. If a class instance is passed as class_item, optional arguments are NOT allowed.
Returns
instance_class : any class
    Instance of the class having the given type (class_type) or the same class instance passed as input.
deepcopy(self)¶
Returns a deep copy of current class.
As deep copy is time consuming in most cases, it can sometimes be acceptable to select a subset of attributes and assign them to a new instance of the current class using .set_params.
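For example, the tutorials above clone a trained classifier with deepcopy before fitting it on poisoned data. A minimal sketch (assuming CClassifierSVM can be instantiated with default parameters):

from secml.ml.classifiers import CClassifierSVM

clf = CClassifierSVM()
clf_shallow = clf.copy()  # shallow copy: attributes are referenced where possible
clf_deep = clf.deepcopy()  # deep copy: attributes are fully duplicated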
classmethod get_class_from_type(class_type)¶
Return the class associated with input type.
This will NOT check for classes with duplicated class type. The first class found with matching type will be returned.
Parameters
class_type : str
    Type of the class which will be looked up for.
Returns
class_obj : class
    Desired class, if found. This is NOT an instance of the class.
get_params(self)¶
Returns the dictionary of class parameters.
A parameter is a PUBLIC or READ/WRITE attribute.
classmethod get_subclasses()¶
Get all the subclasses of the calling class.
Returns
subclasses : list of tuple
    The list containing a tuple (class.__name__, class) for each subclass of calling class. Keep in mind that in Python each class is a “subclass” of itself.
classmethod list_class_types()¶
This method lists all types of available subclasses of calling one.
The list of subclasses of calling superclass is looked for any class defining class_item = ‘value’. If found, the class type is listed.
Returns
types : list
    List of the types of available subclasses of calling class.
classmethod load(path)¶
Loads class from pickle object.
This function loads any object stored with pickle or cPickle and any output of .save().
The object can be correctly loaded in the following cases:
- loaded and calling class have the same type;
- calling class is the superclass of the loaded class’s package;
- calling class is .CCreator.
Parameters
path : str
    Path of the target object file.
property logger¶
Logger for current object.
save(self, path)¶
Save class object using pickle.
Store the current class instance to disk, preserving the state of each attribute.
.load() can be used to restore the instance later.
Parameters
path : str
    Path of the target object file.
Returns
obj_path : str
    The full path of the stored object.
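A minimal sketch of the save/load round trip described above (the file name is only an example):

from secml.ml.classifiers import CClassifierSVM

clf = CClassifierSVM()
obj_path = clf.save('my_svm')  # store the instance to disk; returns the full path
clf_restored = CClassifierSVM.load(obj_path)  # restore the instance from the stored file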
-
set
(self, param_name, param_value, copy=False)[source]¶ Set a parameter that has a specific name to a specific value.
Only parameters, i.e. PUBLIC or READ/WRITE attributes, can be set.
- The following checks are performed before setting:
if parameter is an attribute of current class, set directly;
- else, iterate over __dict__ and look for a class attribute
having the desired parameter as an attribute;
- else, if attribute is not found on the 2nd level,
raise AttributeError.
If possible, a reference to the parameter to set is assigned. Use copy=True to always make a deepcopy before set.
- Parameters
- param_namestr
Name of the parameter to set.
- param_valueany
Value to set for the parameter.
- copybool
By default (False) a reference to the parameter to assign is set. If True or a reference cannot be extracted, a deepcopy of the parameter is done first.
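A minimal sketch (assuming C is a read/write attribute of CClassifierSVM):
>>> from secml.ml.classifiers import CClassifierSVM
>>> clf = CClassifierSVM()
>>> clf.set('C', 100.0)  # 'C' is an attribute of the class, so it is set directly
>>> clf.C
100.0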
-
set_params
(self, params_dict, copy=False)[source]¶ Set all parameters passed as a dictionary {key: value}.
This function natively takes as input the dictionary created by .get_params. Only parameters, i.e. PUBLIC or READ/WRITE attributes, can be set.
For more information on the setting behaviour, see .CCreator.set.
If possible, a reference to the parameter to set is assigned. Use copy=True to always make a deepcopy before set.
- Parameters
- params_dictdict
Dictionary of parameters to set.
- copybool
By default (False) a reference to the parameter to assign is set. If True or a reference cannot be extracted, a deepcopy of the parameter is done first.
See also
get_params
returns the dictionary of class parameters.
-
static
timed
(msg=None)[source]¶ Timer decorator.
Returns a decorator that can be used to measure the execution time of any method. Performance data will be stored inside the class logger. Messages will be logged using the INFO logging level. As this decorator accepts optional arguments, it must be called as a method. See the example below.
- Parameters
- msgstr or None, optional
Custom message to display when entering the timed block. If None, “Entering timed block method_name…” will be printed.
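A usage sketch (CFoo is a hypothetical CCreator subclass used only for illustration; note the parentheses after timed, as the decorator must be called):
>>> from secml.core import CCreator
>>> class CFoo(CCreator):
...     @CCreator.timed(msg="Timing slow_op...")
...     def slow_op(self):
...         return sum(range(1000))
>>> foo = CFoo()
>>> foo.verbose = 1  # timing data is logged at the INFO level
>>> foo.slow_op()
499500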
-
property
verbose
¶ Verbosity level of logger output.
- Available levels are:
0 = no verbose output
1 = info-level logging
2 = debug-level logging
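A short sketch (CClassifierSVM is used only as an example of a class exposing this property):
>>> from secml.ml.classifiers import CClassifierSVM
>>> clf = CClassifierSVM()
>>> clf.verbose = 2  # enable debug-level logging for this object
>>> clf.verbose
2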
-
secml.core.c_creator.
has_super
(cls)[source]¶ Returns True if input class __super__ is not None.
__super__ is defined and not None for class trees having a main superclass and one or more inherited classes.
- Parameters
- clsobj
Any class or class instance.
attr_utils¶
-
secml.core.attr_utils.
as_public
(attr)[source]¶ Return the public name associated with a protected attribute.
Examples
>>> from secml.core.attr_utils import as_public
>>> as_public('_attr1') 'attr1' >>> as_public('attr1') # Public attributes are returned as is 'attr1' >>> as_public('__attr1') # This is NOT a private attribute! '_attr1'
-
secml.core.attr_utils.
as_protected
(attr)[source]¶ Return the protected name associated with a public attribute.
Examples
>>> from secml.core.attr_utils import as_protected
>>> as_protected('attr1') '_attr1' >>> as_protected('__attr1') '_attr1' >>> as_protected('_attr1') # Protected attributes are returned as is '_attr1'
-
secml.core.attr_utils.
has_protected
(obj, attr)[source]¶ True if attribute is a protected attribute of class.
- Parameters
- objobject
Target class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
as_private
(obj_class, attr)[source]¶ Return the PRIVATE name associated with input attribute.
- Parameters
- obj_classclass
Target class (usually extracted using obj.__class__).
- attrstr
Name of the target attribute.
-
secml.core.attr_utils.
has_private
(obj_class, attr)[source]¶ True if attribute is a private attribute of class.
- Parameters
- obj_classclass
Target class (usually extracted using obj.__class__).
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
get_private
(obj_class, attr, default=<no value>)[source]¶ Return the private attribute of class.
- Parameters
- obj_classclass
Target class (usually extracted using obj.__class__).
- attrstr
Name of the attribute to return.
- defaultany, optional
Value that is returned when the named attribute is not found.
-
secml.core.attr_utils.
has_property
(obj, attr)[source]¶ True if attribute is a property or has an associated property.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
get_property
(obj, attr)[source]¶ Return the property associated with input attribute.
If no property is associated with input attribute, raise AttributeError.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
has_getter
(obj, attr)[source]¶ True if an attribute has an associated getter.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
has_setter
(obj, attr)[source]¶ True if an attribute has an associated setter.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
add_readonly
(obj, attr, value=None)[source]¶ Add a READ ONLY attribute to object.
A read only attribute is defined as a protected attribute plus a getter associated with it.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to set.
- valueany, optional
Value to assign to the attribute. If not given, None is used.
-
secml.core.attr_utils.
add_readwrite
(obj, attr, value=None)[source]¶ Add a READ/WRITE attribute to object.
A read/write attribute is defined as a protected attribute plus a getter AND a setter associated with it.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to set.
- valueany, optional
Value to assign to the attribute. If not given, None is used.
-
secml.core.attr_utils.
is_public
(obj, attr)[source]¶ Return True if input attribute is PUBLIC.
A public attribute has a name without a leading ‘_’.
- Parameters
- objobject
Any class instance (not actually used by this function).
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
is_protected
(obj, attr)[source]¶ Return True if input attribute is PROTECTED.
A protected attribute has a name starting with a single ‘_’ and no getter/setter associated with it.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
is_readonly
(obj, attr)[source]¶ Return True if input attribute is READ ONLY.
A read only attribute has ONLY a getter associated with it.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
is_readwrite
(obj, attr)[source]¶ Return True if input attribute is READ/WRITE.
A read/write attribute has BOTH a getter AND a setter associated with it.
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
is_readable
(obj, attr)[source]¶ Return True if input attribute is READABLE.
- A readable attribute can be one of the following:
public
read/write (getter/setter associated with property)
read only (getter associated with property)
- Parameters
- objobject
Any class instance.
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
is_writable
(obj, attr)[source]¶ Return True if input attribute is WRITABLE.
- A writable attribute can be one of the following:
public
read/write (getter/setter associated with property)
- Parameters
- attrstr
Name of the attribute to check.
-
secml.core.attr_utils.
extract_attr
(obj, mode)[source]¶ Generates a sequence of attributes from the input object’s __dict__.
This function returns a generator with the __dict__ keys having a name compatible with the specified mode.
- The following modalities are available:
‘pub’ -> PUBLIC (no ‘_’ in the prefix)
‘rw’ -> READ/WRITE (a getter/setter is associated with it)
‘r’ -> READ ONLY (a getter is associated with it)
‘pro’ -> PROTECTED (‘_’ as the prefix and no getter/setter associated)
All modes can be stacked up using ‘+’ (see examples).
- Parameters
- objany object
Any class whose attributes should be extracted.
- modestr
Extraction modality. All available modalities can be combined using a plus ‘+’.
Notes
The order of the attributes in the output generator is arbitrary.
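Examples
A minimal sketch (Dummy is a hypothetical class used only for illustration; as stated above, the generator yields the keys of the object’s __dict__):
>>> from secml.core.attr_utils import extract_attr
>>> class Dummy:
...     def __init__(self):
...         self.attr1 = 1   # public
...         self._attr2 = 2  # protected (no getter/setter associated)
>>> obj = Dummy()
>>> sorted(extract_attr(obj, 'pub'))
['attr1']
>>> sorted(extract_attr(obj, 'pub+pro'))  # modalities combined with '+'
['_attr2', 'attr1']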
constants¶
-
secml.core.constants.
inf
= inf¶ Infinity.
-
secml.core.constants.
nan
= nan¶ Not a Number (NaN).
-
secml.core.constants.
eps
= 1.4901161193847656e-08¶ Machine epsilon.
This is defined as the smallest number that, when added to one, yields a result different from one.
Notes
This value can be different from machine to machine, but generally yields approximately 1.49e-08.
Examples
>>> from secml.core.constants import eps >>> print(eps) 1.4901161193847656e-08
-
secml.core.constants.
e
= 2.718281828459045¶ The mathematical constant e = 2.718281…, to available precision.
Examples
>>> from secml.core.constants import e >>> print(e) 2.718281828459045
-
secml.core.constants.
pi
= 3.141592653589793¶ The mathematical constant pi = 3.141592…, to available precision.
Examples
>>> from secml.core.constants import pi >>> pi 3.141592653589793
decorators¶
-
class
secml.core.decorators.
deprecated
(extra='')[source]¶ Bases:
object
Decorator to mark a function or class as deprecated.
Issue a warning when the function is called/the class is instantiated and adds a warning to the docstring.
The optional extra argument will be appended to the deprecation message and the docstring.
Note: to use this with the default value for extra, add an empty pair of parentheses: >>> from secml.core.decorators import deprecated >>> deprecated() # doctest: +ELLIPSIS <secml.core.decorators.deprecated object at …>
>>> @deprecated() ... def some_function(): pass
- Parameters
- extrastr
Extra text to be added to the deprecation messages.
Notes
- Adapted from:
Methods
__call__
(self, obj)Call method.
exceptions¶
-
exception
secml.core.exceptions.
NotFittedError
[source]¶ Bases:
ValueError
,AttributeError
Exception to raise if the object is used before training.
This class inherits from both ValueError and AttributeError.
Examples
>>> from secml.ml.classifiers import CClassifierSVM >>> from secml.array import CArray >>> from secml.core.exceptions import NotFittedError >>> try: ... CClassifierSVM().predict(CArray([[1, 2]])) ... except NotFittedError as e: ... print(repr(e)) ... NotFittedError('this `CClassifierSVM` is not trained. Call `.fit()` first.',)
type_utils¶
-
secml.core.type_utils.
is_intlike
(x)[source]¶ Return True if input is integer or list/array of 1 integer.
Examples
>>> from secml.core.type_utils import is_intlike
>>> print(is_intlike(0)) # Standard int True >>> print(is_intlike(0.1)) # Standard float False
>>> print(is_intlike(np.array([0]))) # ndarray with one int True >>> print(is_intlike(np.array([0.1]))) # ndarray with one float False
-
secml.core.type_utils.
is_floatlike
(x)[source]¶ Return True if input is float or list/array of 1 float.
Examples
>>> from secml.core.type_utils import is_floatlike
>>> print(is_floatlike(0.1)) # Standard float True >>> print(is_floatlike(0)) # Standard int False
>>> print(is_floatlike(np.array([0.1]))) # ndarray with one float True >>> print(is_floatlike(np.array([0]))) # ndarray with one int False
-
secml.core.type_utils.
is_scalarlike
(x)[source]¶ True if input is scalar (int or float) or list/array of 1 real.
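Examples
A short sketch, following the is_intlike / is_floatlike examples above:
>>> from secml.core.type_utils import is_scalarlike
>>> print(is_scalarlike(2))  # int scalar
True
>>> print(is_scalarlike(2.5))  # float scalar
True
>>> print(is_scalarlike('2'))  # strings are not scalars
False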
-
secml.core.type_utils.
is_inf
(x)[source]¶ True if input is a positive/negative infinity.
- Parameters
- xscalar
Examples
>>> from secml.core.type_utils import is_inf >>> from secml.core.constants import inf, nan
>>> print(is_inf(inf)) True >>> print(is_inf(-inf)) True
>>> print(is_inf(nan)) False
>>> print(is_inf(0.1)) False
>>> from secml.array import CArray >>> print(is_inf(CArray([inf]))) # Use `CArray.is_inf()` instead Traceback (most recent call last): ... TypeError: input must be a scalar.
-
secml.core.type_utils.
is_posinf
(x)[source]¶ True if input is a positive infinity.
- Parameters
- xscalar
Examples
>>> from secml.core.type_utils import is_posinf >>> from secml.core.constants import inf, nan
>>> print(is_posinf(inf)) True
>>> print(is_posinf(-inf)) False
>>> from secml.array import CArray >>> print(is_posinf(CArray([inf]))) # Use `CArray.is_posinf()` instead Traceback (most recent call last): ... TypeError: input must be a scalar.
-
secml.core.type_utils.
is_neginf
(x)[source]¶ True if input is a negative infinity.
- Parameters
- xscalar
Examples
>>> from secml.core.type_utils import is_neginf >>> from secml.core.constants import inf, nan
>>> print(is_neginf(-inf)) True
>>> print(is_neginf(inf)) False
>>> from secml.array import CArray >>> print(is_neginf(CArray([-inf]))) # Use `CArray.is_neginf()` instead Traceback (most recent call last): ... TypeError: input must be a scalar.
-
secml.core.type_utils.
is_nan
(x)[source]¶ True if input is Not a Number (NaN).
- Parameters
- xscalar
Notes
NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity.
Examples
>>> from secml.core.type_utils import is_nan >>> from secml.core.constants import inf, nan
>>> print(is_nan(nan)) True
>>> print(is_nan(inf)) False
>>> print(is_nan(0.1)) False
>>> from secml.array import CArray >>> print(is_nan(CArray([nan]))) # Use `CArray.is_nan()` instead Traceback (most recent call last): ... TypeError: input must be a scalar.
secml.array¶
CArray¶
-
class
secml.array.c_array.
CArray
(data, dtype=None, copy=False, shape=None, tosparse=False)[source]¶ Creates an array.
Data will be stored in dense form by default.
- Parameters
- dataarray_like or any built-in datatype
Data to be stored. Can be any array-like structure (sparse or dense) or any built-in list, scalar or string.
- dtypestr or dtype, optional
Typecode or data-type to which the array is cast. If None (default), dtype is inferred from input data.
- copybool, optional
If False (default) a reference to input data will be stored if possible. Otherwise, a copy of original data is made first. If data is a nested sequence (a list) or dtype is different, a copy will be made anyway.
- shapeint or sequence of ints, optional
Shape of the new array, e.g., ‘(2, 3)’ or ‘2’.
- tosparsebool, optional
If True, input data will be converted to sparse format. Otherwise (default), if input is not a CArray, a dense array is returned, or if CArray, its format is preserved.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1, 2], [3, 4]])) CArray([[1 2] [3 4]])
>>> print(CArray(True)) CArray([ True])
>>> print(CArray([1,0,3,4], tosparse=True)) CArray( (0, 0) 1 (0, 2) 3 (0, 3) 4)
>>> print(CArray([1,2,3], dtype=float, shape=(3,1))) # Custom dtype and shape CArray([[1.] [2.] [3.]])
- Attributes
T
Transposed array data.
dtype
Data-type of stored data.
is_vector_like
True if array is vector-like.
isdense
True if data is stored in DENSE form, False otherwise.
issparse
True if data is stored in SPARSE form, False otherwise.
ndim
Number of array dimensions.
nnz
Number of non-zero values in the array.
nnz_data
Return non-zero array elements.
nnz_indices
Index of non-zero array elements.
shape
Shape of stored data, tuple of ints.
size
Size (number of elements) of array.
Methods
abs
(self)Returns array elements without sign.
all
(self[, axis, keepdims])Test whether all array elements along a given axis evaluate to True.
any
(self[, axis, keepdims])Test whether any array elements along a given axis evaluate to True.
append
(self, array[, axis])Append values to the end of an array.
apply_along_axis
(self, func, axis, \*args, …)Apply function to 1-D slices along the given axis.
arange
([start, stop, step, dtype, sparse])Return evenly spaced values within a given interval.
argmax
(self[, axis])Indices of the maximum values along an axis.
argmin
(self[, axis])Indices of the minimum values along an axis.
argsort
(self[, axis, kind])Returns the indices that would sort an array.
astype
(self, dtype)Copy of the array, casted to a specified type.
atleast_2d
(self)View original array with at least two dimensions.
binary_search
(self, value)Returns the index of each input value inside the array.
bincount
(self)Count the number of occurrences of each value in array of non-negative ints.
ceil
(self)Return the ceiling of the input, element-wise.
clip
(self, c_min, c_max)Clip (limit) the values in an array.
comblist
(list_of_list[, dtype])Generate a cartesian product of list of list input.
concatenate
(array1, array2[, axis])Concatenate a sequence of arrays along the given axis.
cos
(self)Trigonometric cosine, element-wise.
cumsum
(self[, axis, dtype])Return the cumulative sum of the array elements along a given axis.
deepcopy
(self)Return a deepcopy of current array.
diag
(self[, k])Extract a diagonal from array or construct a diagonal array.
dot
(self, array)Dot product of two arrays.
empty
(shape[, dtype, sparse])Return a new array of given shape and type, without filling it.
exp
(self)Calculate the exponential of all elements in the input array.
eye
(n_rows[, n_cols, k, dtype, sparse])Return a 2-D array with ones on the diagonal and zeros elsewhere.
find
(self, condition)Returns vector-like array elements indices depending on condition.
find_2d
(self, condition)Returns array elements indices depending on condition.
flatten
(self)Return a flattened copy of array.
floor
(self)Return the floor of the input, element-wise.
from_iterables
(iterables_list)Build an array by chaining elements from objects in the input list.
get_data
(self)Return stored data as a standard array type.
get_nnz
(self[, axis])Counts the number of non-zero values in the array.
has_compatible_shape
(self, other)Return True if input CArray has a compatible shape.
interp
(self, x_data, y_data[, return_left, …])One-dimensional linear interpolation.
inv
(self)Compute the (multiplicative) inverse of a square matrix.
is_inf
(self)Test element-wise for positive or negative infinity.
is_nan
(self)Test element-wise for Not a Number (NaN).
is_neginf
(self)Test element-wise for negative infinity.
is_posinf
(self)Test element-wise for positive infinity.
item
(self)Returns the single element in the array as built-in type.
linspace
(start, stop[, num, endpoint, sparse])Return evenly spaced numbers over a specified interval.
load
(datafile[, dtype, arrayformat, …])Load array data from plain text file.
log
(self)Calculate the natural logarithm of all elements in the input array.
log10
(self)Calculate the base 10 logarithm of all elements in the input array.
logical_and
(self, array)Element-wise logical AND of array elements.
logical_not
(self)Element-wise logical NOT of array elements.
logical_or
(self, array)Element-wise logical OR of array elements.
max
(self[, axis, keepdims])Return the maximum of an array or maximum along an axis.
maximum
(self, array)Element-wise maximum of array elements.
mean
(self[, axis, dtype, keepdims])Compute the arithmetic mean along the specified axis.
median
(self[, axis, keepdims])Compute the median along the specified axis.
meshgrid
(xi[, indexing])Return coordinate matrices from coordinate vectors.
min
(self[, axis, keepdims])Return the minimum of an array or minimum along an axis.
minimum
(self, array)Element-wise minimum of array elements.
nan_to_num
(self)Replace nan with zero and inf with finite numbers.
nanargmax
(self[, axis])Indices of the maximum values along an axis ignoring NaNs.
nanargmin
(self[, axis])Indices of the minimum values along an axis ignoring NaNs
nanmax
(self[, axis, keepdims])Return the maximum of an array or maximum along an axis ignoring NaNs.
nanmin
(self[, axis, keepdims])Return the minimum of an array or minimum along an axis ignoring NaNs.
norm
(self[, order])Entrywise vector norm.
norm_2d
(self[, order, axis, keepdims])Matrix norm or vector norm along axis.
normpdf
(self[, mu, sigma])Return normal distribution function value with mean and standard deviation given for the current array values.
ones
(shape[, dtype, sparse])Return a new array of given shape and type, filled with ones.
pinv
(self[, rcond])Compute the (Moore-Penrose) pseudo-inverse of a matrix.
pow
(self, exp)Array elements raised to powers from input exponent, element-wise.
prod
(self[, axis, dtype, keepdims])Return the product of array elements over a given axis.
rand
(shape[, random_state, sparse, density])Return random floats in the half-open interval [0.0, 1.0).
randint
(low[, high, shape, random_state, sparse])Return random integers from low (inclusive) to high (exclusive).
randn
(shape[, random_state])Return a sample (or samples) from the “standard normal” distribution.
randsample
(a[, shape, replace, …])Generates a random sample from a given array.
randuniform
([low, high, shape, …])Return random samples from low (inclusive) to high (exclusive).
ravel
(self)Return a flattened array.
repeat
(self, repeats[, axis])Repeat elements of an array.
repmat
(self, m, n)Repeat an array M x N times.
reshape
(self, newshape)Gives a new shape to an array without changing its data.
resize
(self, newshape[, constant])Return a new array with the specified shape.
rint
(self)Round elements of the array to the nearest integer.
round
(self[, decimals])Evenly round to the given number of decimals.
save
(self, datafile[, overwrite])Save array data into plain text file.
sha1
(self)Calculate the sha1 hexadecimal hash of array.
shuffle
(self)Modify array in-place by shuffling its contents.
sign
(self)Returns element-wise sign of the array.
sin
(self)Trigonometric sine, element-wise.
sort
(self[, axis, kind, inplace])Sort an array.
sqrt
(self)Compute the positive square-root of an array, element-wise.
std
(self[, axis, ddof, keepdims])Compute the standard deviation along the specified axis.
sum
(self[, axis, keepdims])Sum of array elements over a given axis.
tocsr
(self)Return a sparse scipy.sparse.csr_matrix representation of array.
todense
(self[, dtype, shape])Converts array to dense format.
tolist
(self)Return the array as a (possibly nested) list.
tondarray
(self)Return a dense numpy.ndarray representation of array.
tosparse
(self[, dtype, shape])Converts array to sparse format.
transpose
(self)Returns current array with axes transposed.
unique
(self[, return_index, return_inverse, …])Find the unique elements of an array.
zeros
(shape[, dtype, sparse])Return a new array of given shape and type, filled with zeros.
-
all
(self, axis=None, keepdims=True)[source]¶ Test whether all array elements along a given axis evaluate to True.
Axis selection is available for DENSE format only. For sparse format, logical operation is performed over all the dimensions of the array
- Parameters
- axisint or None, optional, dense only
Axis or axes along which logical AND between elements is performed. The default (axis = None) is to perform a logical AND over all the dimensions of the input array. If axis is negative, it counts from the last to the first axis.
- keepdimsbool, optional, dense only
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- bool or CArray
Logical AND element-wise. If axis is None, bool is returned. Otherwise, a CArray of booleans with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- Not a Number (NaN), positive infinity and negative infinity
evaluate to True because these are not equal to zero.
Examples
>>> from secml.array import CArray
>>> print(CArray([[True,False],[True,True]], tosparse=True).all()) False
>>> print(CArray([[True,False],[True,True]]).all(axis=0)) CArray([[ True False]])
>>> print(CArray([-1,0,2,0]).all(axis=0)) CArray([ True False True False]) >>> print(CArray([-1,0,2,0]).all(axis=1)) CArray([False])
>>> from secml.core.constants import nan, inf >>> print(CArray([nan, inf, -inf]).all()) True
-
any
(self, axis=None, keepdims=True)[source]¶ Test whether any array elements along a given axis evaluate to True.
Axis selection is available for DENSE format only. For sparse format, logical operation is performed over all the dimensions of the array
- Parameters
- axisint or None, optional, dense only
Axis or axes along which logical OR between elements is performed. The default (axis = None) is to perform a logical OR over all the dimensions of the input array. If axis is negative, it counts from the last to the first axis.
- keepdimsbool, optional, dense only
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- bool or CArray
Logical OR element-wise. If axis is None, bool is returned. Otherwise, a CArray of booleans with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- Not a Number (NaN), positive infinity and negative infinity
evaluate to True because these are not equal to zero.
Examples
>>> from secml.array import CArray
>>> print(CArray([[True,False],[True,True]], tosparse=True).any()) True
>>> print(CArray([[True,False],[True,False]]).any(axis=0)) CArray([[ True False]])
>>> print(CArray([-1,0,2,0]).any(axis=0)) CArray([ True False True False]) >>> print(CArray([-1,0,2,0]).any(axis=1)) CArray([ True])
>>> from secml.core.constants import nan, inf >>> print(CArray([nan, inf, -inf]).any()) True
-
append
(self, array, axis=None)[source]¶ Append values to the end of an array.
- Parameters
- arrayCArray or array_like
Second array.
- axisint or None, optional
The axis along which values are appended. If axis is None, both arrays are flattened before use.
- Returns
- CArray
A copy of array with values appended to axis. Note that append does not occur in-place: a new array is allocated and filled. If axis is None, out is a flattened array. The returned array always has the same format as the first array.
Notes
Differently from numpy, we manage flat vectors as 2-Dimensional of shape (1, array.size). Consequently, result of appending a flat array to a flat array is 1-Dimensional only if axis=1. Appending a flat array to a 2-Dimensional array, or vice versa, always results in a 2-Dimensional array.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2],[3,4]]).append([[11],[22]])) CArray([ 1 2 3 4 11 22]) >>> print(CArray([[1,2],[3,4]]).append([[11,22]], axis=0)) CArray([[ 1 2] [ 3 4] [11 22]])
>>> print(CArray([[1,2],[3,4]]).append(CArray([[11],[22]], tosparse=True))) CArray([ 1 2 3 4 11 22]) >>> array = CArray([[1,2],[3,4]], tosparse=True).append([[11],[22]]) >>> print(array) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 3 (0, 3) 4 (0, 4) 11 (0, 5) 22)
>>> print(CArray([1,2]).append([11,22])) CArray([ 1 2 11 22])
>>> print(CArray([1,2]).append([11,22], axis=0)) CArray([[ 1 2] [11 22]]) >>> print(CArray([1,2]).append([11,22], axis=1)) CArray([ 1 2 11 22])
-
apply_along_axis
(self, func, axis, *args, **kwargs)[source]¶ Apply function to 1-D slices along the given axis.
func should accept 1-D arrays and return a single scalar or a 1-D array.
Only 1-D and 2-D arrays are currently supported.
- Parameters
- funcfunction
Function object to apply along the given axis. Must return a single scalar or a 1-D array.
- axisint
Axis along which to apply the function.
- *args, **kwargsoptional
Any other input value for func.
- Returns
- CArray
1-Dimensional array of size data.shape[0] with the output of func for each row in data. Datatype of output array is always float.
Examples
>>> from secml.array import CArray
>>> a = CArray([[1,2],[10,20],[100,200]])
>>> def return_sum(x): ... return x.sum()
>>> print(a.apply_along_axis(return_sum, axis=0)) # Column-wise CArray([111. 222.])
>>> print(a.apply_along_axis(return_sum, axis=1)) # Row-wise CArray([ 3. 30. 300.])
-
classmethod
arange
(start=None, stop=None, step=1, dtype=None, sparse=False)[source]¶ Return evenly spaced values within a given interval.
Values are generated within the half-open interval [start, stop). For integer arguments the function is equivalent to the Python built-in range function, but returns a CArray rather than a list.
When using a non-integer step, such as 0.1, the results will often not be consistent. It is better to use linspace for these cases.
- Parameters
- startscalar, optional
Start of interval. The interval includes this value. The default start value is 0.
- stopscalar
End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of the output.
- stepscalar, optional
Spacing between values. For any output out, this is the distance between two adjacent values, out[i+1] - out[i]. The default step size is 1. If step is specified, start must also be given.
- dtypestr or dtype, optional
The type of the output array. If dtype is not given, infer the data type from the other input arguments.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array is created.
- Returns
- CArray
Array of evenly spaced values. For floating point arguments, the length of the result is ceil((stop - start)/step). Because of floating point overflow, this rule may result in the last element of out being greater than stop.
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
See also
CArray.linspace
Evenly spaced numbers with handling of endpoints.
Examples
>>> from secml.array import CArray
>>> print(CArray.arange(4)) CArray([0 1 2 3])
>>> print(CArray.arange(4.0)) CArray([0. 1. 2. 3.])
>>> print(CArray.arange(4.0, dtype=int)) CArray([0 1 2 3])
>>> print(CArray.arange(0, 4)) CArray([0 1 2 3])
>>> print(CArray.arange(0, 4, 0.8)) CArray([0. 0.8 1.6 2.4 3.2])
-
argmax
(self, axis=None)[source]¶ Indices of the maximum values along an axis.
- Parameters
- axisint, None, optional
If None (default), array is flattened before computing index, otherwise the specified axis is used.
- Returns
- int or CArray
Index of the maximum of the array. If axis is None, int is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- In case of multiple occurrences of the maximum values, the
indices corresponding to the first occurrence are returned.
Examples
>>> from secml.array import CArray
>>> print(CArray([-1, 0, 3]).argmax()) 2
>>> print(CArray([[-1, 0],[4, 3]]).argmax(axis=0)) # We return the index of maximum for each column CArray([[1 1]])
>>> print(CArray([[-1, 0],[4, 3]]).argmax(axis=1)) # We return the index of maximum for each row CArray([[1] [0]])
>>> print(CArray([-3,0,1,2]).argmax(axis=0)) CArray([0 0 0 0]) >>> print(CArray([-3,0,1,2]).argmax(axis=1)) CArray([3])
-
argmin
(self, axis=None)[source]¶ Indices of the minimum values along an axis.
- Parameters
- axisint, None, optional
If None (default), array is flattened before computing index, otherwise the specified axis is used.
- Returns
- int or CArray
Index of the minimum of the array. If axis is None, int is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- In case of multiple occurrences of the minimum values, the
indices corresponding to the first occurrence are returned.
Examples
>>> from secml.array import CArray
>>> print(CArray([-1, 0, 3]).argmin()) 0
>>> print(CArray([[-1, 0],[4, 3]]).argmin(axis=0)) # We return the index of minimum for each column CArray([[0 0]])
>>> print(CArray([[-1, 0],[4, 3]]).argmin(axis=1)) # We return the index of minimum for each row CArray([[0] [1]])
>>> print(CArray([-3,0,1,2]).argmin(axis=0)) CArray([0 0 0 0]) >>> print(CArray([-3,0,1,2]).argmin(axis=1)) CArray([0])
-
argsort
(self, axis=-1, kind='quicksort')[source]¶ Returns the indices that would sort an array.
Perform an indirect sort along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as a that index data along the given axis in sorted order.
- Parameters
- axisint or None, optional
Axis along which to sort. The default is -1 (the last axis). If None, the flattened array is used.
- kind{‘quicksort’, ‘mergesort’, ‘heapsort’}, optional
Sorting algorithm to use. Default ‘quicksort’. For sparse arrays, only ‘quicksort’ is available.
- Returns
- CArray
Array of indices that sort the array along the specified axis. In other words, array[index_array] yields a sorted array.
See also
numpy.sort
Description of different sorting algorithms.
CArray.sort
In-Place sorting of array.
Examples
>>> from secml.array import CArray
>>> print(CArray([0,-3,5]).argsort()) CArray([1 0 2])
>>> print(CArray([[0,-3],[5,1]]).argsort(axis=1)) # Sorting of each row CArray([[1 0] [1 0]])
>>> print(CArray([[0,-3],[5,1]]).argsort(axis=None)) # Sorting the flattened array CArray([1 0 3 2])
-
astype
(self, dtype)[source]¶ Copy of the array, casted to a specified type.
- Parameters
- dtypestr or dtype
Typecode or data-type to which the array is cast.
- Returns
- CArray
Copy of the original array casted to new data type.
Examples
>>> from secml.array import CArray
>>> print(CArray([1, 2, 3]).astype(float)) CArray([1. 2. 3.])
>>> print(CArray([1.1, 2.1, 3.1], tosparse=True).astype(int)) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 3)
-
atleast_2d
(self)[source]¶ View original array with at least two dimensions.
A copy is made only if needed.
- Returns
- outCArray
Array with array.ndim >= 2.
Notes
Sparse arrays are always 2 dimensional so this method returns a view (if possible) of the original array without any changes.
Examples
>>> from secml.array import CArray
>>> print(CArray([1,2,3]).atleast_2d()) CArray([[1 2 3]])
-
binary_search
(self, value)[source]¶ Returns the index of each input value inside the array.
DENSE ARRAYS ONLY
If value is not found inside the array, the index of the closest value will be returned. Array will be flattened before search.
- Parameters
- valuescalar or CArray
Element or array of elements to search inside the flattened array.
- Returns
- int or CArray
Position of input value, or the closest one, inside flattened array. If value is an array, a CArray with the position of each value element is returned.
Examples
>>> from secml.array import CArray
>>> print(CArray([[0,0.1],[0.4,1.0]]).binary_search(0.3)) 2
>>> print(CArray([1,2,3,4]).binary_search(10)) 3
>>> print(CArray([1,2,3,4]).binary_search(CArray([-10,1,2.2,10]))) CArray([0 0 1 3])
-
bincount
(self)[source]¶ Count the number of occurrences of each value in array of non-negative ints.
Only vector like arrays of integer dtype are supported.
- Returns
- CArray
The occurrence number for every different element of array. The length of output array is equal to a.max()+1.
Examples
>>> from secml.array import CArray
>>> a = CArray([1, 2, 3, 1, 6], tosparse=True) >>> print(a.bincount()) CArray([0 2 1 1 0 0 1])
-
ceil
(self)[source]¶ Return the ceiling of the input, element-wise.
The ceil of the scalar x is the smallest integer i, such that i >= x.
- Returns
- out_ceilCArray
The ceiling of each element in x, with float dtype.
See also
Examples
>>> from secml.array import CArray
>>> print(CArray([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]).ceil()) CArray([-1. -1. -0. 1. 2. 2. 2.])
>>> # Array with dtype == int is upcasted to float before ceiling >>> print(CArray([[-2, -1], [1, 1]], tosparse=True).ceil()) CArray( (0, 0) -2.0 (0, 1) -1.0 (1, 0) 1.0 (1, 1) 1.0)
-
clip
(self, c_min, c_max)[source]¶ Clip (limit) the values in an array.
DENSE FORMAT ONLY
Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.
- Parameters
- c_min, c_maxint
Clipping intervals.
- Returns
- CArray
Returns a new array containing the clipped array elements. Dtype of the output array depends on the dtype of original array and on the dtype of the clipping limits.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2],[3,4]]).clip(2, 4)) CArray([[2 2] [3 4]])
>>> from secml.core.constants import inf
>>> # inf is a float, array will be casted accordingly >>> print(CArray([[1,2],[3,4]]).clip(-inf, 2)) CArray([[1. 2.] [2. 2.]])
-
classmethod
comblist
(list_of_list, dtype=<class 'float'>)[source]¶ Generate a cartesian product of list of list input.
- Parameters
- list_of_listlist of list
1-D arrays to form the cartesian product of.
- dtypestr or dtype
Datatype of output array. Default float.
- Returns
- CArray
2-D array of shape (M, len(arrays)) containing cartesian products between input arrays.
Examples
>>> print(CArray.comblist([[1, 2, 3], [4, 5], [6, 7]])) CArray([[1. 4. 6.] [1. 4. 7.] [1. 5. 6.] [1. 5. 7.] [2. 4. 6.] [2. 4. 7.] [2. 5. 6.] [2. 5. 7.] [3. 4. 6.] [3. 4. 7.] [3. 5. 6.] [3. 5. 7.]])
>>> print(CArray.comblist([[1, 2], [3]], dtype=int)) CArray([[1 3] [2 3]])
-
classmethod
concatenate
(array1, array2, axis=1)[source]¶ Concatenate a sequence of arrays along the given axis.
The arrays must have the same shape, except in the dimension corresponding to axis (the second, by default).
This function preserves input masks if available.
- Parameters
- array1CArray or array_like
First array. If array1 is not an array, a CArray will be created before concatenating.
- array2CArray or array_like
Second array. If array2 is not an array, a CArray will be created before concatenating.
- axisint or None, optional
The axis along which the arrays will be joined. Default is 1. If None, both arrays are ravelled before concatenation.
- Returns
- CArray
The concatenated array. If first array is sparse, return sparse.
Notes
Differently from numpy, we manage flat vectors as 2-Dimensional of shape (1, array.size). Consequently, concatenation result of 2 flat arrays is a flat array only when axis=1.
Examples
>>> from secml.array import CArray
>>> print(CArray.concatenate([[1,2],[3,4]], [[11],[22]])) CArray([[ 1 2 11] [ 3 4 22]]) >>> print(CArray.concatenate([[1,2],[3,4]], [[11,22]], axis=0)) CArray([[ 1 2] [ 3 4] [11 22]])
>>> print(CArray.concatenate([[1,2],[3,4]], CArray([[11],[22]], tosparse=True))) CArray([[ 1 2 11] [ 3 4 22]]) >>> array = CArray.concatenate(CArray([[1,2],[3,4]], tosparse=True), [[11],[22]]) >>> print(array) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 11 (1, 0) 3 (1, 1) 4 (1, 2) 22)
>>> print(CArray.concatenate([1,2], [11,22])) CArray([ 1 2 11 22])
>>> print(CArray.concatenate([1,2], [11,22], axis=0)) CArray([[ 1 2] [11 22]]) >>> print(CArray.concatenate([1,2], [11,22], axis=1)) CArray([ 1 2 11 22])
-
cos
(self)[source]¶ Trigonometric cosine, element-wise.
DENSE FORMAT ONLY
The array elements are considered angles, in radians (2π rad equals 360 degrees).
- Returns
- CArray
New array with trigonometric cosine element-wise.
Examples
>>> from secml.array import CArray >>> from secml.core.constants import pi
>>> print((CArray([0,90,180,270,360,-90,-180,-270])*pi/180).cos().round()) CArray([ 1. 0. -1. -0. 1. 0. -1. -0.])
>>> print((CArray([[45,135],[225,315]])*pi/180).cos()) CArray([[ 0.707107 -0.707107] [-0.707107 0.707107]])
-
cumsum
(self, axis=None, dtype=None)[source]¶ Return the cumulative sum of the array elements along a given axis.
DENSE FORMAT ONLY
- Parameters
- axisint or None, optional
Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.
- dtypedtype or None, optional
Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.
- Returns
- CArray
New array with cumulative sum of elements. If axis is None, flat array with same size of input array. If axis is not None, same shape of input array.
Examples
>>> from secml.array import CArray
>>> print(CArray([-3,0,2]).cumsum()) CArray([-3 -3 -1])
>>> print(CArray([-3,0,1,2]).cumsum(axis=0)) CArray([-3 0 1 2]) >>> print(CArray([-3,0,1,2]).cumsum(axis=1)) CArray([-3 -3 -2 0])
>>> print(CArray([[-3,0],[1,2]]).cumsum(dtype=float)) CArray([-3. -3. -2. 0.])
>>> print(CArray([[-3,0],[1,2]]).cumsum(axis=1)) CArray([[-3 -3] [ 1 3]])
-
diag
(self, k=0)[source]¶ Extract a diagonal from array or construct a diagonal array.
- Parameters
- kint, optional
Diagonal index. Default is 0. Use k > 0 for diagonals above the main diagonal, k < 0 for diagonals below the main diagonal.
- Returns
- CArray
The extracted diagonal or constructed diagonal dense array. If array is 2-Dimensional, returns its k-th diagonal. Depending on numpy version resulting array can be read-only or a view of the original array’s diagonal. To make output array writable, use deepcopy(). If array is vector-like, return a 2-D array with the array on the k-th diagonal.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1, 2, 3], [10, 20, 30]]).diag(k=1)) CArray([ 2 30])
>>> print(CArray([[2, 1]], tosparse=True).diag()) CArray( (0, 0) 2 (1, 1) 1)
>>> print(CArray([1, 2, 3]).diag(k=1)) CArray([[0 1 0 0] [0 0 2 0] [0 0 0 3] [0 0 0 0]])
-
dot
(self, array)[source]¶ Dot product of two arrays.
For 2-D arrays it is equivalent to matrix multiplication. If both arrays are dense flat (rows), it is equivalent to the inner product of vectors (without complex conjugation).
The format of the output array is the same as that of the first argument of the product.
- Parameters
- arrayCArray
Second argument of dot product.
- Returns
- scalar or CArray
Result of dot product. A CArray with the same format of first argument or scalar if out.size == 1.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,1],[2,2]]).dot(CArray([[1,1],[0,0]], tosparse=True))) CArray([[1 1] [2 2]])
>>> print(CArray([10,20]).dot(CArray([[1],[0]], tosparse=True))) 10
OUTER PRODUCT
>>> print(CArray([[10],[20]]).dot(CArray([1,0], tosparse=True))) CArray([[10 0] [20 0]])
INNER PRODUCT BETWEEN VECTORS
>>> print(CArray([10,20]).dot(CArray([1,0]))) 10
Inner product between vector-like arrays is a matrix multiplication
>>> print(CArray([10,20]).dot(CArray([1,0], tosparse=True).T)) 10 >>> print(CArray([10,20], tosparse=True).dot(CArray([1,0]).T)) 10
-
property
dtype
¶ Data-type of stored data.
-
classmethod
empty
(shape, dtype=<class 'float'>, sparse=False)[source]¶ Return a new array of given shape and type, without filling it.
- Parameters
- shapeint or tuple
Shape of the new array, e.g., 2 or (2,3).
- dtypestr or dtype, optional
The desired data-type for the array. Default is float.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array is returned.
- Returns
- CArray
Array of arbitrary values with the given properties.
Notes
.empty, unlike .zeros, does not set the array values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution.
Examples
>>> from secml.array import CArray
>>> array = CArray.empty(3) >>> print(array) CArray([ 0.00000000e+000 4.94944794e+173 6.93660640e-310]) # random
>>> array = CArray.empty((2,1), dtype=int, sparse=True) >>> print(array) CArray() >>> print(array.shape) (2, 1)
-
exp
(self)[source]¶ Calculate the exponential of all elements in the input array.
DENSE FORMAT ONLY
- Returns
- CArray
New array with element-wise exponential of current data.
Notes
The irrational number e is also known as Euler’s number. It is approximately 2.718281, and is the base of the natural logarithm, ln (this means that, if x = ln(y) = log_e(y), then exp(x) = y). For real input, exp(x) is always positive.
For complex arguments, x = a + ib, we can write exp(x) = exp(a) * exp(ib). The first term, exp(a), is already known (it is the real argument, described above). The second term, exp(ib), is cos(b) + i*sin(b), a function with magnitude 1 and a periodic phase.
Examples
>>> from secml.array import CArray
>>> print(CArray([0,1,3]).exp()) CArray([ 1. 2.718282 20.085537])
-
classmethod
eye
(n_rows, n_cols=None, k=0, dtype=<class 'float'>, sparse=False)[source]¶ Return a 2-D array with ones on the diagonal and zeros elsewhere.
- Parameters
- n_rowsint
Number of rows in the output.
- n_colsint or None, optional
Number of columns in the output. If None, defaults to n_rows.
- kint, optional
Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal.
- dtypestr or dtype, optional
Data-type of the returned array.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array will be created.
- Returns
- CArray
An array where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.
Examples
>>> from secml.array import CArray >>> array = CArray.eye(2) >>> print(array) CArray([[1. 0.] [0. 1.]])
>>> array = CArray.eye(2, 3, k=1, dtype=int, sparse=True) >>> print(array) CArray( (0, 1) 1 (1, 2) 1)
>>> print(array.shape) (2, 3)
-
find
(self, condition)[source]¶ Returns vector-like array elements indices depending on condition.
- Parameters
- conditionCArray
Array with booleans representing desired condition.
- Returns
- list
List with indices corresponding to array elements where condition is True.
See also
find_2d
find method for arrays of generic shape.
Examples
>>> from secml.array import CArray
>>> array = CArray([1,0,-6,2,0]) >>> array_find = array.find(array > 0) >>> print(array_find) [0, 3] >>> print(array[array_find]) CArray([1 2])
>>> array = CArray([[1,0,-6,2,0]]) >>> array_find = array.find(array == 0) >>> print(array_find) [1, 4] >>> print(array[array_find].shape) (1, 2)
>>> array = CArray([[1,0,-6,2,0]], tosparse=True) >>> array_find = array.find(array == 0) >>> print(array_find) [1, 4] >>> print(array[array_find].shape) (1, 2)
-
find_2d
(self, condition)[source]¶ Returns array elements indices depending on condition.
- Parameters
- conditionCArray
Array with booleans representing desired condition.
- Returns
- list
List of len(out_find) == ndim with indices corresponding to array elements where condition is True. Es. for matrices, out_find[0] holds the indices of rows, out_find[1] the indices of columns.
Notes
Using the output of .find_2d() to index the original array always results in a ravelled array containing the elements whose corresponding condition was True.
Examples
>>> from secml.array import CArray
>>> array = CArray([[1,0],[-6,3],[2,7]]) >>> array_find = array.find_2d(array > 0) >>> print(array_find) [[0, 1, 2, 2], [0, 1, 0, 1]] >>> print(array[array_find]) CArray([1 3 2 7])
>>> array = CArray([[1,0],[-6,0],[2,0]], tosparse=True) >>> array_find = array.find_2d(array == 0) >>> print(array_find) [[0, 1, 2], [1, 1, 1]] >>> print(array[array_find].shape) (1, 3)
>>> array = CArray([1,0,2]) >>> array_find = array.find_2d(array > 0) >>> print(array_find) [[0, 0], [0, 2]] >>> print(array[array_find]) CArray([1 2])
-
flatten
(self)[source]¶ Return a flattened copy of array.
For dense format a 1-dim array, containing the elements of the input, is returned. For sparse format a (1 x array.size) array will be returned.
- Returns
- CArray
Output of the same dtype as a, of shape (array.size,) for dense format or (1,array.size) for sparse format.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2],[3,4]]).flatten()) CArray([1 2 3 4])
>>> print(CArray([[1],[2],[3]], tosparse=True).flatten()) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 3)
-
floor
(self)[source]¶ Return the floor of the input, element-wise.
The floor of the scalar x is the largest integer i, such that i <= x.
- Returns
- out_floorCArray
The floor of each element in x, with float dtype.
See also
CArray.round
Evenly round to the given number of decimals.
CArray.ceil
Return the ceiling of the input, element-wise.
Notes
Some spreadsheet programs calculate the “floor-towards-zero”, in other words floor(-2.5) == -2. We instead use the definition of floor where floor(-2.5) == -3.
Examples
>>> from secml.array import CArray
>>> print(CArray([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]).floor()) CArray([-2. -2. -1. 0. 1. 1. 2.])
>>> # Array with dtype == int is upcasted to float before flooring >>> print(CArray([[-2, -1], [1, 1]], tosparse=True).floor()) CArray( (0, 0) -2.0 (0, 1) -1.0 (1, 0) 1.0 (1, 1) 1.0)
-
classmethod
from_iterables
(iterables_list)[source]¶ Build an array by chaining elements from objects in the input list.
- Parameters
- iterables_listlist of iterable
List of iterables to chain. Valid objects are CArrays, lists, tuples, and any other iterable. N-Dimensional arrays are flattened before chaining.
- Returns
- CArray
Flat CArray with all values chained from input objects.
Examples
>>> from secml.array import CArray
>>> print(CArray.from_iterables([[1, 2], (3, 4), CArray([5, 6])])) CArray([1 2 3 4 5 6])
>>> print(CArray.from_iterables([CArray([1, 2]), CArray([[3, 4], [5, 6]])])) CArray([1 2 3 4 5 6])
-
get_data
(self)[source]¶ Return stored data as a standard array type.
- Returns
- np.ndarray or scipy.sparse.csr_matrix
If array is dense, a np.ndarray is returned. If array is sparse, a scipy.sparse.csr_matrix is returned.
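Examples
A short sketch of the two possible return types:
>>> from secml.array import CArray
>>> import numpy as np
>>> import scipy.sparse as scs
>>> print(isinstance(CArray([1, 2, 3]).get_data(), np.ndarray))
True
>>> print(scs.issparse(CArray([1, 2, 3], tosparse=True).get_data()))
True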
-
get_nnz
(self, axis=None)[source]¶ Counts the number of non-zero values in the array.
- Parameters
- axisint or None, optional
Axis or tuple of axes along which to count non-zeros. Default is None, meaning that non-zeros will be counted along a flattened version of the array.
- Returns
- countCArray or int
Number of non-zero values in the array along a given axis. Otherwise, the total number of non-zero values in the array is returned.
Examples
>>> from secml.array import CArray
>>> a = CArray([[1,2],[0,5],[0,0],[2,0]]) >>> print(a.get_nnz()) # Total number of non-zero elements 4 >>> print(a.get_nnz(axis=0)) # Number of non-zero elements for each column CArray([2 2]) >>> print(a.get_nnz(axis=1)) # Number of non-zero elements for each row CArray([2 1 0 1])
-
has_compatible_shape
(self, other)[source]¶ Return True if input CArray has a compatible shape.
Two CArrays can be considered compatible if both have the same shape or both are vector-like.
- Parameters
- otherCArray
Array to check for shape compatibility
- Returns
- bool
True if input array has compatible shape with current array.
See also
is_vector_like
check if an array is vector-like.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2]]).has_compatible_shape(CArray([[1],[2]]))) False
>>> print(CArray([1,2]).has_compatible_shape(CArray([1,2,3]))) False
>>> print(CArray([[1,2]], tosparse=True).has_compatible_shape(CArray([1,2]))) True
-
interp
(self, x_data, y_data, return_left=None, return_right=None)[source]¶ One-dimensional linear interpolation.
DENSE FORMAT ONLY
Returns the 1-D piecewise linear interpolant to a function with given values at discrete data-points.
- Parameters
- x_dataCArray
Flat array of floats with the x-coordinates of the data points, must be increasing.
- y_dataCArray
Flat array of floats with the y-coordinates of the data points, same length as x_data.
- return_leftfloat, optional
Value to return for x < x_data[0], default is y_data[0].
- return_rightfloat, optional
Value to return for x > x_data[-1], default is y_data[-1].
- Returns
- CArray
The interpolated values, same shape as x.
Notes
The function does not check that the x-coordinate sequence x_data is increasing. If x_data is not increasing, the results are nonsense.
Examples
>>> from secml.array import CArray >>> from secml.figure import CFigure >>> from secml.core.constants import pi
>>> fig = CFigure(fontsize=14) >>> x_array = CArray.linspace(0, 2*pi, 10) >>> y_array = x_array.sin() >>> x_vals = CArray.linspace(0, 2*pi, 50)
>>> y_interp = x_vals.interp(x_array, y_array)
>>> fig.sp.plot(x_array, y_array, 'o') >>> fig.sp.plot(x_vals, y_interp, '-xr')
-
inv
(self)[source]¶ Compute the (multiplicative) inverse of a square matrix.
Given a square matrix a, return the matrix inv satisfying dot(array, array_inv) = dot(array_inv, array) = eye(array.shape[0]).
- Returns
- array_invCArray
(Multiplicative) inverse of the square matrix.
- Raises
- LinAlgErrordense only
If array is not square or inversion fails.
- ValueErrorsparse only
If array is not square or inversion fails
Notes
If the inverse of a sparse array is expected to be non-sparse, it will likely be faster to convert array to dense first.
Examples
>>> from secml.array import CArray
>>> array = CArray([[1., 2.], [3., 4.]]) >>> array_inv = array.inv() >>> (array.dot(array_inv).round() == CArray.eye(2)).all() True >>> (array_inv.dot(array).round() == CArray.eye(2)).all() True
>>> print(CArray([[1., 2.], [3., 4.]], tosparse=True).inv().round(1)) CArray( (0, 0) -2.0 (0, 1) 1.0 (1, 0) 1.5 (1, 1) -0.5)
>>> CArray([[1.,2.,3.], [4., 5.,6.]]).inv() Traceback (most recent call last): ... numpy.linalg.linalg.LinAlgError: Last 2 dimensions of the array must be square
-
is_inf
(self)[source]¶ Test element-wise for positive or negative infinity.
- Returns
- CArray
Array of the same shape as x, with True where x == +/-inf, otherwise False.
Examples
>>> from secml.core.constants import inf, nan >>> from secml.array import CArray
>>> a = CArray([1, inf, -inf, nan, 4.5]) >>> print(a.is_inf()) CArray([False True True False False])
-
is_nan
(self)[source]¶ Test element-wise for Not a Number (NaN).
- Returns
- CArray
Array of the same shape as x, with True where x == nan, otherwise False.
Examples
>>> from secml.core.constants import inf, nan >>> from secml.array import CArray
>>> a = CArray([1, inf, -inf, nan, 4.5]) >>> print(a.is_nan()) CArray([False False False True False])
-
is_neginf
(self)[source]¶ Test element-wise for negative infinity.
- Returns
- CArray
Array of the same shape as x, with True where x == -inf, otherwise False.
Examples
>>> from secml.core.constants import inf, nan >>> from secml.array import CArray
>>> a = CArray([1, inf, -inf, nan, 4.5]) >>> print(a.is_neginf()) CArray([False False True False False])
-
is_posinf
(self)[source]¶ Test element-wise for positive infinity.
- Returns
- CArray
Array of the same shape as x, with True where x == +inf, otherwise False.
Examples
>>> from secml.core.constants import inf, nan >>> from secml.array import CArray
>>> a = CArray([1, inf, -inf, nan, 4.5]) >>> print(a.is_posinf()) CArray([False True False False False])
-
property
is_vector_like
¶ True if array is vector-like.
An array is vector-like when 1-Dimensional or 2-Dimensional with shape[0] == 1.
- Returns
- bool
True if array is vector-like.
Examples
>>> from secml.array import CArray
>>> a = CArray([1,2,3]) >>> a.is_vector_like True
>>> a = CArray([1,2,3], tosparse=True) # sparse arrays always 2-D >>> a.is_vector_like True
>>> a = CArray([[1,2],[3,4]]) >>> a.is_vector_like False
-
property
isdense
¶ True if data is stored in DENSE form, False otherwise.
- Returns
- bool
True if data is stored in DENSE form, False otherwise.
-
property
issparse
¶ True if data is stored in SPARSE form, False otherwise.
- Returns
- bool
True if data is stored in SPARSE form, False otherwise.
-
item
(self)[source]¶ Returns the single element in the array as built-in type.
- Returns
- int, float, bool, str
The single element in the array.
Examples
>>> from secml.array import CArray
>>> print(CArray([1]).item()) 1
>>> print(CArray([[1.]]).item()) 1.0
>>> print(CArray([1], tosparse=True).item()) 1
>>> print(CArray([1,2,3]).item()) Traceback (most recent call last): ... ValueError: cannot use .item(). Array has size 3
>>> print(CArray([]).item()) Traceback (most recent call last): ... ValueError: cannot use .item(). Array has size 0
-
classmethod
linspace
(start, stop, num=50, endpoint=True, sparse=False)[source]¶ Return evenly spaced numbers over a specified interval.
Returns num evenly spaced float samples, calculated over the interval [start, stop]. The endpoint of the interval can optionally be excluded.
- Parameters
- startscalar
The starting value of the sequence.
- stopscalar
The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.
- numint, optional
Number of samples to generate. Default is 50.
- endpointbool, optional
If True, stop is the last sample. Otherwise, it is not included. Default is True.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array is created.
- Returns
- CArray
There are num equally spaced samples in the closed interval [start, stop] or the half-open interval [start, stop) (depending on whether endpoint is True or False).
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
See also
CArray.arange
Similar to linspace, but uses a specific step size.
Examples
>>> from secml.array import CArray
>>> array = CArray.linspace(3.0, 4, num=5) >>> print(array) CArray([3. 3.25 3.5 3.75 4. ])
>>> array = CArray.linspace(3, 4., num=5, endpoint=False) >>> print(array) CArray([3. 3.2 3.4 3.6 3.8])
-
classmethod
load
(datafile, dtype=<class 'float'>, arrayformat='dense', startrow=0, skipend=0, cols=None)[source]¶ Load array data from plain text file.
The default encoding is utf-8.
- Parameters
- datafilestr or file_handle
File or filename to read. If the filename extension is gz or bz2, the file is first decompressed.
- dtypestr, dtype, optional
Data type of the resulting array, default ‘float’. If None, the dtype will be determined by the contents of the file.
- arrayformat{‘dense’, ‘sparse’}, optional
Format of array to load, default ‘dense’.
- startrowint, optional, dense only
Array row to start loading from.
- skipendint, optional, dense only
Number of lines to skip from the end of the file when reading.
- cols{CArray, int, tuple}, optional, dense only
Columns to load from target file.
- Returns
- CArray
Array resulting from loading, 2-Dimensional.
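No example accompanies load(); the following hedged sketch pairs it with save() (documented further below). It assumes the process can write a file with the literal name 'carray_data' in the working directory; the filename is purely illustrative.
>>> from secml.array import CArray
>>> a = CArray([[1., 2.], [0., 4.]])
>>> a.save('carray_data')  # plain text file, see save()
>>> b = CArray.load('carray_data', dtype=float, arrayformat='dense')
>>> print(b.shape)  # loaded arrays are always 2-Dimensional
(2, 2)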
-
log
(self)[source]¶ Calculate the natural logarithm of all elements in the input array.
DENSE FORMAT ONLY
- Returns
- CArray
New array with element-wise natural logarithm of current data.
Notes
Logarithm is a multivalued function: for each x there is an infinite number of z such that exp(z) = x. The convention is to return the z whose imaginary part lies in [-pi, pi].
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
References
- 1
M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
- 2
Wikipedia, “Logarithm”. http://en.wikipedia.org/wiki/Logarithm
Examples
>>> from secml.array import CArray
>>> print(CArray([0,1,3]).log()) CArray([ -inf 0. 1.098612])
-
log10
(self)[source]¶ Calculate the base 10 logarithm of all elements in the input array.
DENSE FORMAT ONLY
- Returns
- CArray
New array with element-wise base 10 logarithm of current data.
Notes
Logarithm is a multivalued function: for each x there is an infinite number of z such that 10**z = x. The convention is to return the z whose imaginary part lies in [-pi, pi].
For real-valued input data types, log10 always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log10 is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log10 handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
References
- 1
M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
- 2
Wikipedia, “Logarithm”. http://en.wikipedia.org/wiki/Logarithm
Examples
>>> from secml.array import CArray
>>> print(CArray([0,1,3]).log10()) CArray([ -inf 0. 0.477121])
-
logical_and
(self, array)[source]¶ Element-wise logical AND of array elements.
Compare two arrays and returns a new array containing the element-wise logical AND.
- Parameters
- arrayCArray
The array holding the elements to compare current array with. Must have the same shape of first array.
- Returns
- CArray
The element-wise logical AND between the two arrays. If one of the two arrays is sparse, result will be sparse.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]]).logical_and(CArray([[2,-1],[2,-1]]))) CArray([[ True False] [ True False]])
>>> print(CArray([-1]).logical_and(CArray([2]))) CArray([ True])
>>> array = CArray([1,0,2,-1]) >>> print((array > 0).logical_and(array < 2)) CArray([ True False False False])
-
logical_not
(self)[source]¶ Element-wise logical NOT of array elements.
- Returns
- CArray
The element-wise logical NOT.
Notes
For sparse arrays this operation is usually really expensive as the number of zero elements is higher than the number of non-zeros.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]]).logical_not()) CArray([[False True] [False True]])
>>> print(CArray([True]).logical_not()) CArray([False])
>>> array = CArray([1,0,2,-1]) >>> print((array > 0).logical_not()) CArray([False True False True])
-
logical_or
(self, array)[source]¶ Element-wise logical OR of array elements.
Compare two arrays and returns a new array containing the element-wise logical OR.
- Parameters
- arrayCArray or array_like
The array like object holding the elements to compare current array with. Must have the same shape of first array.
- Returns
- out_orCArray
The element-wise logical OR between the two arrays.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]]).logical_or(CArray([[2,0],[2,-1]]))) CArray([[ True False] [ True True]])
>>> print(CArray([True]).logical_or(CArray([False]))) CArray([ True])
>>> array = CArray([1,0,2,-1]) >>> print((array > 0).logical_or(array < 2)) CArray([ True True True True])
-
max
(self, axis=None, keepdims=True)[source]¶ Return the maximum of an array or maximum along an axis.
- Parameters
- axisint or None, optional
Axis along which to operate. If None (default), array is flattened before use.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Maximum of the array. If axis is None, scalar is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- For sparse arrays all elements are taken into account (both zeros and non-zeros).
- NaN values are propagated, that is if at least one item is NaN, the corresponding max value will be NaN as well.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]], tosparse=True).max()) 2
>>> print(CArray([[-1,0],[2,0]]).max(axis=0)) CArray([[2 0]]) >>> print(CArray([[-1,0],[2,0]]).max(axis=1)) CArray([[0] [2]])
>>> print(CArray([-1,0,2,0]).max(axis=0)) CArray([-1 0 2 0]) >>> print(CArray([-1,0,2,0]).max(axis=1)) CArray([2])
>>> from secml.core.constants import nan >>> print(CArray([5,nan]).max()) nan
-
maximum
(self, array)[source]¶ Element-wise maximum of array elements.
Compare two arrays and returns a new array containing the element-wise maximum. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated.
- Parameters
- arrayCArray or array_like
The array like object holding the elements to compare current array with. Must have the same shape of first array.
- Returns
- CArray
The element-wise maximum between the two arrays.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]]).maximum(CArray([[2,-1],[2,-1]]))) CArray([[2 0] [2 0]])
>>> print(CArray([[-1,0],[2,0]], tosparse=True).maximum(CArray([[2,-1],[2,-1]]))) CArray( (0, 0) 2 (1, 0) 2)
>>> print(CArray([-1]).maximum(CArray([2]))) CArray([2])
-
mean
(self, axis=None, dtype=None, keepdims=True)[source]¶ Compute the arithmetic mean along the specified axis.
Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. Output is cast to dtype float.
- Parameters
- axisint, optional
Axis along which the means are computed. The default is to compute the mean of the flattened array.
- dtypedata-type, optional
Type to use in computing the mean. For integer inputs, the default is float64; for floating point inputs, it is the same as the input dtype.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array.
- Returns
- float or CArray
Mean of the elements in the array. If axis is None, float is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- The arithmetic mean is the sum of the elements along the axis divided by the number of elements.
- Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue.
- By default, float16 results are computed using float32 intermediates for extra precision.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,4],[4,3]], tosparse=True).mean()) 3.0
>>> print(CArray([[1,4],[4,3]], tosparse=True).mean(axis=0)) CArray([[2.5 3.5]])
>>> print(CArray([1,4,4,3]).mean(axis=0)) CArray([1. 4. 4. 3.]) >>> print(CArray([1,4,4,3]).mean(axis=1)) CArray([3.])
-
median
(self, axis=None, keepdims=True)[source]¶ Compute the median along the specified axis.
Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted, i.e., V_sorted[(N-1)/2] when N is odd, and the average of the two middle values of V_sorted when N is even.
DENSE FORMAT ONLY
- Parameters
- axisint, optional
Axis along which the means are computed. The default is to compute the median of the flattened array.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array.
- Returns
- float or CArray
Median of the elements in the array. If axis is None, float is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,4],[4,3]]).median()) 3.5
>>> print(CArray([[1,4],[4,3]]).median(axis=0)) CArray([[2.5 3.5]])
>>> print(CArray([1,4,3]).median()) # array size is odd 3.0
>>> print(CArray([1,4,4,3]).median(axis=0)) CArray([1. 4. 4. 3.]) >>> print(CArray([1,4,4,3]).median(axis=1)) CArray([3.5])
-
classmethod
meshgrid
(xi, indexing='xy')[source]¶ Return coordinate matrices from coordinate vectors.
DENSE ARRAYS ONLY
Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn.
- Parameters
- x1, x2, …, xituple of CArray or list
1-D arrays representing the coordinates of a grid.
- indexing{‘xy’, ‘ij’}, optional
Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. See Examples.
- Returns
- X1, X2, …, XNtuple of CArray
For vectors x1, x2,…, ‘xn’ with lengths Ni=len(xi), return (N1, N2, N3,…Nn) shaped arrays if indexing=’ij’ or (N2, N1, N3,…Nn) shaped arrays if indexing=’xy’ with the elements of xi repeated to fill the matrix along the first dimension for x1, the second for x2 and so on.
Examples
>>> from secml.array import CArray
>>> x = CArray([1,3,5]) >>> y = CArray([2,4,6]) >>> xv, yv = CArray.meshgrid((x, y)) >>> print(xv) CArray([[1 3 5] [1 3 5] [1 3 5]]) >>> print(yv) CArray([[2 2 2] [4 4 4] [6 6 6]])
>>> xv, yv = CArray.meshgrid((x, y), indexing='ij') >>> print(xv) CArray([[1 1 1] [3 3 3] [5 5 5]]) >>> print(yv) CArray([[2 4 6] [2 4 6] [2 4 6]])
-
min
(self, axis=None, keepdims=True)[source]¶ Return the minimum of an array or minimum along an axis.
- Parameters
- axisint or None, optional
Axis along which to operate. If None (default), array is flattened before use.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Minimum of the array. If axis is None, scalar is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- For sparse arrays all elements are taken into account (both zeros and non-zeros).
- NaN values are propagated, that is if at least one item is NaN, the corresponding min value will be NaN as well.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]], tosparse=True).min()) -1
>>> print(CArray([[-2,0],[-1,0]]).min(axis=0)) CArray([[-2 0]]) >>> print(CArray([[-2,0],[-1,0]]).min(axis=1)) CArray([[-2] [-1]])
>>> print(CArray([-1,0,2,0]).min(axis=0)) CArray([-1 0 2 0]) >>> print(CArray([-1,0,2,0]).min(axis=1)) CArray([-1])
>>> from secml.core.constants import nan >>> print(CArray([5,nan]).min()) nan
-
minimum
(self, array)[source]¶ Element-wise minimum of array elements.
Compare two arrays and returns a new array containing the element-wise minimum. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated.
- Parameters
- arrayCArray or array_like
The array like object holding the elements to compare current array with. Must have the same shape of first array.
- Returns
- CArray
The element-wise minimum between the two arrays.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-1,0],[2,0]]).minimum(CArray([[2,-1],[2,-1]]))) CArray([[-1 -1] [ 2 -1]])
>>> print(CArray([[-1,0],[2,0]], tosparse=True).minimum(CArray([[2,-1],[2,-1]]))) CArray( (0, 0) -1 (0, 1) -1 (1, 0) 2 (1, 1) -1)
>>> print(CArray([-1]).minimum(CArray([2]))) CArray([-1])
-
nan_to_num
(self)[source]¶ Replace nan with zero and inf with finite numbers.
Replace array elements if Not a Number (NaN) with zero, if (positive or negative) infinity with the largest (smallest or most negative) floating point value that fits in the array dtype. All finite numbers are upcast to the output dtype (default float64).
Notes
We use the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity.
Examples
>>> from secml.array import CArray >>> from secml.core.constants import nan, inf >>> import numpy as np >>> np.set_printoptions(precision=1)
>>> array = CArray([-1,0,1,nan,inf,-inf]) >>> array.nan_to_num() >>> print(array) CArray([-1.000000e+000 0.000000e+000 1.000000e+000 0.000000e+000 1.797693e+308 -1.797693e+308])
>>> # Restoring default print precision >>> np.set_printoptions(precision=8)
-
nanargmax
(self, axis=None)[source]¶ Indices of the maximum values along an axis ignoring NaNs.
For all-NaN slices ValueError is raised. Warning: the results cannot be trusted if a slice contains only NaNs and infs.
DENSE ARRAYS ONLY
- Parameters
- axisint, None, optional
If None (default), array is flattened before computing index, otherwise the specified axis is used.
- Returns
- int or CArray
Index of the maximum of the array ignoring NaNs. If axis is None, int is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.
Examples
>>> from secml.array import CArray
>>> from secml.core.constants import nan >>> print(CArray([5, nan]).argmax()) 1
>>> print(CArray([5, nan]).nanargmax()) 0
>>> print(CArray([[-1, nan], [nan, 0]]).nanargmax()) 3
>>> print(CArray([[-1, nan], [nan, 0]]).nanargmax(axis=0)) CArray([[0 1]]) >>> print(CArray([[-1, nan], [nan, 0]]).nanargmax(axis=1)) CArray([[0] [1]])
-
nanargmin
(self, axis=None)[source]¶ Indices of the minimum values along an axis ignoring NaNs.
For all-NaN slices ValueError is raised. Warning: the results cannot be trusted if a slice contains only NaNs and infs.
- Parameters
- axisint, None, optional
If None (default), array is flattened before computing index, otherwise the specified axis is used.
- Returns
- int or CArray
Index of the minimum of the array ignoring NaNs. If axis is None, int is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
- In case of multiple occurrences of the minimum values, the indices corresponding to the first occurrence are returned.
Examples
>>> from secml.array import CArray
>>> from secml.core.constants import nan >>> print(CArray([5, nan]).argmin()) 1
>>> print(CArray([5, nan]).nanargmin()) 0
>>> print(CArray([[-1, nan], [nan, 0]]).nanargmin()) 0
>>> print(CArray([[-1, nan], [nan, 0]]).nanargmin(axis=0)) CArray([[0 1]]) >>> print(CArray([[-1, nan], [nan, 0]]).nanargmin(axis=1)) CArray([[0] [1]])
-
nanmax
(self, axis=None, keepdims=True)[source]¶ Return the maximum of an array or maximum along an axis ignoring NaNs.
When all-NaN slices are encountered a RuntimeWarning is raised and NaN is returned for that slice.
DENSE ARRAYS ONLY
- Parameters
- axisint or None, optional
Axis along which to operate. If None (default), flattened input is used.
- keepdimsbool, optional, dense only
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Maximum of the array ignoring NaNs. If axis is None, scalar is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Examples
>>> from secml.array import CArray
>>> from secml.core.constants import nan >>> print(CArray([5, nan]).max()) nan
>>> print(CArray([5, nan]).nanmax()) 5.0
>>> print(CArray([[-1, nan], [nan, 0]]).nanmax()) 0.0
>>> print(CArray([[-1, nan], [nan, 0]]).nanmax(axis=0)) CArray([[-1. 0.]]) >>> print(CArray([[-1, nan], [nan, 0]]).nanmax(axis=1)) CArray([[-1.] [ 0.]])
-
nanmin
(self, axis=None, keepdims=True)[source]¶ Return the minimum of an array or minimum along an axis ignoring NaNs.
When all-NaN slices are encountered a RuntimeWarning is raised and NaN is returned for that slice.
DENSE ARRAYS ONLY
- Parameters
- axisint or None, optional
Axis along which to operate. If None (default), flattened input is used.
- keepdimsbool, optional, dense only
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Minimum of the array ignoring NaNs. If axis is None, scalar is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Examples
>>> from secml.array import CArray
>>> from secml.core.constants import nan >>> print(CArray([5, nan]).min()) nan
>>> print(CArray([5, nan]).nanmin()) 5.0
>>> print(CArray([[-1, nan], [nan, 0]]).nanmin()) -1.0
>>> print(CArray([[-1, nan], [nan, 0]]).nanmin(axis=0)) CArray([[-1. 0.]]) >>> print(CArray([[-1, nan], [nan, 0]]).nanmin(axis=1)) CArray([[-1.] [ 0.]])
-
property
ndim
¶ Number of array dimensions.
This is always 2 for sparse arrays.
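A quick hedged check of this property, consistent with the note above that sparse arrays are always 2-Dimensional:
>>> from secml.array import CArray
>>> CArray([1, 2, 3]).ndim  # dense flat array
1
>>> CArray([1, 2, 3], tosparse=True).ndim  # sparse arrays are always 2-D
2
>>> CArray([[1, 2], [3, 4]]).ndim
2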
-
property
nnz
¶ Number of non-zero values in the array.
Examples
>>> from secml.array import CArray
>>> print(CArray([1,0,3,0], tosparse=True).nnz) 2
-
property
nnz_data
¶ Return non-zero array elements.
- Returns
- nnz_dataCArray
Flat array, dense, shape (n, ), with non-zero array elements.
Examples
>>> from secml.array import CArray
>>> array = CArray([1,0,3,0], tosparse=True) >>> print(array.nnz_data) CArray([1 3])
-
property
nnz_indices
¶ Index of non-zero array elements.
- Returns
- nnz_indiceslist
List of 2 lists. Inside out[0] there are the indices of the corresponding rows and inside out[1] there are the indices of the corresponding columns of non-zero array elements.
Examples
>>> from secml.array import CArray
>>> array = CArray([1,0,3,0], tosparse=True) >>> nzz_indices = array.nnz_indices >>> nzz_indices [[0, 0], [0, 2]] >>> print(array[nzz_indices]) CArray( (0, 0) 1 (0, 1) 3)
>>> array = CArray([1,0,3,0]) >>> nzz_indices = array.nnz_indices >>> nzz_indices [[0, 0], [0, 2]] >>> print(array[nzz_indices]) CArray([1 3])
-
norm
(self, order=None)[source]¶ Entrywise vector norm.
This function provides vector norms on vector-like arrays.
This function is able to return one of an infinite number of vector norms (described below), depending on the value of the order parameter.
- Parameters
- order{int, inf, -inf}, optional
Order of the norm (see table under Notes).
- Returns
- float
Norm of the array.
See also
numpy.norm
Full description of different norms.
Notes
For integer order parameter, norm is computed as norm = sum(abs(array)**order)**(1./order). For other norm types, see np.norm description.
Negative vector norms are only supported for dense arrays.
Differently from numpy, we consider flat vectors as 2-Dimensional with shape (1,array.size).
If input 2-Dimensional array is NOT vector-like, ValueError will be raised.
Examples
>>> from secml.array import CArray >>> from secml.core.constants import inf
>>> print(round(CArray([1,2,3]).norm(), 5)) 3.74166 >>> print(round(CArray([[1,2,3]]).norm(2), 5)) 3.74166
>>> print(CArray([1,2,3]).norm(1)) 6.0 >>> print(CArray([1,2,3]).tosparse().norm(1)) 6.0
>>> print(CArray([1,2,3]).norm(inf)) 3.0 >>> print(CArray([1,2,3]).norm(-inf)) 1.0
>>> print(CArray([[1,2],[2,4]]).norm()) Traceback (most recent call last): ... ValueError: Array has shape (2, 2). Call .norm_2d() to compute matricial norm or vector norm along axis.
-
norm_2d
(self, order=None, axis=None, keepdims=True)[source]¶ Matrix norm or vector norm along axis.
This function provides matrix norm or vector norm along axis of 2D arrays. Flat arrays will be converted to 2D before computing the norms.
This function is able to return one of seven different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the order parameter.
- Parameters
- order{‘fro’, non-zero int, inf, -inf}, optional
Order of the norm (see table under Notes). ‘fro’ stands for Frobenius norm.
- axisint or None, optional
If axis is an integer, it specifies the axis of array along which to compute the vector norms. If axis is None then the matrix norm is returned.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- float or CArray
Norm of the array. If axis is None, float is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
See also
numpy.norm
Full description of different norms.
Notes
For integer order parameter, norm is computed as norm = sum(abs(array)**order)**(1./order). For other norm types, see np.norm description. Negative vector norms along axis are only supported for dense arrays.
Examples
>>> from secml.array import CArray >>> from secml.core.constants import inf
>>> print(round(CArray([1,2,3]).norm_2d(), 5)) 3.74166
>>> print(CArray([1,2,3]).norm_2d(1)) # max(sum(abs(x), axis=0)) 3.0 >>> print(CArray([[1,2,3]]).norm_2d(1)) 3.0
>>> print(CArray([1,2,3]).norm_2d(inf)) # max(sum(abs(x), axis=1)) 6.0 >>> print(CArray([1,2,3]).norm_2d(-inf)) # min(sum(abs(x), axis=1)) 6.0
>>> print(CArray([[1,2],[2,4]], tosparse=True).norm_2d()) 5.0
>>> print(CArray([[1,2],[2,4]]).norm_2d(axis=0).round(5)) CArray([[2.23607 4.47214]]) >>> print(CArray([[1,2],[2,4]]).norm_2d(axis=1).round(5)) CArray([[2.23607] [4.47214]])
>>> print(CArray([1,2,3]).norm_2d(2, axis=0)) CArray([[1. 2. 3.]]) >>> print(CArray([1,2,3]).norm_2d(2, axis=1).round(5)) CArray([[3.74166]])
>>> print(CArray([1,0,3], tosparse=True).norm_2d(axis=0)) # Norm is dense CArray([[1. 0. 3.]]) >>> print(CArray([1,0,3], tosparse=True).norm_2d(axis=1).round(5)) CArray([[3.16228]])
-
normpdf
(self, mu=0.0, sigma=1.0)[source]¶ Return normal distribution function value with mean and standard deviation given for the current array values.
DENSE ARRAYS ONLY
The norm pdf is given by: f(x) = exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi)).
The standard normal distribution has mu = 0 and sigma = 1.
- Parameters
- mufloat, optional
Normal distribution mean. Default 0.0.
- sigmafloat, optional
Normal distribution standard deviation. Default 1.0.
- Returns
- CArray
Normal distribution values.
Examples
>>> from secml.array import CArray
>>> print(CArray([1,2,3]).normpdf()) CArray([0.241971 0.053991 0.004432])
>>> print(CArray([1,2,3]).normpdf(2,0.5)) CArray([0.107982 0.797885 0.107982])
-
classmethod
ones
(shape, dtype=<class 'float'>, sparse=False)[source]¶ Return a new array of given shape and type, filled with ones.
- Parameters
- shapeint or tuple
Shape of the new array, e.g., 2 or (2,3).
- dtypestr or dtype, optional
The desired data-type for the array. Default is float.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array of ones is created.
- Returns
- CArray
Array of ones with the given properties.
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
Examples
>>> from secml.array import CArray
>>> print(CArray.ones(2)) CArray([1. 1.])
>>> print(CArray.ones((2,1), dtype=int, sparse=True)) CArray( (0, 0) 1 (1, 0) 1)
-
pinv
(self, rcond=1e-15)[source]¶ Compute the (Moore-Penrose) pseudo-inverse of a matrix.
DENSE FORMAT ONLY
Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values.
- Parameters
- rcondfloat
Cutoff for small singular values. Singular values smaller (in modulus) than rcond * largest_singular_value (again, in modulus) are set to zero.
- Returns
- array_pinvCArray
The pseudo-inverse of array. Resulting array have shape (array.shape[1], array.shape[0]).
- Raises
- LinAlgErrordense only
If array is not square or inversion fails.
Notes
The pseudo-inverse of a matrix A, denoted A^+, is defined as: “the matrix that ‘solves’ [the least-squares problem] A x = b,” i.e., if x_bar is said solution, then A^+ is that matrix such that x_bar = A^+ b. It can be shown that if Q1 * Sigma * Q2.T = A is the singular value decomposition of A, then A^+ = Q2 * Sigma^+ * Q1.T, where Q1 and Q2 are orthogonal matrices, Sigma is a diagonal matrix consisting of A’s so-called singular values (followed, typically, by zeros), and Sigma^+ is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros). [1]
References
- 1
G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142.
Examples
>>> from secml.array import CArray
The following example checks that array * array_pinv * array == array and array_pinv * array * array_pinv == array_pinv:
>>> array = CArray([[1,3],[0,5],[8,2]]) >>> array_pinv = array.pinv() >>> (array == array.dot(array_pinv.dot(array)).round()).all() True
>>> (array_pinv.round(2) == array_pinv.dot(array.dot(array_pinv)).round(2)).all() True
-
pow
(self, exp)[source]¶ Array elements raised to powers from input exponent, element-wise.
Raise each base in the array to the positionally-corresponding power in exp. exp must be broadcastable to the same shape of array. If exp is a scalar, works like the standard ** operator.
- Parameters
- expCArray or scalar
Exponent of power, can be another array (DENSE ONLY) or a single scalar. If array, must have the same shape of original array.
- Returns
- CArray
New array with the power of current data using input exponents.
Examples
>>> from secml.array import CArray
>>> print(CArray([1,2,3]).pow(2)) CArray([1 4 9])
>>> print(CArray([1,2,3]).pow(CArray([2,0,3]))) CArray([ 1 1 27])
>>> print(CArray([1,0,3], tosparse=True).pow(2)) CArray( (0, 0) 1 (0, 2) 9)
-
prod
(self, axis=None, dtype=None, keepdims=True)[source]¶ Return the product of array elements over a given axis.
- Parameters
- axisint or None, optional
Axis along which the product is computed. The default (None) is to compute the product over the flattened array.
- dtypestr or dtype, optional
The data-type of the returned array, as well as of the accumulator in which the elements are multiplied. By default, if array is of integer type, dtype is the default platform integer. (Note: if the type of array is unsigned, then so is dtype.) Otherwise, the dtype is the same as that of array.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Product of the elements in the array. If axis is None, scalar is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
Differently from numpy, we manage flat vectors as 2-Dimensional of shape (1, array.size). This means that when axis=0, a flat array is returned as is (see examples).
Arithmetic is modular when using integer types, and no error is raised on overflow. That means that, on a 32-bit platform:
>>> print(CArray([536870910, 536870910, 536870910, 536870910]).prod()) # random result 16
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2],[3,4]]).prod()) 24
>>> print(CArray([[1,2],[3,4]], tosparse=True).prod(axis=1)) CArray( (0, 0) 2 (1, 0) 12) >>> print(CArray([[1,2],[3,4]]).prod(axis=0, dtype=float)) CArray([[3. 8.]])
>>> print(CArray([1,2,3]).prod(axis=0)) CArray([1 2 3]) >>> print(CArray([1,2,3]).prod(axis=1)) CArray([6])
-
classmethod
rand
(shape, random_state=None, sparse=False, density=0.01)[source]¶ Return random floats in the half-open interval [0.0, 1.0).
Results are from the “continuous uniform” distribution over the stated interval. To sample Unif[a, b), b > a multiply the output of rand by (b-a) and add a:
(b - a) * rand() + a
- Parameters
- shapeint or tuple
Shape of the new array.
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator; If None, is the seed used by np.random.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array of zeros is created.
- densityscalar, optional, sparse only
Density of the generated sparse array, default 0.01 (1%). Density equal to one means a full array, density of 0 means no non-zero items.
- Returns
- CArray
Array of random floats with the given shape and format.
Examples
>>> from secml.array import CArray
>>> array_dense = CArray.rand(shape=2) >>> print(array_dense) # random result CArray([0.170139 0.445385])
>>> array_dense = CArray.rand(shape=(2, 3)) >>> print(array_dense) [[ 0.68588225 0.88371576 0.3958642 ] [ 0.58243871 0.05104796 0.77719998]]
>>> array_sparse = CArray.rand((2, 3), sparse=True, density=0.45) >>> print(array_sparse) CArray( (0, 0) 0.209653887609 (1, 1) 0.521906773406)
-
classmethod
randint
(low, high=None, shape=None, random_state=None, sparse=False)[source]¶ Return random integers from low (inclusive) to high (exclusive).
Return random integers from the “discrete uniform” distribution in the “half-open” interval [low, high). If high is None (the default), then results are from [0, low).
- Parameters
- lowint
Lowest (signed) integer to be drawn from the distribution (unless high=None, in which case this parameter is the highest such integer).
- highint or None, optional
If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if high=None).
- shapeint, tuple or None, optional
Shape of output array. If None, a single value is returned.
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator; If None, is the seed used by np.random.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a random sparse array is created.
- Returns
- CArray
Size-shaped array of random integers.
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
Examples
>>> from secml.array import CArray
>>> array = CArray.randint(5, shape=10) >>> print(array) CArray([1 0 0 2 2 0 2 4 3 4])
>>> array = CArray.randint(0, 5, 10) >>> print(array) CArray([0 2 2 0 3 1 4 2 4 1])
>>> array = CArray.randint(0, 5, (2, 2)) >>> print(array) CArray([[3 2] [0 2]])
-
classmethod
randn
(shape, random_state=None)[source]¶ Return a sample (or samples) from the “standard normal” distribution.
DENSE FORMAT ONLY
The samples are generated from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1.
- Parameters
- shapeint or tuple
Shape of the new array.
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator; If None, is the seed used by np.random.
- Returns
- CArray or float
A new array of given shape with floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied.
Examples
>>> from secml.array import CArray
>>> array_dense = CArray.randn(shape=2) >>> print(array_dense) CArray([-0.739091 1.201532])
>>> array_dense = CArray.randn(shape=(2, 3)) >>> print(array_dense) CArray([[ 0.2848132 -0.02965108 1.41184901] [-1.3842878 0.2673215 0.18978747]])
-
classmethod
randsample
(a, shape=None, replace=False, random_state=None, sparse=False)[source]¶ Generates a random sample from a given array.
- Parameters
- aCArray or int
If an array, a random sample is generated from its elements. If an int, the random sample is generated as if a was CArray.arange(n)
- shapeint, tuple or None, optional
Shape of output array. If None, a single value is returned.
- replacebool, optional
Whether the sample is with or without replacement, default False.
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator; If None, is the seed used by np.random.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a random sparse array is created.
- Returns
- CArray
The generated random samples.
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
Examples
>>> from secml.array import CArray
>>> array = CArray.randsample(10, shape=(2, 3)) >>> print(array) CArray([[2 9 4] [8 6 5]])
>>> array = CArray.randsample(CArray([1,5,6,7,3]), shape=4) >>> print(array) CArray([3 7 5 6])
>>> CArray.randsample(3, 4) Traceback (most recent call last): ... ValueError: Cannot take a larger sample than population when 'replace=False'
-
classmethod
randuniform
(low=0.0, high=1.0, shape=None, random_state=None, sparse=False)[source]¶ Return random samples from low (inclusive) to high (exclusive).
Samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn.
- Parameters
- lowfloat or CArray, optional
Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0.0. A CArray of floats can be passed to specify a different bound for each position.
- highfloat or CArray, optional
Upper boundary of the output interval. All values generated will be less than high. The default value is 1.0. A CArray of floats can be passed to specify a different bound for each position.
- shapeint, tuple or None, optional
Shape of output array. If None, a single value is returned.
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator; If None, is the seed used by np.random.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a random sparse array is created.
- Returns
- CArray
Size-shaped array of random samples.
Warning
When sparse is True, array is created as dense and then converted to sparse format. Consequently, the performance of this method is not comparable to other sparse array creation routines.
Examples
>>> from secml.array import CArray
>>> print(CArray.randuniform(high=5.0, shape=5)) CArray([ 4.36769 0.139844 3.711734 4.924484 3.737672])
>>> print(CArray.randuniform(shape=(2, 5))) CArray([[ 0.158324 0.485235 0.723386 0.072326 0.344732] [ 0.761642 0.844458 0.501523 0.171417 0.002068]])
>>> print(CArray.randuniform(CArray([-1, -2, 3]), 5, (2, 3))) CArray([[ -0.584032 1.433291 3.671319] [ 3.566163 -1.139602 4.268376]])
-
ravel
(self)[source]¶ Return a flattened array.
For dense format a 1-D array, containing the elements of the input, is returned. For sparse format a (1 x array.size) array will be returned.
A copy is made only if needed.
- Returns
- CArray
Flattened view (if possible) of the array with shape (array.size,) for dense format or (1, array.size) for sparse format.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2],[3,4]]).ravel()) CArray([1 2 3 4])
>>> print(CArray([[1],[2],[3]], tosparse=True).ravel()) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 3)
-
repeat
(self, repeats, axis=None)[source]¶ Repeat elements of an array.
DENSE FORMAT ONLY
- Parameters
- repeatsint, list or CArray
The number of repetitions for each element. If this is an array_like object, will be broadcasted to fit the shape of the given axis.
- axisint, optional
The axis along which to repeat values. By default, array is flattened before use.
- Returns
- CArray
Output array which has the same shape as original array, except along the given axis. If axis is None, a flat array is returned.
Examples
>>> from secml.array import CArray
>>> x = CArray([[1,2],[3,4]])
>>> print(x.repeat(2)) CArray([1 1 2 2 3 3 4 4])
>>> print(x.repeat(2, axis=1)) # Repeat the columns on the right CArray([[1 1 2 2] [3 3 4 4]]) >>> print(x.repeat(2, axis=0)) # Repeat the rows on the right CArray([[1 2] [1 2] [3 4] [3 4]])
>>> print(x.repeat([1, 2], axis=0)) CArray([[1 2] [3 4] [3 4]])
>>> x.repeat([1, 2]) # repeats size must be consistent with axis Traceback (most recent call last): ... ValueError: operands could not be broadcast together with shape (4,) (2,)
>>> x = CArray([1,2,3]) >>> print(x.repeat(2, axis=0)) # Repeat the (only) row on the right CArray([1 1 2 2 3 3]) >>> print(x.repeat(2, axis=1)) # No columns to repeat Traceback (most recent call last): ... numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1
-
repmat
(self, m, n)[source]¶ Repeat an array M x N times.
- Parameters
- m, nint
The number of times the array is repeated along the first and second axes.
- Returns
- CArray
The result of repeating array m X n times.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,2]],tosparse=True).repmat(2,2)) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 1 (0, 3) 2 (1, 0) 1 (1, 1) 2 (1, 2) 1 (1, 3) 2)
>>> print(CArray([1,2]).repmat(2,2)) CArray([[1 2 1 2] [1 2 1 2]]) >>> print(CArray([1,2]).repmat(1,2)) CArray([[1 2 1 2]]) >>> print(CArray([1,2]).repmat(2,1)) CArray([[1 2] [1 2]])
-
reshape
(self, newshape)[source]¶ Gives a new shape to an array without changing its data.
- Parameters
- newshapeint or sequence of ints
Desired shape for output array. If an integer or a tuple of length 1, resulting array will have shape (n,) if dense, (1,n) if sparse.
- A copy is made only if needed.
- Returns
- CArray
Array with new shape. If possible, a view of original array data will be returned, otherwise a copy will be made first.
Examples
>>> from secml.array import CArray
>>> print(CArray([1,2,3]).reshape((3,1))) CArray([[1] [2] [3]])
>>> print(CArray([[1],[2],[3]], tosparse=True).reshape(3)) CArray( (0, 0) 1 (0, 1) 2 (0, 2) 3)
>>> CArray([1,2,3]).reshape(4) Traceback (most recent call last): ... ValueError: cannot reshape array of size 3 into shape (4,)
-
resize
(self, newshape, constant=0)[source]¶ Return a new array with the specified shape.
Missing entries are filled with input constant (default 0).
DENSE FORMAT ONLY
- Parameters
- newshapeint or sequence of ints
Integer or one integer for each desired dimension of output array. If a tuple of length 1, output sparse array will have shape (1, n).
- constantscalar
Scalar to be used for filling missing entries. Default 0.
- Returns
- CArray
Array with new shape. Array dtype is preserved. Missing entries are filled with the desired constant (default 0).
Examples
>>> from secml.array import CArray
>>> print(CArray([1,2,3]).resize((3,3))) CArray([[1 2 3] [0 0 0] [0 0 0]])
>>> print(CArray([1,2,3]).resize((3,1))) CArray([[1] [2] [3]])
>>> print(CArray([1,2,3]).resize((1,3))) CArray([[1 2 3]])
>>> print(CArray([[1,2,3]]).resize((5, ))) CArray([1 2 3 0 0])
>>> from secml.core.constants import inf >>> print(CArray([[1,2,3]]).resize((5, ), constant=inf)) CArray([ 1 2 3 -9223372036854775808 -9223372036854775808])
>>> print(CArray([[0, 1],[2, 3]]).resize(3)) CArray([0 1 2])
>>> print(CArray([[0, 1],[2, 3]]).resize((3, 3))) CArray([[0 1 2] [3 0 0] [0 0 0]])
>>> print(CArray([[0, 1, 2],[3, 4, 5]]).resize((2, 2))) CArray([[0 1] [2 3]])
-
round
(self, decimals=0)[source]¶ Evenly round to the given number of decimals.
- Parameters
- decimalsint, optional
Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to round to the left of the decimal point.
- Returns
- out_roundedCArray
A new array containing the rounded values. The real and imaginary parts of complex numbers are rounded separately. The result of rounding a float is a float.
Notes
For values exactly halfway between rounded decimal values, we round to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating point standard [1] and errors introduced when scaling by powers of ten.
References
- 1
“Lecture Notes on the Status of IEEE 754”, William Kahan, http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
- 2
“How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?”, William Kahan, http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
Examples
>>> from secml.array import CArray
>>> print(CArray([1.28,5.62]).round()) CArray([1. 6.])
>>> print(CArray([1.28,5.62],tosparse=True).round(decimals=1)) CArray( (0, 0) 1.3 (0, 1) 5.6)
>>> print(CArray([.5, 1.5, 2.5, 3.5, 4.5]).round()) # rounds to nearest even value CArray([0. 2. 2. 4. 4.])
>>> print(CArray([1,5,6,11]).round(decimals=-1)) CArray([ 0 0 10 10])
-
save
(self, datafile, overwrite=False)[source]¶ Save array data into plain text file.
Data is stored preserving original data type.
The default encoding is utf-8.
- Parameters
- datafilestr, file_handle (dense only)
Text file to save data to. If a string, it’s supposed to be the filename of file to save. If a file handle, data will be stored using active file handle mode. If the filename ends in .gz, the file is automatically saved in compressed gzip format. load() function understands gzipped files transparently.
- overwritebool, optional
If True and target file already exists, file will be overwritten. Otherwise (default), IOError will be raised.
Notes
- Dense format, flat arrays are stored with shape N x 1.
- Sparse format, we only save non-zero data along with indices necessary to reconstruct original 2-dimensional array.
- Dense format, shape of original array can be easily recognized from target text file.
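A minimal hedged example of save(); the filename is illustrative, and the .gz behaviour follows from the datafile description above:
>>> from secml.array import CArray
>>> a = CArray([[1., 2.], [0., 4.]])
>>> a.save('dense_data.gz', overwrite=True)  # .gz suffix triggers gzip compression
>>> b = CArray.load('dense_data.gz')  # load() reads gzipped files transparently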
-
sha1
(self)[source]¶ Calculate the sha1 hexadecimal hash of array.
- Returns
- hashstr
Hexadecimal hash of array.
Examples
>>> from secml.array import CArray
>>> print(CArray([0,1,3]).sha1()) 9d9d15176c022373488fb8a2b34be0ba3046f5c6
-
property
shape
¶ Shape of stored data, tuple of ints.
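A hedged illustration (dense flat arrays are 1-Dimensional, sparse arrays are always 2-Dimensional):
>>> from secml.array import CArray
>>> CArray([1, 2, 3]).shape
(3,)
>>> CArray([1, 2, 3], tosparse=True).shape  # sparse arrays are always 2-D
(1, 3)
>>> CArray([[1, 2], [3, 4]]).shape
(2, 2)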
-
shuffle
(self)[source]¶ Modify array in-place by shuffling its contents.
This function only shuffles the array along the first axis of a multi-dimensional array that is not vector-like.
Examples
>>> from secml.array import CArray
>>> array = CArray([2,3,0,1]) >>> array.shuffle() >>> print(array) CArray([0 2 1 3]) # random result
>>> array = CArray([[2,3],[0,1]]) >>> array.shuffle() >>> print(array) CArray([[0 1] [2 3]])
-
sign
(self)[source]¶ Returns element-wise sign of the array.
The sign function returns -1 if x < 0, 0 if x == 0, 1 if x > 0.
- Returns
- CArray
Array with sign of each element.
Examples
>>> from secml.array import CArray
>>> print(CArray([[-2,0,2]]).sign()) CArray([[-1 0 1]])
>>> print(CArray([-2,0,2], tosparse=True).sign()) CArray( (0, 0) -1 (0, 2) 1)
-
sin
(self)[source]¶ Trigonometric sine, element-wise.
DENSE FORMAT ONLY
The array elements are considered angles, in radians (2*pi rad equals 360 degrees).
- Returns
- CArray
New array with trigonometric sine element-wise.
Notes
The sine is one of the fundamental functions of trigonometry (the mathematical study of triangles). Consider a circle of radius 1 centered on the origin. A ray comes in from the +x axis, makes an angle at the origin (measured counter-clockwise from that axis), and departs from the origin. The y coordinate of the outgoing ray's intersection with the unit circle is the sine of that angle. It ranges from -1 for an angle of 3*pi/2 to +1 for an angle of pi/2. The function has zeroes where the angle is a multiple of pi. Sines of angles between pi and 2*pi are negative. The numerous properties of the sine and related functions are included in any standard trigonometry text.
Examples
>>> from secml.array import CArray >>> from secml.core.constants import pi
>>> print((CArray([0,90,180,270,360,-90,-180,-270])*pi/180).sin().round()) CArray([ 0. 1. 0. -1. -0. -1. -0. 1.])
>>> print((CArray([[45,135],[225,315]])*pi/180).sin()) CArray([[ 0.707107 0.707107] [-0.707107 -0.707107]])
-
property
size
¶ Size (number of elements) of array.
For sparse data, this counts both zeros and non-zero elements.
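A hedged illustration; note that explicit zeros count towards the size of sparse arrays:
>>> from secml.array import CArray
>>> CArray([[1, 0], [2, 3]]).size
4
>>> CArray([[1, 0], [2, 3]], tosparse=True).size  # zeros are counted too
4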
-
sort
(self, axis=-1, kind='quicksort', inplace=False)[source]¶ Sort an array.
- Parameters
- axisint, optional
Axis along which to sort. The default is -1 (the last axis).
- kind{‘quicksort’, ‘mergesort’, ‘heapsort’}, optional
Sorting algorithm to use. Default ‘quicksort’. For sparse arrays, only ‘quicksort’ is available.
- inplacebool, optional
If True, array will be sorted in-place. Default False.
- Returns
- CArray
Sorted array.
See also
numpy.sort
Description of different sorting algorithms.
CArray.argsort
Indirect sort.
Notes
Differently from numpy, we manage flat vectors as 2-Dimensional of shape (1, array.size). This means that when axis=0, flat array is returned as is (see examples).
For large sparse arrays is actually faster to convert to dense first.
Examples
>>> from secml.array import CArray
>>> array = CArray([5,-1,0,-3]) >>> print(array.sort()) CArray([-3 -1 0 5])
>>> array = CArray([5,-1,0,-3]) >>> print(array.sort(axis=0)) CArray([ 5 -1 0 -3])
>>> array = CArray([5,-1,0,-3]) >>> print(array.sort(axis=1)) CArray([-3 -1 0 5])
>>> array = CArray([5,-1,0,-3]) >>> out = array.sort(inplace=True) >>> print(out) CArray([-3 -1 0 5]) >>> array[0] = 100 >>> print(out) CArray([100 -1 0 5])
-
sqrt
(self)[source]¶ Compute the positive square-root of an array, element-wise.
If any array element is complex, a complex array is returned (and the square-roots of negative reals are calculated). If all of the array elements are real, so is the resulting array, with negative elements returning nan.
- Returns
- CArray
A new array with the element-wise positive square-root of original array.
Notes
sqrt has, consistent with common convention, its branch cut the real “interval” [-inf, 0), and is continuous from above on it. A branch cut is a curve in the complex plane across which a given complex function fails to be continuous.
Examples
>>> from secml.array import CArray
>>> print(CArray(2).sqrt()) CArray([1.414214])
>>> print(CArray([2,3,4]).sqrt()) CArray([1.414214 1.732051 2. ])
>>> print(CArray([[2,3],[4,5]],tosparse=True).sqrt().round(4)) CArray( (0, 0) 1.4142 (0, 1) 1.7321 (1, 0) 2.0 (1, 1) 2.2361)
>>> print(CArray([-3, 0]).sqrt()) CArray([nan 0.])
-
std
(self, axis=None, ddof=0, keepdims=True)[source]¶ Compute the standard deviation along the specified axis.
Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis.
- Parameters
- axisint, optional
Axis along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
- ddofint, optional
Means Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is zero.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array.
- Returns
- float or CArray
Standard deviation of the elements in the array. If axis is None, float is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Notes
The standard deviation is the square root of the average of the squared deviations from the mean, i.e., ‘std = sqrt(mean(abs(x - x.mean())**2))’.
The average squared deviation is normally calculated as ‘x.sum() / N’, where ‘N = len(x)’. If, however, ddof is specified, the divisor ‘N - ddof’ is used instead. In standard statistical practice, ‘ddof=1’ provides an unbiased estimator of the variance of the infinite population. ‘ddof=0’ provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ‘ddof=1’, it will not be an unbiased estimate of the standard deviation per se.
Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and non-negative.
For floating-point input, the mean is computed using default float precision. Depending on the input data, this can cause the results to be inaccurate, especially for 32-bit machines (float32).
Examples
>>> from secml.array import CArray
>>> print(round(CArray([[1,4],[4,3]],tosparse=True).std(), 2)) 1.22
>>> print(CArray([[1,4],[4,3]],tosparse=True).std(axis=0)) CArray([[1.5 0.5]])
>>> print(CArray([[1,4],[4,3]]).std(axis=0, ddof=1).round(2)) CArray([[2.12 0.71]])
>>> print(CArray([1,4,4,3]).std(axis=0)) CArray([0. 0. 0. 0.]) >>> print(CArray([1,4,4,3]).std(axis=1).round(2)) CArray([1.22])
-
sum
(self, axis=None, keepdims=True)[source]¶ Sum of array elements over a given axis.
- Parameters
- axisint or None, optional
Axis along which a sum is performed. The default (axis = None) is to perform a sum over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis.
- keepdimsbool, optional
If this is set to True (default), the result will broadcast correctly against the original array. Otherwise resulting array is flattened.
- Returns
- scalar or CArray
Sum of the elements in the array. If axis is None, float is returned. Otherwise, a CArray with shape and number of dimensions consistent with the original array and the axis parameter is returned.
Examples
>>> from secml.array import CArray
>>> print(CArray([-3,0,2]).sum()) -1
>>> print(CArray([[-3,0],[1,2]], tosparse=True).sum(axis=1)) CArray([[-3] [ 3]])
>>> print(CArray([-3,0,1,2]).sum(axis=0)) CArray([-3 0 1 2]) >>> print(CArray([-3,0,1,2]).sum(axis=1)) CArray([0])
-
t
¶
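The description of this property is missing above. Assuming, by analogy with transpose(), that t returns the transposed array (an assumption, not confirmed by the docstring), usage would look like:
>>> from secml.array import CArray
>>> a_t = CArray([1, 2, 3]).t  # assumed shorthand for .transpose(); verify before relying on it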
-
tocsr
(self)[source]¶ Return a sparse scipy.sparse.csr_matrix representation of array.
- Returns
- scipy.sparse.csr_matrix
A representation of current data as scipy.sparse.csr_matrix. If possible, we avoid copying original data.
Examples
>>> from secml.array import CArray
>>> array = CArray([[1,2],[0,4]], tosparse=True).tocsr() >>> print(array) (0, 0) 1 (0, 1) 2 (1, 1) 4 >>> type(array) <class 'scipy.sparse.csr.csr_matrix'>
>>> array = CArray([1,2,3]).tocsr() >>> print(array) (0, 0) 1 (0, 1) 2 (0, 2) 3 >>> type(array) <class 'scipy.sparse.csr.csr_matrix'>
-
todense
(self, dtype=None, shape=None)[source]¶ Converts array to dense format.
Return current array if it has already a dense format.
- Parameters
- dtypestr or dtype, optional
Typecode or data-type to which the array is cast.
- shapesequence of ints, optional
Shape of the new array, e.g., ‘(2, 3)’.
- Returns
- CArray
Dense array with input data and desired dtype and/or shape.
Notes
If current array has already a dense format, dtype and shape parameters will not be functional. Use .astype() or .reshape() function to alter array shape/dtype.
Examples
>>> from secml.array import CArray
>>> print(CArray([[2, 0], [3, 4]], tosparse=True).todense(dtype=float)) CArray([[2. 0.] [3. 4.]])
>>> print(CArray([[2, 0], [3, 4]], tosparse=True).todense(shape=(4,))) CArray([2 0 3 4])
-
tolist
(self)[source]¶ Return the array as a (possibly nested) list.
Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.
- Returns
- list
The possibly nested list of array elements.
Examples
>>> from secml.array import CArray
>>> array = CArray([[1,2],[0,4]]).tolist() >>> array [[1, 2], [0, 4]] >>> print(CArray(array)) CArray([[1 2] [0 4]])
>>> print(CArray(array, tosparse=True)) CArray( (0, 0) 1 (0, 1) 2 (1, 1) 4)
-
tondarray
(self)[source]¶ Return a dense numpy.ndarray representation of array.
- Returns
- numpy.ndarray
A representation of current data as numpy.ndarray. If possible, we avoid copying original data.
Examples
>>> from secml.array import CArray
>>> array = CArray([1,2,3]).tondarray() >>> array array([1, 2, 3]) >>> type(array) <class 'numpy.ndarray'>
>>> array = CArray([[1,2],[0,4]],tosparse=True).tondarray() >>> array array([[1, 2], [0, 4]], dtype=int64) >>> type(array) <class 'numpy.ndarray'>
-
tosparse
(self, dtype=None, shape=None)[source]¶ Converts array to sparse format.
Return current array if it has already a sparse format.
- Parameters
- dtypestr or dtype, optional
Typecode or data-type to which the array is cast.
- shapesequence of ints, optional
Shape of the new array, e.g., ‘(2, 3)’. Only 2-Dimensional sparse arrays are supported.
- Returns
- CArray
Sparse array with input data and desired dtype and/or shape.
Notes
If current array has already a sparse format, dtype and shape parameters will not be functional. Use .astype() or .reshape() function to alter array shape/dtype.
Examples
>>> from secml.array import CArray
>>> print(CArray([[2, 0], [3, 4]]).tosparse(dtype=float)) CArray( (0, 0) 2.0 (1, 0) 3.0 (1, 1) 4.0)
>>> print(CArray([[2, 0], [3, 4]]).tosparse(shape=(1, 4))) CArray( (0, 0) 2 (0, 2) 3 (0, 3) 4)
-
transpose
(self)[source]¶ Returns current array with axes transposed.
A view is returned if possible.
- Returns
- CArray
A view, if possible, of current array with axes suitably permuted.
Examples
>>> from secml.array import CArray
>>> print(CArray([1, 2, 3]).transpose()) CArray([[1] [2] [3]])
>>> print(CArray([[1], [2], [3]]).transpose()) CArray([[1 2 3]])
-
unique
(self, return_index=False, return_inverse=False, return_counts=False)[source]¶ Find the unique elements of an array.
There are three optional outputs in addition to the unique elements:
- the indices of the input array that give the unique values
- the indices of the unique array that reconstruct the input array
- the number of times each unique value comes up in the input array
- Parameters
- return_indexbool, optional
If True, also return the indices of array that result in the unique array (default False).
- return_inversebool, optional, dense only
If True, also return the indices of the unique array that can be used to reconstruct the original array (default False).
- return_countsbool, optional
If True, also return the number of times each unique item appears.
- Returns
- uniqueCArray
Dense array with the sorted unique values of the array.
- unique_indexCArray, optional
The indices of the first occurrences of the unique values in the (flattened) original array. Only provided if return_index is True.
- unique_inverseCArray, optional, dense only
The indices of the unique array that can be used to reconstruct the original (flattened) array. Only provided if return_inverse is True.
- unique_countsCArray, optional
The number of times each unique item appears in the original array. Only provided if return_counts is True.
Examples
>>> from secml.array import CArray
>>> print(CArray([[1,0,2],[2,0,3]]).unique()) CArray([0 1 2 3])
>>> print(CArray([1,2,2,3,3], tosparse=True).unique()) CArray([1 2 3])
>>> u, u_idx, u_inv = CArray([[2,2,3,3]]).unique(return_index=True, return_inverse=True) >>> print(u) # unique CArray([2 3]) >>> print(u_idx) # unique_indices CArray([0 2]) >>> print(u[u_inv]) # original (flattened) array reconstructed from unique_inverse CArray([2 2 3 3])
>>> u, u_counts = CArray([[2,2,2,3,3]]).unique(return_counts=True)
>>> print(u_counts)  # The number of times each unique item appears
CArray([3 2])
-
classmethod
zeros
(shape, dtype=<class 'float'>, sparse=False)[source]¶ Return a new array of given shape and type, filled with zeros.
- Parameters
- shapeint or tuple
Shape of the new array, e.g., 2 or (2,3).
- dtypestr or dtype, optional
The desired data-type for the array. Default is float.
- sparsebool, optional
If False (default) a dense array will be returned. Otherwise, a sparse array of zeros is created. Note that sparse arrays with only zeros appear empty when printing.
- Returns
- CArray
Array of zeros with the given properties.
Examples
>>> from secml.array import CArray
>>> print(CArray.zeros(2))
CArray([0. 0.])
>>> array = CArray.zeros((2,1), dtype=int, sparse=True)
>>> print(array)  # sparse arrays with only zeros appear empty...
CArray()
>>> print(array.shape)
(2, 1)
array_utils¶
-
secml.array.array_utils.
is_vector_index
(idx)[source]¶ Check if input index is valid for vector-like arrays.
An array is vector-like when 1-Dimensional or 2-Dimensional with shape[0] == 1.
- Parameters
- idxint, bool, slice
Index to check.
- Returns
- out_checkbool
Return True if input is a valid index for any axis with size 1, else False.
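Examples
A minimal sketch, not part of the original docstring; the expected results in the comments follow from the description above.
>>> from secml.array.array_utils import is_vector_index
>>> ok = is_vector_index(0)   # 0 addresses the only element of a size-1 axis, so this should be True
>>> ko = is_vector_index(2)   # 2 is out of range for a size-1 axis, so this should be False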
-
secml.array.array_utils.
tuple_atomic_tolist
(idx)[source]¶ Convert tuple atomic elements to list.
- Atomic objects converted:
int, np.integer
bool, np.bool_
- Parameters
- idxtuple
Tuple whose elements have to be converted.
- Returns
- out_tupletuple
Converted tuple.
-
secml.array.array_utils.
tuple_sequence_tondarray
(idx)[source]¶ Convert sequences inside tuple to ndarray.
- A sequence can be:
int, np.integer
bool, np.bool_
list
np.ndarray
CDense
CSparse (are converted to dense first)
CArray
- Parameters
- idxtuple
Tuple whose elements have to be converted.
- Returns
- out_tupletuple
Converted tuple.
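Examples
A minimal sketch, not part of the original docstrings, combining the two helpers on a hypothetical 2-dimensional index.
>>> from secml.array import CArray
>>> from secml.array.array_utils import tuple_atomic_tolist, tuple_sequence_tondarray
>>> idx = tuple_atomic_tolist((2, slice(0, 3)))  # the atomic int should be wrapped into a list; the slice is left as-is
>>> nd_idx = tuple_sequence_tondarray((CArray([0, 2]), [1, 3]))  # each sequence should become a numpy.ndarray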
secml.data¶
secml.data.loader¶
CDataLoader¶
-
class
secml.data.loader.c_dataloader.
CDataLoader
[source]¶ Bases:
secml.core.c_creator.CCreator
Interface for Dataset loaders.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, \*args, \*\*kwargs)Loads a dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CDataLoaderCIFAR¶
-
class
secml.data.loader.c_dataloader_cifar.
CDataLoaderCIFAR
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads the CIFAR tiny images datasets.
Available at: https://www.cs.toronto.edu/~kriz/cifar.html
- Attributes
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self[, val_size])Load all images of the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
abstract property
data_md5
¶ MD5 digest of the datafile. Specific for each dataset type.
- Returns
- str
Expected MD5 digest of the dataset file.
-
abstract property
data_path
¶ URL of the data directory. Specific for each dataset type.
- Returns
- str
Path to the folder where dataset data is stored.
-
abstract property
data_url
¶ URL of the datafile. Specific for each dataset type.
- Returns
- str
URL of the remote datafile with dataset data.
-
abstract
load
(self, val_size=0)[source]¶ Load all images of the dataset.
Each image is flattened. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image. Dtype of images is uint8. Dtype of labels is int32.
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘class_names’: dictionary with the original name of each class.
- Parameters
- val_sizeint, optional
Size of the validation set. Default 0, so no validation dataset will be returned.
- Returns
- training_setCDataset
Training set.
- test_setCDataset
Test set.
- validation_setCDataset, optional
Validation set. Returned only if val_size > 0.
-
class
secml.data.loader.c_dataloader_cifar.
CDataLoaderCIFAR10
[source]¶ Bases:
secml.data.loader.c_dataloader_cifar.CDataLoaderCIFAR
Loads the CIFAR-10 tiny images dataset.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
Available at: https://www.cs.toronto.edu/~kriz/cifar.html
- Attributes
class_type
‘CIFAR-10’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self[, val_size])Load all images of the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
data_md5
¶ MD5 digest of the datafile.
- Returns
- str
Expected MD5 digest of the dataset file.
-
property
data_path
¶ URL of the data directory.
- Returns
- str
Path to the folder where dataset data is stored.
-
property
data_url
¶ URL of the remote datafile.
- Returns
- str
URL of the remote datafile with dataset data.
-
load
(self, val_size=0)[source]¶ Load all images of the dataset.
Each image is flattened. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image. Dtype of images is uint8. Dtype of labels is int32.
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘class_names’: dictionary with the original name of each class.
- Parameters
- val_sizeint, optional
Size of the validation set. Default 0, so no validation dataset will be returned.
- Returns
- training_setCDataset
Training set.
- test_setCDataset
Test set.
- validation_setCDataset, optional
Validation set. Returned only if val_size > 0.
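Examples
An illustrative usage sketch, not part of the original docstring; it assumes the CIFAR-10 archive can be downloaded (or is already cached in the default data folder).
>>> from secml.data.loader.c_dataloader_cifar import CDataLoaderCIFAR10
>>> tr, ts = CDataLoaderCIFAR10().load()  # training and test set; a validation set is also returned if val_size > 0
>>> # each row of tr.X is a flattened 32x32x3 image (3072 features); labels are in tr.Y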
-
class
secml.data.loader.c_dataloader_cifar.
CDataLoaderCIFAR100
[source]¶ Bases:
secml.data.loader.c_dataloader_cifar.CDataLoaderCIFAR
Loads the CIFAR-100 tiny images dataset.
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a “fine” label (the class to which it belongs) and a “coarse” label (the superclass to which it belongs).
Available at: https://www.cs.toronto.edu/~kriz/cifar.html
- Attributes
class_type
‘CIFAR-100’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self[, val_size])Load all images of the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
data_md5
¶ MD5 digest of the datafile.
- Returns
- str
Expected MD5 digest of the dataset file.
-
property
data_path
¶ URL of the data directory.
- Returns
- str
Path to the folder where dataset data is stored.
-
property
data_url
¶ URL of the remote datafile.
- Returns
- str
URL of the remote datafile with dataset data.
-
load
(self, val_size=0)[source]¶ Load all images of the dataset.
Each image is flattened. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image. Dtype of images is uint8. Dtype of labels is int32.
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘class_names’: dictionary with the original name of each class.
- Parameters
- val_sizeint, optional
Size of the validation set. Default 0, so no validation dataset will be returned.
- Returns
- training_setCDataset
Training set.
- test_setCDataset
Test set.
- validation_setCDataset, optional
Validation set. Returned only if val_size > 0.
CDataLoaderICubWorld¶
-
class
secml.data.loader.c_dataloader_icubworld.
CDataLoaderICubWorld
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Interface for loaders of iCubWorld datasets.
iCubWorld is a set of computer vision datasets for robotic applications, developed by Istituto Italiano di Tecnologia (IIT), Genova, Italy.
REF: https://robotology.github.io/iCubWorld
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, \*args, \*\*kwargs)Loads a dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_icubworld.
CDataLoaderICubWorld28
[source]¶ Bases:
secml.data.loader.c_dataloader_icubworld.CDataLoaderICubWorld
Loader for iCubWorld28 dataset.
The dataset consists of 28 objects divided into 7 categories, where each category includes 4 objects. For each object there are 4 different acquisition days for training and 4 for testing, with ~150 frames per acquisition.
- Attributes
class_type
‘icubworld28’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, ds_type[, day, icub7, …])Load the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
load
(self, ds_type, day='day4', icub7=False, resize_shape=(128, 128), crop_shape=None, normalize=True)[source]¶ Load the dataset.
The pre-cropped version of the images is loaded, with size 128 x 128. An additional resize/crop shape can be passed as input if needed.
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘y_orig’: CArray with the original labels of the objects.
- Parameters
- ds_typestr
Identifier of the dataset to download, either ‘train’ or ‘test’.
- daystr, optional
Acquisition day from which to load the images. Default ‘day4’. The available options are: ‘day1’, ‘day2’, ‘day3’, ‘day4’.
- icub7bool or int, optional
If True, load a reduced dataset with 7 objects by taking the 3rd object for each category. Default False. If int, the Nth object for each category will be loaded.
- resize_shapetuple, optional
Images will be resized to (height, width) shape. Default (128, 128).
- crop_shapetuple or None, optional
If a tuple, a crop of (height, width) shape will be extracted from the center of each image. Default None.
- normalizebool, optional
If True, images are normalized between 0-1. Default True.
- Returns
- CDataset
Output dataset.
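Examples
An illustrative sketch, not part of the original docstring; it assumes the iCubWorld28 archive can be downloaded to the default data folder.
>>> from secml.data.loader.c_dataloader_icubworld import CDataLoaderICubWorld28
>>> loader = CDataLoaderICubWorld28()
>>> ds_tr = loader.load('train', day='day4', icub7=True)  # training split, reduced to 7 objects
>>> ds_ts = loader.load('test', crop_shape=(100, 100))  # test split, center-cropped after resizing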
CDataLoaderImgClients¶
-
class
secml.data.loader.c_dataloader_imgclients.
CDataLoaderImgClients
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads a dataset of images and corresponding labels from ‘clients.txt’.
- Attributes
class_type
‘img-clients’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, ds_path, img_format[, …])Load all images of specified format inside given path.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
load
(self, ds_path, img_format, label_dtype=None, load_data=True)[source]¶ Load all images of specified format inside given path.
- Extra dataset attributes:
‘id’: last ds_path folder.
‘img_w’, ‘img_h’: size of the images in pixels.
‘img_c’: images number of channels.
Any other custom attribute is retrieved from ‘attributes.txt’ file. Only attributes of str type are currently supported.
- Parameters
- ds_pathstr
Full path to dataset folder.
- img_formatstr
Format of the files to load.
- label_dtypestr or dtype, optional
Datatype of the labels. If None, labels will be strings.
- load_databool, optional
If True (default) features will be stored. Otherwise store the paths to the files with dtype=object.
CDataLoaderImgFolders¶
-
class
secml.data.loader.c_dataloader_imgfolders.
CDataLoaderImgFolders
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads a dataset of images where clients are specified as different folders.
- Attributes
class_type
‘img-folders’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, ds_path, img_format[, label_re, …])Load all images of specified format inside given path.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
load
(self, ds_path, img_format, label_re=None, label_dtype=None, load_data=True)[source]¶ Load all images of specified format inside given path.
- The following custom CDataset attributes are available:
‘id’: last ds_path folder.
‘img_w’, ‘img_h’: size of the images in pixels.
‘img_c’: images number of channels.
Any other custom attribute is retrieved from ‘attributes.txt’ file. Only attributes of str type are currently supported.
- Parameters
- ds_pathstr
Full path to dataset folder.
- img_formatstr
Format of the files to load.
- label_rere, optional
Regular expression that identifies the correct label. If None, the whole name of the leaf folder will be used as the label.
- label_dtypestr or dtype, optional
Datatype of the labels. If None, labels will be strings.
- load_databool, optional
If True (default) features will be stored. Otherwise store the paths to the files with dtype=object.
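Examples
An illustrative sketch, not part of the original docstring; the dataset path and the regular expression below are hypothetical placeholders for an actual folder layout with one sub-folder per class.
>>> import re
>>> from secml.data.loader.c_dataloader_imgfolders import CDataLoaderImgFolders
>>> loader = CDataLoaderImgFolders()
>>> ds = loader.load(ds_path='/path/to/dataset', img_format='jpeg')  # label = leaf-folder name
>>> ds_re = loader.load(ds_path='/path/to/dataset', img_format='jpeg', label_re=re.compile(r'[a-z]+'))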
CDataLoaderLFW¶
-
class
secml.data.loader.c_dataloader_lfw.
CDataLoaderLFW
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads the LFW Labeled Faces in the Wild dataset.
This dataset is a collection of JPEG pictures of famous people collected on the internet; all details are available on the official website.
Each picture is centered on a single face. Each pixel of each channel (color in RGB) is encoded by a float in range 0.0 - 1.0.
- The task is called Face Recognition (or Identification):
given the picture of a face, find the name of the person given a training set (gallery).
This implementation uses sklearn.datasets.fetch_lfw_people module.
- Attributes
class_type
‘lfw’Defines class type.
Methods
clean_tmp
()Cleans temporary files created by the DB loader.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self[, min_faces_per_person, funneled, …])Load LFW dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
static
clean_tmp
()[source]¶ Cleans temporary files created by the DB loader.
This method deletes the joblib-related files created while loading the database.
Does not delete the downloaded database archive.
-
load
(self, min_faces_per_person=None, funneled=True, color=False)[source]¶ Load LFW dataset.
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘y_names’: tuple with the name string for each class.
- Parameters
- min_faces_per_personint or None, optional
The extracted dataset will only retain pictures of people that have at least min_faces_per_person different pictures. Default None, so all db images are returned.
- funneledbool, optional
Download and use the images aligned with deep funneling. Default True.
- colorbool, optional
Keep the 3 RGB channels instead of averaging them to a single gray level channel. Default False.
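Examples
An illustrative sketch, not part of the original docstring; it assumes the LFW data can be fetched via scikit-learn.
>>> from secml.data.loader.c_dataloader_lfw import CDataLoaderLFW
>>> ds = CDataLoaderLFW().load(min_faces_per_person=70)  # keep only people with at least 70 pictures
>>> CDataLoaderLFW.clean_tmp()  # remove the temporary joblib files created while loading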
CDataLoaderMNIST¶
-
class
secml.data.loader.c_dataloader_mnist.
CDataLoaderMNIST
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads the MNIST Handwritten Digits dataset.
This dataset has a training set of 60,000 examples and a test set of 10,000 examples. All images are 28 x 28 grayscale, 8-bit (0 - 255).
Available at: http://yann.lecun.com/exdb/mnist/
- Attributes
class_type
‘mnist’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, ds[, digits, num_samples])Load the images of the MNIST dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
load
(self, ds, digits=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9), num_samples=None)[source]¶ Load the images of the MNIST dataset.
Adapted from: http://cvxopt.org/_downloads/mnist.py
- Extra dataset attributes:
‘img_w’, ‘img_h’: size of the images in pixels.
‘y_original’: array with the original labels (before renumbering)
- Parameters
- dsstr
Identifier of the dataset to download, either ‘training’ or ‘testing’.
- digitstuple
Tuple with the digits to load. By default all digits are loaded.
- num_samplesint or None, optional
Number of expected samples in resulting ds. If int, an equal number of samples will be taken from each class until num_samples have been loaded. If None, all samples will be loaded.
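Examples
An illustrative sketch, not part of the original docstring; it assumes the MNIST files can be downloaded to the default data folder.
>>> from secml.data.loader.c_dataloader_mnist import CDataLoaderMNIST
>>> loader = CDataLoaderMNIST()
>>> tr = loader.load('training', digits=(3, 7), num_samples=1000)  # 1000 samples equally divided between digits 3 and 7
>>> ts = loader.load('testing', digits=(3, 7))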
CDataLoaderSkLearn¶
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandom
(n_samples=100, n_features=20, n_informative=2, n_redundant=2, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Class for loading random data.
Generate a random n-class classification problem.
This initially creates clusters of points normally distributed (std=1) about the vertices of an n_informative-dimensional hypercube with sides of length 2 * class_sep, and assigns an equal number of clusters to each class.
It introduces interdependence between these features and adds various types of further noise to the data.
Prior to shuffling, X stacks a number of these primary “informative” features, “redundant” linear combinations of these, “repeated” duplicates of sampled features, and arbitrary noise for the remaining features.
- Parameters
- n_samplesint, optional (default=100)
The number of samples.
- n_featuresint, optional (default=20)
The total number of features. These comprise n_informative informative features, n_redundant redundant features, n_repeated duplicated features and n_features - n_informative - n_redundant - n_repeated useless features drawn at random.
- n_informativeint, optional (default=2)
The number of informative features. Each class is composed of a number of gaussian clusters each located around the vertices of a hypercube in a subspace of dimension n_informative. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.
- n_redundantint, optional (default=2)
The number of redundant features. These features are generated as random linear combinations of the informative features.
- n_repeatedint, optional (default=0)
The number of duplicated features, drawn randomly from the informative and the redundant features.
- n_classesint, optional (default=2)
The number of classes (or labels) of the classification problem.
- n_clusters_per_classint, optional (default=2)
The number of clusters per class.
- weightslist of floats or None (default=None)
The proportions of samples assigned to each class. If None, then classes are balanced. Note that if len(weights) == n_classes - 1, then the last class weight is automatically inferred. More than n_samples samples may be returned if the sum of weights exceeds 1.
- flip_yfloat, optional (default=0.01)
The fraction of samples whose class are randomly exchanged.
- class_sepfloat, optional (default=1.0)
The factor multiplying the hypercube dimension.
- hypercubebool, optional (default=True)
If True, the clusters are put on the vertices of a hypercube. If False, the clusters are put on the vertices of a random polytope.
- shiftfloat, array of shape [n_features] or None, optional (default=0.0)
Shift features by the specified value. If None, then features are shifted by a random value drawn in [-class_sep, class_sep].
- scalefloat, array of shape [n_features] or None, optional (default=1.0)
Multiply features by the specified value. If None, then features are scaled by a random value drawn in [1, 100]. Note that scaling happens after shifting.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘classification’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
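Examples
An illustrative sketch, not part of the original documentation, generating a small 2-feature binary problem with a fixed seed.
>>> from secml.data.loader.c_dataloader_sklearn import CDLRandom
>>> ds = CDLRandom(n_samples=100, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, random_state=0).load()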
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomRegression
(n_samples=100, n_features=100, n_informative=10, n_targets=1, bias=0.0, effective_rank=None, tail_strength=0.5, noise=0.0, random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Generate a random regression problem.
The input set can either be well conditioned (by default) or have a low rank-fat tail singular profile.
The output is generated by applying a (potentially biased) random linear regression model with n_informative nonzero regressors to the previously generated input and some gaussian centered noise with some adjustable scale.
- Parameters
- n_samplesint, optional (default=100)
The number of samples.
- n_featuresint, optional (default=100)
The number of features.
- n_informativeint, optional (default=10)
The number of informative features, i.e., the number of features used to build the linear model used to generate the output.
- n_targetsint, optional (default=1)
The number of regression targets, i.e., the dimension of the y output vector associated with a sample. By default, the output is a scalar.
- biasfloat, optional (default=0.0)
The bias term in the underlying linear model.
- effective_rankint or None, optional (default=None)
- if not None:
The approximate number of singular vectors required to explain most of the input data by linear combinations. Using this kind of singular spectrum in the input allows the generator to reproduce the correlations often observed in practice.
- if None:
The input set is well conditioned, centered and gaussian with unit variance.
- tail_strengthfloat between 0.0 and 1.0, optional (default=0.5)
The relative importance of the fat noisy tail of the singular values profile if effective_rank is not None.
- noisefloat, optional (default=0.0)
The standard deviation of the gaussian noise applied to the output.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘regression’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomBlobs
(n_samples=100, n_features=2, centers=3, cluster_std=1.0, center_box=(-10.0, 10.0), random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Generate isotropic Gaussian blobs for clustering.
- Parameters
- n_samplesint, optional (default=100)
The total number of points equally divided among clusters.
- n_featuresint, optional (default=2)
The number of features for each sample. This parameter will not be considered if centers is different from None
- centersint or array of shape [n_centers, n_features]
The number of centers to generate (default=3), or the fixed center locations as list of tuples
- cluster_std: float or sequence of floats, optional (default=1.0)
The standard deviation of the clusters.
- center_boxpair of floats (min, max), optional (default=(-10.0, 10.0))
The bounding box for each cluster center when centers are generated at random.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘blobs’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
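Examples
An illustrative sketch, not part of the original documentation.
>>> from secml.data.loader.c_dataloader_sklearn import CDLRandomBlobs
>>> # three isotropic gaussian blobs in two dimensions, with a fixed seed
>>> ds = CDLRandomBlobs(n_samples=150, n_features=2, centers=3, cluster_std=1.0, random_state=0).load()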
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomBlobsRegression
(n_samples=100, cluster_std=(1.0, 1.0), bias=1.0, w=(2.0, -1.0), centers=([0, 0], [-1, -1]), random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
This class loads blobs regression.
- Parameters
- n_samplesint, optional (default=100)
The total number of points equally divided among clusters.
- centersint or array of shape [n_centers, n_features], optional (default=3)
The number of centers to generate, or the fixed center locations.
- cluster_std: list of floats, optional (default=(1.0,1.0))
The standard deviation of the clusters.
- biasfloat, optional (default=1.0)
The bias added to the generated function.
- wtuple of floats, optional (default=(2.0, -1.0))
The height (weight) of each gaussian.
- centerslist of tuples, optional (default=([0, 0], [-1, -1]))
The center of each gaussian.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘blobs-regression’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomCircles
(n_samples=100, noise=None, factor=0.8, random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Make a large circle containing a smaller circle in 2d.
- Parameters
- n_samplesint, optional (default=100)
The total number of points generated.
- noisedouble or None (default=None)
Standard deviation of Gaussian noise added to the data.
- factordouble < 1 (default=.8)
Scale factor between inner and outer circle.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘circles’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomCircleRegression
(n_samples=100, noise=None, factor=0.8, random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Make a large circle containing a smaller circle in 2d.
- Parameters
- n_samplesint, optional (default=100)
The total number of points generated.
- noisedouble or None (default=None)
Standard deviation of Gaussian noise added to the data.
- factordouble < 1 (default=.8)
Scale factor between inner and outer circle.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘circles-regression’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomMoons
(n_samples=100, noise=None, random_state=None)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Make two interleaving half circles.
- Parameters
- n_samplesint, optional (default=100)
The total number of points generated.
- noisedouble or None (default=None)
Standard deviation of Gaussian noise added to the data.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, is the RandomState instance used by np.random.
- Attributes
class_type
‘moons’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
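Examples
An illustrative sketch, not part of the original documentation.
>>> from secml.data.loader.c_dataloader_sklearn import CDLRandomMoons
>>> # two noisy interleaving half circles, a common toy problem for non-linear classifiers
>>> ds = CDLRandomMoons(n_samples=200, noise=0.1, random_state=0).load()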
-
class
secml.data.loader.c_dataloader_sklearn.
CDLRandomBinary
(n_samples=100, n_features=2)[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Generate random binary data.
- Parameters
- n_samplesint, optional (default=100)
The total number of points generated.
- n_featuresint, optional (default=2)
The total number of features
- Attributes
class_type
‘binary’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.data.loader.c_dataloader_sklearn.
CDLIris
(class_list=None, zero_one=False)[source]¶ Bases:
secml.data.loader.c_dataloader_sklearn.CDLRandomToy
Loads Iris dataset.
The iris dataset is a classic and very easy multi-class classification dataset.
Classes
3
Samples per class
50
Samples total
150
Dimensionality
4
Features
real, positive
- Parameters
- class_listlist of str (default None)
Each string is the name of a class to keep in the new dataset. If None, every class will be kept.
- zero_onebool
If True and the class list contains exactly two classes, the class with the lower label value will be assigned label 0 and the other label 1.
- Attributes
class_type
‘iris’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
toy
= 'iris'¶
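Examples
An illustrative sketch, not part of the original documentation.
>>> from secml.data.loader.c_dataloader_sklearn import CDLIris
>>> ds = CDLIris().load()
>>> ds.X.shape  # 150 samples with 4 features each
(150, 4)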
-
class
secml.data.loader.c_dataloader_sklearn.
CDLDigits
(class_list=None, zero_one=False)[source]¶ Bases:
secml.data.loader.c_dataloader_sklearn.CDLRandomToy
Loads Digits dataset.
The digits dataset is a classic and very easy multi-class classification dataset. Each datapoint is a 8x8 image of a digit.
Classes
10
Samples per class
~180
Samples total
1797
Dimensionality
64
Features
integers 0-16
- Parameters
- class_listlist of str (default None)
Each string is the name of a class to keep in the new dataset. If None, every class will be kept.
- zero_onebool
If True and the class list contains exactly two classes, the class with the lower label value will be assigned label 0 and the other label 1.
- Attributes
class_type
‘digits’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
toy
= 'digits'¶
-
class
secml.data.loader.c_dataloader_sklearn.
CDLBoston
(class_list=None, zero_one=False)[source]¶ Bases:
secml.data.loader.c_dataloader_sklearn.CDLRandomToy
Loads Boston dataset.
Boston house-prices dataset, useful for regression.
Samples total
506
Dimensionality
13
Features
real, positive
Targets
real 5. - 50.
- Parameters
- class_listlist of str (default None)
Each string is the name of a class to keep in the new dataset. If None, every class will be kept.
- zero_onebool
If True and the class list contains exactly two classes, the class with the lower label value will be assigned label 0 and the other label 1.
- Attributes
class_type
‘boston’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
toy
= 'boston'¶
-
class
secml.data.loader.c_dataloader_sklearn.
CDLDiabetes
(class_list=None, zero_one=False)[source]¶ Bases:
secml.data.loader.c_dataloader_sklearn.CDLRandomToy
Loads Diabetes dataset.
Diabetes dataset, useful for regression.
Samples total
442
Dimensionality
10
Features
real, -.2 < x < .2
Targets
integer 25 - 346
- Parameters
- class_listlist of str (default None)
Each string is the name of a class to keep in the new dataset. If None, every class will be kept.
- zero_onebool
If True and the class list contains exactly two classes, the class with the lower label value will be assigned label 0 and the other label 1.
- Attributes
class_type
‘diabetes’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self)Loads the dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
toy
= 'diabetes'¶
CDataLoaderSvmLight¶
-
class
secml.data.loader.c_dataloader_svmlight.
CDataLoaderSvmLight
[source]¶ Bases:
secml.data.loader.c_dataloader.CDataLoader
Loads and Saves data in svmlight / libsvm format.
- Attributes
class_type
‘svmlight’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dump
(d, f[, zero_based, comment])Dumps a dataset in the svmlight / libsvm file format.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(self, file_path[, dtype_samples, …])Loads a dataset from the svmlight / libsvm format and returns a sparse dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
static
dump
(d, f, zero_based=True, comment=None)[source]¶ Dumps a dataset in the svmlight / libsvm file format.
This format is a text-based format, with one sample per line. It does not store zero-valued features, which makes it suitable for sparse datasets.
The first element of each line can be used to store a target variable to predict.
- Parameters
- dCDataset
Dataset containing the patterns and labels to be stored.
- fString
Path to the file where the dataset will be stored in svmlight / libsvm format.
- zero_basedbool, optional
Whether column indices should be written zero-based (True, default) or one-based (False).
- commentstring, optional
Comment to insert at the top of the file. This should be either a Unicode string, which will be encoded as UTF-8, or an ASCII byte string. If a comment is given, then it will be preceded by one that identifies the file as having been dumped by scikit-learn. Note that not all tools grok comments in SVMlight files.
Examples
>>> from secml.data import CDataset
>>> from secml.data.loader import CDataLoaderSvmLight
>>> from secml.array import CArray
>>> patterns = CArray([[1,0,2], [4,0,5]])
>>> labels = CArray([0,1])
>>> CDataLoaderSvmLight.dump(CDataset(patterns,labels), "myfile.libsvm")
-
load
(self, file_path, dtype_samples=<class 'float'>, dtype_labels=<class 'float'>, n_features=None, zero_based=True, remove_all_zero=False, multilabel=False, load_infos=False)[source]¶ Loads a dataset from the svmlight / libsvm format and returns a sparse dataset.
Datasets must have only numerical feature indices and, for each pattern, the indices must be ordered.
- Extra dataset attributes:
‘infos’, CArray with inline comment for each sample.
- Parameters
- file_pathString
Path to the file where the dataset is stored in svmlight / libsvm format.
- dtype_samplesstr or dtype, optional
Data-type to which the samples should be cast. Default is float.
- dtype_labelsstr or dtype, optional
Data-type to which the labels should be cast. Default is float.
- n_featuresNone or int, optional
The number of features to use. If None (default), it will be inferred. This argument is useful to load several files that are subsets of a bigger sliced dataset: each subset might not have examples of every feature, hence the inferred shape might vary from one slice to another.
- zero_based: bool, optional
Whether column indices are zero-based (True, default) or one-based (False). If column indices are set to be one-based, they are transformed to zero-based to match Python/NumPy conventions.
- remove_all_zero: boolean, optional, default False
If True, every feature that is zero for every pattern will be removed from the dataset.
- multilabelboolean, optional
True if every sample can have more than one label. Default False. (see http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
- load_infosbool, optional
If True, inline comments will be loaded from the svmlight file and stored in the infos CDataset parameter (as CArray). Default False.
- Returns
- datasetCDataset
Dataset object that contains patterns and labels. If remove_all_zero is set to True, the returned dataset will have the new argument idx_mapping with the mapping of the returned features to the original features' indices.
Examples
>>> from secml.data import CDataset
>>> from secml.data.loader import CDataLoaderSvmLight
>>> from secml.array import CArray
>>> patterns = CArray([[1,0,2], [4,0,5]])
>>> labels = CArray([0, 1])
>>> CDataLoaderSvmLight().dump(CDataset(patterns,labels), "myfile.libsvm")
>>> new_dataset = CDataLoaderSvmLight().load("myfile.libsvm", remove_all_zero=True)
>>> print(new_dataset.X)
CArray(  (0, 1)	2.0
  (0, 0)	1.0
  (1, 1)	5.0
  (1, 0)	4.0)
>>> print(new_dataset.Y)
CArray([0. 1.])
>>> print(new_dataset.header.idx_mapping)
CArray([0 2])
loader_utils¶
-
secml.data.loader.loader_utils.
resize_img
(img, shape)[source]¶ Resize input image to desired shape.
If the input image is bigger than desired, the LANCZOS filter will be used. It calculates the output pixel value using a truncated sinc filter on all pixels that may contribute to the output value.
Otherwise, a LINEAR filter will be used. It calculates the output pixel value using linear interpolation on all pixels that may contribute to the output value.
- Parameters
- imgPIL.Image.Image
Image to be resized.
- shapetuple
Desired output image dimensions (height, width).
- Returns
- PIL.Image
Resized image.
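Examples
A minimal sketch, not part of the original docstring; the blank test image below is only for illustration.
>>> from PIL import Image
>>> from secml.data.loader.loader_utils import resize_img
>>> img = Image.new('RGB', (256, 128))  # PIL sizes are (width, height)
>>> out = resize_img(img, shape=(64, 64))  # desired output shape is (height, width)
>>> out.size
(64, 64)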
-
secml.data.loader.loader_utils.
crop_img
(img, crop)[source]¶ Extract a center crop of the input image.
- Parameters
- imgPIL.Image.Image
Image to be cropped.
- croptuple
Dimensions of the desired crop (height, width).
- Returns
- PIL.Image
Cropped image.
Notes
The image center will be computed by rounding the coordinates if necessary. Note that Python's round() rounds half values toward the closest even number.
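Examples
A minimal sketch, not part of the original docstring; the blank test image below is only for illustration.
>>> from PIL import Image
>>> from secml.data.loader.loader_utils import crop_img
>>> img = Image.new('RGB', (200, 100))  # (width, height) in PIL terms
>>> out = crop_img(img, crop=(50, 80))  # center crop of (height, width) = (50, 80)
>>> out.size
(80, 50)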
secml.data.selection¶
CPrototypesSelector¶
-
class
secml.data.selection.c_prototypes_selector.
CPrototypesSelector
[source]¶ Bases:
secml.core.c_creator.CCreator
Selection of Prototypes.
Prototype selection methods help reduce the number of samples in a dataset by carefully selecting a subset of prototypes.
[Rb0529608e100-1] A good selection strategy should satisfy the following three conditions. First, if some prototypes are similar, that is, if they are close in the space of strings, their distances to a sample string should vary only little. Hence, in this case, some of the respective vector components are redundant. Consequently, a selection algorithm should avoid redundancies. Secondly, to include as much structural information as possible in the prototypes, they should be uniformly distributed over the whole set of patterns. Thirdly, since outliers are likely to introduce noise and distortions, a selection algorithm should disregard outliers.
References
- Rb0529608e100-1
Spillmann, Barbara, et al. “Transforming strings to vector spaces using prototype selection.” Structural, Syntactic, and Statistical Pattern Recognition. Springer Berlin Heidelberg, 2006. 287-296.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
sel_idx
Returns an array with the indices of the selected prototypes.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes)Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
sel_idx
¶ Returns an array with the indices of the selected prototypes.
CPSBorder¶
-
class
secml.data.selection.c_ps_border.
CPSBorder
[source]¶ Bases:
secml.data.selection.c_prototypes_selector.CPrototypesSelector
Selection of Prototypes using border strategy.
Selects the prototypes from the borders of the dataset.
References
Spillmann, Barbara, et al. “Transforming strings to vector spaces using prototype selection.” Structural, Syntactic, and Statistical Pattern Recognition. Springer Berlin Heidelberg, 2006. 287-296.
- Attributes
class_type
‘border’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes)Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CPSCenter¶
-
class
secml.data.selection.c_ps_center.
CPSCenter
[source]¶ Bases:
secml.data.selection.c_prototypes_selector.CPrototypesSelector
Selection of Prototypes using center strategy.
Selects the prototypes from the center of the dataset.
References
Spillmann, Barbara, et al. “Transforming strings to vector spaces using prototype selection.” Structural, Syntactic, and Statistical Pattern Recognition. Springer Berlin Heidelberg, 2006. 287-296.
- Attributes
class_type
‘center’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes)Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CPSKMedians¶
-
class
secml.data.selection.c_ps_kmedians.
CPSKMedians
[source]¶ Bases:
secml.data.selection.c_prototypes_selector.CPrototypesSelector
Selection of Prototypes using K-Medians strategy.
Runs a k-means clustering to obtain a set of clusters from the dataset. Then selects the prototypes as their set medians.
References
Spillmann, Barbara, et al. “Transforming strings to vector spaces using prototype selection.” Structural, Syntactic, and Statistical Pattern Recognition. Springer Berlin Heidelberg, 2006. 287-296.
- Attributes
class_type
‘k-medians’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes)Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CPSRandom¶
-
class
secml.data.selection.c_ps_random.
CPSRandom
[source]¶ Bases:
secml.data.selection.c_prototypes_selector.CPrototypesSelector
Selection of Prototypes using random strategy.
- Attributes
class_type
‘random’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes[, …])Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
select
(self, dataset, n_prototypes, random_state=None)[source]¶ Selects the prototypes from input dataset.
- Parameters
- datasetCDataset
Dataset from which prototypes should be selected.
- n_prototypesint
Number of prototypes to be selected.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- Returns
- reduced_dsCDataset
Dataset with selected prototypes.
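The following minimal sketch illustrates random prototype selection on a toy dataset; the import path secml.data.selection.CPSRandom is assumed to be exported by the package, and the toy data is illustrative only.

# Hedged sketch of random prototype selection (import path assumed).
from secml.data import CDataset
from secml.data.selection import CPSRandom

ds = CDataset([[1, 2], [3, 4], [5, 6], [7, 8]], [1, 0, 1, 0])
selector = CPSRandom()
reduced_ds = selector.select(ds, n_prototypes=2, random_state=0)
print(reduced_ds.num_samples)  # 2 prototypes selected
print(selector.sel_idx)        # indices of the selected prototypes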
CPSSpanning¶
-
class
secml.data.selection.c_ps_spanning.
CPSSpanning
[source]¶ Bases:
secml.data.selection.c_prototypes_selector.CPrototypesSelector
Selection of Prototypes using spanning strategy.
Selects the first prototype as the dataset median, and the remaining ones iteratively, by maximizing the distance to the set of previously-selected prototypes.
References
Spillmann, Barbara, et al. “Transforming strings to vector spaces using prototype selection.” Structural, Syntactic, and Statistical Pattern Recognition. Springer Berlin Heidelberg, 2006. 287-296.
- Attributes
class_type
‘spanning’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
select
(self, dataset, n_prototypes)Selects the prototypes from input dataset.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
secml.data.splitter¶
CDataSplitter¶
-
class
secml.data.splitter.c_datasplitter.
CDataSplitter
(num_folds=3, random_state=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Abstract class that defines basic methods for dataset splitting.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- Attributes
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
-
abstract
compute_indices
(self, dataset)[source]¶ Compute training set and test set indices for each fold.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- CDataSplitter
Instance of the dataset splitter with tr/ts indices.
-
split
(self, dataset)[source]¶ Returns a list of split datasets.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- split_dslist of tuple
List of tuples (training set, test set), one for each fold.
-
property
tr_idx
¶ List of training idx obtained with the split of the data.
-
property
ts_idx
¶ List of test idx obtained with the split of the data.
CDataSplitterKFold¶
-
class
secml.data.splitter.c_datasplitter_kfold.
CDataSplitterKFold
(num_folds=3, random_state=None)[source]¶ Bases:
secml.data.splitter.c_datasplitter.CDataSplitter
K-Folds dataset splitting.
Provides train/test indices to split data in train and test sets. Split dataset into ‘num_folds’ consecutive folds (with shuffling).
Each fold is then used as a validation set once, while the k - 1 remaining folds form the training set.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CDataSplitterKFold

>>> ds = CDataset([[1,2],[3,4],[5,6]],[1,0,1])
>>> kfold = CDataSplitterKFold(num_folds=3, random_state=0).compute_indices(ds)
>>> print(kfold.num_folds)
3
>>> print(kfold.tr_idx)
[CArray(2,)(dense: [0 1]), CArray(2,)(dense: [0 2]), CArray(2,)(dense: [1 2])]
>>> print(kfold.ts_idx)
[CArray(1,)(dense: [2]), CArray(1,)(dense: [1]), CArray(1,)(dense: [0])]
- Attributes
class_type
‘kfold’Defines class type.
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
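As a complement to the example above, which only shows compute_indices, the following sketch shows how split returns one (training set, test set) pair of CDataset objects per fold; the toy dataset is illustrative and split is assumed to compute the fold indices internally.

# Hedged sketch: obtaining the split datasets directly (one tuple per fold).
from secml.data import CDataset
from secml.data.splitter import CDataSplitterKFold

ds = CDataset([[1, 2], [3, 4], [5, 6]], [1, 0, 1])
splitter = CDataSplitterKFold(num_folds=3, random_state=0)
for ds_tr, ds_ts in splitter.split(ds):
    # one (training, test) CDataset pair per fold
    print(ds_tr.num_samples, ds_ts.num_samples)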
CDataSplitterLabelKFold¶
-
class
secml.data.splitter.c_datasplitter_labelkfold.
CDataSplitterLabelKFold
(num_folds=3)[source]¶ Bases:
secml.data.splitter.c_datasplitter.CDataSplitter
K-Folds dataset splitting with non-overlapping labels.
The same label will not appear in two different folds (the number of distinct labels has to be at least equal to the number of folds).
The folds are approximately balanced in the sense that the number of distinct labels is approximately the same in each fold.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CDataSplitterLabelKFold

>>> ds = CDataset([[1,2],[3,4],[5,6],[7,8]], [1,0,1,2])
>>> kfold = CDataSplitterLabelKFold(num_folds=3).compute_indices(ds)
>>> print(kfold.num_folds)
3
>>> print(kfold.tr_idx)
[CArray(2,)(dense: [1 3]), CArray(3,)(dense: [0 1 2]), CArray(3,)(dense: [0 2 3])]
>>> print(kfold.ts_idx)
[CArray(2,)(dense: [0 2]), CArray(1,)(dense: [3]), CArray(1,)(dense: [1])]
- Attributes
class_type
‘label-kfold’Defines class type.
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
CDataSplitterOpenWorldKFold¶
-
class
secml.data.splitter.c_datasplitter_openworld.
CDataSplitterOpenWorldKFold
(num_folds=3, n_train_samples=5, n_train_classes=None, random_state=None)[source]¶ Bases:
secml.data.splitter.c_datasplitter.CDataSplitter
Open World K-Folds dataset splitting.
Provides train/test indices to split data in train and test sets.
In an Open World setting, half (or a custom number) of the dataset classes are used for training, while all dataset classes are tested.
Split dataset into ‘num_folds’ consecutive folds (with shuffling).
Each fold is then used as a validation set once, while the k - 1 remaining folds form the training set.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists.
- n_train_samplesint, optional
Number of training samples per client. Default 5.
- n_train_classesint or None
Number of dataset classes to use as training. If not specified, half of the dataset classes are used (floored).
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CDataSplitterOpenWorldKFold

>>> ds = CDataset([[1,2],[3,4],[5,6],[10,20],[30,40],[50,60],
...                [100,200],[300,400]],[1,0,1,2,0,1,0,2])
>>> kfold = CDataSplitterOpenWorldKFold(
...     num_folds=3, n_train_samples=2, random_state=0).compute_indices(ds)
>>> kfold.num_folds
3
>>> print(kfold.tr_idx)
[CArray(2,)(dense: [2 5]), CArray(2,)(dense: [1 4]), CArray(2,)(dense: [0 2])]
>>> print(kfold.ts_idx)
[CArray(6,)(dense: [0 1 3 4 6 7]), CArray(6,)(dense: [0 2 3 5 6 7]), CArray(6,)(dense: [1 3 4 5 6 7])]
>>> print(kfold.tr_classes)  # Class 2 is skipped as there are not enough samples (at least 3)
[CArray(1,)(dense: [1]), CArray(1,)(dense: [0]), CArray(1,)(dense: [1])]
- Attributes
class_type
‘open-world-kfold’Defines class type.
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
-
compute_indices
(self, dataset)[source]¶ Compute training set and test set indices for each fold.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- CDataSplitter
Instance of the dataset splitter with tr/ts indices.
-
property
tr_classes
¶ List of training classes obtained with the split of the data.
CDataSplitterShuffle¶
-
class
secml.data.splitter.c_datasplitter_shuffle.
CDataSplitterShuffle
(num_folds=3, train_size=None, test_size=None, random_state=None)[source]¶ Bases:
secml.data.splitter.c_datasplitter.CDataSplitter
Random permutation dataset splitting.
Yields indices to split data into training and test sets.
Note: contrary to other dataset splitting strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists.
- train_sizefloat, int, or None, optional
If None (default), the value is automatically set to the complement of the test size. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples.
- test_sizefloat, int, or None, optional
If None (default), the value is automatically set to the complement of the train size. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Notes
train_size and test_size cannot both be None. If one is set to None, the other should be a float (representing a proportion of the dataset) or an integer.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CDataSplitterShuffle

>>> ds = CDataset([[1,2],[3,4],[5,6]],[1,0,1])
>>> shuffle = CDataSplitterShuffle(num_folds=3, train_size=0.5, random_state=0).compute_indices(ds)
>>> shuffle.num_folds
3
>>> shuffle.tr_idx
[CArray(1,)(dense: [0]), CArray(1,)(dense: [1]), CArray(1,)(dense: [1])]
>>> shuffle.ts_idx
[CArray(2,)(dense: [2 1]), CArray(2,)(dense: [2 0]), CArray(2,)(dense: [0 2])]

>>> # Setting the train_size or the test_size to an arbitrary percentage
>>> shuffle = CDataSplitterShuffle(num_folds=3, train_size=0.2, random_state=0).compute_indices(ds)
>>> shuffle.num_folds
3
>>> shuffle.tr_idx
[CArray(0,)(dense: []), CArray(0,)(dense: []), CArray(0,)(dense: [])]
>>> shuffle.ts_idx
[CArray(3,)(dense: [2 1 0]), CArray(3,)(dense: [2 0 1]), CArray(3,)(dense: [0 2 1])]
- Attributes
class_type
‘shuffle’Defines class type.
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
CDataSplitterStratifiedKFold¶
-
class
secml.data.splitter.c_datasplitter_stratkfold.
CDataSplitterStratifiedKFold
(num_folds=3, random_state=None)[source]¶ Bases:
secml.data.splitter.c_datasplitter.CDataSplitter
Stratified K-Folds dataset splitting.
Provides train/test indices to split data in train/test sets.
This dataset splitting object is a variation of KFold, which returns stratified folds. The folds are made by preserving the percentage of samples for each class.
- Parameters
- num_foldsint, optional
Number of folds to create. Default 3. This corresponds to the size of tr_idx and ts_idx lists. For stratified K-Fold, this cannot be higher than the minimum number of samples per class in the dataset.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CDataSplitterStratifiedKFold

>>> ds = CDataset([[1,2],[3,4],[5,6],[7,8]],[1,0,0,1])
>>> stratkfold = CDataSplitterStratifiedKFold(num_folds=2, random_state=0).compute_indices(ds)
>>> stratkfold.num_folds  # Cannot be higher than the number of samples per class
2
>>> stratkfold.tr_idx
[CArray(2,)(dense: [1 3]), CArray(2,)(dense: [0 2])]
>>> stratkfold.ts_idx
[CArray(2,)(dense: [0 2]), CArray(2,)(dense: [1 3])]
- Attributes
class_type
‘strat-kfold’Defines class type.
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Returns a list of split datasets.
timed
([msg])Timer decorator.
CTrainTestSplit¶
-
class
secml.data.splitter.c_train_test_split.
CTrainTestSplit
(train_size=None, test_size=None, random_state=None, shuffle=True)[source]¶ Bases:
secml.core.c_creator.CCreator
Train and Test Sets splitter.
Split dataset into random train and test subsets.
Quick utility that wraps CDataSplitterShuffle().compute_indices(ds) for splitting (and optionally subsampling) data in a one-liner.
- Parameters
- train_sizefloat, int, or None, optional
If None (default), the value is automatically set to the complement of the test size. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples.
- test_sizefloat, int, or None, optional
If None (default), the value is automatically set to the complement of the train size. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- shufflebool, optional
Whether or not to shuffle the data before splitting. Default True.
Notes
train_size and test_size cannot both be None. If one is set to None, the other should be a float (representing a proportion of the dataset) or an integer.
Examples
>>> from secml.data import CDataset
>>> from secml.data.splitter import CTrainTestSplit

>>> ds = CDataset([[1,2],[3,4],[5,6],[7,8]],[1,0,1,1])
>>> tr, ts = CTrainTestSplit(train_size=0.5, random_state=0).split(ds)
>>> tr.num_samples
2
>>> ts.num_samples
2

>>> # Get splitting indices without shuffle
>>> tr_idx, ts_idx = CTrainTestSplit(train_size=0.25,
...                                  random_state=0, shuffle=False).compute_indices(ds)
>>> tr_idx
CArray(1,)(dense: [0])
>>> ts_idx
CArray(3,)(dense: [1 2 3])

>>> # At least one sample is needed for each set
>>> tr, ts = CTrainTestSplit(train_size=0.2, random_state=0).split(ds)
Traceback (most recent call last):
    ...
ValueError: train_size should be at least 1 or 0.25
- Attributes
Methods
compute_indices
(self, dataset)Compute training set and test set indices for each fold.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Split dataset into training set and test set.
timed
([msg])Timer decorator.
-
compute_indices
(self, dataset)[source]¶ Compute training set and test set indices for each fold.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- tr_idx, ts_idxCArray
Flat arrays with the tr/ts indices.
-
split
(self, dataset)[source]¶ Split dataset into training set and test set.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- ds_train, ds_testCDataset
Train and Test datasets.
-
property
tr_idx
¶ Training set indices obtained with the split of the data.
-
property
ts_idx
¶ Test set indices obtained with the split of the data.
CChronologicalSplitter¶
-
class
secml.data.splitter.c_chronological_splitter.
CChronologicalSplitter
(th_timestamp, train_size=1.0, test_size=1.0, random_state=None, shuffle=True)[source]¶ Bases:
secml.core.c_creator.CCreator
Dataset splitter based on timestamps.
Split dataset into train and test subsets, using a timestamp as split point.
A dataset containing timestamp and timestamp_fmt header attributes is required.
- Parameters
- th_timestampstr
The split point in time between training and test set. Samples having timestamp <= th_timestamp will be put in the training set, while samples with timestamp > th_timestamp will be used for the test set. The timestamp must follow the ISO 8601 format. Any incomplete timestamp will be parsed too.
- train_sizefloat or int, optional
If float, should be between 0.0 and 1.0 and represent the proportion of the samples having timestamp <= th_timestamp to include in the train split. Default 1.0. If int, represents the absolute number of train samples.
- test_sizefloat or int, optional
If float, should be between 0.0 and 1.0 and represent the proportion of the samples having timestamp > th_timestamp to include in the test split. Default 1.0. If int, represents the absolute number of test samples.
- random_stateint, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- shufflebool, optional
Whether or not to shuffle the data before splitting. Default True.
- Attributes
Methods
compute_indices
(self, dataset)Compute training set and test set indices.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
split
(self, dataset)Split dataset into training set and test set.
timed
([msg])Timer decorator.
-
compute_indices
(self, dataset)[source]¶ Compute training set and test set indices.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- tr_idx, ts_idxCArray
Flat arrays with the tr/ts indices.
-
split
(self, dataset)[source]¶ Split dataset into training set and test set.
- Parameters
- datasetCDataset
Dataset to split.
- Returns
- ds_train, ds_testCDataset
Train and Test datasets.
-
property
tr_idx
¶ Training set indices obtained with the split of the data.
-
property
ts_idx
¶ Test set indices obtained with the split of the data.
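A minimal sketch of how such a split might be set up, assuming the header stores per-sample timestamps as strings under a timestamp attribute and a strptime-style format string under timestamp_fmt (the attribute names come from the class description above; the format-string semantics, the import path of CChronologicalSplitter, and the toy data are assumptions):

# Hedged sketch: chronological split of a toy dataset with timestamp metadata.
from secml.data import CDataset, CDatasetHeader
from secml.data.splitter import CChronologicalSplitter  # import path assumed

header = CDatasetHeader(
    timestamp=['2018-01-10', '2018-06-01', '2019-02-15', '2019-07-20'],
    timestamp_fmt='%Y-%m-%d')  # assumed strptime-style format string
ds = CDataset([[1, 2], [3, 4], [5, 6], [7, 8]], [0, 1, 0, 1], header=header)

splitter = CChronologicalSplitter(th_timestamp='2018-12-31')
ds_tr, ds_ts = splitter.split(ds)  # samples up to the threshold go to training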
CDataset¶
-
class
secml.data.c_dataset.
CDataset
(x, y, header=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Creates a new dataset.
A dataset consists of a 2-dimensional patterns array (in dense or sparse format), with one pattern per row, and a flat dense array with each pattern’s label.
- Parameters
- xarray_like or CArray
Dataset patterns, one for each row. Array is converted to 2-Dimensions before storing.
- yarray_like or CArray
Dataset labels. Array is converted to dense format and flattened before storing.
- headerCDatasetHeader or None, optional
The header for the dataset. Will define any extra parameter. See CDatasetHeader docs for more information.
Examples
>>> from secml.data import CDataset
>>> ds = CDataset([[1,2],[3,4],[5,6]],[1,0,1])
>>> print(ds.X)
CArray([[1 2]
 [3 4]
 [5 6]])
>>> print(ds.Y)
CArray([1 0 1])

>>> ds = CDataset([1,2,3],1)  # Patterns will be converted to 2-Dims
>>> print(ds.X)
CArray([[1 2 3]])
>>> print(ds.Y)
CArray([1])

>>> from secml.array import CArray
>>> ds = CDataset(CArray([[1,0],[0,4],[1,0]],tosparse=True), CArray([1,0,1],tosparse=True))
>>> print(ds.X)
CArray(  (0, 0)  1
  (1, 1)  4
  (2, 0)  1)
>>> print(ds.Y)
CArray([1 0 1])

The number of labels must be equal to the number of samples:

>>> ds = CDataset([[1,2],[3,4]],1)
Traceback (most recent call last):
    ...
ValueError: number of labels (1) must be equal to the number of samples (2).

>>> from secml.data import CDatasetHeader
>>> ds = CDataset([1,2,3], 1, CDatasetHeader(id='mydataset', age=34))  # 2 extra attributes
>>> print(ds.header.id)
mydataset
>>> print(ds.header.age)
34
- Attributes
X
Dataset Patterns.
Y
Dataset Labels.
class_type
Defines class type.
classes
Classes (unique).
header
Dataset header.
isdense
True if patterns are stored in dense format, else False.
issparse
True if patterns are stored in sparse format, else False.
logger
Logger for current object.
num_classes
Number of classes.
num_features
Number of features.
num_labels
Returns dataset’s number of labels.
num_samples
Number of patterns.
verbose
Verbosity level of logger output.
Methods
append
(self, dataset)Append input dataset to current dataset.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_bounds
(self[, offset])Return dataset boundaries plus an offset.
get_class_from_type
(class_type)Return the class associated with input type.
get_labels_onehot
(self)Return dataset labels in one-hot encoding.
get_labels_ovr
(self, pos_label)Return dataset labels in one-vs-rest encoding.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
todense
(self)Convert dataset’s patterns to dense format.
tosparse
(self)Convert dataset’s patterns to sparse format.
-
property
X
¶ Dataset Patterns.
-
property
Y
¶ Dataset Labels.
-
append
(self, dataset)[source]¶ Append input dataset to current dataset.
- Parameters
- datasetCDataset
Dataset to append. Patterns are appended on the first axis (axis=0), so the number of features must be equal to dataset.num_features. If the current dataset format is sparse, a dense dataset to append will be converted to sparse, and vice versa.
- Returns
- CDataset
A new Dataset resulting of appending new data to existing data. Format of resulting dataset is equal to current dataset format.
See also
CArray.append
More information about appending arrays.
Notes
Append does not occur in-place: a new dataset is allocated and filled.
Examples
>>> from secml.data import CDataset
>>> ds = CDataset([[1,2],[3,4],[5,6]],[1,0,1])
>>> ds_new = ds.append(CDataset([[10,20],[30,40],[50,60]],[1,0,1]))
>>> print(ds_new.X)
CArray([[ 1  2]
 [ 3  4]
 [ 5  6]
 [10 20]
 [30 40]
 [50 60]])
>>> print(ds_new.Y)
CArray([1 0 1 1 0 1])

>>> ds_new = ds.append(CDataset([[10,20],[30,40],[50,60]],[1,0,1]).tosparse())
>>> print(ds_new.X)
CArray([[ 1  2]
 [ 3  4]
 [ 5  6]
 [10 20]
 [30 40]
 [50 60]])
>>> print(ds_new.Y)
CArray([1 0 1 1 0 1])
-
property
classes
¶ Classes (unique).
-
get_bounds
(self, offset=0.0)[source]¶ Return dataset boundaries plus an offset.
- Parameters
- offsetfloat
Quantity to be added as an offset. Default 0.
- Returns
- boundarylist of tuple
Each tuple contains the min and max feature value, plus the offset, for the corresponding coordinate.
Examples
>>> from secml.array import CArray
>>> from secml.data import CDataset

>>> ds = CDataset([[1,2,3],[4,5,6]], [1,2])
>>> ds.get_bounds()
[(1.0, 4.0), (2.0, 5.0), (3.0, 6.0)]
-
get_labels_onehot
(self)[source]¶ Return dataset labels in one-hot encoding.
- Returns
- binary_labelsCArray
A (num_samples, num_classes) array with the dataset labels one-hot encoded.
Examples
>>> ds = CDataset([[11,22],[33,44],[55,66],[77,88]], [1,0,2,1])
>>> print(ds.get_labels_onehot())
CArray([[0 1 0]
 [1 0 0]
 [0 0 1]
 [0 1 0]])
-
get_labels_ovr
(self, pos_label)[source]¶ Return dataset labels in one-vs-rest encoding.
- Parameters
- pos_labelscalar or str
Label of the class to consider as positive.
- Returns
- CArray
Flat array with 1 when the class label is equal to input positive class’s label, else 0.
Examples
>>> ds = CDataset([[11,22],[33,44],[55,66],[77,88]], [1,0,2,1])
>>> print(ds.get_labels_ovr(2))
CArray([0 0 1 0])
>>> print(ds.get_labels_ovr(1))
CArray([1 0 0 1])
-
property
header
¶ Dataset header.
-
property
isdense
¶ True if patterns are stored in dense format, else False.
-
property
issparse
¶ True if patterns are stored in sparse format, else False.
-
property
num_classes
¶ Number of classes.
-
property
num_features
¶ Number of features.
-
property
num_labels
¶ Returns dataset’s number of labels.
-
property
num_samples
¶ Number of patterns.
-
todense
(self)[source]¶ Convert dataset’s patterns to dense format.
- Returns
- CDataset
A new CDataset with same patterns converted to dense format. Copy is avoided if possible.
Examples
>>> from secml.array import CArray
>>> from secml.data import CDataset

>>> ds = CDataset(CArray([[1,2],[3,4],[5,6]], tosparse=True),[1,0,1]).todense()
>>> print(ds.X)
CArray([[1 2]
 [3 4]
 [5 6]])
-
tosparse
(self)[source]¶ Convert dataset’s patterns to sparse format.
- Returns
- CDataset
A new CDataset with same patterns converted to sparse format. Copy is avoided if possible.
Examples
>>> from secml.data import CDataset
>>> ds = CDataset([[1,2],[3,4],[5,6]],[1,0,1]).tosparse()
>>> print(ds.X)
CArray(  (0, 0)  1
  (0, 1)  2
  (1, 0)  3
  (1, 1)  4
  (2, 0)  5
  (2, 1)  6)
>>> print(ds.Y)
CArray([1 0 1])
CDatasetHeader¶
-
class
secml.data.c_dataset_header.
CDatasetHeader
(**kwargs)[source]¶ Bases:
secml.core.c_creator.CCreator
Creates a new dataset header.
Parameters to be included in the header can be defined as keyword init arguments or by setting them as new public header attributes.
Immutable objects (scalar, string, tuple, dictionary) will be passed as they are while indexing the header. Arrays will be indexed and the result of indexing will be returned.
To extract a dictionary with the entire set of attributes, use .get_params().
- Parameters
- kwargsany, optional
Any extra attribute of the dataset. Could be an immutable object (scalar, tuple, dict, str), or a vector-like CArray. Lists are automatically converted to vector-like CArrays.
Examples
>>> from secml.data import CDatasetHeader
>>> from secml.array import CArray

>>> ds_header = CDatasetHeader(id='mydataset', colors=CArray([1,2,3]))

>>> print(ds_header.id)
mydataset
>>> print(ds_header.colors)
CArray([1 2 3])

>>> ds_header.age = 32
>>> print(ds_header.age)
32
- Attributes
class_type
Defines class type.
logger
Logger for current object.
num_samples
The number of samples for which the header defines extra params.
verbose
Verbosity level of logger output.
Methods
append
(self, header)Append input header to current header.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
append
(self, header)[source]¶ Append input header to current header.
- Parameters
- headerCDatasetHeader
Header to append. Only attributes which are arrays are merged. Other attributes are set if not already defined in the current header. Otherwise, the value of the attributes in the input header should be equal to the value of the same attribute in the current header.
- Returns
- CDatasetHeader
See also
CArray.append
More information about appending arrays.
Notes
Append does not occur in-place: a new header is allocated and filled.
Examples
>>> from secml.data import CDatasetHeader
>>> from secml.array import CArray

>>> ds_header1 = CDatasetHeader(id={'a': 0, 'b': 2}, a=2, age=CArray([1,2,3]))
>>> ds_header2 = CDatasetHeader(id={'a': 0, 'b': 2}, b=4, age=CArray([1,2,3]))

>>> ds_merged = ds_header1.append(ds_header2)
>>> ds_merged.age
CArray(6,)(dense: [1 2 3 1 2 3])
>>> ds_merged.id
{'a': 0, 'b': 2}
>>> ds_merged.a
2
>>> ds_merged.b
4
-
property
num_samples
¶ The number of samples for which the header defines extra params.
data_utils¶
-
secml.data.data_utils.
label_binarize_onehot
(y)[source]¶ Return dataset labels in one-hot encoding.
- Parameters
- yCArray
Array with the labels to encode. Only integer labels are supported.
- Returns
- binary_labelsCArray
A (num_samples, num_classes) array with the labels one-hot encoded.
Examples
>>> from secml.array import CArray
>>> from secml.data.data_utils import label_binarize_onehot

>>> a = CArray([1,0,2,1])
>>> print(label_binarize_onehot(a))
CArray([[0 1 0]
 [1 0 0]
 [0 0 1]
 [0 1 0]])
secml.ml¶
Machine Learning
secml.ml.classifiers¶
secml.ml.classifiers.multiclass¶
-
class
secml.ml.classifiers.multiclass.c_classifier_multi.
CClassifierMulticlass
(classifier, preprocess=None, **clf_params)[source]¶ Bases:
secml.ml.classifiers.c_classifier.CClassifier
Generic interface for Multiclass Classifiers.
- Parameters
- classifierCClassifier.__class__
Unbound (not initialized) CClassifier subclass.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- clf_paramskwargs
Any other construction parameter for the binary classifiers.
- Attributes
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
classifier
Returns the class of the binary classifier used.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
num_classifiers
Returns the number of instanced binary classifiers.
verbose
Verbosity level of logger output.
Methods
apply_method
(self, method, \*args, \*\*kwargs)Apply input method to all trained classifiers.
binarize_dataset
(class_idx, dataset)Returns the dataset needed by the class_idx binary classifier.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
prepare
(self, num_classes)Creates num_classes copies of the binary classifier.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
apply_method
(self, method, *args, **kwargs)[source]¶ Apply input method to all trained classifiers.
Useful to perform a routine after training (e.g. reduction, optimization).
method is an unbound method to apply, e.g. CClassifierSVM.set. Any other arguments for method can be passed in.
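For instance, a hedged sketch of such a call; the SVM-based OVA classifier, the import paths, and the toy data are assumptions used only for illustration:

from secml.data import CDataset
from secml.ml.classifiers import CClassifierSVM               # assumed available
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA

ds = CDataset([[1, 1], [2, 2], [8, 8], [9, 9]], [0, 0, 1, 1])
ova = CClassifierMulticlassOVA(CClassifierSVM).fit(ds)
# Call the unbound CClassifierSVM.set on every trained binary classifier
ova.apply_method(CClassifierSVM.set, 'C', 10.0)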
-
abstract static
binarize_dataset
(class_idx, dataset)[source]¶ Returns the dataset needed by the class_idx binary classifier.
- Parameters
- class_idxint
Index of the target class.
- datasetCDataset
Dataset to binarize.
- Returns
- bin_datasetCDataset
Binarized dataset.
-
property
classifier
¶ Returns the class of the binary classifier used.
-
estimate_parameters
(self, dataset, parameters, splitter, metric, pick='first', perf_evaluator='xval', n_jobs=1)[source]¶ Estimate parameters that give the best result with respect to a chosen metric.
- Parameters
- datasetCDataset
Dataset to be used for evaluating parameters.
- parametersdict
Dictionary with each entry as {parameter: list of values to test}. Example: {‘C’: [1, 10, 100], ‘gamma’: list(10.0 ** CArray.arange(-4, 4))}
- splitterCDataSplitter or str
Object to use for splitting the dataset into train and validation. A splitter type can be passed as string, in this case all default parameters will be used. For data splitters, num_folds is set to 3 by default. See CDataSplitter docs for more information.
- metricCMetric or str
Object with the metric to use while evaluating the performance. A metric type can be passed as string, in this case all default parameters will be used. See CMetric docs for more information.
- pick{‘first’, ‘last’, ‘random’}, optional
Defines which of the best parameter sets to pick. Usually, ‘first’ corresponds to the smallest parameters while ‘last’ corresponds to the biggest. The order is consistent with the parameters dict passed as input.
- perf_evaluatorCPerfEvaluator or str, optional
Performance Evaluator to use. Default ‘xval’.
- n_jobsint, optional
Number of parallel workers to use for performance evaluation. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- best_parametersdict
Dictionary of best parameters found through performance evaluation.
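A possible usage sketch, assuming an SVM-based OVA classifier and the ‘kfold’ splitter and ‘accuracy’ metric type strings (the classes, import paths, and toy data are assumptions, not part of this section):

from secml.data import CDataset
from secml.ml.classifiers import CClassifierSVM               # assumed available
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA

ds = CDataset([[1, 2], [3, 4], [5, 6], [7, 8], [2, 1], [4, 3]],
              [0, 1, 0, 1, 0, 1])
ova = CClassifierMulticlassOVA(CClassifierSVM)
best = ova.estimate_parameters(
    ds, parameters={'C': [0.1, 1.0, 10.0]},
    splitter='kfold', metric='accuracy', pick='first')
ova.set_params(best)  # explicitly apply the selected values
ova.fit(ds)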
-
property
num_classifiers
¶ Returns the number of instanced binary classifiers.
Returns 1 until .fit(dataset) or .prepare(num_classes) is called.
-
prepare
(self, num_classes)[source]¶ Creates num_classes copies of the binary classifier.
Creates enough deepcopies of the binary classifier until num_classes binary classifiers are instantiated. If num_classes < self.num_classifiers, the classifiers in excess are deleted.
- Parameters
- num_classesint
Number of binary classifiers to instance.
-
set
(self, param_name, param_value, copy=False)[source]¶ Set a parameter that has a specific name to a specific value.
Only parameters, i.e. PUBLIC or READ/WRITE attributes, can be set. RW parameters must be set using their real name, e.g. use attr instead of _rw_attr.
If setting is performed before training, the parameter to set must be a known .classifier attribute or a known attribute of any parameter already set during or after construction.
If possible, a reference to the parameter to set is assigned. Use copy=True to always make a deepcopy before set.
- Parameters
- param_namestr
Name of the parameter to set.
- param_valueany
Value to set for the parameter. Using a tuple, one value for each binary classifier can be specified.
- copybool
By default (False) a reference to the parameter to assign is set. If True or a reference cannot be extracted, a deepcopy of the parameter is done first.
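A short sketch of setting a per-classifier value via a tuple; the SVM binary learners and import paths are assumptions used only for illustration:

from secml.ml.classifiers import CClassifierSVM               # assumed available
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA

ova = CClassifierMulticlassOVA(CClassifierSVM)
ova.prepare(num_classes=3)        # instantiate three binary classifiers
ova.set('C', (0.1, 1.0, 10.0))    # one C value per binary classifier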
-
property
verbose
¶ Verbosity level of logger output.
- Available levels are:
0 = no verbose output
1 = info-level logging
2 = debug-level logging
-
class
secml.ml.classifiers.multiclass.c_classifier_multi_ova.
CClassifierMulticlassOVA
(classifier, preprocess=None, **clf_params)[source]¶ Bases:
secml.ml.classifiers.multiclass.c_classifier_multi.CClassifierMulticlass
,secml.ml.classifiers.multiclass.mixin_classifier_gradient_multiclass_ova.CClassifierGradientMulticlassOVAMixin
OVA (One-Vs-All) Multiclass Classifier.
- Parameters
- classifierunbound class
Unbound (not initialized) CClassifier subclass.
- kwargsany
Any other construction parameter for each OVA classifier.
- Attributes
class_type
‘ova’Defines class type.
Methods
apply_method
(self, method, \*args, \*\*kwargs)Apply input method to all trained classifiers.
binarize_dataset
(class_idx, dataset)Returns the dataset needed by the class_idx binary classifier.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x, y)Derivative of the decision function w.r.t.
grad_f_x
(self, x, y, \*\*kwargs)Derivative of the classifier decision function w.r.t.
grad_loss_params
(self, x, y[, loss])Derivative of a given loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective function w.r.t.
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
prepare
(self, num_classes)Creates num_classes copies of the binary classifier.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
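A minimal end-to-end sketch with SVM binary learners; the toy data, CClassifierSVM, and the import paths are illustrative assumptions:

from secml.data import CDataset
from secml.ml.classifiers import CClassifierSVM               # assumed available
from secml.ml.classifiers.multiclass import CClassifierMulticlassOVA

ds = CDataset([[1, 1], [2, 2], [8, 8], [9, 9], [1, 9], [2, 8]],
              [0, 0, 1, 1, 2, 2])
ova = CClassifierMulticlassOVA(CClassifierSVM, C=1.0)
ova.fit(ds)
labels, scores = ova.predict(ds.X, return_decision_function=True)
print(labels)        # predicted class for each pattern
print(scores.shape)  # (n_patterns, n_classes)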
secml.ml.classifiers.reject¶
-
class
secml.ml.classifiers.reject.c_classifier_reject.
CClassifierReject
(preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier.CClassifier
Abstract class that defines basic methods for Classifiers with reject.
A classifier assigns a label (class) to new patterns using the information learned from the training set.
This interface implements a set of generic methods for training and classification that can be used by every algorithm. However, all of them can be reimplemented if specific routines are needed.
- Parameters
- preprocessstr or CNormalizer
Features preprocess to be applied to input data. Can be a CNormalizer subclass or a string with the desired preprocess type. If None, input data is used as is.
- Attributes
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function, …])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
abstract
predict
(self, x, return_decision_function=False, n_jobs=1)[source]¶ Perform classification of each pattern in x.
If a preprocess has been specified, input is normalized before classification.
- Parameters
- xCArray
Array with new patterns to classify, 2-Dimensional of shape (n_patterns, n_features).
- return_decision_functionbool, optional
Whether to return the decision_function value along with predictions. Default False.
- n_jobsint, optional
Number of parallel workers to use for classification. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- labelsCArray
Flat dense array of shape (n_patterns,) with the label assigned to each test pattern. The classification label is the label of the class associated with the highest score. The rejected samples have label -1.
- scoresCArray, optional
Array of shape (n_patterns, n_classes) with classification score of each test pattern with respect to each training class. Will be returned only if return_decision_function is True.
-
class
secml.ml.classifiers.reject.c_classifier_reject_threshold.
CClassifierRejectThreshold
(clf, threshold, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.reject.c_classifier_reject.CClassifierReject
,secml.ml.classifiers.reject.mixin_classifier_gradient_reject_threshold.CClassifierGradientRejectThresholdMixin
Abstract class that defines basic methods for Classifiers with reject based on a certain threshold.
A classifier assigns a label (class) to new patterns using the information learned from the training set.
The samples whose highest score is under a certain threshold are rejected by the classifier.
- Parameters
- clfCClassifier
Classifier to which we would like to apply a reject threshold.
- thresholdfloat
Rejection threshold.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
clf
Returns the inner classifier.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
threshold
Returns the rejection threshold.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x, y)Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x, y)Derivative of the decision function w.r.t.
grad_f_x
(self, x, y, \*\*kwargs)Derivative of the classifier decision function w.r.t.
grad_loss_params
(self, x, y[, loss])Derivative of a given loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective function w.r.t.
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function, …])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
classes
¶ Return the list of classes on which training has been performed.
-
property
clf
¶ Returns the inner classifier.
-
decision_function
(self, x, y)[source]¶ Computes the decision function for each pattern in x.
The discriminant function of the reject class is a vector with all its values equal to :math:`\theta`, being :math:`\theta` the reject threshold.
If a preprocess has been specified, input is normalized before computing the decision function.
- Parameters
- xCArray
Array with new patterns to classify, 2-Dimensional of shape (n_patterns, n_features).
- yint
Index of the class w.r.t. which the decision function must be computed; use -1 to compute it w.r.t. the reject class.
- Returns
- scoreCArray
Value of the decision function for each test pattern. Dense flat array of shape (n_patterns,).
-
fit
(self, dataset, n_jobs=1)[source]¶ Trains the classifier.
If a preprocess has been specified, input is normalized before training.
- Parameters
- datasetCDataset
Training set. Must be a
CDataset
instance with patterns data and corresponding labels.- n_jobsint, optional
Number of parallel workers to use for training the classifier. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- trained_clsCClassifier
Instance of the classifier trained using input dataset.
-
property
n_classes
¶ Number of classes of training dataset.
-
predict
(self, x, return_decision_function=False, n_jobs=<no value>)[source]¶ Perform classification of each pattern in x.
The score matrix of this classifier is equal to the predicted outputs plus a column (corresponding to the reject class) with all its values equal to the reject threshold :math:`\theta`.
The predicted class is therefore the one with the highest score, :math:`c^\star = \operatorname*{argmax}_k s_k(x)`, where :math:`c^\star` corresponds to the rejection class (i.e., :math:`c^\star = -1`) only when the maximum taken over the other classes (excluding the reject one) is not greater than the reject threshold :math:`\theta`.
If a preprocess has been specified, input is normalized before classification.
- Parameters
- xCArray
Array with new patterns to classify, 2-Dimensional of shape (n_patterns, n_features).
- return_decision_functionbool, optional
Whether to return the decision_function value along with predictions. Default False.
- n_jobsint, optional
Number of parallel workers to use for classification. Default _NoValue. Cannot be higher than processor’s number of cores.
- Returns
- labelsCArray
Flat dense array of shape (n_patterns,) with the label assigned to each test pattern. The classification label is the label of the class associated with the highest score. The samples for which the label is equal to -1 are the ones rejected by the classifier.
- scoresCArray, optional
Array of shape (n_patterns, n_classes) with classification score of each test pattern with respect to each training class. Will be returned only if return_decision_function is True.
-
property
threshold
¶ Returns the rejection threshold.
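A minimal sketch of wrapping a classifier with a reject threshold; CClassifierSVM, the import paths, the toy data, and the threshold value are illustrative assumptions:

from secml.data import CDataset
from secml.ml.classifiers import CClassifierSVM                      # assumed available
from secml.ml.classifiers.reject import CClassifierRejectThreshold   # import path assumed

ds = CDataset([[1, 1], [2, 2], [8, 8], [9, 9]], [0, 0, 1, 1])
clf_rej = CClassifierRejectThreshold(CClassifierSVM(), threshold=0.5)
clf_rej.fit(ds)
labels = clf_rej.predict(ds.X)
print(labels)  # rejected samples, if any, are labeled -1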
secml.ml.classifiers.loss¶
-
class
secml.ml.classifiers.loss.c_loss.
CLoss
[source]¶ Bases:
secml.core.c_creator.CCreator
Interface for loss functions.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
suitable_for
Defines which problem the loss is suitable for.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score)Computes the derivative of the loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score)Computes the value of the loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score)[source]¶ Computes the derivative of the loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
abstract
loss
(self, y_true, score)[source]¶ Computes the value of the loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets.
- Returns
- CArray
Loss function. Vector-like array.
-
abstract property
suitable_for
¶ Defines which problem the loss is suitable for.
Accepted values:
- classification
- regression
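For illustration only, a minimal sketch of a custom loss built on this interface: an absolute-error regression loss implementing the two abstract methods. It assumes that CArray exposes abs() and sign() element-wise methods and that direct instantiation (without registering a class_type for the create() factory) is sufficient.
>>> from secml.array import CArray
>>> from secml.ml.classifiers.loss.c_loss import CLossRegression
>>> class CLossAbsolute(CLossRegression):  # hypothetical example subclass
...     """Absolute-error loss |y - s| (illustrative sketch)."""
...     def loss(self, y_true, score):
...         return (y_true - score).abs()   # element-wise absolute error
...     def dloss(self, y_true, score):
...         return (score - y_true).sign()  # sub-gradient w.r.t. score
>>> l = CLossAbsolute().loss(CArray([1.0, 2.0]), CArray([1.5, 1.0]))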
-
class
secml.ml.classifiers.loss.c_loss.
CLossClassification
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLoss
Interface for loss functions suitable for classification problems.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label])Computes the derivative of the loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label])Computes the value of the loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=None)[source]¶ Computes the derivative of the loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,).
- pos_labelint or None, optional
Default None, meaning that the derivative is computed for each sample w.r.t. the corresponding true label. Otherwise, this is the class with respect to which the derivative is computed. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
abstract
loss
(self, y_true, score, pos_label=None)[source]¶ Computes the value of the loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,).
- pos_labelint or None, optional
Default None, meaning that the function is computed for each sample w.r.t. the corresponding true label. Otherwise, this is the class with respect to which the loss is computed. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Loss function. Vector-like array.
-
suitable_for
= 'classification'¶
-
class
secml.ml.classifiers.loss.c_loss.
CLossRegression
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLoss
Interface for loss functions suitable for regression problems.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score)Computes the derivative of the loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score)Computes the value of the loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
abstract
dloss
(self, y_true, score)[source]¶ Computes the derivative of the loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
abstract
loss
(self, y_true, score)[source]¶ Computes the value of the loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Loss function. Vector-like array.
-
suitable_for
= 'regression'¶
-
class
secml.ml.classifiers.loss.c_loss_cross_entropy.
CLossCrossEntropy
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossClassification
Cross Entropy Loss Function (Log Loss).
Cross entropy indicates the distance between what the model believes the output distribution should be, and what the original distribution really is.
The cross entropy loss for sample i is defined as: L_i = -log( exp(s_{y_i}) / sum_k exp(s_k) ), i.e. the negative log of the softmax probability assigned to the true class y_i.
- Attributes
class_type
‘cross-entropy’Defines class type.
- suitable_for‘classification’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label])Computes the gradient of the Cross Entropy loss w.r.t. the classifier
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label])Computes the value of the Cross Entropy loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=None)[source]¶ Computes the gradient of the Cross Entropy loss w.r.t. the classifier decision function corresponding to class label pos_label.
Assuming pos_label to be i, the derivative is: p_i - t_i, where t_i = 1 if i is equal to y_true_i, 0 otherwise.
The elements corresponding to y_true (if pos_label is None) or to pos_label are then returned.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes).
- pos_labelint or None, optional
The class with respect to which the derivative is computed. Default None, meaning that it is computed for each sample w.r.t. the corresponding true label.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score, pos_label=<no value>)[source]¶ Computes the value of the Cross Entropy loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes).
- Returns
- CArray
Loss function. Vector-like array.
Notes
Unlike other loss functions, CrossEntropyLoss requires the full array (n_samples, n_classes) of predicted outputs.
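A minimal usage sketch (illustrative, using the module path reported above); the score matrix must contain one row of class scores per sample:
>>> from secml.array import CArray
>>> from secml.ml.classifiers.loss.c_loss_cross_entropy import CLossCrossEntropy
>>> y_true = CArray([0, 1])                  # true class indices
>>> scores = CArray([[2.0, 0.5, 0.1],
...                  [0.2, 1.5, 0.3]])       # full (n_samples, n_classes) score matrix
>>> ce = CLossCrossEntropy()
>>> l = ce.loss(y_true, scores)              # one loss value per sample
>>> d = ce.dloss(y_true, scores)             # derivative w.r.t. the true-class scores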
-
class
secml.ml.classifiers.loss.c_loss_epsilon_insensitive.
CLossEpsilonInsensitive
(epsilon=0.1)[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossRegression
Epsilon-Insensitive Loss Function.
Any difference between the current prediction and the ground truth is ignored if it is less than the epsilon threshold.
Epsilon-Insensitive loss is used by support vector regression.
The Epsilon-Insensitive loss is defined as: L(y, s) = max(0, |y - s| - epsilon).
- Attributes
class_type
‘e-insensitive’Defines class type.
- suitable_for‘regression’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score)Computes the derivative of the epsilon-insensitive loss function
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score)Computes the value of the epsilon-insensitive loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score)[source]¶ Computes the derivative of the epsilon-insensitive loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
property
epsilon
¶ Threshold parameter epsilon.
-
loss
(self, y_true, score)[source]¶ Computes the value of the epsilon-insensitive loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Loss function. Vector-like array.
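A minimal usage sketch (illustrative) on regression targets; predictions within epsilon of the ground truth contribute zero loss:
>>> from secml.array import CArray
>>> from secml.ml.classifiers.loss.c_loss_epsilon_insensitive import CLossEpsilonInsensitive
>>> y_true = CArray([1.0, 2.0, 3.0])
>>> score = CArray([1.05, 2.5, 2.0])
>>> loss_fn = CLossEpsilonInsensitive(epsilon=0.1)
>>> l = loss_fn.loss(y_true, score)    # 0 where |y - s| <= 0.1, |y - s| - 0.1 elsewhere
>>> d = loss_fn.dloss(y_true, score)   # derivative w.r.t. score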
-
class
secml.ml.classifiers.loss.c_loss_epsilon_insensitive.
CLossEpsilonInsensitiveSquared
(epsilon=0.1)[source]¶ Bases:
secml.ml.classifiers.loss.c_loss_epsilon_insensitive.CLossEpsilonInsensitive
Squared Epsilon-Insensitive Loss Function.
Any difference between the current prediction and the ground truth is ignored if it is less than the epsilon threshold.
The Squared Epsilon-Insensitive loss is defined as: L(y, s) = max(0, |y - s| - epsilon)^2.
- Attributes
class_type
‘e-insensitive-squared’Defines class type.
- suitable_for‘regression’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score)Computes the derivative of the squared epsilon-insensitive
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score)Computes the value of the squared epsilon-insensitive loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score)[source]¶ Computes the derivative of the squared epsilon-insensitive loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score)[source]¶ Computes the value of the squared epsilon-insensitive loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Loss function. Vector-like array.
-
class
secml.ml.classifiers.loss.c_loss_hinge.
CLossHinge
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossClassification
Hinge Loss Function.
The function computes the average distance between the model and the data using the hinge loss, a one-sided metric that considers only prediction errors.
Hinge loss is used in maximal margin classifiers such as support vector machines.
After converting the labels to {-1, +1}, the hinge loss is defined as: L(y, s) = max(0, 1 - y*s).
- Attributes
class_type
‘hinge’Defines class type.
- suitable_for‘classification’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label])Computes the derivative of the hinge loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label])Computes the value of the hinge loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=1)[source]¶ Computes the derivative of the hinge loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function derivative is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score, pos_label=1)[source]¶ Computes the value of the hinge loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Loss function. Vector-like array.
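A minimal usage sketch (illustrative) with a 1-D score array, interpreted as the scores of the positive class:
>>> from secml.array import CArray
>>> from secml.ml.classifiers.loss.c_loss_hinge import CLossHinge
>>> y_true = CArray([1, 0, 1])           # binary labels, internally mapped to {-1, +1}
>>> score = CArray([0.8, -0.3, -0.2])    # 1-D scores of the positive class
>>> hinge = CLossHinge()
>>> l = hinge.loss(y_true, score)        # max(0, 1 - y*s) per sample
>>> d = hinge.dloss(y_true, score)       # derivative w.r.t. score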
-
class
secml.ml.classifiers.loss.c_loss_hinge.
CLossHingeSquared
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossClassification
Squared Hinge Loss Function.
The function computes the average distance between the model and the data using hinge loss, a one-sided metric that considers only prediction errors.
After converting the labels to {-1, +1}, the squared hinge loss is defined as: L(y, s) = max(0, 1 - y*s)^2.
- Attributes
class_type
‘hinge-squared’Defines class type.
- suitable_for‘classification’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label])Computes the derivative of the squared hinge loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label])Computes the value of the squared hinge loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=1)[source]¶ Computes the derivative of the squared hinge loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function derivative is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score, pos_label=1)[source]¶ Computes the value of the squared hinge loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Loss function. Vector-like array.
-
class
secml.ml.classifiers.loss.c_loss_logistic.
CLossLogistic
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossClassification
Logistic loss function.
- Attributes
class_type
‘log’Defines class type.
- suitable_for‘classification’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label, bound])Computes the derivative of the logistic loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label, bound])Computes the value of the logistic loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=1, bound=10)[source]¶ Computes the derivative of the logistic loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function derivative is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- boundscalar or None, optional
Set an upper bound for a linear approximation when -y*s is large, to avoid numerical overflows. 10 is a generally acceptable value, as log(1+exp(10)) = 10.000045.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score, pos_label=1, bound=10)[source]¶ Computes the value of the logistic loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- boundscalar or None, optional
Set an upper bound for a linear approximation when -y*s is large, to avoid numerical overflows. 10 is a generally acceptable value, as log(1+exp(10)) = 10.000045.
- Returns
- CArray
Loss function. Vector-like array.
-
class
secml.ml.classifiers.loss.c_loss_squared.
CLossQuadratic
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossRegression
Quadratic Loss Function (Ordinary Least Squares).
The quadratic loss is defined as:
- Attributes
class_type
‘quadratic’Defines class type.
- suitable_for‘regression’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score)Computes the derivative of the quadratic loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score)Computes the value of the quadratic loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score)[source]¶ Computes the derivative of the quadratic loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. Vector-like array of shape (n_samples,).
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
class
secml.ml.classifiers.loss.c_loss_squared.
CLossSquare
[source]¶ Bases:
secml.ml.classifiers.loss.c_loss.CLossClassification
Square Loss Function.
The square loss is defined as:
- Attributes
class_type
‘square’Defines class type.
- suitable_for‘classification’
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dloss
(self, y_true, score[, pos_label])Computes the derivative of the square loss function with respect to score.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loss
(self, y_true, score[, pos_label])Computes the value of the square loss function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dloss
(self, y_true, score, pos_label=1)[source]¶ Computes the derivative of the square loss function with respect to score.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function derivative is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Derivative of the loss function. Vector-like array.
-
loss
(self, y_true, score, pos_label=1)[source]¶ Computes the value of the square loss function.
- Parameters
- y_trueCArray
Ground truth (correct), targets. Vector-like array.
- scoreCArray
Outputs (predicted), targets. 2-D array of shape (n_samples, n_classes) or 1-D flat array of shape (n_samples,). If 1-D array, the probabilities provided are assumed to be that of the positive class.
- pos_label{0, 1}, optional
The class with respect to which the loss function is computed. Default 1. If score is a 1-D flat array, this parameter is ignored.
- Returns
- CArray
Loss function. Vector-like array.
-
class
secml.ml.classifiers.loss.c_softmax.
CSoftmax
[source]¶ Bases:
secml.core.c_creator.CCreator
Softmax function.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, s, y)Gradient of the softmax function.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
softmax
(self, s)Apply the softmax function to input.
timed
([msg])Timer decorator.
-
gradient
(self, s, y)[source]¶ Gradient of the softmax function.
The derivative of the y-th output of the softmax function w.r.t. all the inputs is given by: d sigma_y / d s_i = sigma_y * (delta_{y,i} - sigma_i), where delta_{y,i} = 1 if i = y and 0 otherwise.
- Parameters
- sCArray
2-D array of shape (1, n_classes) with input data.
- yint
The class with respect to which the gradient is computed.
- Returns
- CArray
Softmax function gradient. Vector-like array.
-
softmax
(self, s)[source]¶ Apply the softmax function to input.
The softmax function is defined for the vector s and for the i-th class as: sigma_i(s) = exp(s_i) / sum_k exp(s_k).
- Parameters
- sCArray
2-D array of shape (n_samples, n_classes) with input data.
- Returns
- CArray
Softmax function. Same shape of input array.
Examples
>>> from secml.array import CArray
>>> from secml.ml.classifiers.loss import CSoftmax
>>> a = CArray([[1, 2, 3], [2, 4, 5]])
>>> print(CSoftmax().softmax(a))
CArray([[0.090031 0.244728 0.665241]
 [0.035119 0.259496 0.705385]])
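Continuing the example above, a sketch of the gradient method for a single sample (the input must be 2-D of shape (1, n_classes)):
>>> s = CArray([[1.0, 2.0, 3.0]])    # single sample
>>> g = CSoftmax().gradient(s, 1)    # derivative of the 2nd softmax output w.r.t. s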
secml.ml.classifiers.regularizer¶
-
class
secml.ml.classifiers.regularizer.c_regularizer.
CRegularizer
[source]¶ Bases:
secml.core.c_creator.CCreator
Abstract class that defines basic methods for regularizer functions.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dregularizer
(self, *args, **kwargs)Gets the derivative of regularizer.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
regularizer
(self, *args, **kwargs)Gets value of regularizer.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.ml.classifiers.regularizer.c_regularizer_elastic_net.
CRegularizerElasticNet
(l1_ratio=0.15)[source]¶ Bases:
secml.ml.classifiers.regularizer.c_regularizer.CRegularizer
ElasticNet Regularizer.
A convex combination of L2 and L1, where the weight of the L2 term is given by 1 - l1_ratio.
ElasticNet Regularizer is given by:
- Attributes
class_type
‘elastic-net’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dregularizer
(self, w)Returns the derivative of the elastic-net regularizer
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
regularizer
(self, w)Returns ElasticNet Regularizer.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
dregularizer
(self, w)[source]¶ Returns the derivative of the elastic-net regularizer
- Parameters
- wCArray
Vector-like array.
-
property
l1_ratio
¶ Get l1-ratio.
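A minimal usage sketch (illustrative) of the elastic-net penalty and its derivative on a weight vector:
>>> from secml.array import CArray
>>> from secml.ml.classifiers.regularizer.c_regularizer_elastic_net import CRegularizerElasticNet
>>> w = CArray([0.5, -1.0, 0.0, 2.0])
>>> reg = CRegularizerElasticNet(l1_ratio=0.15)
>>> r = reg.regularizer(w)      # scalar penalty value
>>> dr = reg.dregularizer(w)    # derivative w.r.t. w, vector-like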
-
class
secml.ml.classifiers.regularizer.c_regularizer_l1.
CRegularizerL1
[source]¶ Bases:
secml.ml.classifiers.regularizer.c_regularizer.CRegularizer
Norm-L1 Regularizer.
This function leads to sparse solutions.
L1 Regularizer is given by: R(w) = ||w||_1 = sum_i |w_i|.
- Attributes
class_type
‘l1’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dregularizer
(self, w)Returns Norm-L1 derivative.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
regularizer
(self, w)Returns Norm-L1.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.ml.classifiers.regularizer.c_regularizer_l2.
CRegularizerL2
[source]¶ Bases:
secml.ml.classifiers.regularizer.c_regularizer.CRegularizer
Norm-L2 Regularizer.
L2 Regularizer is given by:
- Attributes
class_type
‘l2’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
dregularizer
(self, w)Return Norm-L2 derivative.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
regularizer
(self, w)Returns Norm-L2.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CClassifier¶
-
class
secml.ml.classifiers.c_classifier.
CClassifier
(preprocess=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Abstract class that defines basic methods for Classifiers.
A classifier assigns a label (class) to new patterns using the information learned from the training set.
This interface implements a set of generic methods for training and classification that can be used by every algorithm. However, all of them can be reimplemented if specific routines are needed. A minimal usage sketch is given at the end of this class reference.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
classes
¶ Return the list of classes on which training has been performed.
-
decision_function
(self, x, y=None)[source]¶ Computes the decision function for each pattern in x.
If a preprocess has been specified, input is normalized before computing the decision function.
Note
The actual decision function should be implemented inside
_decision_function
method.
- Parameters
- xCArray
Array with new patterns to classify, 2-Dimensional of shape (n_patterns, n_features).
- yint or None, optional
The label of the class with respect to which the function should be calculated. If None, return the output for all classes.
- Returns
- scoreCArray
Value of the decision function for each test pattern. Dense flat array of shape (n_samples,) if y is not None, otherwise a (n_samples, n_classes) array.
-
estimate_parameters
(self, dataset, parameters, splitter, metric, pick='first', perf_evaluator='xval', n_jobs=1)[source]¶ Estimate the parameters that give the best result with respect to a chosen metric.
- Parameters
- datasetCDataset
Dataset to be used for evaluating parameters.
- parametersdict
Dictionary with each entry as {parameter: list of values to test}. Example: {‘C’: [1, 10, 100], ‘gamma’: list(10.0 ** CArray.arange(-4, 4))}
- splitterCDataSplitter or str
Object to use for splitting the dataset into train and validation. A splitter type can be passed as a string, in which case all default parameters will be used. For data splitters, num_folds is set to 3 by default. See CDataSplitter docs for more information.
- metricCMetric or str
Object with the metric to use while evaluating the performance. A metric type can be passed as a string, in which case all default parameters will be used. See CMetric docs for more information.
- pick{‘first’, ‘last’, ‘random’}, optional
Defines which of the best parameter sets to pick. Usually, 'first' corresponds to the smallest parameters while 'last' corresponds to the biggest. The order is consistent with the parameters dict passed as input.
- perf_evaluatorCPerfEvaluator or str, optional
Performance Evaluator to use. Default ‘xval’.
- n_jobsint, optional
Number of parallel workers to use for performance evaluation. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- best_parametersdict
Dictionary of best parameters found through performance evaluation.
-
fit
(self, dataset, n_jobs=1)[source]¶ Trains the classifier.
If a preprocess has been specified, input is normalized before training.
For the multiclass case see CClassifierMulticlass.
- Parameters
- datasetCDataset
Training set. Must be a
CDataset
instance with patterns data and corresponding labels.
- n_jobsint
Number of parallel workers to use for training the classifier. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- trained_clsCClassifier
Instance of the classifier trained using input dataset.
-
is_fitted
(self)[source]¶ Return True if the classifier is trained (fitted).
- Returns
- bool
True or False depending on the result of the call to check_is_fitted.
-
property
n_classes
¶ Number of classes of training dataset.
-
property
n_features
¶ Number of features (before preprocessing).
-
predict
(self, x, return_decision_function=False)[source]¶ Perform classification of each pattern in x.
If preprocess has been specified, input is normalized before classification.
- Parameters
- xCArray
Array with new patterns to classify, 2-Dimensional of shape (n_patterns, n_features).
- return_decision_functionbool, optional
Whether to return the decision_function value along with predictions. Default False.
- Returns
- labelsCArray
Flat dense array of shape (n_patterns,) with the label assigned to each test pattern. The classification label is the label of the class associated with the highest score.
- scoresCArray, optional
Array of shape (n_patterns, n_classes) with classification score of each test pattern with respect to each training class. Will be returned only if return_decision_function is True.
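As referenced above, a minimal end-to-end sketch of the CClassifier workflow, using the CClassifierDecisionTree subclass documented later in this reference; it assumes that CDataset can be imported from secml.data and built directly from a pair of CArrays.
>>> from secml.array import CArray
>>> from secml.data import CDataset                      # assumed import path
>>> from secml.ml.classifiers.c_classifier_decision_tree import CClassifierDecisionTree
>>> x = CArray([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
>>> y = CArray([0, 0, 1, 1])
>>> clf = CClassifierDecisionTree(max_depth=2).fit(CDataset(x, y))
>>> labels, scores = clf.predict(x, return_decision_function=True)
The same fitted object also exposes estimate_parameters for selecting hyperparameters through the splitter and metric objects described above.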
CClassifierLinear¶
-
class
secml.ml.classifiers.c_classifier_linear.
CClassifierLinear
(preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier.CClassifier
Abstract class that defines basic methods for linear classifiers.
A linear classifier assigns a label (class) to new patterns by computing the inner product between the patterns and a vector of weights, one per training set feature.
This interface implements a set of generic methods for training and classification that can be used for every linear model.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
b
Bias calculated from training data.
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
verbose
Verbosity level of logger output.
w
Vector with each feature’s weight (dense or sparse).
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the linear classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)Return True as the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
b
¶ Bias calculated from training data.
-
fit
(self, dataset, n_jobs=1)[source]¶ Trains the linear classifier.
If a preprocess has been specified, input is normalized before training.
Training on 2nd class is avoided to speed up classification.
- Parameters
- datasetCDataset
Binary (2-classes) training set. Must be a
CDataset
instance with patterns data and corresponding labels.
- n_jobsint
Number of parallel workers to use for training the classifier. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- trained_clsCClassifier
Instance of the classifier trained using input dataset.
-
property
w
¶ Vector with each feature’s weight (dense or sparse).
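A minimal sketch (illustrative) showing how the learned weights and bias of a linear classifier can be inspected after training, using the CClassifierLogistic subclass documented later; CDataset is assumed importable from secml.data.
>>> from secml.array import CArray
>>> from secml.data import CDataset                      # assumed import path
>>> from secml.ml.classifiers.c_classifier_logistic import CClassifierLogistic
>>> ds = CDataset(CArray([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]]),
...               CArray([0, 0, 1, 1]))
>>> lin = CClassifierLogistic(C=1.0).fit(ds)
>>> w, b = lin.w, lin.b      # feature weights (vector-like) and bias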
CClassifierSkLearn¶
-
class
secml.ml.classifiers.c_classifier_sklearn.
CClassifierSkLearn
(sklearn_model, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier.CClassifier
Generic wrapper for SkLearn classifiers.
- Attributes
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
logger
Logger for current object.
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class and SkLearn model parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
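A minimal sketch (illustrative) of wrapping a scikit-learn estimator; GaussianNB is only an example model, and CDataset is assumed importable from secml.data.
>>> from sklearn.naive_bayes import GaussianNB
>>> from secml.array import CArray
>>> from secml.data import CDataset                      # assumed import path
>>> from secml.ml.classifiers.c_classifier_sklearn import CClassifierSkLearn
>>> x = CArray([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
>>> y = CArray([0, 0, 1, 1])
>>> clf = CClassifierSkLearn(sklearn_model=GaussianNB()).fit(CDataset(x, y))
>>> labels = clf.predict(x)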
CClassifierDecisionTree¶
-
class
secml.ml.classifiers.c_classifier_decision_tree.
CClassifierDecisionTree
(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_sklearn.CClassifierSkLearn
Decision Tree Classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘dec-tree’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class and SkLearn model parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CClassifierKDE¶
-
class
secml.ml.classifiers.c_classifier_kde.
CClassifierKDE
(kernel=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier.CClassifier
,secml.ml.classifiers.gradients.mixin_classifier_gradient_kde.CClassifierGradientKDEMixin
Kernel Density Estimator
- Parameters
- kernelNone or CKernel subclass, optional
Instance of a CKernel subclass to be used for computing similarity between patterns. If None (default), a linear kernel will be used.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
See also
CKernel
Pairwise kernels and metrics.
- Attributes
class_type
‘kde’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x, y)Derivative of the decision function w.r.t.
grad_f_x
(self, x, y, \*\*kwargs)Derivative of the classifier decision function w.r.t.
grad_loss_params
(self, x, y[, loss])Derivative of a given loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective function w.r.t.
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_kernel_linear
(self)Return True if the kernel is None or linear.
is_linear
(self)Return True if the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
kernel
¶ Kernel function (None if a linear classifier).
-
property
training_samples
¶
CClassifierKNN¶
-
class
secml.ml.classifiers.c_classifier_knn.
CClassifierKNN
(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_sklearn.CClassifierSkLearn
K Neighbors Classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘knn’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class and SkLearn model parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
kneighbors
(self, x[, num_samples])Find the training samples nearest to x
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
kneighbors
(self, x, num_samples=None)[source]¶ Find the training samples nearest to x.
- Parameters
- xCArray
The query point or points.
- num_samples: int or None
Number of neighbors to get. If None, use n_neighbors.
- Returns
- distCArray
Array representing the lengths to points
- index_point: CArray
Indices of the nearest points in the training set
- tr_dataset.X: CArray
Training samples
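A minimal sketch (illustrative) of the kneighbors query; CDataset is assumed importable from secml.data.
>>> from secml.array import CArray
>>> from secml.data import CDataset                      # assumed import path
>>> from secml.ml.classifiers.c_classifier_knn import CClassifierKNN
>>> x = CArray([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
>>> y = CArray([0, 0, 1, 1])
>>> knn = CClassifierKNN(n_neighbors=3).fit(CDataset(x, y))
>>> dist, idx, neighbors = knn.kneighbors(CArray([[0.1, 0.1]]), num_samples=2)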
CClassifierLogistic¶
-
class
secml.ml.classifiers.c_classifier_logistic.
CClassifierLogistic
(C=1.0, max_iter=100, random_seed=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_linear.CClassifierLinear
,secml.ml.classifiers.gradients.mixin_classifier_gradient_logistic.CClassifierGradientLogisticMixin
Logistic Regression (aka logit, MaxEnt) classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
C
Penalty parameter C of the error term.
b
Bias calculated from training data.
class_type
Defines class type.
classes
Return the list of classes on which training has been performed.
logger
Logger for current object.
- max_iter
n_classes
Number of classes of training dataset.
n_features
Number of features (before preprocessing).
- random_seed
verbose
Verbosity level of logger output.
w
Vector with each feature’s weight (dense or sparse).
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameter that give better result respect a chose metric.
fit
(self, dataset[, n_jobs])Trains the linear classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x[, y])Derivative of the decision function w.r.t.
grad_f_x
(self[, x, y])Computes the gradient of the classifier’s output wrt input.
grad_loss_params
(self, x, y[, loss])Derivative of the classifier loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective w.r.t. the classifier
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)Return True as the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
C
¶ Penalty parameter C of the error term.
-
property
max_iter
¶
-
property
random_seed
¶
CClassifierMCSLinear¶
-
class
secml.ml.classifiers.c_classifier_mcs_linear.
CClassifierMCSLinear
(classifier, num_classifiers=10, max_samples=1.0, max_features=1.0, random_state=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_linear.CClassifierLinear
,secml.ml.classifiers.gradients.mixin_classifier_gradient_linear.CClassifierGradientLinearMixin
MCS averaging a set of LINEAR classifiers.
The result is itself a linear classifier, whose weights w (and bias b) are the average of the feature weights (biases) of the base classifiers.
- Parameters
- classifierCClassifierLinear
Instance of the linear classifier to be used in the MCS.
- num_classifiersint, optional
Number of linear classifiers to fit, default 10.
- max_samplesfloat, optional
Percentage of the samples to use for training, range [0, 1.0]. Default 1.0 (all the samples).
- max_featuresfloat, optional
Percentage of the features to use for training, range [0, 1.0]. Default 1.0 (all the features).
- random_stateint or None, optional
If int, random_state is the seed used by the random number generator. If None, no fixed seed will be set.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘mcs-linear’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate parameter that give better result respect a chose metric.
fit
(self, dataset[, n_jobs])Trains the linear classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x[, y])Derivative of the decision function w.r.t.
grad_f_x
(self[, x, y])Computes the gradient of the classifier’s output wrt input.
grad_loss_params
(self, x, y[, loss])Derivative of the classifier loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective w.r.t. the classifier
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)Return True as the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
classifier
¶ Instance of the linear classifier used in the MCS.
-
property
max_features
¶
-
property
max_samples
¶ Percentage of the samples to use for training.
-
property
n_classifiers
¶ Number of linear classifiers to fit.
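A minimal sketch (illustrative) of building the ensemble around a linear base classifier; CDataset is assumed importable from secml.data, and the dataset is kept small and balanced only for brevity.
>>> from secml.array import CArray
>>> from secml.data import CDataset                      # assumed import path
>>> from secml.ml.classifiers.c_classifier_logistic import CClassifierLogistic
>>> from secml.ml.classifiers.c_classifier_mcs_linear import CClassifierMCSLinear
>>> x = CArray([[-2.0, -1.0], [-1.5, -0.5], [-1.0, -1.5],
...             [1.0, 1.5], [1.5, 0.5], [2.0, 1.0]])
>>> y = CArray([0, 0, 0, 1, 1, 1])
>>> mcs = CClassifierMCSLinear(CClassifierLogistic(), num_classifiers=3,
...                            random_state=0).fit(CDataset(x, y))
>>> w, b = mcs.w, mcs.b      # averaged weights and bias of the base classifiers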
CClassifierNearestCentroid¶
-
class
secml.ml.classifiers.c_classifier_nearest_centroid.
CClassifierNearestCentroid
(metric='euclidean', shrink_threshold=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_sklearn.CClassifierSkLearn
Nearest Centroid classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘nrst-centroid’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class and SkLearn model parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
centroids
¶
-
property
metric
¶
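A minimal usage sketch (same assumptions on the CDLRandom loader and the CDataset attribute X as in the MCS sketch above):
>>> from secml.data.loader import CDLRandom  # assumed random-dataset loader
>>> from secml.ml.classifiers.c_classifier_nearest_centroid import CClassifierNearestCentroid
>>> ds = CDLRandom(n_samples=100, random_state=0).load()
>>> clf = CClassifierNearestCentroid(metric='euclidean')
>>> clf.fit(ds)
>>> y_pred, scores = clf.predict(ds.X, return_decision_function=True)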
CClassifierRandomForest¶
-
class
secml.ml.classifiers.c_classifier_random_forest.
CClassifierRandomForest
(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, random_state=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_sklearn.CClassifierSkLearn
Random Forest classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘random-forest’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class and SkLearn model parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_linear
(self)True for linear classifiers, False otherwise.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
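A minimal usage sketch (same assumptions on CDLRandom and CDataset.X as above):
>>> from secml.data.loader import CDLRandom  # assumed random-dataset loader
>>> from secml.ml.classifiers.c_classifier_random_forest import CClassifierRandomForest
>>> ds = CDLRandom(n_samples=100, random_state=0).load()
>>> rf = CClassifierRandomForest(n_estimators=50, max_depth=5, random_state=0)
>>> rf.fit(ds)
>>> y_pred = rf.predict(ds.X)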
CClassifierRidge¶
-
class
secml.ml.classifiers.c_classifier_ridge.
CClassifierRidge
(alpha=1.0, kernel=None, max_iter=100000.0, class_weight=None, tol=0.0001, fit_intercept=True, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_linear.CClassifierLinear
,secml.ml.classifiers.gradients.mixin_classifier_gradient_ridge.CClassifierGradientRidgeMixin
Ridge Classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘ridge’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the linear classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x[, y])Derivative of the decision function w.r.t.
grad_f_x
(self[, x, y])Computes the gradient of the classifier’s output wrt input.
grad_loss_params
(self, x, y[, loss])Derivative of the classifier loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective w.r.t. the classifier
hessian_tr_params
(self, x[, y])Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_kernel_linear
(self)Return True if the kernel is None or linear.
is_linear
(self)Return True if the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
C
¶ Constant that multiplies the regularization term.
Equal to 1 / alpha.
-
property
alpha
¶ Returns the constant that multiplies the regularization term.
-
property
class_weight
¶ Weight of each training class.
-
property
kernel
¶ Kernel function.
-
property
n_tr_samples
¶ Returns the number of training samples.
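A minimal usage sketch (same assumption on the CDLRandom loader as above); note how the C property mirrors the chosen alpha:
>>> from secml.data.loader import CDLRandom  # assumed random-dataset loader
>>> from secml.ml.classifiers.c_classifier_ridge import CClassifierRidge
>>> ds = CDLRandom(n_samples=100, random_state=0).load()
>>> ridge = CClassifierRidge(alpha=10.0)
>>> ridge.fit(ds)
>>> print(ridge.C)  # C == 1 / alpha, i.e. 0.1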
CClassifierSGD¶
-
class
secml.ml.classifiers.c_classifier_sgd.
CClassifierSGD
(loss, regularizer, kernel=None, alpha=0.01, fit_intercept=True, max_iter=1000, tol=-inf, shuffle=True, learning_rate='optimal', eta0=10.0, power_t=0.5, class_weight=None, warm_start=False, average=False, random_state=None, preprocess=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_linear.CClassifierLinear
,secml.ml.classifiers.gradients.mixin_classifier_gradient_sgd.CClassifierGradientSGDMixin
Stochastic Gradient Descent Classifier.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘sgd’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Trains the linear classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x[, y])Derivative of the decision function w.r.t.
grad_f_x
(self[, x, y])Computes the gradient of the classifier’s output wrt input.
grad_loss_params
(self, x, y[, loss])Derivative of the classifier loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective function w.r.t.
hessian_tr_params
(self, x, y)Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_kernel_linear
(self)Return True if the kernel is None or linear.
is_linear
(self)Return True if the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
C
¶ Constant that multiplies the regularization term.
Equal to 1 / alpha.
-
property
alpha
¶ Returns the constant that multiplies the regularization term.
-
property
average
¶ When set to True, computes the averaged SGD weights. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples.
-
property
class_weight
¶ Weight of each training class.
-
property
eta0
¶ The initial learning rate for the invscaling learning rate. Default is 10.0 (corresponding to sqrt(1.0/sqrt(alpha)), with alpha=0.0001).
-
property
kernel
¶ Kernel function.
-
property
loss
¶ Returns the loss function used by classifier.
-
property
n_tr_samples
¶ Returns the number of training samples.
-
property
power_t
¶ The exponent for inverse scaling learning rate.
-
property
regularizer
¶ Returns the regularizer function used by classifier.
CClassifierSVM¶
-
class
secml.ml.classifiers.c_classifier_svm.
CClassifierSVM
(kernel=None, C=1.0, class_weight=None, preprocess=None, grad_sampling=1.0, store_dual_vars=None)[source]¶ Bases:
secml.ml.classifiers.c_classifier_linear.CClassifierLinear
,secml.ml.classifiers.gradients.mixin_classifier_gradient_svm.CClassifierGradientSVMMixin
Support Vector Machine (SVM) classifier.
- Parameters
- kernelNone or CKernel subclass, optional
Instance of a CKernel subclass to be used for computing similarity between patterns. If None (default), a linear SVM will be created.
- Cfloat, optional
Penalty parameter C of the error term. Default 1.0.
- class_weight{dict, ‘balanced’, None}, optional
Set the parameter C of class i to class_weight[i] * C. If not given (default), all classes are supposed to have weight one. The ‘balanced’ mode uses the values of labels to automatically adjust weights inversely proportional to class frequencies as n_samples / (n_classes * np.bincount(y)).
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- grad_samplingfloat
Percentage in (0.0, 1.0] of the alpha weights to be considered when computing the classifier gradient.
See also
CKernel
Pairwise kernels and metrics.
CClassifierLinear
Common interface for linear classifiers.
Notes
Current implementation relies on
sklearn.svm.SVC
for the training step.
- Attributes
class_type
‘svm’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
decision_function
(self, x[, y])Computes the decision function for each pattern in x.
deepcopy
(self)Returns a deep copy of current class.
estimate_parameters
(self, dataset, …[, …])Estimate the parameters that give the best result with respect to a chosen metric.
fit
(self, dataset[, n_jobs])Fit the SVM classifier.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
grad_f_params
(self, x[, y])Derivative of the decision function w.r.t.
grad_f_x
(self[, x, y])Computes the gradient of the classifier’s output wrt input.
grad_loss_params
(self, x, y[, loss])Derivative of the loss w.r.t.
grad_tr_params
(self, x, y)Derivative of the classifier training objective w.r.t.
hessian_tr_params
(self[, x, y])Hessian of the training objective w.r.t.
is_fitted
(self)Return True if the classifier is trained (fitted).
is_kernel_linear
(self)Return True if the kernel is None or linear.
is_linear
(self)Return True if the classifier is linear.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
predict
(self, x[, return_decision_function])Perform classification of each pattern in x.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
sv_margin
(self[, tol])Margin Support Vectors.
sv_margin_idx
(self[, tol])Indices of Margin Support Vectors.
sv_margin_y
(self[, tol])Margin Support Vectors class (-1/+1).
timed
([msg])Timer decorator.
-
property
C
¶ Penalty parameter C of the error term.
-
property
alpha
¶ Signed coefficients of the SVs in the decision function.
-
property
class_weight
¶ Weight of each training class.
-
fit
(self, dataset, n_jobs=1)[source]¶ Fit the SVM classifier.
We use
sklearn.svm.SVC
for weights and Support Vectors computation. The routine will set alpha, sv, sv_idx and b parameters. For linear SVM (i.e. if kernel is None) we also store the ‘w’ flat vector with each feature’s weight.
If a preprocess has been specified, input is normalized before computing the decision function.
- Parameters
- datasetCDataset
Binary (2-classes) Training set. Must be a
CDataset
instance with patterns data and corresponding labels.
- n_jobsint, optional
Number of parallel workers to use for training the classifier. Default 1. Cannot be higher than processor’s number of cores.
- Returns
- trained_clsCClassifierSVM
Instance of the SVM classifier trained using input dataset.
-
property
grad_sampling
¶ Percentage of samples for approximate gradient.
-
property
kernel
¶ Kernel function (None if a linear classifier).
-
property
n_sv
¶ Return the number of support vectors.
The first and second columns store the number of SVs for the negative and the positive class, respectively.
-
property
store_dual_vars
¶ Controls the store of dual space variables (SVs and alphas).
Controls the storage of the dual space variables (SVs and alphas). By default this is None and dual variables are stored only if kernel is not None. If set to True, dual variables are stored even if kernel is None (linear SVM). If kernel is not None, it cannot be set to False.
-
property
sv
¶ Support Vectors.
-
property
sv_idx
¶ Indices of Support Vectors within the training dataset.
-
sv_margin
(self, tol=1e-06)[source]¶ Margin Support Vectors.
- Parameters
- tolfloat
Alpha value threshold for considering a Support Vector on the margin.
- Returns
- CArray or None
Margin support vectors, 2D CArray. If no margin support vectors are found, returns None.
- indicesCArray or None
Flat array with the indices of the margin support vectors. If no margin support vectors are found, returns None.
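A minimal usage sketch contrasting a linear SVM (kernel=None, the flat w vector is stored) with an RBF-kernel SVM (dual variables sv and alpha are stored); CDLRandom and CDataset.X are assumed as in the previous sketches:
>>> from secml.data.loader import CDLRandom  # assumed random-dataset loader
>>> from secml.ml.classifiers.c_classifier_svm import CClassifierSVM
>>> from secml.ml.kernel.c_kernel_rbf import CKernelRBF
>>> ds = CDLRandom(n_samples=100, random_state=0).load()
>>> lin_svm = CClassifierSVM(C=1.0).fit(ds)  # linear SVM: w and b are available
>>> rbf_svm = CClassifierSVM(kernel=CKernelRBF(gamma=0.1), C=1.0).fit(ds)  # kernel SVM: sv and alpha are stored
>>> y_pred, scores = rbf_svm.predict(ds.X, return_decision_function=True)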
clf_utils¶
-
secml.ml.classifiers.clf_utils.
check_binary_labels
(labels)[source]¶ Check if input labels are binary {0, +1}.
- Parameters
- labelsCArray or int
Binary labels to be checked. As per the PRALib convention, binary labels are {0, +1}.
- Raises
- ValueError
If input labels are not binary.
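For instance (a sketch; the second call is expected to raise, since those labels are not in {0, +1}):
>>> from secml.array import CArray
>>> from secml.ml.classifiers.clf_utils import check_binary_labels
>>> check_binary_labels(CArray([0, 1, 1, 0]))  # passes silently
>>> check_binary_labels(CArray([0, 2]))  # raises ValueError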
-
secml.ml.classifiers.clf_utils.
convert_binary_labels
(labels)[source]¶ Convert input binary labels to {-1, +1}.
- Parameters
- labelsCArray or int
Binary labels in {0, +1} to be converted to {-1, +1}.
- Returns
- converted_labelsCArray or int
Binary labels converted to {-1, +1}.
Examples
>>> from secml.ml.classifiers.clf_utils import convert_binary_labels
>>> from secml.array import CArray
>>> print(convert_binary_labels(0))
-1
>>> print(convert_binary_labels(CArray([0,1,1,1,0,0])))
CArray([-1 1 1 1 -1 -1])
secml.ml.features¶
secml.ml.features.normalization¶
-
class
secml.ml.features.normalization.c_normalizer.
CNormalizer
(preprocess=None)[source]¶ Bases:
secml.ml.features.c_preprocess.CPreProcess
Common interface for normalization preprocessing algorithms.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
preprocess
Inner preprocessor (if any).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
is_linear
(self)Returns True for linear normalizers.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
class
secml.ml.features.normalization.c_normalizer_linear.
CNormalizerLinear
(preprocess=None)[source]¶ Bases:
secml.ml.features.normalization.c_normalizer.CNormalizer
Standardizes array by linearly scaling each feature.
Input data must have one row for each pattern, so the features to scale lie on the array’s columns.
The standardization is given by:
X_scaled = m * X(axis=0) + q
where m, q are specific constants for each normalization.
Warning
Currently only a few linear normalizers work with sparse arrays.
Notes
Only arrays of dense form are supported.
Unlike numpy, we manage flat vectors as 2-dimensional arrays of shape (1, array.size). This means that normalizing a flat vector is equivalent to transforming array.atleast_2d(). To obtain a numpy-style normalization of flat vectors, transpose the array first.
- Attributes
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
is_linear
(self)Returns True for linear normalizers.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
abstract property
b
¶ Returns the bias of the linear normalizer.
-
abstract property
w
¶ Returns the step of the linear normalizer.
-
class
secml.ml.features.normalization.c_normalizer_mean_std.
CNormalizerMeanSTD
(mean=None, std=None, with_std=True, preprocess=None)[source]¶ Bases:
secml.ml.features.normalization.c_normalizer_linear.CNormalizerLinear
Normalize with given mean and standard deviation.
If mean/std are tuples of multiple values, the input is expected to be uniformly splittable into a number of channels equal to the number of values in the tuples. Both tuples must have the same length.
Result will be: (input[channel] - mean[channel]) / std[channel]
If mean and std are None, the values to use as mean and std will be computed from data. The result will be an array with zero mean and/or unit variance (if the with_std parameter is True, default). In this case, the standard deviation calculated by numpy is the maximum likelihood estimate, i.e. the second moment of the set of values about their mean. See also
CArray.std
for more information.
- Parameters
- meanscalar or tuple of scalars or None, optional
Mean to use for normalization. If a tuple, each value represents a channel of the input. The number of features of the training data should be divisible by the number of values of the tuple. If a scalar, the same value is applied to all features. If None, mean is computed from the training data. Cannot be None if std is not None and with_std is True.
- stdscalar or tuple of scalars or None, optional
Standard deviation to use for normalization. If a tuple, each value represents a channel of the input. The number of features of the training data should be divisible by the number of values of the tuple. If a scalar, the same value is applied to all features. If None, std is computed from the training data. Cannot be None if mean is not None and with_std is True.
- with_stdbool, optional
If True (default), normalizer scales array using std too. If False, std parameter is ignored.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘mean-std’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
is_linear
(self)Returns True for linear normalizers.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
property
b
¶ Returns the bias of the linear normalizer.
-
property
mean
¶ Mean to use for normalization.
One value for each training array feature.
-
property
std
¶ Standard deviation to use for normalization.
One value for each training array feature.
-
property
w
¶ Returns the slope of the linear normalizer.
-
property
with_std
¶ True if the normalizer should also scale the array by the standard deviation.
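A minimal usage sketch: the first normalizer estimates mean/std from the data, while the second uses fixed per-channel values (the 4 features are split into 2 channels of 2 features each). CNormalizerMeanSTD is assumed to be importable from secml.ml.features.normalization like the other normalizers in this section:
>>> from secml.array import CArray
>>> from secml.ml.features.normalization import CNormalizerMeanSTD
>>> x = CArray([[1., 0., 4., 2.], [3., 2., 0., 6.]])
>>> x_std = CNormalizerMeanSTD().fit_transform(x)  # zero mean, unit variance per feature
>>> nrm = CNormalizerMeanSTD(mean=(0.5, 0.5), std=(0.25, 0.25))  # fixed per-channel mean/std
>>> x_norm = nrm.fit_transform(x)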
-
class
secml.ml.features.normalization.c_normalizer_minmax.
CNormalizerMinMax
(feature_range=None, preprocess=None)[source]¶ Bases:
secml.ml.features.normalization.c_normalizer_linear.CNormalizerLinear
Standardizes array by scaling each feature to a given range.
This estimator scales and translates each feature individually such that it lies in the given range on the training array, e.g. between zero and one.
Input data must have one row for each pattern, so the features to scale lie on the array’s columns.
The standardization is given by:
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
- Parameters
- feature_rangetuple of scalars or None, optional
Desired range of transformed data, tuple of 2 scalars where feature_range[0] is the minimum and feature_range[1] is the maximum value. If feature_range is None, features will be scaled using (0., 1.) range.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
Notes
Unlike numpy, we manage flat vectors as 2-dimensional arrays of shape (1, array.size). This means that normalizing a flat vector is equivalent to transforming array.atleast_2d(). To obtain a numpy-style normalization of flat vectors, transpose the array first.
Examples
>>> from secml.array import CArray
>>> from secml.ml.features.normalization import CNormalizerMinMax
>>> array = CArray([[1., -1., 2.], [2., 0., 0.], [0., 1., -1.]])
>>> print(CNormalizerMinMax().fit_transform(array))
CArray([[0.5 0. 1. ]
 [1. 0.5 0.333333]
 [0. 1. 0. ]])
>>> print(CNormalizerMinMax(feature_range=(-1,1)).fit_transform(array))
CArray([[ 0. -1. 1. ]
 [ 1. 0. -0.333333]
 [-1. 1. -1. ]])
- Attributes
class_type
‘min-max’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
is_linear
(self)Returns True for linear normalizers.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
property
b
¶ Returns the bias of the linear normalizer.
-
property
feature_range
¶ Desired range of transformed data.
-
property
max
¶ Maximum of training array per feature.
- Returns
- train_maxCArray
Flat dense array with the maximum of each feature of the training array. If the scaler has not been trained yet, returns None.
-
property
min
¶ Minimum of training array per feature.
- Returns
- train_minCArray
Flat dense array with the minimum of each feature of the training array. If the scaler has not been trained yet, returns None.
-
property
w
¶ Returns the slope of the linear normalizer.
-
class
secml.ml.features.normalization.c_normalizer_unitnorm.
CNormalizerUnitNorm
(order=2, preprocess=None)[source]¶ Bases:
secml.ml.features.normalization.c_normalizer.CNormalizer
Normalize patterns individually to unit norm.
Each pattern (i.e. each row of the data matrix) with at least one non-zero component is rescaled independently of the other patterns so that its norm (l1 or l2) equals one.
This normalizer requires no training routine, so using the fit_transform() method is suggested for clarity. The fit() method, which does nothing, should be used only to streamline a pipelined environment.
- Parameters
- order{1, 2}, optional
Order of the norm used to normalize each pattern. Only the 1 (‘l1’) and 2 (‘l2’) norms are supported; 2 (‘l2’) is the default. For sparse arrays, only the 2nd-order norm is supported.
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
Notes
Unlike numpy, we manage flat vectors as 2-dimensional arrays of shape (1, array.size). This means that normalizing a flat vector is equivalent to transforming array.atleast_2d(). To obtain a numpy-style normalization of flat vectors, transpose the array first.
Examples
>>> from secml.array import CArray
>>> from secml.ml.features.normalization import CNormalizerUnitNorm
>>> array = CArray([[1., -1., 2.], [2., 0., 0.], [0., 1., -1.]])
>>> dense_normalized = CNormalizerUnitNorm().fit_transform(array)
>>> print(dense_normalized)
CArray([[ 0.408248 -0.408248 0.816497]
 [ 1. 0. 0. ]
 [ 0. 0.707107 -0.707107]])
>>> print(CNormalizerUnitNorm(order=1).fit_transform(array))
CArray([[ 0.25 -0.25 0.5 ]
 [ 1. 0. 0. ]
 [ 0. 0.5 -0.5 ]])
- Attributes
class_type
‘unit-norm’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
is_linear
(self)Returns True for linear normalizers.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
property
norm
¶ Returns the norm of each training array’s patterns.
-
property
order
¶ Returns the order of the norm used for patterns normalization.
secml.ml.features.reduction¶
-
class
secml.ml.features.reduction.c_reducer.
CReducer
(preprocess=None)[source]¶ Bases:
secml.ml.features.c_preprocess.CPreProcess
Interface for feature dimensionality reduction algorithms.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
preprocess
Inner preprocessor (if any).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
class
secml.ml.features.reduction.c_reducer_lda.
CLDA
(n_components=None, preprocess=None)[source]¶ Bases:
secml.ml.features.reduction.c_reducer.CReducer
Linear Discriminant Analysis (LDA).
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘lda’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
property
classes
¶ Unique targets used for training.
-
property
eigenvec
¶ Eigenvectors estimated from the training data, stored as a matrix of shape (n_eigenvectors, n_features).
-
property
mean
¶ Per-feature empirical mean, estimated from the training data.
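A minimal usage sketch (CLDA is assumed to be importable from secml.ml.features.reduction, mirroring the normalization package); note that LDA is supervised, so labels must be passed to fit_transform, and at most n_classes - 1 components can be kept:
>>> from secml.array import CArray
>>> from secml.ml.features.reduction import CLDA  # assumed package-level import
>>> x = CArray([[1., 2., 0.], [2., 1., 1.], [6., 7., 5.], [7., 8., 4.]])
>>> y = CArray([0, 0, 1, 1])
>>> lda = CLDA(n_components=1)
>>> x_red = lda.fit_transform(x, y)  # 4 patterns projected onto 1 discriminant direction
>>> print(lda.classes)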
-
class
secml.ml.features.reduction.c_reducer_pca.
CPCA
(n_components=None, preprocess=None)[source]¶ Bases:
secml.ml.features.reduction.c_reducer.CReducer
Principal Component Analysis (PCA).
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
‘pca’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
property
components
¶ Eigenvectors of inverse training array.
-
property
eigenval
¶ Eigenvalues estimated from the training data.
-
property
eigenvec
¶ Eigenvectors estimated from the training data.
-
property
explained_variance
¶ Variance explained by each of the selected components.
-
property
explained_variance_ratio
¶ Percentage of variance explained by each of the selected components.
If n_components is None, then all components are stored and the sum of explained variances is equal to 1.0
-
property
mean
¶ Per-feature empirical mean, estimated from the training data.
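A minimal usage sketch (CPCA is assumed to be importable from secml.ml.features.reduction):
>>> from secml.array import CArray
>>> from secml.ml.features.reduction import CPCA  # assumed package-level import
>>> x = CArray([[1., 0., 2.], [2., 1., 0.], [0., 2., 1.], [3., 1., 1.]])
>>> pca = CPCA(n_components=2)
>>> x_red = pca.fit_transform(x)  # project onto the 2 leading components
>>> x_back = pca.revert(x_red)  # approximate mapping back to the original space
>>> print(pca.explained_variance_ratio)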
CPreProcess¶
-
class
secml.ml.features.c_preprocess.
CPreProcess
(preprocess=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Common interface for feature preprocessing algorithms.
- Parameters
- preprocessCPreProcess or str or None, optional
Features preprocess to be applied to input data. Can be a CPreProcess subclass or a string with the type of the desired preprocessor. If None, input data is used as is.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
preprocess
Inner preprocessor (if any).
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
create_chain
(class_items, kwargs_list)Creates a chain of preprocessors.
deepcopy
(self)Returns a deep copy of current class.
fit
(self, x[, y])Fit transformation algorithm.
fit_transform
(self, x[, y])Fit preprocessor using data and then transform data.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x[, w])Returns the preprocessor gradient wrt data.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
revert
(self, x)Revert data to original form.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
transform
(self, x)Apply the transformation algorithm on data.
-
static
create_chain
(class_items, kwargs_list)[source]¶ Creates a chain of preprocessors.
- Parameters
- class_itemslist of str or class instances
A list of mixed class types or CPreProcess instances. The object created with the first type/instance of the list will be the preprocess of the object created using the second type/instance in the list and so on until the end of the list.
- kwargs_listlist of dict
A list of dictionaries, one for each item in class_items, to specify any additional argument for each specific preprocessor.
- Returns
- CPreProcess
The chain of preprocessors.
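A usage sketch of create_chain, using class_type strings documented in this section (‘min-max’, ‘pca’): the data is first scaled to [0, 1] and then reduced by PCA.
>>> from secml.array import CArray
>>> from secml.ml.features.c_preprocess import CPreProcess
>>> x = CArray([[1., 0., 2.], [2., 1., 0.], [0., 2., 1.], [3., 1., 1.]])
>>> chain = CPreProcess.create_chain(['min-max', 'pca'], [{'feature_range': (0, 1)}, {'n_components': 2}])
>>> x_out = chain.fit_transform(x)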
-
fit
(self, x, y=None)[source]¶ Fit transformation algorithm.
- Parameters
- xCArray
Array to be used for training. Shape of input array depends on the algorithm itself.
- yCArray or None, optional
Flat array with the label of each pattern. Can be None if not required by the preprocessing algorithm.
- Returns
- CPreProcess
Instance of the trained preprocessor.
-
fit_transform
(self, x, y=None)[source]¶ Fit preprocessor using data and then transform data.
This method is equivalent to call fit(data) and transform(data) in sequence, but it’s useful when data is both the training array and the array to be transformed.
- Parameters
- xCArray
Array to be transformed. Each row must correspond to one single pattern, so each column is a different feature.
- yCArray or None, optional
Flat array with the label of each pattern. Can be None if not required by the preprocessing algorithm.
- Returns
- CArray
Transformed input data.
-
gradient
(self, x, w=None)[source]¶ Returns the preprocessor gradient wrt data.
- Parameters
- xCArray
Data array, 2-dimensional or flat (raveled).
- wCArray or None, optional
If CArray, will be left-multiplied to the gradient of the preprocessor.
- Returns
- gradientCArray
Gradient of the preprocessor wrt input data. Array of shape (x.shape[1], x.shape[1]) if w is None, otherwise an array of shape (w.shape[0], x.shape[1]). If w.shape[0] is 1, result will be raveled.
-
property
preprocess
¶ Inner preprocessor (if any).
-
revert
(self, x)[source]¶ Revert data to original form.
- Parameters
- xCArray
Transformed array to be reverted to original form. Shape of input array depends on the algorithm itself.
- Returns
- CArray
Original input data.
Warning
Reverting a transformed array is not always possible. See description of each preprocessor for details.
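Since the classifiers documented above accept a preprocess argument (either a CPreProcess instance or a class_type string), a preprocessor can also be attached directly to a model. A sketch, with the same CDLRandom assumption as in the classifier sketches:
>>> from secml.data.loader import CDLRandom  # assumed random-dataset loader
>>> from secml.ml.classifiers.c_classifier_svm import CClassifierSVM
>>> ds = CDLRandom(n_samples=100, random_state=0).load()
>>> svm = CClassifierSVM(preprocess='min-max')  # input is scaled to [0, 1] before training and prediction
>>> svm.fit(ds)
>>> y_pred = svm.predict(ds.X)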
secml.ml.kernel¶
CKernel¶
-
class
secml.ml.kernel.c_kernel.
CKernel
(batch_size=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Abstract class that defines basic methods for kernels.
A kernel is a pairwise metric that computes the distance between sets of patterns.
Kernels can be considered similarity measures, i.e. s(a, b) > s(a, c) if objects a and b are considered “more similar” than objects a and c. A kernel must also be positive semi-definite.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
gradient
(self, x, v)[source]¶ Calculates kernel gradient wrt vector ‘v’.
- Parameters
- xCArray
First array of shape (n_x, n_features).
- vCArray
Second array of shape (n_features, ) or (1, n_features).
- Returns
- kernel_gradientCArray
Kernel gradient of x with respect to vector v. Array of shape (n_x, n_features) if n_x > 1, else a flattened array of shape (n_features, ).
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel import CKernelRBF
>>> array = CArray([[15,25],[45,55]])
>>> vector = CArray([2,5])
>>> print(CKernelRBF(gamma=1e-4).gradient(array, vector))
CArray([[0.002456 0.003779]
 [0.005567 0.006473]])
>>> print(CKernelRBF().gradient(vector, vector))
CArray([0. 0.])
-
k
(self, x, y=None)[source]¶ Compute kernel between x and y.
- Parameters
- xCArray
First array of shape (n_x, n_features).
- yCArray, optional
Second array of shape (n_y, n_features). If not specified, the kernel k(x,x) is computed.
- Returns
- kernelCArray or scalar
Kernel between x and y. Array of shape (n_x, n_y) or scalar if both x and y are vector-like.
Notes
We use a batching strategy to optimize memory consumption during kernel computation. However, the parameter batch_size should be chosen wisely: a small cache can highly improve memory consumption but can significantly slow down the computation process.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel import CKernelRBF
>>> array1 = CArray([[15,25],[45,55]])
>>> array2 = CArray([[10,20],[40,50]])
>>> print(CKernelRBF().k(array1, array2))
CArray([[1.92875e-22 0.00000e+00]
 [0.00000e+00 1.92875e-22]])
>>> print(CKernelRBF().k(array1))
CArray([[1. 0.]
 [0. 1.]])
>>> vector = CArray([15,25])
>>> print(CKernelRBF().k(vector, array1))
CArray([[1. 0.]])
>>> print(CKernelRBF().k(array1, vector))
CArray([[1.]
 [0.]])
>>> print(CKernelRBF().k(vector, vector))
1.0
CKernelChebyshevDistance¶
-
class
secml.ml.kernel.c_kernel_chebyshev_distance.
CKernelChebyshevDistance
(gamma=1.0, batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Chebyshev distances kernel.
Given matrices X and Y, this is computed as:
K(x, y) = max(|x - y|)
for each pair of rows in X and in Y.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_chebyshev_distance import CKernelChebyshevDistance
>>> print(CKernelChebyshevDistance().k(CArray([[1,2],[3,4]]), CArray([[5,6],[7,8]])))
CArray([[4. 6.]
 [2. 4.]])
>>> print(CKernelChebyshevDistance().k(CArray([[1,2],[3,4]])))
CArray([[0. 2.]
 [2. 0.]])
- Attributes
class_type
‘chebyshev-dist’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
property
gamma
¶ Gamma parameter.
CKernelEuclidean¶
-
class
secml.ml.kernel.c_kernel_euclidean.
CKernelEuclidean
(batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Euclidean distances kernel.
Given matrices X and Y, this is computed by:
K(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))
for each pair of rows in X and in Y. If parameter squared is True (default False), sqrt() operation is avoided.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_euclidean import CKernelEuclidean
>>> print(CKernelEuclidean().k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[20.124612 47.801674]
 [17.464249 45. ]])
>>> print(CKernelEuclidean().k(CArray([[1,2],[3,4]])))
CArray([[0. 2.828427]
 [2.828427 0. ]])
- Attributes
class_type
‘euclidean’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v[, squared])Calculates Euclidean distances kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
gradient
(self, x, v, squared=False)[source]¶ Calculates Euclidean distances kernel gradient wrt vector ‘v’.
The gradient of the Euclidean distance kernel is given by:
dK(x,v)/dv = -(x - v) / k(x,v)   if squared = False (default)
dK(x,v)/dv = -2 * (x - v)        if squared = True
- Parameters
- xCArray
First array of shape (n_x, n_features).
- vCArray
Second array of shape (n_features, ) or (1, n_features).
- squaredbool, optional
If True, return squared Euclidean distances. Default False
- Returns
- kernel_gradientCArray
Kernel gradient of x with respect to vector v. Array of shape (n_x, n_features) if n_x > 1, else a flattened array of shape (n_features, ).
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_euclidean import CKernelEuclidean
>>> array = CArray([[15,25],[45,55]])
>>> vector = CArray([2,5])
>>> print(CKernelEuclidean().gradient(array, vector))
CArray([[-0.544988 -0.838444]
 [-0.652039 -0.758185]])
>>> print(CKernelEuclidean().gradient(array, vector, squared=True))
CArray([[ -26 -40]
 [ -86 -100]])
>>> print(CKernelEuclidean().gradient(vector, vector))
CArray([0. 0.])
CKernelHamming¶
-
class
secml.ml.kernel.c_kernel_hamming.
CKernelHamming
(batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Hamming distance kernel.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_hamming import CKernelHamming
>>> print(CKernelHamming().k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[1. 1.]
 [1. 1.]])
>>> print(CKernelHamming().k(CArray([[1,2],[3,4]])))
CArray([[0. 1.]
 [1. 0.]])
- Attributes
class_type
‘hamming’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
CKernelHistIntersect¶
-
class
secml.ml.kernel.c_kernel_histintersect.
CKernelHistIntersect
(batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Histogram Intersection Kernel.
Given matrices X and Y, this is computed by:
K(x, y) = sum^n_i ( min(x[i], y[i]) )
for each pair of rows in X and in Y.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_histintersect import CKernelHistIntersect
>>> print(CKernelHistIntersect().k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[3. 3.]
 [7. 7.]])
>>> print(CKernelHistIntersect().k(CArray([[1,2],[3,4]])))
CArray([[3. 3.]
 [3. 7.]])
- Attributes
class_type
‘hist-intersect’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
CKernelLaplacian¶
-
class
secml.ml.kernel.c_kernel_laplacian.
CKernelLaplacian
(gamma=1.0, batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Laplacian Kernel.
Given matrices X and Y, this is computed by:
K(x, y) = exp(-gamma |x-y|)
for each pair of rows in X and in Y.
- Parameters
- gammafloat
Default is 1.0.
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_laplacian import CKernelLaplacian
>>> print(CKernelLaplacian(gamma=0.01).k(CArray([[1,2],[3,4]]), CArray([[10,0],[0,40]])))
CArray([[0.895834 0.677057]
 [0.895834 0.677057]])
>>> print(CKernelLaplacian().k(CArray([[1,2],[3,4]])))
CArray([[1. 0.018316]
 [0.018316 1. ]])
- Attributes
class_type
‘laplacian’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates laplacian kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
property
gamma
¶ Gamma parameter.
-
gradient
(self, x, v)[source]¶ Calculates laplacian kernel gradient wrt vector ‘v’.
The gradient of laplacian kernel is given by:
dK(x,v)/dv = gamma * k(x,v) * sign(x - v)
- Parameters
- xCArray
First array of shape (n_x, n_features).
- vCArray
Second array of shape (n_features, ) or (1, n_features).
- Returns
- kernel_gradientCArray
Kernel gradient of x with respect to vector v. Array of shape (n_x, n_features) if n_x > 1, else a flattened array of shape (n_features, ).
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_laplacian import CKernelLaplacian
>>> array = CArray([[15,0], [0,55]])
>>> vector = CArray([2,5])
>>> print(CKernelLaplacian(gamma=0.01).gradient(array, vector))
CArray([[ 0.008353 -0.008353]
 [-0.005945 0.005945]])
>>> print(CKernelLaplacian().gradient(vector, vector))
CArray([0. 0.])
CKernelLinear¶
-
class
secml.ml.kernel.c_kernel_linear.
CKernelLinear
(batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Linear kernel.
Given matrices X and Y, this is computed by:
K(x, y) = x * y^T
for each pair of rows in X and in Y.
- Parameters
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_linear import CKernelLinear
>>> print(CKernelLinear().k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[ 50. 110.]
 [110. 250.]])
>>> print(CKernelLinear().k(CArray([[1,2],[3,4]])))
CArray([[ 5. 11.]
 [11. 25.]])
- Attributes
class_type
‘linear’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
CKernelPoly¶
-
class
secml.ml.kernel.c_kernel_poly.
CKernelPoly
(degree=2, gamma=1.0, coef0=1.0, batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Polynomial kernel.
Given matrices X and Y, this is computed by:
K(x, y) = (coef0 + gamma * <x, y>)^degree
for each pair of rows in X and in Y.
- Parameters
- degreeint, optional
Kernel degree. Default 2.
- gammafloat, optional
Free parameter to be used for balancing. Default 1.0.
- coef0float, optional
Free parameter used for trading off the influence of higher-order versus lower-order terms in the kernel. Default 1.0.
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_poly import CKernelPoly
>>> print(CKernelPoly(degree=3, gamma=0.001, coef0=2).k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[ 8.615125  9.393931]
 [ 9.393931 11.390625]])
>>> print(CKernelPoly().k(CArray([[1,2],[3,4]])))
CArray([[ 36. 144.]
 [144. 676.]])
- Attributes
class_type
‘poly’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
property
coef0
¶ Coef0 parameter.
-
property
degree
¶ Degree parameter.
-
property
gamma
¶ Gamma parameter.
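As a sanity check on the polynomial kernel formula above, the following sketch (plain NumPy and scikit-learn, not the secml API) recomputes the values shown in the CKernelPoly example:

import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[10.0, 20.0], [30.0, 40.0]])
degree, gamma, coef0 = 3, 0.001, 2.0

# K(x, y) = (coef0 + gamma * <x, y>)^degree for each pair of rows
K_manual = (coef0 + gamma * X.dot(Y.T)) ** degree
K_sklearn = polynomial_kernel(X, Y, degree=degree, gamma=gamma, coef0=coef0)
print(np.allclose(K_manual, K_sklearn))  # True
print(K_manual.round(6))                 # [[ 8.615125  9.393931] [ 9.393931 11.390625]]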
CKernelRBF¶
-
class
secml.ml.kernel.c_kernel_rbf.
CKernelRBF
(gamma=1.0, batch_size=None)[source]¶ Bases:
secml.ml.kernel.c_kernel.CKernel
Radial basis function (RBF) kernel.
Given matrices X and Y, this is computed by:
K(x, y) = exp(-gamma ||x-y||^2)
for each pair of rows in X and in Y.
- Parameters
- gammafloat
Free parameter to be used for balancing; equals 0.5 * sigma^-2, i.e. 1 / (2 * sigma^2), in the standard formulation of the RBF kernel. Default is 1.0.
- batch_sizeint or None, optional
Size of the batch used for kernel computation. Default None.
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_rbf import CKernelRBF
>>> print(CKernelRBF(gamma=0.001).k(CArray([[1,2],[3,4]]), CArray([[10,20],[30,40]])))
CArray([[0.666977 0.101774]
 [0.737123 0.131994]])
>>> print(CKernelRBF().k(CArray([[1,2],[3,4]])))
CArray([[1.000000e+00 3.354626e-04]
 [3.354626e-04 1.000000e+00]])
- Attributes
class_type
‘rbf’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, v)Calculates RBF kernel gradient wrt vector ‘v’.
k
(self, x[, y])Compute kernel between x and y.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
similarity
(self, x[, y])Computes kernel.
timed
([msg])Timer decorator.
-
property
gamma
¶ Gamma parameter.
-
gradient
(self, x, v)[source]¶ Calculates RBF kernel gradient wrt vector ‘v’.
The gradient of the RBF kernel is given by:
dK(x,v)/dv = 2 * gamma * k(x,v) * (x - v)
- Parameters
- xCArray or array_like
First array of shape (n_x, n_features).
- vCArray or array_like
Second array of shape (n_features, ) or (1, n_features).
- Returns
- kernel_gradientCArray
Kernel gradient of x with respect to vector v. Array of shape (n_x, n_features) if n_x > 1, else a flattened array of shape (n_features, ).
Examples
>>> from secml.array import CArray
>>> from secml.ml.kernel.c_kernel_rbf import CKernelRBF
>>> array = CArray([[15,25],[45,55]])
>>> vector = CArray([2,5])
>>> print(CKernelRBF(gamma=1e-4).gradient(array, vector))
CArray([[0.002456 0.003779]
 [0.005567 0.006473]])
>>> print(CKernelRBF().gradient(vector, vector))
CArray([0. 0.])
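The RBF gradient values in the example above can be reproduced directly from the formula. A minimal sketch in plain NumPy/scikit-learn (outside the secml API):

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

gamma = 1e-4
X = np.array([[15.0, 25.0], [45.0, 55.0]])
v = np.array([2.0, 5.0])

K = rbf_kernel(X, v.reshape(1, -1), gamma=gamma)  # exp(-gamma * ||x - v||^2), shape (2, 1)
grad = 2 * gamma * K * (X - v)                    # dK(x, v)/dv, shape (2, 2)
print(grad.round(6))  # [[0.002456 0.003779] [0.005567 0.006473]]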
secml.ml.peval¶
secml.ml.peval.metrics¶
-
class
secml.ml.peval.metrics.c_metric.
CMetric
[source]¶ Bases:
secml.core.c_creator.CCreator
Performance evaluation metrics.
Utility functions to measure classification performance. Some metrics might require probability estimates of the positive class, confidence values, or binary decision values.
Each metric can use y_true (true ground labels), y_pred (predicted labels), score (predicted scores), or other data as inputs. Check the documentation of each metric for more information.
Examples
>>> from secml.ml.peval.metrics import CMetric
>>> from secml.array import CArray
>>> peval = CMetric.create('accuracy')
>>> print(peval.performance_score(y_true=CArray([0, 1, 2, 3]), y_pred=CArray([0, 1, 1, 3])))
0.75
>>> peval = CMetric.create('tpr-at-fpr', fpr=0.1)
>>> print(peval.performance_score(y_true=CArray([0, 1, 0, 0]), score=CArray([1, 1, 0, 0])))
0.3
- Attributes
- best_valuebest metric value. This is commonly a scalar (0.0 or 1.0).
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= None¶
-
performance_score
(self, y_true=None, y_pred=None, score=None, **kwargs)[source]¶ Compute the performance metric.
- Each metric can use as input either:
y_true (true ground labels)
y_pred (predicted labels)
score (predicted scores)
or any other data
Check the documentation of each metric for more information.
If not all the required data is passed, TypeError will be raised.
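As a usage sketch, based only on the calls documented above, the same set of predictions can be evaluated with several metrics created by name via CMetric.create (the metric identifiers used here are the class_type strings listed in this section):

from secml.array import CArray
from secml.ml.peval.metrics import CMetric

y_true = CArray([0, 1, 2, 3])
y_pred = CArray([0, 1, 1, 3])

# Evaluate the same predictions with several metrics created by name
for name in ('accuracy', 'precision', 'recall', 'f1', 'test-error'):
    metric = CMetric.create(name)
    print(name, metric.performance_score(y_true=y_true, y_pred=y_pred))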
-
class
secml.ml.peval.metrics.c_metric_accuracy.
CMetricAccuracy
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Accuracy.
Accuracy is the fraction (in the [0, 1] range) of correctly predicted labels.
- The metric uses:
y_true (true ground labels)
y_pred (predicted labels)
Examples
>>> from secml.ml.peval.metrics import CMetricAccuracy
>>> from secml.array import CArray
>>> peval = CMetricAccuracy()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.75
- Attributes
class_type
‘accuracy’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
-
class
secml.ml.peval.metrics.c_metric_auc.
CMetricAUC
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Area Under (ROC) Curve.
AUC is computed using the trapezoidal rule.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Notes
This implementation is restricted to the binary classification task.
Examples
>>> from secml.ml.peval.metrics import CMetricAUC
>>> from secml.array import CArray
>>> peval = CMetricAUC()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
- Attributes
class_type
‘auc’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
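To illustrate the trapezoidal rule mentioned above: the AUC is the area under the piecewise-linear ROC curve, obtained by summing trapezoids between consecutive (fpr, tpr) points. The following is an equivalent NumPy/scikit-learn sketch, not the secml implementation itself:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1])
score = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, _ = roc_curve(y_true, score)
auc_trapz = np.trapz(tpr, fpr)  # trapezoidal rule over the ROC points
print(auc_trapz, roc_auc_score(y_true, score))  # 0.75 0.75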
-
class
secml.ml.peval.metrics.c_metric_auc_wmw.
CMetricAUCWMW
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Area Under (ROC) Curve with Wilcoxon-Mann-Whitney statistic.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Notes
This implementation is restricted to the binary classification task.
Examples
>>> from secml.ml.peval.metrics import CMetricAUCWMW
>>> from secml.array import CArray
>>> peval = CMetricAUCWMW()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
- Attributes
class_type
‘auc-wmw’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
-
class
secml.ml.peval.metrics.c_confusion_matrix.
CMetricConfusionMatrix
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
- Attributes
- best_value
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
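No description is attached to CMetricConfusionMatrix in this reference. Conceptually, it evaluates the confusion matrix of y_pred against y_true; the following hypothetical illustration uses scikit-learn's confusion_matrix directly, since the exact output format of the secml class is not documented here:

from sklearn.metrics import confusion_matrix

y_true = [0, 1, 2, 3]
y_pred = [0, 1, 1, 3]
# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred))
# [[1 0 0 0]
#  [0 1 0 0]
#  [0 1 0 0]
#  [0 0 0 1]]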
-
class
secml.ml.peval.metrics.c_metric_f1.
CMetricF1
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: F1.
The F1 score can be interpreted as a weighted average of precision and recall, where the F1 score reaches its best value at 1 and its worst at 0.
The relative contributions of precision and recall to the F1 score are equal. The formula for the F1 score is:
F1 = 2 * (precision * recall) / (precision + recall)
- The metric uses:
y_true (true ground labels)
y_pred (predicted labels)
Examples
>>> from secml.ml.peval.metrics import CMetricF1
>>> from secml.array import CArray
>>> peval = CMetricF1()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.6666666666666666
- Attributes
class_type
‘f1’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
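The 0.666... value in the example above is consistent with averaging the per-class F1 scores (on this balanced example, macro and support-weighted averaging coincide). A worked check with scikit-learn, outside the secml API:

from sklearn.metrics import f1_score

y_true = [0, 1, 2, 3]
y_pred = [0, 1, 1, 3]
# Per-class F1: class 0 -> 1.0, class 1 -> 2/3, class 2 -> 0.0, class 3 -> 1.0
print(f1_score(y_true, y_pred, average=None))     # [1. 0.66666667 0. 1.]
print(f1_score(y_true, y_pred, average='macro'))  # 0.6666666666666666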
-
class
secml.ml.peval.metrics.c_metric_mae.
CMetricMAE
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Mean Absolute Error.
Regression loss between the ground truth (correct labels) and the predicted regression score.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Examples
>>> from secml.ml.peval.metrics import CMetricMAE
>>> from secml.array import CArray
>>> peval = CMetricMAE()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.25
- Attributes
class_type
‘mae’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 0.0¶
-
class
secml.ml.peval.metrics.c_metric_mse.
CMetricMSE
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Mean Squared Error.
Regression loss between the ground truth (correct labels) and the predicted regression score.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Examples
>>> from secml.ml.peval.metrics import CMetricMSE
>>> from secml.array import CArray
>>> peval = CMetricMSE()
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.25
- Attributes
class_type
‘mse’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 0.0¶
-
class
secml.ml.peval.metrics.c_metric_pauc.
CMetricPartialAUC
(fpr=0.01, n_points=1000)[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Partial Area Under (ROC) Curve.
The ROC curve is only considered between 0 and the given fpr (False Positive Rate).
AUC is computed using the trapezoidal rule.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Notes
This implementation is restricted to the binary classification task.
Examples
>>> from secml.ml.peval.metrics import CMetricPartialAUC
>>> from secml.array import CArray
>>> peval = CMetricPartialAUC(fpr=0.5)
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.125
- Attributes
class_type
‘pauc’Defines class type.
- fprfloat
Desired False Positive Rate in the interval [0,1]. Default 0.01 (1%)
- n_pointsint
Number of points to be used when interpolating the partial ROC. More points mean more accurate values but slower computation. Default 1000.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
-
class
secml.ml.peval.metrics.c_metric_precision.
CMetricPrecision
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Precision.
The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
- The metric uses:
y_true (true ground labels)
y_pred (predicted labels)
Examples
>>> from secml.ml.peval.metrics import CMetricPrecision
>>> from secml.array import CArray
>>> peval = CMetricPrecision()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.625
- Attributes
class_type
‘precision’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
-
class
secml.ml.peval.metrics.c_metric_recall.
CMetricRecall
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Recall (True Positive Rate).
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. This is equivalent to True Positive Rate.
- The metric uses:
y_true (true ground labels)
y_pred (predicted labels)
Examples
>>> from secml.ml.peval.metrics import CMetricRecall
>>> from secml.array import CArray
>>> peval = CMetricRecall()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.75
- Attributes
class_type
‘recall’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
-
class
secml.ml.peval.metrics.c_roc.
CBaseRoc
[source]¶ Bases:
object
Computes the receiver operating characteristic curve, or ROC curve.
This base class manages a single classifier output (a single repetition).
See also
CRoc
class that fully supports ROC repetitions.
Methods
compute
(self, y_true, score[, positive_label])Compute TPR/FPR for classifier output.
reset
(self)Reset stored data.
-
compute
(self, y_true, score, positive_label=None)[source]¶ Compute TPR/FPR for classifier output.
- Parameters
- y_trueCArray
Flat array with true binary labels in range {0, 1}, one for each pattern. If labels are not binary, positive_label should be explicitly given.
- scoreCArray
Flat array with target scores for each pattern, can either be probability estimates of the positive class or confidence values.
- positive_labelint, optional
Label to consider as positive (others are considered negative).
- Returns
- single_rocCBaseRoc
Instance of the roc curve (tpr, fpr, th).
-
property
fpr
¶ False Positive Rates.
- Flat array with increasing False Positive Rates. Element i
is the False Positive Rate of predictions with score >= thresholds[i].
-
property
th
¶ Thresholds.
- Flat array with decreasing thresholds on the decision function
used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.
-
property
tpr
¶ True Positive Rates.
- Flat array with increasing True Positive Rates. Element i
is the True Positive Rate of predictions with score >= thresholds[i].
-
class
secml.ml.peval.metrics.c_roc.
CRoc
[source]¶ Bases:
secml.ml.peval.metrics.c_roc.CBaseRoc
Computes the receiver operating characteristic curve, or ROC curve.
“A receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot which illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of true positives out of the positives (TPR = True Positive Rate) vs. the fraction of false positives out of the negatives (FPR = False Positive Rate), at various threshold settings. TPR is also known as sensitivity, and FPR is one minus the specificity or true negative rate.”
This class manages different repetitions of the same classification output.
- Attributes
fpr
False Positive Rates.
has_mean
True if average has been computed for all ROCs.
has_std_dev
True if standard deviation has been computed for all ROCs.
mean_fpr
Averaged False Positive Rates.
mean_tpr
Averaged True Positive Rates.
n_reps
Return the number of computed ROC.
std_dev_tpr
Standard deviation of True Positive Rates.
th
Thresholds.
tpr
True Positive Rates.
Methods
average
(self[, n_points, return_std])Compute the average of computed ROC curves.
compute
(self, y_true, score[, positive_label])Compute ROC curve using input True labels and Classification Scores.
reset
(self)Reset stored data.
-
average
(self, n_points=1000, return_std=False)[source]¶ Compute the average of computed ROC curves.
The average ROC is reset each time compute() is called.
- Parameters
- n_pointsint, optional
Default 1000, is the number of points to be used for interpolation.
- return_stdbool, optional
If True, standard deviation of True Positive Rates will be returned.
- Returns
- mean_fprCArray
- Flat array with increasing False Positive Rates averaged over all
available repetitions. Element i is the false positive rate of predictions with score >= thresholds[i].
- mean_tprCArray
- Flat array with increasing True Positive Rates averaged over all
available repetitions. Element i is the true positive rate of predictions with score >= thresholds[i].
- std_dev_tprCArray
Flat array with standard deviation of True Positive Rates. Only if return_std is True.
-
compute
(self, y_true, score, positive_label=None)[source]¶ Compute ROC curve using input True labels and Classification Scores.
For multi-class data, the label to be considered as positive should be explicitly specified.
If y_true and score are both lists (with same length), one roc curve for each pair is returned. If y_true is a single array, one roc curve for each (y_true, score[i]) is returned.
Each time the function is called, the result is appended to the tpr, fpr and th class attributes. The returned ROCs are only the ones associated with the latest input data.
- Parameters
- y_trueCArray, list
List of flat arrays with true binary labels in range {0, 1} for each pattern, or a single array. If a single array, one curve is returned for each (y_true, score[i]) pair. If labels are not binary, positive_label should be explicitly given.
- scoreCArray, list
List of flat arrays with target scores for each pattern; these can either be probability estimates of the positive class or confidence values. If y_true is a single array, one curve is returned for each (y_true, score[i]) pair.
- positive_labelint, optional
Label to consider as positive (others are considered negative).
- Returns
- fprCArray or list
- Flat array with increasing False Positive Rates or a list with
one array for each repetition. Element i is the False Positive Rate of predictions with score >= thresholds[i]
- tprCArray or list
- Flat array with increasing True Positive Rates or a list with
one array for each repetition. Element i is the True Positive Rate of predictions with score >= thresholds[i].
- thCArray or list
- Flat array with decreasing thresholds on the decision function
used to compute fpr and tpr or a list with one array for each repetition. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.
-
property
fpr
¶ False Positive Rates.
- Flat array with increasing False Positive Rates or a list with
one array for each repetition. Element i is the False Positive Rate of predictions with score >= thresholds[i].
-
property
has_mean
¶ True if average has been computed for all ROCs.
-
property
has_std_dev
¶ True if standard deviation has been computed for all ROCs.
-
property
mean_fpr
¶ Averaged False Positive Rates.
- Flat array with increasing False Positive Rates averaged over all
available repetitions. Element i is the false positive rate of predictions with score >= thresholds[i].
-
property
mean_tpr
¶ Averaged True Positive Rates.
- Flat array with increasing True Positive Rates averaged over all
available repetitions. Element i is the True Positive Rate of predictions with score >= thresholds[i].
-
property
n_reps
¶ Return the number of computed ROC.
-
property
std_dev_tpr
¶ Standard deviation of True Positive Rates.
-
property
th
¶ Thresholds.
- Flat array with decreasing thresholds on the decision function
used to compute fpr and tpr or a list with one array for each repetition. thresholds[0] represents no instances being predicted and is arbitrarily set to max(score) + 1e-3.
-
property
tpr
¶ True Positive Rates.
- Flat array with increasing True Positive Rates or a list with
one array for each repetition. Element i is the True Positive Rate of predictions with score >= thresholds[i].
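A usage sketch based only on the CRoc interface documented above (signatures and return values as listed; the data here is illustrative): compute one ROC curve per repetition, then average the repetitions.

from secml.array import CArray
from secml.ml.peval.metrics.c_roc import CRoc

roc = CRoc()
# Two repetitions of the same binary problem: lists of labels and scores
y_true = [CArray([0, 1, 0, 1]), CArray([0, 1, 1, 0])]
scores = [CArray([0.1, 0.8, 0.3, 0.9]), CArray([0.2, 0.7, 0.6, 0.4])]

fpr, tpr, th = roc.compute(y_true, scores)      # one entry per repetition
mean_fpr, mean_tpr = roc.average(n_points=100)  # interpolated average ROC
print(roc.n_reps, roc.has_mean)                 # 2 True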
-
secml.ml.peval.metrics.c_roc.
average
(fpr, tpr, n_points=1000)[source]¶ Compute the average of the input tpr/fpr pairs.
- Parameters
- fpr, tprCArray or list of CArray
CArray or list of CArrays with False/True Positive Rates, as output of CRoc.
- n_pointsint, optional
Default 1000, is the number of points to be used for interpolation.
- Returns
- mean_fprCArray
Flat array with increasing False Positive Rates averaged over all available repetitions. Element i is the False Positive Rate of predictions with score >= thresholds[i].
- mean_tprCArray
Flat array with increasing True Positive Rates averaged over all available repetitions. Element i is the True Positive Rate of predictions with score >= thresholds[i].
- std_dev_tprCArray
Flat array with standard deviation of True Positive Rates.
-
secml.ml.peval.metrics.c_roc.
refine_roc
(fpr, tpr, th)[source]¶ Function to ensure the bounds of a ROC.
The first and last points should be (0,0) and (1,1) respectively.
- Parameters
- fprCArray
False Positive Rates, as returned by CBaseRoc.compute().
- tprCArray
True Positive Rates, as returned by CBaseRoc.compute().
- thCArray
Thresholds, as returned by CBaseRoc.compute().
-
class
secml.ml.peval.metrics.c_metric_test_error.
CMetricTestError
[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: Test Error.
Test Error is the fraction (in the [0, 1] range) of wrongly predicted labels, i.e. the complement of accuracy (1 - accuracy).
- The metric uses:
y_true (true ground labels)
y_pred (predicted labels)
Examples
>>> from secml.ml.peval.metrics import CMetricTestError
>>> from secml.array import CArray
>>> peval = CMetricTestError()
>>> print(peval.performance_score(CArray([0, 1, 2, 3]), CArray([0, 1, 1, 3])))
0.25
- Attributes
class_type
‘test-error’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 0.0¶
-
class
secml.ml.peval.metrics.c_metric_tpr_at_fpr.
CMetricTPRatFPR
(fpr=0.01)[source]¶ Bases:
secml.ml.peval.metrics.c_metric.CMetric
Performance evaluation metric: True Positive Rate @ False Positive Rate.
- The metric uses:
y_true (true ground labels)
score (estimated target values)
Notes
This implementation is restricted to the binary classification task.
Examples
>>> from secml.ml.peval.metrics import CMetricTPRatFPR
>>> from secml.array import CArray
>>> peval = CMetricTPRatFPR(fpr=0.5)
>>> print(peval.performance_score(CArray([0, 1, 0, 0]), score=CArray([0, 0, 0, 0])))
0.5
- Attributes
class_type
‘tpr-at-fpr’Defines class type.
- fprfloat
Desired False Positive Rate in the interval [0,1]. Default 0.01 (1%)
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
performance_score
(self[, y_true, y_pred, score])Compute the performance metric.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
best_value
= 1.0¶
CPerfEvaluator¶
-
class
secml.ml.peval.c_perfevaluator.
CPerfEvaluator
(splitter, metric)[source]¶ Bases:
secml.core.c_creator.CCreator
Evaluate the best parameters for input estimator.
- Parameters
- splitterCDataSplitter or str
Object to use for splitting the dataset into train and validation.
- metricCMetric or str
Name of the metric that we want to maximize / minimize.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
compute_performance
(self, estimator, dataset)Compute estimator performance on input dataset.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
evaluate_params
(self, estimator, dataset, …)Evaluate parameters for input estimator on input dataset.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
abstract
compute_performance
(self, estimator, dataset)[source]¶ Compute estimator performance on input dataset.
This must be reimplemented by subclasses.
- Parameters
- estimatorCClassifier
The classifier that we want to evaluate.
- datasetCDataset
Dataset that we want to use to evaluate the classifier.
- Returns
- scorefloat
Performance score of estimator.
-
evaluate_params
(self, estimator, dataset, parameters, pick='first', n_jobs=1)[source]¶ Evaluate parameters for input estimator on input dataset.
- Parameters
- estimatorCClassifier
The classifier for which we want to choose the best parameters.
- datasetCDataset
Dataset to be used for evaluating parameters.
- parametersdict
Dictionary with each entry as {parameter: list of values to test}.
- pick{‘first’, ‘last’, ‘random’}, optional
Defines which of the best parameter sets to pick. Usually, ‘first’ (default) corresponds to the smallest parameter values, while ‘last’ corresponds to the biggest. The order is consistent with the parameters dict passed as input.
- n_jobsint, optional
Number of parallel workers to use. Default 1. Cannot be higher than the number of processor cores.
- Returns
- best_param_dictdict
A dictionary with the best value for each evaluated parameter.
- best_valueany
Metric value obtained on validation set by the estimator.
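A sketch of parameter selection based on the evaluate_params interface documented above. CClassifierSVM, CDataSplitterKFold and CDLRandom are assumed to be available from other parts of the library (they are not documented in this section), so treat their exact signatures as assumptions:

from secml.data.loader import CDLRandom             # assumed helper for a toy dataset
from secml.data.splitter import CDataSplitterKFold  # assumed splitter class
from secml.ml.classifiers import CClassifierSVM     # assumed classifier class
from secml.ml.peval.c_perfevaluator_xval import CPerfEvaluatorXVal

ds = CDLRandom(n_samples=100, random_state=0).load()
peval = CPerfEvaluatorXVal(CDataSplitterKFold(num_folds=3), metric='accuracy')

clf = CClassifierSVM()
params = {'C': [0.01, 0.1, 1, 10]}
best_params, best_value = peval.evaluate_params(clf, ds, params, pick='first')
print(best_params, best_value)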
CPerfEvaluatorXVal¶
-
class
secml.ml.peval.c_perfevaluator_xval.
CPerfEvaluatorXVal
(splitter, metric)[source]¶ Bases:
secml.ml.peval.c_perfevaluator.CPerfEvaluator
Evaluate the best estimator parameters using Cross-Validation.
- Parameters
- splitterCXVal or str
XVal object to be used for splitting the dataset into train and validation.
- metricCMetric or str
Name of the metric that we want to maximize / minimize.
- Attributes
class_type
‘xval’Defines class type.
Methods
compute_performance
(self, estimator, dataset)Split data in folds and return the mean estimator performance.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
evaluate_params
(self, estimator, dataset, …)Evaluate parameters for input estimator on input dataset.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
compute_performance
(self, estimator, dataset)[source]¶ Split data in folds and return the mean estimator performance.
- Parameters
- estimatorCClassifier
The classifier that we want to evaluate.
- datasetCDataset
Dataset that we want to use to evaluate the classifier.
- Returns
- scorefloat
Mean performance score of estimator computed on the K-Folds.
CPerfEvaluatorXValMulticlass¶
-
class
secml.ml.peval.c_perfevaluator_xval_multiclass.
CPerfEvaluatorXValMulticlass
(splitter, metric)[source]¶ Bases:
secml.ml.peval.c_perfevaluator.CPerfEvaluator
Evaluate the best parameters for each single binary classifier using Cross-Validation.
- Parameters
- splitterCXVal or str
XVal object to be used for splitting the dataset into train and validation.
- metricCMetric or str
Name of the metric that we want to maximize / minimize.
- Attributes
class_type
‘xval-multiclass’Defines class type.
Methods
compute_performance
(self, estimator, dataset)Split data in folds and return the mean estimator performance.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
evaluate_params
(self, estimator, dataset, …)Evaluate parameters for input estimator on input dataset.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
compute_performance
(self, estimator, dataset)[source]¶ Split data in folds and return the mean estimator performance.
- Parameters
- estimatorCClassifier
The classifier that we want to evaluate.
- datasetCDataset
Dataset that we want to use to evaluate the classifier.
- Returns
- scoreslist
Mean performance score of each binary estimator computed on the K-Folds.
secml.ml.stats¶
CDensityEstimation¶
-
class
secml.ml.stats.c_density_estimation.
CDensityEstimation
(bandwidth=1.0, algorithm='auto', kernel='gaussian', metric='euclidean', atol=0, rtol=1e-08, breadth_first=True, leaf_size=40, metric_params=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Kernel Density Estimation
- Parameters
- bandwidthfloat, optional
The bandwidth of the kernel. Default 1.
- algorithmstr, optional
The tree algorithm to use. Valid options are [‘kd_tree’|’ball_tree’|’auto’]. Default is ‘auto’.
- kernelstr, optional
The kernel to use. Valid kernels are [‘gaussian’|’tophat’|’epanechnikov’|’exponential’|’linear’|’cosine’]. Default is ‘gaussian’.
- metricstr, optional
The distance metric to use. Note that not all metrics are valid with all algorithms. Refer to the documentation of BallTree and KDTree for a description of available algorithms. Note that the normalization of the density output is correct only for the Euclidean distance metric. Default is ‘euclidean’.
- atolfloat, optional
The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.
- rtolfloat, optional
The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8.
- breadth_firstbool, optional
If true (default), use a breadth-first approach to the problem. Otherwise use a depth-first approach.
- leaf_sizeint, optional
Specify the leaf size of the underlying tree. See BallTree or KDTree for details. Default is 40.
- metric_paramsdict, optional
Additional parameters to be passed to the tree for use with the metric. For more information, see the documentation of BallTree or KDTree.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
estimate_density
(self, x[, n_points])Estimate density of input array.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
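A minimal sketch based on the constructor and the estimate_density method listed above; note that the return format of estimate_density is not detailed in this section, so it is left opaque here:

from secml.array import CArray
from secml.ml.stats.c_density_estimation import CDensityEstimation

# Six univariate samples drawn around two modes (one sample per row)
samples = CArray([[0.0], [0.1], [0.2], [1.9], [2.0], [2.1]])

kde = CDensityEstimation(bandwidth=0.5, kernel='gaussian')
density = kde.estimate_density(samples, n_points=100)  # estimated density over a grid of points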
CDistributionGaussian¶
-
class
secml.ml.stats.c_distribution_gaussian.
CDistributionGaussian
(mean=0, cov=1)[source]¶ Bases:
secml.core.c_creator.CCreator
A multivariate normal random variable.
- Parameters
- meanscalar, optional
Mean of the distribution (default zero)
- covarray_like or scalar, optional
Covariance matrix of the distribution (default one)
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
logpdf
(self, data)Log of the probability density function.
pdf
(self, data)Probability density function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
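A minimal sketch of the documented pdf/logpdf methods for the univariate case (mean=0, cov=1, i.e. a standard normal); it assumes that a flat array is interpreted as a set of scalar samples:

from secml.array import CArray
from secml.ml.stats.c_distribution_gaussian import CDistributionGaussian

gauss = CDistributionGaussian(mean=0, cov=1)
x = CArray([-1.0, 0.0, 1.0])
print(gauss.pdf(x))     # densities of a standard normal at -1, 0, 1
print(gauss.logpdf(x))  # corresponding log-densities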
secml.adv¶
Adversarial Machine Learning
secml.adv.attacks¶
secml.adv.attacks.evasion¶
-
class
secml.adv.attacks.evasion.c_attack_evasion.
CAttackEvasion
(classifier, surrogate_classifier, surrogate_data=None, y_target=None)[source]¶ Bases:
secml.adv.attacks.c_attack.CAttack
Interface for Evasion attacks.
- Parameters
- classifierCClassifier
Target classifier.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
ub
Returns ub
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
objective_function
(self, x)Objective function.
run
(self, x, y[, ds_init])Runs evasion on a dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
objective_function
(self, x)[source]¶ Objective function.
- Parameters
- xCArray
Array with points on which the objective function should be computed.
- Returns
- CArray
Value of the objective function on each point.
-
run
(self, x, y, ds_init=None, *args, **kargs)[source]¶ Runs evasion on a dataset.
- Parameters
- xCArray
Data points.
- yCArray
True labels.
- ds_initCDataset
Dataset for warm starts.
- Returns
- y_predCArray
Predicted labels for all ds samples by target classifier.
- scoresCArray
Scores for all ds samples by target classifier.
- adv_dsCDataset
Dataset of manipulated samples.
- f_objfloat
Average value of the objective function computed on each data point.
-
class
secml.adv.attacks.evasion.c_attack_evasion_pgd.
CAttackEvasionPGD
(classifier, surrogate_classifier, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=<no value>, y_target=None, attack_classes='all', solver_params=None)[source]¶ Bases:
secml.adv.attacks.evasion.c_attack_evasion_pgd_ls.CAttackEvasionPGDLS
Evasion attacks using Projected Gradient Descent.
- This class implements the maximum-confidence evasion attacks proposed in:
https://arxiv.org/abs/1708.06939, ICCV W. ViPAR, 2017.
- This is the multi-class extension of our original work in:
https://arxiv.org/abs/1708.06131, ECML 2013, implemented using a standard projected gradient solver.
It can also be used on sparse, high-dimensional feature spaces, using an L1 constraint on the manipulation of samples to preserve sparsity, as we did for crafting adversarial Android malware in:
https://arxiv.org/abs/1704.08996, IEEE TDSC 2017.
- For more on evasion attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
- Parameters
- classifierCClassifier
Target classifier.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker or
‘all’ (default) if all classes can be manipulated.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- Attributes
class_type
‘e-pgd-ls’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
objective_function
(self, x)Objective function.
run
(self, x, y[, ds_init])Runs evasion on a dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
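A sketch of how the documented CAttackEvasionPGD interface fits together. The trained classifier and the datasets (clf, tr_set, ts_set) are placeholders the user must provide, and the solver_params keys are illustrative assumptions rather than documented values:

from secml.adv.attacks.evasion.c_attack_evasion_pgd import CAttackEvasionPGD

# clf: a trained CClassifier; tr_set, ts_set: CDataset objects (user-provided)
attack = CAttackEvasionPGD(
    classifier=clf,                 # target classifier
    surrogate_classifier=clf,       # white-box setting: surrogate == target
    surrogate_data=tr_set,          # data the surrogate was trained on
    distance='l2', dmax=1.0,        # L2 perturbation budget
    lb=0, ub=1,                     # box bounds on the features
    y_target=None,                  # error-generic (untargeted) attack
    solver_params={'eta': 0.1, 'max_iter': 100})  # illustrative solver settings

# run() returns predictions, scores, the manipulated samples and the mean
# objective value, as documented above.
y_pred, scores, adv_ds, f_obj = attack.run(ts_set.X, ts_set.Y)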
-
class
secml.adv.attacks.evasion.c_attack_evasion_pgd_ls.
CAttackEvasionPGDLS
(classifier, surrogate_classifier, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_params=None)[source]¶ Bases:
secml.adv.attacks.evasion.c_attack_evasion.CAttackEvasion
Evasion attacks using Projected Gradient Descent with Line Search.
- This class implements the maximum-confidence evasion attacks proposed in:
https://arxiv.org/abs/1708.06939, ICCV W. ViPAR, 2017.
- This is the multi-class extension of our original work in:
https://arxiv.org/abs/1708.06131, ECML 2013,
implemented using a custom projected gradient solver that uses line search in each iteration to save gradient computations and speed up the attack.
It can also be used on sparse, high-dimensional feature spaces, using an L1 constraint on the manipulation of samples to preserve sparsity, as we did for crafting adversarial Android malware in:
https://arxiv.org/abs/1704.08996, IEEE TDSC 2017.
- For more on evasion attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
- Parameters
- classifierCClassifier
Target classifier.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- discretebool, optional
If True, the input space is considered discrete (integer-valued), otherwise continuous. Default False.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker or
‘all’ (default) if all classes can be manipulated.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- Attributes
class_type
‘e-pgd-ls’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
objective_function
(self, x)Objective function.
run
(self, x, y[, ds_init])Runs evasion on a dataset.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
y_target
¶
secml.adv.attacks.poisoning¶
-
class
secml.adv.attacks.poisoning.c_attack_poisoning.
CAttackPoisoning
(classifier, training_data, surrogate_classifier, val, surrogate_data=None, distance='l2', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶ Bases:
secml.adv.attacks.c_attack.CAttack
Interface for poisoning attacks.
- Parameters
- classifierCClassifier
Target classifier.
- training_dataCDataset
Dataset on which the classifier has been trained.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- valCDataset
Validation set.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. Only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker or
‘all’ (default) if all classes can be manipulated.
- solver_typestr or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type{‘random’, ‘loss_based’}, optional
Strategy used to choose the initial random samples. Default ‘random’.
- random_seedint or None, optional
If int, random_state is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
n_points
Returns the number of poisoning points.
random_seed
Returns the seed used by the random number generator.
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
add_discrete_perturbation
-
property
n_points
¶ Returns the number of poisoning points.
-
property
random_seed
¶ Returns the random seed used by the attack.
-
run
(self, x, y, ds_init=None, max_iter=1)[source]¶ Runs poisoning on multiple points.
It reads n_points (previously set), initializes xc, yc at random, and then optimizes the poisoning points xc.
- Parameters
- xCArray
Validation set for evaluating classifier performance. Note that this is not the validation data used by the attacker, which should be passed instead to CAttackPoisoning init.
- yCArray
Corresponding true labels for samples in x.
- ds_initCDataset or None, optional.
Dataset for warm start.
- max_iterint, optional
Number of iterations to re-optimize poisoning data. Default 1.
- Returns
- y_predpredicted labels for all val samples by targeted classifier
- scoresscores for all val samples by targeted classifier
- adv_xcmanipulated poisoning points xc (for subsequent warm starts)
- f_optfinal value of the objective function
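A minimal usage sketch (hedged): pois_attack stands for an already-configured concrete poisoning attack such as CAttackPoisoningSVM, ts for the CDataset used to evaluate the attacked classifier, and n_poisoning_points is an illustrative value, not one prescribed by the library.
# Hypothetical, pre-configured objects: pois_attack (CAttackPoisoning subclass)
# and ts (CDataset used for performance evaluation).
pois_attack.n_points = n_poisoning_points  # number of poisoning points, set before run
y_pred, scores, adv_xc, f_opt = pois_attack.run(ts.X, ts.Y)
# y_pred, scores: labels/scores assigned by the targeted classifier to the samples in ts;
# adv_xc: optimized poisoning points (usable for warm starts); f_opt: final objective value.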
-
property
training_data
¶ Returns the training set used to learn the targeted classifier
-
property
val
¶ Returns the attacker’s validation data
-
class
secml.adv.attacks.poisoning.c_attack_poisoning_logistic_regression.
CAttackPoisoningLogisticRegression
(classifier, training_data, surrogate_classifier, val, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶ Bases:
secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against logistic regression.
This is an implementation of the attack developed in Sect. 3.3 in https://www.usenix.org/conference/usenixsecurity19/presentation/demontis:
A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium. USENIX Association, 2019.
- For more details on poisoning attacks, see also:
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/abs/1804.07933, ICML 2015
https://arxiv.org/pdf/1206.6389, ICML 2012
- Parameters
- classifierCClassifierLogistic
Target classifier.
- training_dataCDataset
Dataset on which the classifier has been trained.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- valCDataset
Validation set.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. It is only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker, or ‘all’ (default) if all classes can be manipulated.
- solver_typestr or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type{‘random’, ‘loss_based’}, optional
Strategy used to choose the initial random samples. Default ‘random’.
- random_seedint or None, optional
If int, random_state is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
add_discrete_perturbation
-
class
secml.adv.attacks.poisoning.c_attack_poisoning_ridge.
CAttackPoisoningRidge
(classifier, training_data, surrogate_classifier, val, surrogate_data=None, distance='l2', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_type='pgd-ls', solver_params=None, init_type=None, random_seed=None)[source]¶ Bases:
secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against ridge regression.
This is an implementation of the attack developed in https://arxiv.org/abs/1804.07933:
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli. Is feature selection secure against training data poisoning? In F. Bach and D. Blei, editors, JMLR W&CP, Proc. 32nd Int’l Conf. Mach. Learning (ICML), volume 37, pp. 1689-1698, 2015.
- For more details on poisoning attacks, see also:
https://arxiv.org/abs/1809.02861, USENIX Sec. 2019
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/pdf/1206.6389, ICML 2012
- Parameters
- classifierCClassifierRidge
Target classifier.
- training_dataCDataset
Dataset on which the classifier has been trained.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- valCDataset
Validation set.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. It is only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker, or ‘all’ (default) if all classes can be manipulated.
- solver_typestr or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type{‘random’, ‘loss_based’}, optional
Strategy used to choose the initial random samples. Default ‘random’.
- random_seedint or None, optional
If int, random_state is the seed used by the random number generator. If None, no fixed seed will be set.
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
n_points
Returns the number of poisoning points.
random_seed
Returns the random seed used by the attack.
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
add_discrete_perturbation
-
class
secml.adv.attacks.poisoning.c_attack_poisoning_svm.
CAttackPoisoningSVM
(classifier, training_data, surrogate_classifier, val, surrogate_data=None, distance='l1', dmax=0, lb=0, ub=1, discrete=False, y_target=None, attack_classes='all', solver_type='pgd-ls', solver_params=None, init_type='random', random_seed=None)[source]¶ Bases:
secml.adv.attacks.poisoning.c_attack_poisoning.CAttackPoisoning
Poisoning attacks against Support Vector Machines (SVMs).
- This is an implementation of the attack in https://arxiv.org/pdf/1206.6389:
B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In J. Langford and J. Pineau, editors, 29th Int’l Conf. on Machine Learning, pages 1807-1814. Omnipress, 2012.
where the gradient is computed as described in Eq. (10) in https://www.usenix.org/conference/usenixsecurity19/presentation/demontis:
A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium. USENIX Association, 2019.
- For more details on poisoning attacks, see also:
https://arxiv.org/abs/1804.00308, IEEE Symp. SP 2018
https://arxiv.org/abs/1712.03141, Patt. Rec. 2018
https://arxiv.org/abs/1708.08689, AISec 2017
https://arxiv.org/abs/1804.07933, ICML 2015
- Parameters
- classifierCClassifierSVM
Target classifier. If linear, requires store_dual_vars = True.
- training_dataCDataset
Dataset on which the classifier has been trained.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- valCDataset
Validation set.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. It is only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker, or ‘all’ (default) if all classes can be manipulated.
- solver_typestr or None, optional
Identifier of the solver to be used. Default ‘pgd-ls’.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- init_type{‘random’, ‘loss_based’}, optional
Strategy used to choose the initial random samples. Default ‘random’.
- random_seedint or None, optional
If int, random_state is the seed used by the random number generator. If None, no fixed seed will be set.
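A hedged construction sketch mirroring the signature above. The dataset splits tr (training set) and val (attacker's validation set) are assumed CDataset objects, the import paths follow the module paths documented here and the library layout, and the CDataset-based fit call reflects this SecML version (an assumption, not a prescription); all parameter values are illustrative.
from secml.ml.classifiers import CClassifierSVM
from secml.adv.attacks.poisoning.c_attack_poisoning_svm import CAttackPoisoningSVM

# Linear SVM target: store_dual_vars is required per the note above (assumed parameter name).
clf = CClassifierSVM(C=1.0, store_dual_vars=True)
clf.fit(tr)  # train the target classifier on tr (assumed CDataset-based fit)

pois_attack = CAttackPoisoningSVM(
    classifier=clf,
    training_data=tr,
    surrogate_classifier=clf,  # white-box setting: surrogate == target
    val=val,                   # attacker's validation set
    surrogate_data=tr,
    distance='l2', dmax=1,
    lb=0, ub=1,
    solver_type='pgd-ls',
    solver_params={'eta': 0.05, 'max_iter': 100})  # COptimizerPGDLS parameters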
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
n_points
Returns the number of poisoning points.
random_seed
Returns the attacker’s validation data
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
training_data
Returns the training set used to learn the targeted classifier
ub
Returns ub
val
Returns the attacker’s validation data
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
alpha_xc
(self, xc)- Parameters
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
run
(self, x, y[, ds_init, max_iter])Runs poisoning on multiple points.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
add_discrete_perturbation
CAttack¶
-
class
secml.adv.attacks.c_attack.
CAttack
(classifier, surrogate_classifier, surrogate_data=None, distance=None, dmax=None, lb=None, ub=None, discrete=False, y_target=None, attack_classes='all', solver_type=None, solver_params=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Interface class for evasion and poisoning attacks.
- Parameters
- classifierCClassifier
Target classifier.
- surrogate_classifierCClassifier
Surrogate classifier, assumed to be already trained.
- surrogate_dataCDataset or None, optional
Dataset on which the surrogate classifier has been trained. It is only required if the classifier is nonlinear.
- distance{‘l1’ or ‘l2’}, optional
Norm to use for computing the distance of the adversarial example from the original sample. Default ‘l2’.
- dmaxscalar, optional
Maximum value of the perturbation. Default 1.
- lb, ubint or CArray, optional
Lower/Upper bounds. If int, the same bound will be applied to all the features. If CArray, a different bound can be specified for each feature. Default lb = 0, ub = 1.
- discretebool, optional
If True, the input space is considered discrete (integer-valued); otherwise continuous. Default False.
- y_targetint or None, optional
If None, an error-generic attack will be performed; otherwise, an error-specific attack will be performed to have the samples misclassified as belonging to the y_target class.
- attack_classes‘all’ or CArray, optional
- Array with the classes that can be manipulated by the attacker, or ‘all’ (default) if all classes can be manipulated.
- solver_typestr or None, optional
Identifier of the solver to be used.
- solver_paramsdict or None, optional
Parameters for the solver. Default None, meaning that default parameters will be used.
- Attributes
- attack_classes
class_type
Defines class type.
classifier
Returns classifier
discrete
Returns True if feature space is discrete, False if continuous.
distance
todo
dmax
Returns dmax
- f_eval
- f_opt
- f_seq
- grad_eval
- issparse
lb
Returns lb
logger
Logger for current object.
- n_dim
- solver_params
- solver_type
surrogate_classifier
Returns surrogate classifier
surrogate_data
Returns surrogate data
ub
Returns ub
verbose
Verbosity level of logger output.
- x_opt
- x_seq
- y_target
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
is_attack_class
(self, y)Returns True/False if the input class can be attacked.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
run
(self, x, y[, ds_init])Perform attack for the i-th param name attack power.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
attack_classes
¶
-
property
classifier
¶ Returns classifier
-
property
discrete
¶ Returns True if feature space is discrete, False if continuous.
-
property
distance
¶ todo
-
property
dmax
¶ Returns dmax
-
property
f_eval
¶
-
property
f_opt
¶
-
property
f_seq
¶
-
property
grad_eval
¶
-
is_attack_class
(self, y)[source]¶ Returns True/False if the input class can be attacked.
- Parameters
- yint or CArray
CArray or single label of the class to be checked.
- Returns
- bool or CArray
- True if class y can be manipulated by the attacker, False otherwise. If CArray, a True/False value for each input label will be returned.
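For example (a hedged sketch; attack is any hypothetical CAttack instance and the labels are illustrative):
from secml.array import CArray

# Suppose the attack was created with attack_classes=CArray([1]).
attack.is_attack_class(1)               # True: class 1 can be manipulated
attack.is_attack_class(0)               # False
attack.is_attack_class(CArray([0, 1]))  # CArray of booleans, one per input label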
-
property
issparse
¶
-
property
lb
¶ Returns lb
-
property
n_dim
¶
-
abstract
run
(self, x, y, ds_init=None)[source]¶ Perform attack for the i-th param name attack power.
- Parameters
- xCArray
Initial sample.
- yint or CArray
The true label of x.
- ds_initCDataset or None, optional.
Dataset for warm start.
-
property
solver_params
¶
-
property
solver_type
¶
-
property
surrogate_classifier
¶ Returns surrogate classifier
-
property
surrogate_data
¶ Returns surrogate data
-
property
ub
¶ Returns ub
-
property
x_opt
¶
-
property
x_seq
¶
-
property
y_target
¶
secml.adv.seceval¶
CSecEval¶
-
class
secml.adv.seceval.c_sec_eval.
CSecEval
(attack, param_name, param_values, save_adv_ds=False)[source]¶ Bases:
secml.core.c_creator.CCreator
This class repeats the security evaluation (where security is measured with a given metric) while the power of the attacker increases.
- Parameters
- attackCAttack
Class that implements an attack (e.g. evasion or poisoning).
- param_namestr
Name of the parameter that represents the increasing attacker power.
- param_valuesCArray
Array that contains the values that param_name will assume during the attack (this defines how the attacker power increases). If the first value is not zero, zero will be added as the first value.
- save_adv_dsbool, optional
If True, the samples at each parameter will be stored. Default False.
See also
CAttack
class that implements the attack.
- Attributes
attack
Return the attack object that is used by CSecEval to perform the attack.
class_type
Defines class type.
logger
Logger for current object.
save_adv_ds
Returns
sec_eval_data
Get the sec eval data object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
load_data
(self, path)Restore Sec Eval data from file.
run_sec_eval
(self, dataset)Performs the attack while the power of the attacker (named param_name) increases.
save
(self, path)Save class object using pickle.
save_data
(self, path)Store Sec Eval data to file.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
attack
¶ Return the attack object that is used by CSecEval to perform the attack.
-
run_sec_eval
(self, dataset)[source]¶ Performs the attack while the power of the attacker (named param_name) increases.
- Parameters
- datasetCDataset
Dataset containing the samples that will be manipulated by the attacker while the attack power increases.
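A hedged usage sketch: the attack object, the test set ts and the range of dmax values are illustrative assumptions, and the import uses the module path documented above.
from secml.array import CArray
from secml.adv.seceval.c_sec_eval import CSecEval

# attack: a configured CAttack whose `dmax` parameter is the attack power;
# ts: CDataset with the samples to be manipulated.
sec_eval = CSecEval(attack=attack,
                    param_name='dmax',
                    param_values=CArray.arange(start=0, stop=0.5, step=0.1),
                    save_adv_ds=False)
sec_eval.run_sec_eval(ts)
data = sec_eval.sec_eval_data  # CSecEvalData: Y_pred, scores, fobj, time, ...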
-
property
save_adv_ds
¶ - Returns
- True/False: whether or not to store the manipulated attack sample dataset.
-
property
sec_eval_data
¶ Get the sec eval data object. It contains the security evaluation results.
- Returns
- sec_eval_data: CSecEvalData object
contains classifier security evaluation results
CSecEvalData¶
-
class
secml.adv.seceval.c_sec_eval_data.
CSecEvalData
[source]¶ Bases:
secml.core.c_creator.CCreator
This class is a container for data computed during Classifier Security Evaluation.
- Attributes
class_type
‘standard’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Load Security evaluation data from file.
save
(self, path)Save Security evaluation data to file.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
Y
¶ Returns the values of the dataset true labels.
-
property
Y_pred
¶ - Returns
- Y_predlist of CArray
Contains one element for each attack power value. Each element contains the labels assigned to all the dataset samples under the attack.
-
property
Y_target
¶ Returns the values of the desired predicted labels.
-
property
adv_ds
¶ - Returns
- adv_dslist of CDataset
Contains one dataset for each different parameter value.
-
property
fobj
¶ Return the objective function values for the different attack power values.
-
classmethod
load
(path)[source]¶ Load Security evaluation data from file.
Loads a python dict containing all the results.
-
property
param_name
¶ Returns the name of the parameter representing the attack strength.
-
property
param_values
¶ Returns the values of the security-evaluation parameter.
-
save
(self, path)[source]¶ Save Security evaluation data to file.
Saves a python dict containing all the results.
-
property
scores
¶ - Returns
- scores: list of CArray
Contains one element for each attack power value. Each element contains the scores assigned by the classifier to all the dataset samples.
-
property
time
¶ - Returns
- timeCArray (n_patterns, num parameter values)
Each array row contains the attack times for one sample. Each row element represents a different attack power.
secml.optim¶
Optimization
secml.optim.function¶
CFunction¶
-
class
secml.optim.function.c_function.
CFunction
(fun=None, gradient=None, n_dim=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Class that handles generic mathematical functions.
Either a function or its gradient can be passed in.
Number of expected space dimensions can be specified if applicable.
- Parameters
- funcallable or None
Any python callable. Required if gradient is None.
- gradientcallable or None
Any python callable that returns the gradient of fun. Required if fun is None.
- n_dimint or None, optional
Expected space dimensions.
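A minimal sketch of wrapping a user-defined function and its analytic gradient; the paraboloid below is an illustrative choice, not part of the library.
from secml.array import CArray
from secml.optim.function import CFunction

def my_fun(x):
    return (x ** 2).sum()  # f(x) = ||x||^2

def my_grad(x):
    return 2 * x  # analytic gradient of f

fun = CFunction(fun=my_fun, gradient=my_grad, n_dim=2)
fun.fun(CArray([1.0, 2.0]))        # 5.0
fun.gradient(CArray([1.0, 2.0]))   # gradient at (1, 2) -> [2, 4]
fun.n_fun_eval, fun.n_grad_eval    # evaluation counters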
- Attributes
class_type
‘generic’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
approx_fprime
(self, x, epsilon, *args, **kwargs)[source]¶ Finite-difference approximation of the gradient of a scalar function.
Wrapper for scipy function
scipy.optimize.approx_fprime
.- Parameters
- xCArray
The flat dense vector with the point at which to determine the gradient of fun.
- epsilonscalar or CArray
Increment of x to use for determining the function gradient. If a scalar, uses the same finite difference delta for all partial derivatives. If an array, should contain one value per element of x.
- args, kwargs
Any other arguments that are to be passed to fun.
- Returns
- gradCArray
The gradient of fun at x.
See also
check_grad
Check correctness of function gradient against
approx_fprime
.
Notes
The function gradient is determined by the forward finite difference formula:
fun'[i] = (fun(xk[i] + epsilon[i]) - fun(xk[i])) / epsilon[i]
The main use of approx_fprime is to determine numerically the Jacobian of a function.
Examples
>>> from secml.array import CArray
>>> from secml.optim.function import CFunction
>>> from secml.core.constants import eps
>>> def func(x, c0, c1):
...     "Coordinate vector `x` should be an array of size two."
...     return c0 * x[0]**2 + c1*x[1]**2
>>> c0, c1 = (1, 200)
>>> CFunction(func).approx_fprime(CArray.ones(2), [eps, (200 ** 0.5) * eps], c0, c1=c1)
CArray(2,)(dense: [ 2. 400.000042])
-
check_grad
(self, x, epsilon, *args, **kwargs)[source]¶ Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient.
- Parameters
- xCArray
Flat dense pattern to check function gradient against forward difference approximation of function gradient.
- epsilonscalar or CArray
Increment of x to use for determining the function gradient. If a scalar, uses the same finite difference delta for all partial derivatives. If an array, should contain one value per element of x.
- args, kwargs
Extra arguments passed to fun and fprime.
- Returns
- errfloat
The square root of the sum of squares (i.e. the l2-norm) of the difference between
fprime(x, *args)
and the finite difference approximation of fprime at the points x.
See also
approx_fprime
Finite-difference approximation of the gradient of a scalar function.
Notes
epsilon is the only keyword argument accepted by the function. Any other optional argument for fun and fprime should be passed as non-keyword.
Examples
>>> from secml.optim.function import CFunction
>>> from secml.array import CArray
>>> def func(x):
...     return x[0].item()**2 - 0.5 * x[1].item()**3
>>> def grad(x):
...     return CArray([2 * x[0].item(), -1.5 * x[1].item()**2])
>>> fun = CFunction(func, grad)
>>> fun.check_grad(CArray([1.5, -1.5]), epsilon=1e-8)
7.817837928307533e-08
-
fun
(self, x, *args, **kwargs)[source]¶ Evaluates function on x.
- Parameters
- xCArray
Argument of fun.
- args, kwargs
Other optional parameter of the function.
- Returns
- out_funscalar or CArray
Function output, scalar or CArray depending on the inner function.
-
fun_ndarray
(self, x, *args, **kwargs)[source]¶ Evaluates function on x (ndarray).
- Parameters
- xnp.ndarray
Argument of fun as ndarray.
- args, kwargs
Other optional parameter of the function.
- Returns
- out_funscalar or CArray
Function output, scalar or CArray depending on the inner function.
-
gradient
(self, x, *args, **kwargs)[source]¶ Evaluates gradient of function at point x.
- Parameters
- xCArray
Argument of gradient. Single point.
- args, kwargs
Other optional parameter of the function.
- Returns
- out_gradCArray
Array with gradient output.
-
gradient_ndarray
(self, x, *args, **kwargs)[source]¶ Evaluates gradient of function at point x (ndarray).
- Parameters
- xndarray
Argument of gradient.
- args, kwargs
Other optional parameter of the function.
- Returns
- out_gradndarray
Array with gradient output.
-
property
n_dim
¶ Returns the expected function’s space dimensions.
-
property
n_fun_eval
¶ Returns the number of function evaluations.
-
property
n_grad_eval
¶ Returns the number of gradient evaluations.
CFunctionLinear¶
-
class
secml.optim.function.c_function_linear.
CFunctionLinear
(b, c)[source]¶ Bases:
secml.optim.function.c_function.CFunction
- Implements linear functions of the form:
b’ x + c = 0
- Attributes
class_type
‘linear’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CFunctionQuadratic¶
-
class
secml.optim.function.c_function_quadratic.
CFunctionQuadratic
(A, b, c)[source]¶ Bases:
secml.optim.function.c_function.CFunction
- Implements quadratic functions of the form:
x’ A x + b’ x + c = 0
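For reference, a standard result not stated above (added here for convenience), the gradient and Hessian of this quadratic form are given by: .. math:
\nabla f(x) = (A + A^\top) x + b, \qquad \nabla^2 f(x) = A + A^\top
which reduce to 2 A x + b and 2 A when A is symmetric.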
- Attributes
class_type
‘quadratic’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CFunctionRosenbrock¶
-
class
secml.optim.function.c_function_rosenbrock.
CFunctionRosenbrock
[source]¶ Bases:
secml.optim.function.c_function.CFunction
The Rosenbrock function.
Non-convex function introduced by Howard H. Rosenbrock in 1960. [R0d5c343f82c4-1] Also known as Rosenbrock’s valley or Rosenbrock’s banana function.
Global minimum f(x) = 0 @ x = (1, 1, …, 1).
Given by: .. math:
f(x) = \sum^{n-1}_{i=1} [100 * {(x_{i+1} - x_i^2)}^2 + (x_i - 1)^2]
References
- R0d5c343f82c4-1
Rosenbrock, H. H. “An automatic method for finding the greatest or least value of a function.” The Computer Journal 3.3 (1960): 175-184.
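A quick evaluation sketch at the documented global minimum (illustrative, using the fun/gradient interface inherited from CFunction):
from secml.array import CArray
from secml.optim.function import CFunctionRosenbrock

rosen = CFunctionRosenbrock()
rosen.fun(CArray([1.0, 1.0]))       # 0.0, the global minimum in the 2D case
rosen.gradient(CArray([1.0, 1.0]))  # zero gradient at the minimum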
- Attributes
class_type
‘rosenbrock’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
global_min
()Value of the global minimum of the function.
global_min_x
([ndim])Global minimum point of the function.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CFunctionThreeHumpCamel¶
-
class
secml.optim.function.c_function_3hcamel.
CFunctionThreeHumpCamel
[source]¶ Bases:
secml.optim.function.c_function.CFunction
The Three-Hump Camel function.
2-Dimensional function.
Global minimum f(x) = 0 @ x = (0, 0).
Given by: .. math:
f(x) = 2 * x_0 ** 2 - 1.05 * x_0 ** 4 + x_0 ** 6 / 6 + x_0 * x_1 + x_1 ** 2
- Attributes
class_type
‘3h-camel’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
global_min
()Value of the global minimum of the function.
global_min_x
()Global minimum point of the function.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CFunctionBeale¶
-
class
secml.optim.function.c_function_beale.
CFunctionBeale
[source]¶ Bases:
secml.optim.function.c_function.CFunction
The Beale function.
2-Dimensional, multimodal, with sharp peaks at the corners of the input domain.
Global minimum f(x) = 0 @ x = (3, 0.5).
Given by: .. math:
f(x) = (1.5 - x_0 + x_0 * x_1)^2 + (2.25 - x_0 + x_0 * x_1^2)^2 + (2.625 - x_0 + x_0 * x_1^3)^2
- Attributes
class_type
‘beale’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
global_min
()Value of the global minimum of the function.
global_min_x
()Global minimum point of the function.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
CFunctionMcCormick¶
-
class
secml.optim.function.c_function_mccormick.
CFunctionMcCormick
[source]¶ Bases:
secml.optim.function.c_function.CFunction
The McCormick function.
2-Dimensional function.
Global minimum f(x) = -1.9132 @ x = (-0.5472, -1.5472). This is on a compact domain (lb=[-1.5,-3], ub=[4,4])
Given by: .. math:
f(x) = sin(x_0 + x_1) + (x_0 - x_1)^2 - 1.5 * x_0 + 2.5 * x_1 + 1
- Attributes
class_type
‘mc-cormick’Defines class type.
Methods
approx_fprime
(self, x, epsilon, \*args, …)Finite-difference approximation of the gradient of a scalar function.
check_grad
(self, x, epsilon, \*args, \*\*kwargs)Check the correctness of a gradient function by comparing
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
fun
(self, x, \*args, \*\*kwargs)Evaluates function on x.
fun_ndarray
(self, x, \*args, \*\*kwargs)Evaluates function on x (ndarray).
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
global_min
()Value of the global minimum of the function.
global_min_x
()Global minimum point of the function.
gradient
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x.
gradient_ndarray
(self, x, \*args, \*\*kwargs)Evaluates gradient of function at point x (ndarray).
has_fun
(self)True if function has been set.
has_gradient
(self)True if gradient has been set.
is_equal
(self, x, val[, tol])Evaluates if function value is close to val within tol.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
reset_eval
(self)Reset the count of function and gradient of function evaluations.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
bounds
secml.optim.optimizers¶
secml.optim.optimizers.line_search¶
-
class
secml.optim.optimizers.line_search.c_line_search.
CLineSearch
(fun, constr=None, bounds=None, eta=0.0001, max_iter=20)[source]¶ Bases:
secml.core.c_creator.CCreator
Abstract class that implements line-search optimization algorithms.
Line-search algorithms optimize the objective function along a given direction in the feasible domain, potentially subject to constraints. The search is normally stopped when the objective improves at a satisfying level, to keep the search fast.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
minimize
(self, x, d, \*\*kwargs)Line search.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
class
secml.optim.optimizers.line_search.c_line_search_bisect.
CLineSearchBisect
(fun, constr=None, bounds=None, eta=0.0001, eta_min=0.1, eta_max=None, max_iter=20)[source]¶ Bases:
secml.optim.optimizers.line_search.c_line_search.CLineSearch
Binary line search.
- Attributes
class_type
‘bisect’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
minimize
(self, x, d[, fx, tol])Bisect line search (on discrete grid).
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
eta_max
¶
-
property
eta_min
¶
-
minimize
(self, x, d, fx=None, tol=0.0001, **kwargs)[source]¶ Bisect line search (on discrete grid).
The function fun( x + a*eta*d ) with a = {0, 1, 2, … } is minimized along the descent direction d.
If fun(x) >= 0 -> step_min = step else step_max = step
If eta_max is not None, it runs a bisect line search in [x + eta_min*d, x + eta_max*d]; otherwise, it runs an exponential line search in [x + eta*d, …, x + eta_min*d, …].
- Parameters
- xCArray
The input point.
- dCArray
The descent direction along which fun(x) is minimized.
- fxint or float or None, optional
The current value of fun(x) (if available).
- tolfloat, optional
Tolerance for convergence to the local minimum.
- kwargsdict
Additional parameters required to evaluate fun(x, **kwargs).
- Returns
- x’CArray
Point x’ = x + eta * d that approximately solves min f(x + eta*d).
- fx’: int or float or None, optional
The value f(x’).
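A hedged sketch of a stand-alone use of this line search on a simple 1D quadratic; the objective, step sizes and starting point are illustrative, and the import uses the module path documented above.
from secml.array import CArray
from secml.optim.function import CFunction
from secml.optim.optimizers.line_search.c_line_search_bisect import CLineSearchBisect

# f(x) = (x - 3)^2, minimized along the direction d = +1 starting from x = 0.
fun = CFunction(fun=lambda x: ((x - 3) ** 2).sum())
ls = CLineSearchBisect(fun=fun, eta=1e-2, eta_min=1e-2, max_iter=50)
x_best, f_best = ls.minimize(CArray([0.0]), CArray([1.0]))
# x_best approximately solves min_a f(x + a*eta*d); f_best is f(x_best).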
-
property
n_iter
¶
COptimizer¶
-
class
secml.optim.optimizers.c_optimizer.
COptimizer
(fun, constr=None, bounds=None)[source]¶ Bases:
secml.core.c_creator.CCreator
Interface for optimizers.
- Implements:
minimize f(x) s.t. gi(x) <= 0, i=1,…,m (inequality constraints)
hj(x) = 0, j = 1,…, n (equality constraints)
- Parameters
- funCFunction
The objective function to be optimized, along with 1st-order (Jacobian) and 2nd-order (Hessian) derivatives (if available).
- Attributes
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
maximize
(self, x_init[, args])Interface for maximizers.
minimize
(self, x_init[, args])Interface for minimizers.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
bounds
¶ Optimization bounds.
-
property
constr
¶ Optimization constraint.
-
property
f
¶ The objective function
-
property
f_eval
¶
-
property
f_opt
¶
-
property
f_seq
¶
-
property
grad_eval
¶
-
maximize
(self, x_init, args=(), **kwargs)[source]¶ Interface for maximizers.
- Implementing:
max fun(x) s.t. constraint
This is implemented by inverting the sign of fun and gradient and running the COptimizer.minimize().
- Parameters
- x_initCArray
The initial input point.
- argstuple, optional
Extra arguments passed to the objective function and its gradient.
- kwargs
Additional parameters of the minimization method.
-
abstract
minimize
(self, x_init, args=(), **kwargs)[source]¶ Interface for minimizers.
- Implementing:
min fun(x) s.t. constraint
- Parameters
- x_initCArray
The initial input point.
- argstuple, optional
Extra arguments passed to the objective function and its gradient.
- kwargs
Additional parameters of the minimization method.
-
property
n_dim
¶
-
property
x_opt
¶
-
property
x_seq
¶
COptimizerPGDLS¶
-
class
secml.optim.optimizers.c_optimizer_pgd_ls.
COptimizerPGDLS
(fun, constr=None, bounds=None, discrete=False, eta=0.001, eta_min=None, eta_max=None, max_iter=1000, eps=0.0001)[source]¶ Bases:
secml.optim.optimizers.c_optimizer.COptimizer
Solves the following problem:
min f(x) s.t. d(x,x0) <= dmax
x_lb <= x <= x_ub
f(x) is the objective function (either linear or nonlinear), d(x,x0) <= dmax is a distance constraint in feature space (l1 or l2), and x_lb <= x <= x_ub is a box constraint on x.
The solution algorithm is based on a line-search exploring one feature (i.e., dimension) at a time (for l1-constrained problems), or all features (for l2-constrained problems). This solver also works for discrete problems, where x is integer valued. In this case, exploration works by manipulating one feature at a time.
Unlike standard line searches, it explores a subset of n_dimensions at a time; in this sense, it is an extension of the classical line-search approach.
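A hedged usage sketch: the test function, the box constraint and the solver parameters are illustrative, and CConstraintBox with its lb/ub arguments (as well as the package-level imports) is assumed from the library layout rather than prescribed here.
from secml.array import CArray
from secml.optim.function import CFunctionThreeHumpCamel
from secml.optim.constraints import CConstraintBox
from secml.optim.optimizers import COptimizerPGDLS

fun = CFunctionThreeHumpCamel()
bounds = CConstraintBox(lb=-2, ub=2)  # assumed box constraint on x
opt = COptimizerPGDLS(fun, bounds=bounds, eta=1e-2, eta_min=1e-2, max_iter=500)
opt.minimize(CArray([1.0, 1.0]))
opt.x_opt, opt.f_opt  # should approach the global minimum f(0, 0) = 0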
- Attributes
class_type
‘pgd-ls’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
maximize
(self, x_init[, args])Interface for maximizers.
minimize
(self, x_init[, args])Interface to minimizers implementing
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
discrete
¶ True if feature space is discrete, False if continuous.
-
property
eps
¶ Return tolerance value for stop criterion
-
property
eta
¶
-
property
eta_max
¶
-
property
eta_min
¶
-
property
max_iter
¶ Returns the maximum number of descent iterations
-
minimize
(self, x_init, args=(), **kwargs)[source]¶ - Interface to minimizers implementing
min fun(x) s.t. constraint
- Parameters
- x_initCArray
The initial input point.
- argstuple, optional
Extra arguments passed to the objective function and its gradient.
- Returns
- f_seqCArray
Array containing values of f during optimization.
- x_seqCArray
Array containing values of x during optimization.
COptimizerPGD¶
-
class
secml.optim.optimizers.c_optimizer_pgd.
COptimizerPGD
(fun, constr=None, bounds=None, eta=0.001, eps=0.0001, max_iter=200)[source]¶ Bases:
secml.optim.optimizers.c_optimizer.COptimizer
Solves the following problem:
min f(x) s.t. d(x,x0) <= dmax
x_lb <= x <= x_ub
f(x) is the objective function (either linear or nonlinear), d(x,x0) <= dmax is a distance constraint in feature space (l1 or l2), and x_lb <= x <= x_ub is a box constraint on x.
The solution algorithm is based on the classic gradient descent algorithm.
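A hedged sketch applying this gradient-descent solver to a simple convex function built with CFunction; the objective, step size and starting point are illustrative, and the package-level import path is assumed from the library layout.
from secml.array import CArray
from secml.optim.function import CFunction
from secml.optim.optimizers import COptimizerPGD

# f(x) = ||x - (1, -2)||^2, with its analytic gradient.
center = CArray([1.0, -2.0])
fun = CFunction(fun=lambda x: ((x - center) ** 2).sum(),
                gradient=lambda x: 2 * (x - center))
opt = COptimizerPGD(fun, eta=0.1, eps=1e-6, max_iter=1000)
opt.minimize(CArray([0.0, 0.0]))
opt.x_opt  # expected to approach (1, -2)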
- Attributes
class_type
‘pgd’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
maximize
(self, x_init[, args])Interface for maximizers.
minimize
(self, x_init[, args])Interface to minimizers.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
eps
¶ Return tolerance value for stop criterion
-
property
eta
¶ Return gradient descent step
-
property
max_iter
¶ Returns the maximum number of gradient descent iterations
-
minimize
(self, x_init, args=(), **kwargs)[source]¶ Interface to minimizers.
- Implements:
min fun(x) s.t. constraint
- Parameters
- x_initCArray
The initial input point.
- argstuple, optional
Extra arguments passed to the objective function and its gradient.
- Returns
- f_seqCArray
Array containing values of f during optimization.
- x_seqCArray
Array containing values of x during optimization.
COptimizerScipy¶
-
class
secml.optim.optimizers.c_optimizer_scipy.
COptimizerScipy
(fun, constr=None, bounds=None)[source]¶ Bases:
secml.optim.optimizers.c_optimizer.COptimizer
Implements optimizers from scipy.
- Attributes
class_type
‘scipy-opt’Defines class type.
Methods
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
maximize
(self, x_init[, args])Interface for maximizers.
minimize
(self, x_init[, args])Minimize function.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
minimize
(self, x_init, args=(), **kwargs)[source]¶ Minimize function.
Wrapper of scipy.optimize.minimize.
- Parameters
- x_initCArray
Init point. Dense flat array of real elements of size ‘n’, where ‘n’ is the number of independent variables.
- argstuple, optional
Extra arguments passed to the objective function and its derivatives (fun, jac and hess functions).
- The following can be passed as optional keyword arguments:
- methodstr or callable, optional
- Type of solver. Should be one of
‘BFGS’
‘L-BFGS-B’
If not given, chosen to be one of BFGS or L-BFGS-B, depending on whether the problem has constraints or bounds. See c_optimizer_scipy.SUPPORTED_METHODS for the full list.
- jac{‘2-point’, ‘3-point’, ‘cs’, bool}, optional
Method for computing the gradient vector. The function in self.fun.gradient will be used (if defined). Alternatively, the keywords {‘2-point’, ‘3-point’, ‘cs’} select a finite difference scheme for numerical estimation of the gradient. Options ‘3-point’ and ‘cs’ are available only to ‘trust-constr’. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function. If False, the gradient will be estimated using ‘2-point’ finite difference estimation.
- boundsscipy.optimize.Bounds, optional
A bound constraint in scipy.optimize format. If defined, bounds of COptimizerScipy will be ignored.
- tolfloat, optional
Tolerance for termination. For detailed control, use solver-specific options.
- optionsdict, optional
A dictionary of solver options. All methods accept the following generic options:
maxiter : int Maximum number of iterations to perform.
disp : bool Set to True to print convergence messages. Equivalent of setting COptimizerScipy.verbose = 2.
For method-specific options, see
show_options
.
- Returns
- xCArray
The solution of the optimization.
Examples
>>> from secml.array import CArray
>>> from secml.optim.optimizers import COptimizerScipy
>>> from secml.optim.function import CFunctionRosenbrock
>>> x_init = CArray([1.3, 0.7])
>>> opt = COptimizerScipy(CFunctionRosenbrock())
>>> x_opt = opt.minimize(
...     x_init, method='BFGS', options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 32
         Function evaluations: 39
         Gradient evaluations: 39
>>> print(x_opt)
CArray([1. 1.])
>>> print(opt.f_opt)
9.294383981640425e-19
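The constructor also accepts constr and bounds objects, so the search can be restricted to a feasible region. The following is a minimal, hedged sketch of that usage: it assumes the bounds argument accepts a CConstraintBox (documented further below) and that the package-level import paths mirror the example above; it is an illustration, not a verified recipe.

# Hedged sketch: bounded minimization of the Rosenbrock function.
# Assumption: the `bounds` constructor argument accepts a CConstraintBox.
from secml.array import CArray
from secml.optim.optimizers import COptimizerScipy
from secml.optim.function import CFunctionRosenbrock
from secml.optim.constraints import CConstraintBox

bounds = CConstraintBox(lb=-2, ub=2)  # box bounds applied to all features
opt = COptimizerScipy(CFunctionRosenbrock(), bounds=bounds)
x_opt = opt.minimize(CArray([1.3, 0.7]), method='L-BFGS-B')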
secml.optim.constraints¶
CConstraint¶
-
class
secml.optim.constraints.c_constraint.
CConstraint
[source]¶ Bases:
secml.core.c_creator.CCreator
Interface for equality/inequality constraints.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
verbose
Verbosity level of logger output.
Methods
constraint
(self, x)Returns the value of the constraint for the sample x.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x)Returns the gradient of c(x) in x.
is_active
(self, x[, tol])Returns True if constraint is active.
is_violated
(self, x[, precision])Returns the violated status of the constraint for the sample x.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
projection
(self, x)Project x onto feasible domain / within the given constraint.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
constraint
(self, x)[source]¶ Returns the value of the constraint for the sample x.
- Parameters
- xCArray
Input sample.
- Returns
- float
Value of the constraint.
-
gradient
(self, x)[source]¶ Returns the gradient of c(x) in x.
- Parameters
- xCArray
Input sample.
- Returns
- CArray
The gradient of the constraint computed on x.
-
is_active
(self, x, tol=0.0001)[source]¶ Returns True if constraint is active.
A constraint is active if c(x) = 0.
By default we assume constraints of the form c(x) <= 0.
- Parameters
- xCArray
Input sample.
- tolfloat, optional
Tolerance to use for comparing c(x) against 0. Default 1e-4.
- Returns
- bool
True if constraint is active, False otherwise.
-
is_violated
(self, x, precision=4)[source]¶ Returns the violated status of the constraint for the sample x.
The constraint is assumed to be violated if c(x) > 0, consistent with constraints of the form c(x) <= 0.
- Parameters
- xCArray
Input sample.
- precisionint, optional
Number of digits to check. Default 4.
- Returns
- bool
True if constraint is violated, False otherwise.
CConstraintBox¶
-
class
secml.optim.constraints.c_constraint_box.
CConstraintBox
(lb=None, ub=None)[source]¶ Bases:
secml.optim.constraints.c_constraint.CConstraint
Class that defines a box constraint.
- Parameters
- lb, ubscalar or CArray or None, optional
Bounds of the constraints. If scalar, the same bound will be applied to all features. If CArray, should contain a bound for each feature. If None, a +/- inf ub/lb bound will be used for all features.
- Attributes
class_type
‘box’Defines class type.
Methods
constraint
(self, x)Returns the value of the constraint for the sample x.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x)Returns the gradient of c(x) in x.
is_active
(self, x[, tol])Returns True if constraint is active.
is_violated
(self, x[, precision])Returns the violated status of the constraint for the sample x.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
projection
(self, x)Project x onto feasible domain / within the given constraint.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_center_radius
(self, c, r)Set constraint bounds in terms of center and radius.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
center
¶ Center of the constraint.
-
is_active
(self, x, tol=0.0001)[source]¶ Returns True if constraint is active.
A constraint is active if c(x) = 0.
By default we assume constraints of the form c(x) <= 0.
- Parameters
- xCArray
Input sample.
- tolfloat, optional
Tolerance to use for comparing c(x) against 0. Default 1e-4.
- Returns
- bool
True if constraint is active, False otherwise.
-
is_violated
(self, x, precision=4)[source]¶ Returns the violated status of the constraint for the sample x.
The constraint is assumed to be violated if c(x) > 0, consistent with constraints of the form c(x) <= 0.
- Parameters
- xCArray
Input sample.
- precisionint, optional
Number of digits to check. Default 4.
- Returns
- bool
True if constraint is violated, False otherwise.
-
property
lb
¶ Lower bound.
-
property
radius
¶ Radius of the constraint.
-
set_center_radius
(self, c, r)[source]¶ Set constraint bounds in terms of center and radius.
- This method will transform the input center/radius as follows:
lb = center - radius
ub = center + radius
- Parameters
- cscalar
Constraint center.
- rscalar
Constraint radius.
-
property
ub
¶ Upper bound.
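A minimal usage sketch follows, assembled only from the constructor and methods documented above (constraint, is_violated, projection, set_center_radius). The package-level import path and the expected outcomes in the comments are assumptions, mirroring the import style of the optimizer examples earlier on this page.

from secml.array import CArray
from secml.optim.constraints import CConstraintBox  # assumed package-level import

# Box constraint with per-feature bounds: lb <= x <= ub
box = CConstraintBox(lb=CArray([0, 0]), ub=CArray([1, 1]))

x = CArray([1.5, 0.5])          # the first feature exceeds the upper bound
print(box.is_violated(x))       # expected: True
x_proj = box.projection(x)      # project x back onto the feasible box
print(box.is_violated(x_proj))  # expected: False

# The same bounds expressed as center/radius: lb = c - r, ub = c + r
box.set_center_radius(c=0.5, r=0.5)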
CConstraintL1¶
-
class
secml.optim.constraints.c_constraint_l1.
CConstraintL1
(center=0, radius=1)[source]¶ Bases:
secml.optim.constraints.c_constraint.CConstraint
L1 Constraint.
- Parameters
- centerscalar or CArray, optional
Center of the constraint. Use an array to specify a different value for each dimension. Default 0.
- radiusscalar, optional
The semidiagonal of the constraint. Default 1.
- Attributes
class_type
‘l1’Defines class type.
Methods
constraint
(self, x)Returns the value of the constraint for the sample x.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x)Returns the gradient of c(x) in x.
is_active
(self, x[, tol])Returns True if constraint is active.
is_violated
(self, x[, precision])Returns the violated status of the constraint for the sample x.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
projection
(self, x)Project x onto feasible domain / within the given constraint.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
center
¶ Center of the constraint.
-
property
radius
¶ Semidiagonal of the constraint.
CConstraintL2¶
-
class
secml.optim.constraints.c_constraint_l2.
CConstraintL2
(center=0, radius=1)[source]¶ Bases:
secml.optim.constraints.c_constraint.CConstraint
L2 Constraint.
- Parameters
- centerscalar or CArray, optional
Center of the constraint. Use an array to specify a different value for each dimension. Default 0.
- radiusscalar, optional
The semidiagonal of the constraint. Default 1.
- Attributes
class_type
‘l2’Defines class type.
Methods
constraint
(self, x)Returns the value of the constraint for the sample x.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
gradient
(self, x)Returns the gradient of c(x) in x.
is_active
(self, x[, tol])Returns True if constraint is active.
is_violated
(self, x[, precision])Returns the violated status of the constraint for the sample x.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
projection
(self, x)Project x onto feasible domain / within the given constraint.
save
(self, path)Save class object using pickle.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
timed
([msg])Timer decorator.
-
property
center
¶ Center of the constraint.
-
property
radius
¶ Radius of the constraint.
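The L1 and L2 constraints share the same center/radius parametrization, so a point outside both balls is projected differently by each norm. The sketch below uses only the documented constructors and the inherited is_violated/projection methods; the package-level import path is assumed.

from secml.array import CArray
from secml.optim.constraints import CConstraintL1, CConstraintL2  # assumed import path

x = CArray([2.0, 2.0])

l1 = CConstraintL1(center=0, radius=1)  # ||x - center||_1 <= radius
l2 = CConstraintL2(center=0, radius=1)  # ||x - center||_2 <= radius

print(l1.is_violated(x), l2.is_violated(x))  # expected: True True
x_l1 = l1.projection(x)  # closest point on the L1 ball (tends to be sparser)
x_l2 = l2.projection(x)  # closest point on the L2 ball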
secml.figure¶
CFigure¶
-
class
secml.figure.c_figure.
CFigure
(height=6, width=6, title='', fontsize=12, linewidth=2, markersize=7)[source]¶ Bases:
secml.core.c_creator.CCreator
Creates a Figure.
A Figure is a collection of subplots. The last active subplot can be accessed by calling CFigure.sp, followed by the desired plotting function (plot, scatter, contour, etc.). Each subplot is identified by an index (grid slot) inside an imaginary grid (n_rows, n_columns, grid_slot), counting from left to right, from top to bottom. By default, a subplot is created in a single-row, single-column imaginary grid (1, 1, 1).
- Parameters
- heightscalar, optional
Height of the new figure. Default 6.
- widthscalar, optional
Width of the new figure. Default 6.
- titlestr, optional
Super title of the new figure. This is not the subplot title. To set a title for the active subplot use subtitle. Default is to not set a super title.
- linewidthfloat, optional
Define default linewidth for all subplots. Default 2.
- fontsizeint, optional
Define default fontsize for all subplots. Default 12.
- markersizescalar, optional
Define default markersize for all subplots. Default 7.
Notes
Not all matplotlib methods and/or parameters are currently available. If needed, directly access the matplotlib.Axes active subplot instance through the CFigure.sp._sp parameter.
Examples
>>> from secml.figure import CFigure
>>> fig = CFigure(fontsize=14) >>> fig.sp.plot([0, 1], color='red')
>>> fig.show() # This will open a new window with the figure
- Attributes
Methods
close
(self[, fig])Close current or input figure.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
get_class_from_type
(class_type)Return the class associated with input type.
get_default_params
(self)Return current defaults for the figure.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
save
(self, path)Save class object using pickle.
savefig
(self, fname[, dpi, facecolor, …])Save figure to disk.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
show
([block])Show all created figures.
subplot
(self[, n_rows, n_cols, grid_slot])Create a new subplot into specific position.
subplots_adjust
(self[, left, right, bottom, …])Tune the subplot layout.
tight_layout
(self[, pad, h_pad, w_pad, rect])Adjust space between plot and figure.
timed
([msg])Timer decorator.
title
(self, label, \*\*kwargs)Set the global title for current figure.
-
close
(self, fig=None)[source]¶ Close current or input figure.
- Parameters
- figCFigure or None
Handle to figure to close. If None (default), current figure is closed.
-
get_default_params
(self)[source]¶ Return current defaults for the figure.
- Returns
- default_parametersdict
Dictionary containing the default parameter values currently set for the figure.
-
property
n_sp
¶ Returns the number of created subplots.
-
savefig
(self, fname, dpi=None, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, file_format=None, transparent=False, bbox_inches=None, bbox_extra_artists=None, pad_inches=0.1, frameon=None)[source]¶ Save figure to disk.
- Parameters
- fnamestring
containing a path to a filename, or a Python file-like object. If file_format is None and fname is a string, the output file_format is deduced from the extension of the filename. If the filename has no extension, the value of the rc parameter savefig.file_format is used. If fname is not a string, remember to specify file_format to ensure that the correct backend is used.
- dpi[ None | scalar > 0 ], optional
The resolution in dots per inch. If None it will default to the value savefig.dpi in the matplotlibrc file.
- facecolor, edgecolorcolor or str, optional
The colors of the figure rectangle. Default ‘w’ (white).
- orientation: [ ‘landscape’ | ‘portrait’ ], optional
not supported on all backends; currently only on postscript output
- papertypestr, optional
One of ‘letter’, ‘legal’, ‘executive’, ‘ledger’, ‘a0’ through ‘a10’, ‘b0’ through ‘b10’. Only supported for postscript output.
- file_formatstr, optional
One of the file extensions supported by the active backend. Most backends support png, pdf, ps, eps and svg.
- transparentbool, optional
If True, the axes patches will all be transparent; the figure patch will also be transparent unless facecolor and/or edgecolor are specified via kwargs. This is useful, for example, for displaying a plot on top of a colored background on a web page. The transparency of these patches will be restored to their original values upon exit of this function.
- bbox_inchesscalar or str, optional
Bbox in inches. Only the given portion of the figure is saved. If ‘tight’, try to figure out the tight bbox of the figure.
- bbox_extra_artistslist
A list of extra artists that will be considered when the tight bbox is calculated.
- pad_inchesscalar
Amount of padding around the figure when bbox_inches is ‘tight’.
- frameonbool, optional
If True, the figure patch will be colored, if False, the figure background will be transparent. If not provided, the rcParam ‘savefig.frameon’ will be used.
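A short, hedged sketch of saving a figure with the parameters above (the file name and dpi value are arbitrary examples; the output format is deduced from the extension):

from secml.figure import CFigure

fig = CFigure()
fig.sp.plot([0, 1, 2], [0, 1, 4])
# format deduced from the '.png' extension; 'tight' trims surrounding whitespace
fig.savefig('demo.png', dpi=150, bbox_inches='tight')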
-
static
show
(block=True)[source]¶ Show all created figures.
- Parameters
- blockboolean, default True
If True, execution is halted until the shown figure(s) are closed.
-
property
sp
¶ Return reference to active subplot class.
If no subplot is available, creates a new standard subplot in (1,1,1) position and returns its reference.
-
subplot
(self, n_rows=1, n_cols=1, grid_slot=1, **kwargs)[source]¶ Create a new subplot into specific position.
Creates a new subplot, placing it in the grid_slot position of the n_rows * n_cols imaginary grid. Positions are indexed in raster-scan order.
If a subplot is created in a slot already occupied by another subplot, the old subplot will be reused.
- Parameters
- n_rowsint
Number of rows of the imaginary grid used for subdividing the figure. Default 1 (one row).
- n_colsint
Number of columns of the imaginary grid used for subdividing the figure. Default 1 (one column).
- grid_slotint or tuple
If int, raster scan position of the new subplot. Default 1. If tuple, index of the slot of the grid. Each dimension can be specified as an int or a slice, e.g. in a 3 x 3 subplot grid, grid_slot=(0, slice(1, 3)) will create a subplot at row index 0 that spans between columns index 1 and 2.
Examples
import numpy as np
import matplotlib.pyplot as plt

from secml.figure import CFigure

fig = CFigure(fontsize=16)

# create a new subplot
fig.subplot(2, 2, 1)
x = np.linspace(-np.pi, np.pi, 100)
y = 2 * np.sin(x)
# function `plot` will be applied to the last subplot created
fig.sp.plot(x, y)

# subplot indices are the same as the first subplot,
# so the following function will be run inside the previous plot
fig.subplot(2, 2, 1)
y = x
fig.sp.plot(x, y)

# create a new subplot
fig.subplot(2, 2, 3)
fig.sp.plot(x, y)

fig.subplot(2, 2, grid_slot=(1, slice(2)))
y = 2 * np.sin(x)
fig.sp.plot(x, y)

plt.show()
-
subplots_adjust
(self, left=0.125, right=0.9, bottom=0.1, top=0.9, wspace=0.2, hspace=0.2)[source]¶ Tune the subplot layout.
- Parameters
- leftfloat, default 0.125
Left side of the subplots.
- rightfloat, default 0.9
Right side of the subplots.
- bottomfloat, default 0.1
Bottom of the subplots.
- topfloat, default 0.9
Top of the subplots.
- wspacefloat, default 0.2
Amount of width reserved for blank space between subplots.
- hspacefloat, default 0.2
Amount of height reserved for white space between subplots
Examples
from secml.array import CArray
from secml.figure import CFigure

n = 5
fig = CFigure()

x = CArray.arange(100)
y = 3. * (x * 2. * 3.14 / 100.).sin()

for i in range(n):
    sp = fig.subplot(n, 1, i + 1)
    fig.sp.plot(x, y)

# to add space from the figure's border, increase the default parameter values
fig.subplots_adjust(bottom=0.4, top=0.85, hspace=0.001)
fig.sp.xticklabels(())
fig.sp.yticklabels(())
fig.show()
-
tight_layout
(self, pad=1.08, h_pad=None, w_pad=None, rect=None)[source]¶ Adjust space between plot and figure.
- Parameters
- padfloat, default 1.08
Padding between the figure edge and the edges of subplots, as a fraction of the font-size.
- h_pad, w_padfloat, defaults to pad_inches.
padding (height/width) between edges of adjacent subplots.
- recttuple of scalars, default is (0, 0, 1, 1).
(left, bottom, right, top) in the normalized figure coordinate that the whole subplots area (including labels) will fit into.
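A minimal sketch of calling tight_layout after plotting on two stacked subplots, using the documented default padding (values and data are illustrative):

from secml.figure import CFigure

fig = CFigure()
fig.subplot(2, 1, 1)
fig.sp.plot([0, 1], [0, 1])
fig.subplot(2, 1, 2)
fig.sp.plot([0, 1], [1, 0])
fig.tight_layout(pad=1.08)  # padding as a fraction of the font size
fig.show()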
CPlot¶
All methods provided by CPlot
and subclasses are available
by calling the active subplot via CFigure.sp
Warning
This class should never be instantiated explicitly; it will be created by CFigure.
-
class
secml.figure._plots.c_plot.
CPlot
(sp, default_params)[source]¶ Interface for standard plots.
This class provides an interface and a few other methods useful for standard plot creation.
This class should never be instantiated explicitly; it will be created by CFigure.
- Parameters
- spAxes
Subplot to use for plotting. Instance of matplotlib.axes.Axes.
- default_paramsdict
Dictionary with default parameters.
See also
CFigure
creates and handle figures.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
bar
(self, left, height, width=0.8, bottom=None, *args, **kwargs)[source]¶ Bar plot.
- Parameters
- leftsequence of scalars
x coordinates of the left sides of the bars.
- heightsequence of scalars
height(s) of the bars.
- widthscalar or array-like, optional, default: 0.8
width(s) of the bars.
- bottomscalar or array-like, optional, default: None
y coordinate(s) of the bars.
- colorscalar or array-like, optional
Colors of the bar faces.
- edgecolorscalar or array-like, optional
Colors of the bar edges.
- linewidthscalar or array-like, optional, default: None
Width of bar edge(s). If None, use default linewidth; If 0, don’t draw edges.
- xerrscalar or array-like, optional, default: None
If not None, will be used to generate errorbar(s) on the bar chart.
- yerrscalar or array-like, optional, default: None
If not None, will be used to generate errorbar(s) on the bar chart.
- ecolorscalar or array-like, optional, default: None
Specifies the color of errorbar(s)
- capsizeinteger, optional, default: 3
Determines the length in points of the error bar caps.
- error_kwdict
dictionary of kwargs to be passed to errorbar method. ecolor and capsize may be specified here rather than independent kwargs.
- align[‘edge’ | ‘center’], optional, default: ‘edge’
If edge, aligns bars by their left edges (for vertical bars) and by their bottom edges (for horizontal bars). If center, interpret the left argument as the coordinates of the centers of the bars.
- orientation‘vertical’ | ‘horizontal’, optional, default: ‘vertical’
The orientation of the bars.
- logboolean, optional, default: False
If true, sets the axis to be log scale.
- Returns
- bar_listlist of bar type objects
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=12)

n = 12
X = CArray.arange(n)
Y1 = (1 - X / float(n)) * (1.0 - 0.5) * CArray.rand((n,)) + 0.5
Y2 = (1 - X / float(n)) * (1.0 - 0.5) * CArray.rand((n,)) + 0.5

fig.sp.xticks([0.025, 0.025, 0.95, 0.95])
fig.sp.bar(X, Y1, facecolor='#9999ff', edgecolor='white')
fig.sp.bar(X, -Y2, facecolor='#ff9999', edgecolor='white')

for x, y in zip(X, Y1):
    fig.sp.text(x + 0.4, y + 0.05, '%.2f' % y, ha='center', va='bottom')

for x, y in zip(X, Y2):
    fig.sp.text(x + 0.4, -y - 0.05, '%.2f' % y, ha='center', va='top')

fig.sp.xlim(-.5, n)
fig.sp.xticks(())
fig.sp.ylim(-1.25, 1.25)
fig.sp.yticks(())
fig.sp.grid()
fig.show()
-
barh
(self, bottom, width, height=0.8, left=None, *args, **kwargs)[source]¶ Horizontal bar plot.
- Parameters
- bottomsequence of scalars
y coordinates of the bars.
- widthsequence of scalars
width(s) of the bars.
- heightscalar or array-like, optional, default: 0.8
height(s) of the bars.
- leftscalar or array-like, optional, default: None
x coordinate(s) of the bars.
- colorscalar or array-like, optional
Colors of the bar faces.
- edgecolorscalar or array-like, optional
Colors of the bar edges.
- linewidthscalar or array-like, optional, default: None
Width of bar edge(s). If None, use default linewidth; If 0, don’t draw edges.
- xerrscalar or array-like, optional, default: None
If not None, will be used to generate errorbar(s) on the bar chart.
- yerrscalar or array-like, optional, default: None
If not None, will be used to generate errorbar(s) on the bar chart.
- ecolorscalar or array-like, optional, default: None
Specifies the color of errorbar(s)
- capsizeinteger, optional, default: 3
Determines the length in points of the error bar caps.
- error_kwdict
dictionary of kwargs to be passed to errorbar method. ecolor and capsize may be specified here rather than independent kwargs.
- align[‘edge’ | ‘center’], optional, default: ‘edge’
If edge, aligns bars by their left edges (for vertical bars) and by their bottom edges (for horizontal bars). If center, interpret the left argument as the coordinates of the centers of the bars.
- orientation‘vertical’ | ‘horizontal’, optional, default: ‘vertical’
The orientation of the bars.
- logboolean, optional, default: False
If true, sets the axis to be log scale.
- Returns
- bar_listlist of bar type objects
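Since no example is provided for the horizontal variant, here is a minimal sketch built on the signature above, barh(bottom, width[, height, left]); the data are arbitrary:

from secml.array import CArray
from secml.figure import CFigure

fig = CFigure()
y_pos = CArray.arange(4)       # y coordinates of the bars
widths = CArray([3, 1, 4, 2])  # bar lengths along the x axis
fig.sp.barh(y_pos, widths, height=0.6)
fig.sp.grid()
fig.show()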
-
boxplot
(self, x, notch=False, sym=None, vert=True, whis=1.5, positions=None, widths=None, patch_artist=False, bootstrap=None, usermedians=None, conf_intervals=None, meanline=False, showmeans=False, showcaps=True, showbox=True, showfliers=True, boxprops=None, labels=None, flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None, manage_xticks=True)[source]¶ Make a box and whisker plot.
Make a box and whisker plot for each column of x or each vector in sequence x. The box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers.
- Parameters
- xArray or a sequence of vectors.
The input data.
- notchbool, default = False
If False, produces a rectangular box plot. If True, will produce a notched box plot
- symstr or None, default = None
The default symbol for flier points. Enter an empty string (‘’) if you don’t want to show fliers. If None, then the fliers default to ‘b+’ If you want more control use the flierprops kwarg.
- vertbool, default = True
If True (default), makes the boxes vertical. If False, makes horizontal boxes.
- whisfloat, sequence (default = 1.5) or string
As a float, determines the reach of the whiskers past the first and third quartiles (e.g., Q3 + whis*IQR, IQR = interquartile range, Q3-Q1). Beyond the whiskers, data are considered outliers and are plotted as individual points. Set this to an unreasonably high value to force the whiskers to show the min and max values. Alternatively, set this to an ascending sequence of percentile (e.g., [5, 95]) to set the whiskers at specific percentiles of the data. Finally, whis can be the string ‘range’ to force the whiskers to the min and max of the data. In the edge case that the 25th and 75th percentiles are equivalent, whis will be automatically set to ‘range’.
- bootstrapNone (default) or integer
Specifies whether to bootstrap the confidence intervals around the median for notched boxplots. If bootstrap==None, no bootstrapping is performed, and notches are calculated using a Gaussian-based asymptotic approximation (see McGill, R., Tukey, J.W., and Larsen, W.A., 1978, and Kendall and Stuart, 1967). Otherwise, bootstrap specifies the number of times to bootstrap the median to determine its 95% confidence intervals. Values between 1000 and 10000 are recommended.
- usermediansarray-like or None (default)
An array or sequence whose first dimension (or length) is compatible with x. This overrides the medians computed by matplotlib for each element of usermedians that is not None. When an element of usermedians == None, the median will be computed by matplotlib as normal.
- conf_intervalsarray-like or None (default)
Array or sequence whose first dimension (or length) is compatible with x and whose second dimension is 2. When the current element of conf_intervals is not None, the notch locations computed by matplotlib are overridden (assuming notch is True). When an element of conf_intervals is None, boxplot computes the notches using the method specified by the other kwargs (e.g., bootstrap).
- positionsarray-like, default = [1, 2, …, n]
Sets the positions of the boxes. The ticks and limits are automatically set to match the positions.
- widthsarray-like, default = 0.5
Either a scalar or a vector; sets the width of each box. The default is 0.5, or 0.15*(distance between extreme positions) if that is smaller.
- labelssequence or None (default)
Labels for each dataset. Length must be compatible with the dimensions of x.
- patch_artistbool, default = False
If False produces boxes with the Line2D artist If True produces boxes with the Patch artist
- showmeansbool, default = False
If True, will toggle on the rendering of the means
- showcapsbool, default = True
If True, will toggle on the rendering of the caps
- showboxbool, default = True
If True, will toggle on the rendering of the box
- showfliersbool, default = True
If True, will toggle on the rendering of the fliers
- boxpropsdict or None (default)
If provided, will set the plotting style of the boxes
- whiskerpropsdict or None (default)
If provided, will set the plotting style of the whiskers
- cappropsdict or None (default)
If provided, will set the plotting style of the caps
- flierpropsdict or None (default)
If provided, will set the plotting style of the fliers
- medianpropsdict or None (default)
If provided, will set the plotting style of the medians
- meanpropsdict or None (default)
If provided, will set the plotting style of the means
- meanlinebool, default = False
If True (and showmeans is True), will try to render the mean as a line spanning the full width of the box according to meanprops. Not recommended if shownotches is also True. Otherwise, means will be shown as points.
- Returns
- resultdict
A dictionary mapping each component of the boxplot to a list of the matplotlib.lines.Line2D instances created. That dictionary has the following keys (assuming vertical boxplots):
boxes: the main body of the boxplot showing the quartiles and the median’s confidence intervals if enabled.
medians: horizontal lines at the median of each box.
whiskers: the vertical lines extending to the most extreme, n-outlier data points.
caps: the horizontal lines at the ends of the whiskers.
fliers: points representing data that extend beyond the whiskers (outliers).
means: points or lines representing the means.
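A minimal sketch of a box-and-whisker plot for two datasets, using only parameters documented above; the random data and labels are illustrative:

from secml.array import CArray
from secml.figure import CFigure

# two datasets with different location and scale
data = [CArray.randn((100,)), 2 + 0.5 * CArray.randn((100,))]

fig = CFigure()
res = fig.sp.boxplot(data, notch=False, labels=['a', 'b'], showmeans=True)
fig.sp.grid()
fig.show()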
-
clabel
(self, contour, *args, **kwargs)[source]¶ Label a contour plot.
- Parameters
- contourcontour object
returned from contour function
- fontsizeint
size in points or relative size e.g., ‘smaller’, ‘x-large’
- colorsstr
if None, the color of each label matches the color of the corresponding contour if one string color, e.g., colors = ‘r’ or colors = ‘red’, all labels will be plotted in this color if a tuple of matplotlib color args (string, float, rgb, etc), different labels will be plotted in different colors in the order specified
- inlinebool
controls whether the underlying contour is removed or not. Default is True.
- inline_spacingint
space in pixels to leave on each side of label when placing inline. Defaults to 5. This spacing will be exact for labels at locations where the contour is straight, less so for labels on curved contours.
- fmtstr
a format string for the label. Default is ‘%1.3f’ Alternatively, this can be a dictionary matching contour levels with arbitrary strings to use for each contour level (i.e., fmt[level]=string), or it can be any callable, such as a Formatter instance, that returns a string when called with a numeric contour level.
- manualbool
if True, contour labels will be placed manually using mouse clicks. Click the first button near a contour to add a label, click the second button (or potentially both mouse buttons at once) to finish adding labels. The third button can be used to remove the last label added, but only if labels are not inline. Alternatively, the keyboard can be used to select label locations (enter to end label placement, delete or backspace act like the third mouse button, and any other key will select a label location). manual can be an iterable object of x,y tuples. Contour labels will be created as if mouse is clicked at each x,y positions.
- rightside_upbool
if True (default), label rotations will always be plus or minus 90 degrees from level.
Examples
from secml.array import CArray
from secml.figure import CFigure

def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * (-x ** 2 - y ** 2).exp()

fig = CFigure()

x_linspace = CArray.linspace(-3, 3, 256)
y_linspace = CArray.linspace(-3, 3, 256)
X, Y = CArray.meshgrid((x_linspace, y_linspace))

C = fig.sp.contour(X, Y, f(X, Y), linewidths=.5, cmap='hot')
fig.sp.clabel(C, inline=1, fontsize=10)

fig.sp.xticks(())
fig.sp.yticks(())
fig.show()
-
colorbar
(self, mappable, ticks=None, *args, **kwargs)[source]¶ Add colorbar to plot.
- Parameters
- mappableobject
Image, ContourSet, or other to which the colorbar applies
- use_gridspecboolean, default False
if True colorbar is created as an instance of Subplot using the grid_spec module.
- Additional keyword arguments are of two kinds:
- Axes properties:
Property
Description
orientation
vertical or horizontal
fraction, default 0.15
fraction of original axes to use for colorbar
pad, default 0.05 if vertical, 0.15 if horizontal
fraction of original axes between colorbar and new image axes
shrink, default 1.0
fraction by which to shrink the colorbar
aspect, default 20
ratio of long to short dimensions
anchor, default (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal
the anchor point of the colorbar axes
panchor, default (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal;
the anchor point of the colorbar parent axes. If False, the parent axes’ anchor will be unchanged
- Colorbar properties:
Property
Description
extend
[ ‘neither’ | ‘both’ | ‘min’ | ‘max’ ] If not ‘neither’, make pointed end(s) for out-of- range values. These are set for a given colormap using the colormap set_under and set_over methods.
extendfrac
[ None | ‘auto’ | length | lengths ] If set to None, both the minimum and maximum triangular colorbar extensions will have a length of 5% of the interior colorbar length (this is the default setting). If set to ‘auto’, makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to ‘uniform’) or the same lengths as the respective adjacent interior boxes (when spacing is set to ‘proportional’). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length.
extendrect
[ False | True ] If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular.
spacing
[ ‘uniform’ | ‘proportional’ ] Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval.
ticks
[ None | list of ticks | Locator object ] If None, ticks are determined automatically from the input.
format
[ None | format string | Formatter object ] If None, the ScalarFormatter is used. If a format string is given, e.g., ‘%.3f’, that is used. An alternative Formatter object may be given instead.
drawedges
[ False | True ] If true, draw lines at color boundaries.
Notes
If mappable is a ContourSet, its extend kwarg is included automatically. Note that the shrink kwarg provides a simple way to keep a vertical colorbar. If the colorbar is too tall (or a horizontal colorbar is too wide) use a smaller value of shrink.
Examples
from secml.array import CArray
from secml.figure import CFigure

def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * (-x ** 2 - y ** 2).exp()

fig = CFigure(width=10, title="Colorbar Example")

fig.subplot(1, 2, 1)
x_linspace = CArray.linspace(-3, 3, 256)
y_linspace = CArray.linspace(-3, 3, 256)
X, Y = CArray.meshgrid((x_linspace, y_linspace))
c = fig.sp.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap='hot')
fig.sp.colorbar(c)
fig.sp.title("Hot Contourf")
fig.sp.xticks(())
fig.sp.yticks(())

fig.subplot(1, 2, 2)
c = fig.sp.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap='winter')
fig.sp.colorbar(c)
fig.sp.title("Cold Contourf")
fig.sp.xticks(())
fig.sp.yticks(())

fig.show()
-
contour
(self, x, y, z, *args, **kwargs)[source]¶ Draw contour lines of a function.
- Parameters
- x, yCArray or list
specify the (x, y) coordinates of the surface. X and Y must both be 2-D with the same shape as Z, or they must both be 1-D such that len(X) is the number of columns in Z and len(Y) is the number of rows in Z.
- zCArray or list
value into (x, y) surface’s position
- colors[ None | string | (mpl_colors) ]
If None, the colormap specified by cmap will be used. If a string, like ‘r’ or ‘red’, all levels will be plotted in this color. If a tuple of matplotlib color args (string, float, rgb, etc), different levels will be plotted in different colors in the order specified.
- alphafloat
The alpha blending value
- cmap[ None | Colormap ]
A cm Colormap instance or None. If cmap is None and colors is None, a default Colormap is used.
- vmin, vmax[ None | scalar ]
If not None, either or both of these values will be supplied to the matplotlib.colors. Normalize instance, overriding the default color scaling based on levels.
- levels[level0, level1, …, leveln]
A list of floating point numbers indicating the level curves to draw; e.g., to draw just the zero contour pass levels=[0]
- origin[ None | ‘upper’ | ‘lower’ | ‘image’ ]
If None, the first value of Z will correspond to the lower left corner, location (0,0). If ‘image’, the default parameter value for image.origin will be used. This keyword is not active if X and Y are specified in the call to contour.
- extent[ None | (x0,x1,y0,y1) ]
If origin is not None, then extent is interpreted as in matplotlib.pyplot.imshow(): it gives the outer pixel boundaries. In this case, the position of Z[0,0] is the center of the pixel, not a corner. If origin is None, then (x0, y0) is the position of Z[0,0], and (x1, y1) is the position of Z[-1,-1]. This keyword is not active if X and Y are specified in the call to contour.
- extend[ ‘neither’ | ‘both’ | ‘min’ | ‘max’ ]
Unless this is ‘neither’, contour levels are automatically added to one or both ends of the range so that all data are included. These added ranges are then mapped to the special colormap values which default to the ends of the colormap range.
- antialiased[ True | False ]
enable antialiasing, overriding the defaults. For filled contours, the default is True. For line contours, it is taken from default_parameters [‘lines.antialiased’].
- linewidths[ None | number | tuple of numbers ]
If linewidths is None, the default width in lines.linewidth default_parameters is used. If a number, all levels will be plotted with this linewidth. If a tuple, different levels will be plotted with different linewidths in the order specified.
- linestyles[ None | ‘solid’ | ‘dashed’ | ‘dashdot’ | ‘dotted’ ]
If linestyles is None, the default is ‘solid’ unless the lines are monochrome. In that case, negative contours will take their linestyle from the matplotlibrc contour.negative_ linestyle setting. linestyles can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary.
Examples
from secml.array import CArray
from secml.figure import CFigure

def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * (-x ** 2 - y ** 2).exp()

fig = CFigure()

x_linspace = CArray.linspace(-3, 3, 256)
y_linspace = CArray.linspace(-3, 3, 256)
X, Y = CArray.meshgrid((x_linspace, y_linspace))

C = fig.sp.contour(X, Y, f(X, Y), linewidths=.5, cmap='hot')
fig.sp.xticks(())
fig.sp.yticks(())
fig.show()
-
contourf
(self, x, y, z, *args, **kwargs)[source]¶ Draw filled contour of a function.
- Parameters
- x, yCArray or list
specify the (x, y) coordinates of the surface. X and Y must both be 2-D with the same shape as Z, or they must both be 1-D such that len(X) is the number of columns in Z and len(Y) is the number of rows in Z.
- zCArray or list
value into (x, y) surface’s position
- colors[ None | string | (mpl_colors) ]
If None, the colormap specified by cmap will be used. If a string, like ‘r’ or ‘red’, all levels will be plotted in this color. If a tuple of matplotlib color args (string, float, rgb, etc), different levels will be plotted in different colors in the order specified.
- alphafloat
The alpha blending value
- cmap[ None | Colormap ]
A cm Colormap instance or None. If cmap is None and colors is None, a default Colormap is used.
- vmin, vmax[ None | scalar ]
If not None, either or both of these values will be supplied to the matplotlib.colors. Normalize instance, overriding the default color scaling based on levels.
- levels[level0, level1, …, leveln]
A list of floating point numbers indicating the level curves to draw; e.g., to draw just the zero contour pass levels=[0]
- origin[ None | ‘upper’ | ‘lower’ | ‘image’ ]
If None, the first value of Z will correspond to the lower left corner, location (0,0). If ‘image’, the default parameter value for image.origin will be used. This keyword is not active if X and Y are specified in the call to contour.
- extent[ None | (x0,x1,y0,y1) ]
If origin is not None, then extent is interpreted as in matplotlib.pyplot.imshow(): it gives the outer pixel boundaries. In this case, the position of Z[0,0] is the center of the pixel, not a corner. If origin is None, then (x0, y0) is the position of Z[0,0], and (x1, y1) is the position of Z[-1,-1]. This keyword is not active if X and Y are specified in the call to contour.
- extend[ ‘neither’ | ‘both’ | ‘min’ | ‘max’ ]
Unless this is ‘neither’, contour levels are automatically added to one or both ends of the range so that all data are included. These added ranges are then mapped to the special colormap values which default to the ends of the colormap range.
- antialiased[ True | False ]
enable antialiasing, overriding the defaults. For filled contours, the default is True. For line contours, it is taken from default_parameters [‘lines.antialiased’].
Examples
from secml.array import CArray
from secml.figure import CFigure

def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * (-x ** 2 - y ** 2).exp()

fig = CFigure()

x_linspace = CArray.linspace(-3, 3, 256)
y_linspace = CArray.linspace(-3, 3, 256)
X, Y = CArray.meshgrid((x_linspace, y_linspace))

fig.sp.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap='hot')
fig.sp.xticks(())
fig.sp.yticks(())
fig.show()
-
errorbar
(self, x, y, xerr=None, yerr=None, *args, **kwargs)[source]¶ Plot with error deltas in yerr and xerr.
Vertical errorbars are plotted if yerr is not None. Horizontal errorbars are plotted if xerr is not None. x, y, xerr, and yerr can all be scalars, which plots a single error bar at x, y.
- Parameters
- xlist or CArray
x axis values.
- ylist or CArray
y axis values.
- xerr, yerr[ scalar | N, Nx1, or 2xN array-like ], default None
If a scalar number, len(N) array-like object, or an Nx1 array-like object, errorbars are drawn at +/-value relative to the data. If a sequence of shape 2xN, errorbars are drawn at -row1 and +row2 relative to the data.
- fmt[ ‘’ | ‘none’ | plot format string ], default ‘’
The plot format symbol. If fmt is ‘none’ (case-insensitive), only the errorbars are plotted. This is used for adding errorbars to a bar plot, for example. Default is ‘’, an empty plot format string; properties are then identical to the defaults for plot().
- ecolor[ None | mpl color ], default None
A matplotlib color arg which gives the color the errorbar lines; if None, use the color of the line connecting the markers.
- elinewidthscalar, default None
The linewidth of the errorbar lines. If None, use the linewidth.
- capsizescalar, default 3
The length of the error bar caps in points.
- capthickscalar, default None
An alias kwarg to markeredgewidth (a.k.a. mew). This setting is a more sensible name for the property that controls the thickness of the error bar cap in points. For backwards compatibility, if mew or markeredgewidth are given, then they will override capthick. This may change in future releases.
- barsabove[ True | False ]
if True, will plot the errorbars above the plot symbols. Default is below.
- lolims, uplims, xlolims, xuplims[ False | True ], default False
These arguments can be used to indicate that a value gives only upper/lower limits. In that case a caret symbol is used to indicate this. lims-arguments may be of the same type as xerr and yerr. To use limits with inverted axes, set_xlim() or set_ylim() must be called before errorbar().
- erroreverypositive integer, default 1
Subsamples the errorbars, e.g., if errorevery=5, errorbars for every 5th datapoint will be plotted. The data plot itself still shows all data points.
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=16)
fig.title('Errorbars can go negative!')

fig.sp.xscale("symlog", nonposx='clip')
fig.sp.yscale("symlog", nonposy='clip')

x = CArray(10.0).pow(CArray.linspace(0.0, 2.0, 20))
y = x ** 2.0

fig.sp.errorbar(x, y, xerr=0.1 * x, yerr=5.0 + 0.75 * y)
fig.sp.ylim(bottom=0.1)
fig.sp.grid()
fig.show()
-
fill_between
(self, x, y1, y2=0, where=None, interpolate=False, step=None, **kwargs)[source]¶ Fill the area between two horizontal curves.
The curves are defined by the points (x, y1) and (x, y2). This creates one or multiple polygons describing the filled area.
You may exclude some horizontal sections from filling using where.
By default, the edges connect the given points directly. Use step if the filling should be a step function, i.e. constant in between x.
- Parameters
- xCArray (length N)
The x coordinates of the nodes defining the curves.
- y1CArray (length N) or scalar
The y coordinates of the nodes defining the first curve.
- y2CArray (length N) or scalar, optional, default: 0
The y coordinates of the nodes defining the second curve.
- whereCArray of bool (length N), optional, default: None
Define where to exclude some horizontal regions from being filled. The filled regions are defined by the coordinates x[where]. More precisely, fill between x[i] and x[i+1] if where[i] and where[i+1]. Note that this definition implies that an isolated True value between two False values in where will not result in filling. Both sides of the True position remain unfilled due to the adjacent False values.
- interpolatebool, optional
This option is only relevant if where is used and the two curves are crossing each other. Semantically, where is often used for y1 > y2 or similar. By default, the nodes of the polygon defining the filled region will only be placed at the positions in the x array. Such a polygon cannot describe the above semantics close to the intersection. The x-sections containing the intersection are simply clipped. Setting interpolate to True will calculate the actual intersection point and extend the filled region up to this point.
- step{‘pre’, ‘post’, ‘mid’}, optional
Define step if the filling should be a step function, i.e. constant in between x. The value determines where the step will occur:
‘pre’: The y value is continued constantly to the left from every x position, i.e. the interval (x[i-1], x[i]] has the value y[i].
‘post’: The y value is continued constantly to the right from every x position, i.e. the interval [x[i], x[i+1]) has the value y[i].
‘mid’: Steps occur half-way between the x positions.
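A minimal sketch of fill_between, shading only where the first curve lies above the second; it assumes the elementwise comparison y1 > y2 yields the boolean CArray expected by where:

from secml.array import CArray
from secml.figure import CFigure

x = CArray.linspace(0, 6.28, 200)
y1 = x.sin()
y2 = 0.5 * x.cos()

fig = CFigure()
fig.sp.plot(x, y1)
fig.sp.plot(x, y2)
# shade only where y1 is above y2; interpolate refines the crossing points
fig.sp.fill_between(x, y1, y2, where=(y1 > y2), interpolate=True)
fig.show()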
-
get_legend_handles_labels
(self)[source]¶ Return handles and labels for legend contained by the subplot.
-
get_xticks_idx
(self, xticks)[source]¶ Returns the position of markers to plot.
- Parameters
- xticksCArray
Ticks of x-axis where marker should be plotted.
- Returns
- ticks_idxlist
List with the position of each xtick.
Notes
If a given xtick is not exactly available, the closest value’s position will be returned.
-
grid
(self, grid_on=True, axis='both', **kwargs)[source]¶ Draw grid for current plot.
- Parameters
- grid_onboolean, default True
if True show grid, elsewhere hide grid.
- axisstring, default ‘both’
can be ‘both’ (default), ‘x’, or ‘y’ to control which set of gridlines are drawn.
- kwargsany
Other keyword arguments for grid.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-', label="cosine")
fig.sp.plot(X, S, label="sine")

fig.sp.xticks(CArray([-3.14, -3.14 / 2, 0, 3.14 / 2, 3.14]))
fig.sp.yticks(CArray([-1, 0, +1]))

fig.sp.grid()
fig.sp.legend(loc=0)
fig.show()
-
hist
(self, x, *args, **kwargs)[source]¶ Plot a histogram.
Compute and draw the histogram of x.
The return value is a tuple (n, bins, patches) or ([n0, n1, …], bins, [patches0, patches1,…]) if the input contains multiple data.
Multiple data can be provided via x as a list of datasets of potentially different length ([x0, x1, …]), or as a 2-D ndarray in which each column is a dataset.
- Parameters
- x(n,) array or sequence of (n,) arrays
Input values; this takes either a single array or a sequence of arrays which are not required to be of the same length.
- binsinteger or array_like, optional, default is 10
If an integer is given, bins + 1 bin edges are returned. Unequally spaced bins are supported if bins is a sequence.
- rangetuple or None, optional
The lower and upper range of the bins. Lower and upper outliers are ignored. If not provided, range is (x.min(), x.max()). Range has no effect if bins is a sequence. If bins is a sequence or range is specified, autoscaling is based on the specified bin range instead of the range of x.
- normedboolean, optional
If True, the first element of the return tuple will be the counts normalized to form a probability density, i.e., n/(len(x)*dbin), so that the integral of the histogram will sum to 1. If stacked is also True, the sum of the histograms is normalized to 1.
- weights(n, ) array_like or None, optional
An array of weights, of the same shape as x. Each value in x only contributes its associated weight towards the bin count (instead of 1). If normed is True, the weights are normalized, so that the integral of the density over the range remains 1.
- cumulativeboolean, optional
Default False. If True, then a histogram is computed where each bin gives the counts in that bin plus all bins for smaller values. The last bin gives the total number of datapoints. If normed is also True then the histogram is normalized such that the last bin equals 1. If cumulative evaluates to less than 0 (e.g., -1), the direction of accumulation is reversed. In this case, if normed is also True, then the histogram is normalized such that the first bin equals 1.
- bottomarray_like, scalar, or None
Location of the bottom baseline of each bin. If a scalar, the base line for each bin is shifted by the same amount. If an array, each bin is shifted independently and the length of bottom must match the number of bins. If None, defaults to 0.
- histtype{‘bar’, ‘barstacked’, ‘step’, ‘stepfilled’}, optional
‘bar’ (default) is a traditional bar-type histogram. If multiple data are given the bars are arranged side by side.
‘barstacked’ is a bar-type histogram where multiple data are stacked on top of each other.
‘step’ generates a lineplot that is by default unfilled.
‘stepfilled’ generates a lineplot that is by default filled.
- align{‘left’, ‘mid’, ‘right’}, optional
‘left’: bars are centered on the left bin edges.
‘mid’: default, bars are centered between the bin edges.
‘right’: bars are centered on the right bin edges.
- orientation{‘horizontal’, ‘vertical’}, optional
If ‘horizontal’, barh will be used for bar-type histograms and the bottom kwarg will be the left edges.
- rwidthscalar or None, optional
The relative width of the bars as a fraction of the bin width. If None, automatically compute the width. Ignored if histtype is ‘step’ or ‘stepfilled’.
- logboolean, optional
Default False. If True, the histogram axis will be set to a log scale. If log is True and x is a 1D array, empty bins will be filtered out and only the non-empty (n, bins, patches) will be returned.
- colorcolor or array_like of colors or None, optional
Color spec or sequence of color specs, one per dataset. Default (None) uses the standard line color sequence.
- labelstring or None, optional
String, or sequence of strings to match multiple datasets. Bar charts yield multiple patches per dataset, but only the first gets the label, so that the legend command will work as expected.
- stackedboolean, optional
If True, multiple data are stacked on top of each other. If False (default) multiple data are arranged side by side if histtype is ‘bar’ or on top of each other if histtype is ‘step’.
- Returns
- nCArray or list of arrays
The values of the histogram bins. See normed and weights for a description of the possible semantics. If input x is an array, then this is an array of length nbins. If input is a sequence arrays [data1, data2,..], then this is a list of arrays with the values of the histograms for each of the arrays in the same order.
- binsCArray
The edges of the bins. Length nbins + 1 (nbins left edges and right edge of last bin). Always a single array even when multiple data sets are passed in.
- patcheslist or list of lists
Silent list of individual patches used to create the histogram or list of such list if multiple input datasets.
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)

# example data
mu = 100  # mean of distribution
sigma = 15  # standard deviation of distribution
x = mu + sigma * CArray.randn((10000,))

num_bins = 50
# the histogram of the data
n, bins, patches = fig.sp.hist(x, num_bins, normed=1, facecolor='green', alpha=0.5)

# add a 'best fit' line
y = bins.normpdf(mu, sigma)
fig.sp.plot(bins, y, 'r--')

fig.sp.xlabel('Smarts')
fig.sp.ylabel('Probability')
fig.title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')

# Tweak spacing to prevent clipping of ylabel
fig.subplots_adjust(left=0.15)
fig.sp.grid()
fig.show()
-
legend
(self, *args, **kwargs)[source]¶ Create legend for plot.
- Parameters
- loc: integer or string or pair of floats, default: 0
Integer: Location
0: ‘best’
1: ‘upper right’
2: ‘upper left’
3: ‘lower left’
4: ‘lower right’
5: ‘right’
6: ‘center left’
7: ‘center right’
8: ‘lower center’
9: ‘upper center’
10: ‘center’
- bbox_to_anchortuple of floats
Specify any arbitrary location for the legend in bbox_transform coordinates (default Axes coordinates). For example, to put the legend’s upper right hand corner in the center of the axes the following keywords can be used: loc=’upper right’, bbox_to_anchor=(0.5, 0.5).
- ncolinteger
The number of columns that the legend has. Default is 1.
- propNone or dict
The font properties of the legend. If None (default), the current default parameters will be used.
- fontsizeint or float or {‘xx-small’, ‘x-small’, ‘small’, ‘medium’, ‘large’, ‘x-large’, ‘xx-large’}
Controls the font size of the legend. If the value is numeric the size will be the absolute font size in points. String values are relative to the current default font size. This argument is only used if prop is not specified.
- numpointsNone or int
The number of marker points in the legend when creating a legend entry for a line. Default is None which will take the value from the legend.numpoints default parameter.
- scatterpointsNone or int
The number of marker points in the legend when creating a legend entry for a scatter plot. Default is None which will take the value from the legend.scatterpoints default parameter.
- scatteryoffsetsiterable of floats
The vertical offset (relative to the font size) for the markers created for a scatter plot legend entry. 0.0 is at the base of the legend text, and 1.0 is at the top. To draw all markers at the same height, set to [0.5]. Default [0.375, 0.5, 0.3125].
- markerscaleNone or int or float
The relative size of legend markers compared with the originally drawn ones. Default is None which will take the value from the legend.markerscale default parameter.
- frameonNone or bool
Control whether a frame should be drawn around the legend. Default is None which will take the value from the legend.frameon default parameter.
- fancyboxNone or bool
Control whether round edges should be enabled around the FancyBboxPatch which makes up the legend’s background. Default is None which will take the value from the legend.fancybox default parameter.
- shadowNone or bool
Control whether to draw a shadow behind the legend. Default is None which will take the value from the legend.shadow default parameter.
- framealphaNone or float
Control the alpha transparency of the legend’s frame. Default is None which will take the value from the legend.framealpha default parameter.
- mode{“expand”, None}
If mode is set to “expand” the legend will be horizontally expanded to fill the axes area (or bbox_to_anchor if it defines the legend’s size).
- bbox_transformNone or matplotlib.transforms.Transform
The transform for the bounding box (bbox_to_anchor). For a value of None (default) the Axes’ transAxes transform will be used.
- titlestr or None
The legend’s title. Default is no title (None).
- borderpadfloat or None
The fractional whitespace inside the legend border. Measured in font-size units. Default is None which will take the value from the legend.borderpad default parameter.
- labelspacingfloat or None
The vertical space between the legend entries. Measured in font-size units. Default is None which will take the value from the legend.labelspacing default parameter.
- handlelengthfloat or None
The length of the legend handles. Measured in font-size units. Default is None which will take the value from the legend.handlelength default parameter.
- handletextpadfloat or None
The pad between the legend handle and text. Measured in font-size units. Default is None which will take the value from the legend.handletextpad default parameter.
- borderaxespadfloat or None
The pad between the axes and legend border. Measured in font-size units. Default is None which will take the value from the legend.borderaxespad default parameter.
- columnspacingfloat or None
The spacing between columns. Measured in font-size units. Default is None which will take the value from the legend.columnspacing default parameter.
- *args, **kwargs
Same as
text
.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-', label="cosine")
fig.sp.plot(X, S, label="sine")

fig.sp.grid()
fig.sp.legend(loc=0)

fig.show()
-
loglog
(self, x, y=None, *args, **kwargs)[source]¶ Plot with log scaling on both the x and y axis.
If only one array is given, it is assumed to be the y axis data; x axis values are set as the index array 0..N-1.
- Parameters
- xlist or CArray
x axis values.
- ylist or CArray
y axis values.
- basex, baseyscalar > 1, default is 10
Base of the x/y logarithm.
- subsx, subsy[ None | sequence ]
Where to place the subticks between each major tick. Should be a sequence of integers. For example, in a log10 scale: [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick.
- nonposx, nonposy[‘mask’ | ‘clip’ ], default ‘mask’.
Non-positive values in x or y can be masked as invalid, or clipped to a very small positive number.
See also
plot
Plot with standard axis.
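A minimal usage sketch, mirroring the semilogx/semilogy examples in this reference (the sample data is illustrative only):
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)

# decaying exponential, plotted with log scaling on both axes
t = CArray.arange(0.01, 20.0, 0.01)
fig.sp.loglog(t, (-t / 5.0).exp())

fig.sp.grid()
fig.sp.title('loglog')

fig.show()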
-
matshow
(self, array, *args, **kwargs)[source]¶ Plot an array as a matrix.
- Parameters
- arrayCArray
Array to plot as a matrix.
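A minimal sketch (assumption: CArray.arange and reshape behave like their NumPy counterparts):
from secml.array import CArray
from secml.figure import CFigure

# a 10x10 matrix with increasing values, displayed as an image
a = CArray.arange(100).reshape((10, 10))

fig = CFigure()
fig.sp.matshow(a)
fig.show()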
-
merge
(self, sp)[source]¶ Merge input subplot to active subplot.
- Parameters
- spCPlot
Subplot to be merged.
-
property
n_lines
¶ Returns the number of lines inside current subplot.
-
plot
(self, x, y=None, *args, **kwargs)[source]¶ Plot a line.
If only one array is given, it is assumed to be the y axis data; x axis values are set as the index array 0..N-1.
- Parameters
- xlist or CArray
x axis values
- ylist or CArray
y axis values
- colorstr
Character codes map to the following colors:
‘b’: blue
‘g’: green
‘r’: red
‘c’: cyan
‘m’: magenta
‘y’: yellow
‘k’: black
‘w’: white
- alphafloat, default 1.0
0.0 for transparent through 1.0 opaque
- linestylecharacter, default ‘-‘
Can be one of: [‘-‘ | ‘--’ | ‘-.’ | ‘:’ | ‘None’ | ‘ ‘ | ‘’]
- linewidthfloat
0.0 to 1.0
- markerstr
Character codes map to the following markers:
‘.’: point marker
‘,’: pixel marker
‘o’: circle marker
‘v’: triangle_down marker
‘^’: triangle_up marker
‘<’: triangle_left marker
‘>’: triangle_right marker
‘1’: tri_down marker
‘2’: tri_up marker
‘3’: tri_left marker
‘4’: tri_right marker
‘s’: square marker
‘p’: pentagon marker
‘*’: star marker
‘h’: hexagon1 marker
‘H’: hexagon2 marker
‘+’: plus marker
‘x’: x marker
‘D’: diamond marker
‘d’: thin_diamond marker
‘|’: vline marker
‘_’: hline marker
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.show()
-
plot_path
(self, path, path_style='-', path_width=1.5, path_color='k', straight=False, start_style='h', start_facecolor='r', start_edgecolor='k', start_edgewidth=1, final_style='*', final_facecolor='g', final_edgecolor='k', final_edgewidth=1)[source]¶ Plot a path traversed by a point.
By default, the path is drawn as a solid black line, the start point is drawn as a red hexagon and the end point as a green star.
- Parameters
- pathCArray
Every row contain one point coordinate.
- path_stylestr
Style for the path line. Default solid (-).
- path_widthint
Width of path line. Default 1.5.
- path_colorstr
Color for the path line. Default black (k).
- straightbool, default False
If True, path will be plotted straight between start and end point.
- start_stylestr
Style for the start point. Default is a hexagon (h).
- start_facecolorstr
Color for the start point. Default red (r).
- start_edgecolorstr
Color for the edge of the start point marker. Default black (k).
- start_edgewidthscalar
Width of the edge for the start point. Default 1.
- final_stylestr
Style for the end point. Default a star (*).
- final_facecolorstr
Color for the end point. Default green (g).
- final_edgecolorstr
Color for the edge of the final point marker. Default black (k).
- final_edgewidthscalar
Width of the edge for the end point. Default 1.
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)
fig.sp.title("5-points path")

path = CArray([[2, 2], [3, 2], [4, 7], [5, 4], [1, 3]])
fig.sp.plot_path(path)

fig.sp.xlim(0, 6)
fig.sp.ylim(1, 8)

fig.show()
-
quiver
(self, U, V, X=None, Y=None, color='k', linestyle='-', linewidth=1.0, alpha=1.0)[source]¶ A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
For example, the first vector is defined by components u(1), v(1) and is displayed at the point x(1), y(1).
quiver(x,y,u,v) plots vectors as arrows at the coordinates specified in each corresponding pair of elements in x and y.
quiver(u,v) draws vectors specified by u and v at equally spaced points in the x-y plane.
- Parameters
- U, V: scalar or CArray
Give the x and y components of the arrow vectors.
- X, Y: scalar or CArray, optional
The x and y coordinates of the arrow locations (default is tail of arrow; see pivot kwarg)
- colorstr, optional
Color of the arrows. Default black (k).
- linestylestr
[‘solid’ | ‘dashed’, ‘dashdot’, ‘dotted’ | (offset, on-off-dash-seq) | ‘-‘ | ‘--’ | ‘-.’ | ‘:’ | ‘None’ | ‘ ‘ | ‘’]
- linewidthfloat
Width of the line.
- alphafloat
Transparency of the arrows. Default 1.0.
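A minimal sketch using the quiver(u, v) form, which draws the arrows at equally spaced grid points (the vector components are illustrative only):
from secml.array import CArray
from secml.figure import CFigure

# x and y components of four arrows on a 2x2 grid
U = CArray([[1.0, 0.0], [0.0, -1.0]])
V = CArray([[0.0, 1.0], [1.0, -1.0]])

fig = CFigure()
fig.sp.quiver(U, V)
fig.show()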
-
scatter
(self, x, y, s=20, c='b', *args, **kwargs)[source]¶ Scatter plot of x vs y.
- Parameters
- x, ylist or CArray
Input data. Both objects must have the same size.
- sscalar or shape (n, ), optional, default: 20
size in points^2.
- ccolor or sequence of color, optional, default ‘b’
c can be a single color format string, or a sequence of color specifications of length N, or a sequence of numbers with the same shape of x,y to be mapped to colors using the cmap and norm specified via kwargs (see below). Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. c can be a 2-D array in which the rows are RGB or RGBA, however.
- markerMarkerStyle, optional, default: ‘o’
See markers for more information on the different styles of markers scatter supports.
- cmapColormap, optional, default: None
A Colormap instance or registered name. cmap is only used if c is an array of floats. If None, default parameter image.cmap is used.
- normNormalize, optional, default: None
A Normalize instance is used to scale luminance data to 0, 1. norm is only used if c is an array of floats.
- vmin, vmaxscalar, optional, default: None
vmin and vmax are used in conjunction with norm to normalize luminance data. If either is None, the min and max of the color array are used. Note if you pass a norm instance, your settings for vmin and vmax will be ignored.
- alphascalar, optional, default: None
The alpha blending value, between 0 (transparent) and 1 (opaque)
- linewidthsscalar or array_like, optional, default: None
If None, defaults to (lines.linewidth,). Note that this is a tuple, and if you set the linewidths argument you must set it as a sequence of float.
Examples
from secml.data.loader import CDLRandom
from secml.figure import CFigure

dataset = CDLRandom().load()

fig = CFigure(fontsize=14)
fig.sp.scatter(dataset.X[:, 0].ravel(), dataset.X[:, 1].ravel(), s=75, c=dataset.Y, alpha=.7)

fig.show()
-
semilogx
(self, x, y=None, *args, **kwargs)[source]¶ Plot with log scaling on the x axis.
If only one array is given, it is assumed to be the y axis data; x axis values are set as the index array 0..N-1.
- Parameters
- xlist or CArray
x axis values
- ylist or CArray
y axis values
- basexscalar > 1, default is 10
Base of the x logarithm
- subsx[ None | sequence ]
Where to place the subticks between each major tick. Sequence of integers. For example, in a log10 scale: [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick.
- nonposx[ ‘mask’ | ‘clip’ ], default ‘mask’
Non-positive values in x can be masked as invalid, or clipped to a very small positive number
See also
plot
Plot with standard axis.
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)

t = CArray.arange(0.01, 20.0, 0.01)
fig.sp.semilogx(t, (2 * 3.14 * t).sin())

fig.sp.grid()
fig.sp.title('semilogx')

fig.show()
-
semilogy
(self, x, y=None, *args, **kwargs)[source]¶ Plot with log scaling on the y axis.
If only one array is given, it is assumed to be the y axis data; x axis values are set as the index array 0..N-1.
- Parameters
- xlist or CArray
x axis values.
- ylist or CArray
y axis values.
- baseyscalar > 1, default is 10
Base of the y logarithm
- subsy[ None | sequence ], default None
Where to place the subticks between each major tick. Should be a sequence of integers. For example, in a log10 scale: [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick.
- nonposy[ ‘mask’ | ‘clip’ ], default ‘mask’
Non-positive values in y can be masked as invalid, or clipped to a very small positive number.
See also
plot
Plot with standard axis.
Examples
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)

t = CArray.arange(0.01, 20.0, 0.01)
fig.sp.semilogy(t, (-t / 5.0).exp())

fig.sp.title('semilogy')
fig.sp.grid()

fig.show()
-
set
(self, param_name, param_value, copy=False)[source]¶ Set a parameter that has a specific name to a specific value.
Only parameters, i.e. PUBLIC or READ/WRITE attributes, can be set.
- The following checks are performed before setting:
if the parameter is an attribute of the current class, set it directly;
else, iterate over __dict__ and look for a class attribute having the desired parameter as an attribute;
else, if the attribute is not found at the 2nd level, raise AttributeError.
If possible, a reference to the parameter to set is assigned. Use copy=True to always make a deepcopy before set.
- Parameters
- param_namestr
Name of the parameter to set.
- param_valueany
Value to set for the parameter.
- copybool
By default (False) a reference to the parameter to assign is set. If True or a reference cannot be extracted, a deepcopy of the parameter is done first.
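A short sketch of the behaviour described above, using the active subplot as the target object (a hedged illustration; verbose is a public attribute listed for the CPlot classes in this reference, and fig.sp is assumed to return the active subplot):
from secml.figure import CFigure

fig = CFigure()
sp = fig.sp  # active subplot, a CPlot instance exposing `set`

# direct public attribute: assigned by reference
sp.set('verbose', 1)

# force a deepcopy of the value before assignment
sp.set('verbose', 2, copy=True)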
-
set_params
(self, params_dict, copy=False)[source]¶ Set all parameters passed as a dictionary {key: value}.
This function natively takes as input the dictionary created by .get_params. Only parameters, i.e. PUBLIC or READ/WRITE attributes, can be set.
For more information on the setting behaviour, see .CCreator.set.
If possible, a reference to the parameter to set is assigned. Use copy=True to always make a deepcopy before set.
- Parameters
- params_dictdict
Dictionary of parameters to set.
- copybool
By default (False) a reference to the parameter to assign is set. If True or a reference cannot be extracted, a deepcopy of the parameter is done first.
See also
get_params
returns the dictionary of class parameters.
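A short sketch of the get_params/set_params round trip (a hedged illustration, again using the active subplots of two figures as example objects):
from secml.figure import CFigure

fig_a = CFigure()
fig_b = CFigure()

# copy all public parameters of one subplot onto another
params = fig_a.sp.get_params()
fig_b.sp.set_params(params, copy=True)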
-
text
(self, *args, **kwargs)[source]¶ Create a Text instance at x, y with string text.
- Parameters
- Any of the following keyword arguments is supported.
- Text properties:
alpha: float (0.0 transparent through 1.0 opaque)
animated: [True | False]
backgroundcolor: any valid color
bbox: rectangle prop dict
color: any valid color
family or fontfamily or fontname or name: [FONTNAME | ‘serif’ | ‘sans-serif’ | ‘cursive’ | ‘fantasy’ | ‘monospace’]
horizontalalignment or ha: [‘center’ | ‘right’ | ‘left’]
label: string or anything printable with ‘%s’ conversion
linespacing: float (multiple of font size)
position: (x, y)
rasterized: [True | False | None]
rotation: [angle in degrees | ‘vertical’ | ‘horizontal’]
size or fontsize: [size in points | ‘xx-small’ | ‘x-small’ | ‘small’ | ‘medium’ | ‘large’ | ‘x-large’ | ‘xx-large’]
stretch or fontstretch: [a numeric value in range 0-1000 | ‘ultra-condensed’ | ‘extra-condensed’ | ‘condensed’ | ‘semi-condensed’ | ‘normal’ | ‘semi-expanded’ | ‘expanded’ | ‘extra-expanded’ | ‘ultra-expanded’]
style or fontstyle: [‘normal’ | ‘italic’ | ‘oblique’]
text: string or anything printable with ‘%s’ conversion
verticalalignment or va or ma: [‘center’ | ‘top’ | ‘bottom’ | ‘baseline’]
visible: [True | False]
weight or fontweight: [a numeric value in range 0-1000 | ‘ultralight’ | ‘light’ | ‘normal’ | ‘regular’ | ‘book’ | ‘medium’ | ‘roman’ | ‘semibold’ | ‘demibold’ | ‘demi’ | ‘bold’ | ‘heavy’ | ‘extra bold’ | ‘black’]
x: float, x position of the text
y: float, y position of the text
zorder: any number; objects with lower zorder values are drawn first
- Font properties:
family: font name or font family, e.g. ‘serif’, ‘sans-serif’, ‘cursive’, ‘fantasy’ or ‘monospace’
style: one of ‘normal’, ‘italic’ or ‘oblique’
variant: ‘normal’ or ‘small-caps’
stretch: a numeric value in the range 0-1000 or one of ‘ultra-condensed’, ‘extra-condensed’, ‘condensed’, ‘semi-condensed’, ‘normal’, ‘semi-expanded’, ‘expanded’, ‘extra-expanded’ or ‘ultra-expanded’
weight: a numeric value in the range 0-1000 or one of ‘ultralight’, ‘light’, ‘normal’, ‘regular’, ‘book’, ‘medium’, ‘roman’, ‘semibold’, ‘demibold’, ‘demi’, ‘bold’, ‘heavy’, ‘extra bold’, ‘black’
size: either a relative value of ‘xx-small’, ‘x-small’, ‘small’, ‘medium’, ‘large’, ‘x-large’, ‘xx-large’ or an absolute font size, e.g. 12
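A minimal sketch of text usage (assumption: the positional x, y, string signature of the underlying matplotlib call; coordinates and properties are illustrative only):
from secml.figure import CFigure

fig = CFigure(fontsize=14)

# place an annotation at coordinates (0.5, 0.5)
fig.sp.text(0.5, 0.5, "annotation",
            fontsize=12, color='r', horizontalalignment='center')

fig.show()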
-
tick_params
(self, *args, **kwargs)[source]¶ Change the appearance of ticks and tick labels.
- Parameters
- axis[‘x’ | ‘y’ | ‘both’]
Axis on which to operate; default is ‘both’.
- reset[True | False]
Default False. If True, set all parameters to defaults before processing other keyword arguments.
- which[‘major’ | ‘minor’ | ‘both’]
Default is ‘major’; apply arguments to which ticks.
- direction[‘in’ | ‘out’ | ‘inout’]
Puts ticks inside the axes, outside the axes, or both.
- lengthint
Tick length in points.
- widthint
Tick width in points.
- colorstr
Tick color; accepts any mpl color spec.
- padint
Distance in points between tick and label.
- labelsizeint, str
Tick label font size in points or as a string (e.g., ‘large’).
- labelcolorstr
Tick label color; mpl color spec.
- colorsstr
Changes the tick color and the label color to the same value: mpl color spec.
- bottom, top, left, right[bool | ‘on’ | ‘off’]
controls whether to draw the respective ticks.
- labelbottom, labeltop, labelleft, labelrightbool, str
Boolean or [‘on’ | ‘off’], controls whether to draw the respective tick labels.
Examples
from secml.array import CArray
from secml.figure import CFigure
from secml.core.constants import pi

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.sp.xticks(CArray([-pi, -pi / 2, 0, pi / 2, pi]))
fig.sp.xticklabels(CArray(["- pi", "-pi/2", "0", "pi/2", "pi"]))
fig.sp.tick_params(direction='out', length=6, width=2, colors='r', right='off')
fig.sp.yticks(CArray([-1, 0, +1]))

fig.show()
-
xlabel
(self, label, *args, **kwargs)[source]¶ Set a label for the x axis.
- Parameters
- labelstring
Label’s text.
- *args, **kwargs
Same as
text
method.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.sp.xlabel("x", color='r', fontsize=10)

fig.show()
-
xlim
(self, bottom=None, top=None)[source]¶ Set axes x limits.
- Parameters
- bottomscalar
Starting value for the x axis.
- topscalar
Ending value for the x axis.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.sp.xlim(-3, 3)

fig.show()
-
xscale
(self, scale_type, nonposx='mask', basex=10, **kwargs)[source]¶ Set scale for x axis.
- Parameters
- scale_type{‘linear’, ‘log’, ‘symlog’, ‘logit’}
Scale for x axis. Default ‘linear’.
- nonposx: [ ‘mask’ | ‘clip’ ], default ‘mask’
Non-positive values in x can be masked as invalid, or clipped to a very small positive number.
- basexint
The base of the logarithm, must be higher than 1.
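A minimal sketch (assumption: setting xscale('log') after a standard plot gives the same result as calling semilogx directly, as in the semilogx example above):
from secml.array import CArray
from secml.figure import CFigure

fig = CFigure(fontsize=14)

t = CArray.arange(0.01, 20.0, 0.01)
fig.sp.plot(t, (2 * 3.14 * t).sin())

# switch the x axis to a logarithmic scale after plotting
fig.sp.xscale('log')

fig.sp.grid()
fig.show()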
-
xticklabels
(self, labels, *args, **kwargs)[source]¶ Set the xtick labels.
- Parameters
- labelslist or CArray of string
Xtick labels.
- *args, **kwargs
Same as
text
method.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.sp.xticks(CArray([-3.14, -3.14 / 2, 0, 3.14 / 2, 3.14]))
fig.sp.xticklabels(["- pi", "-pi/2", "0", "pi/2", "pi"])
fig.sp.yticks(CArray([-1, 0, +1]))

fig.show()
-
xticks
(self, location_array, *args, **kwargs)[source]¶ Set the x-tick locations and labels.
- Parameters
- location_arrayCArray or list
Contain ticks location.
- *args, **kwargs
Same as
text
method.
Examples
from secml.array import CArray
from secml.figure import CFigure

X = CArray.linspace(-3.14, 3.14, 256, endpoint=True)
C, S = X.cos(), X.sin()

fig = CFigure(fontsize=14)
fig.sp.plot(X, C, color='red', alpha=0.5, linewidth=1.0, linestyle='-')
fig.sp.plot(X, S)

fig.sp.xticks(CArray([-3.14, -3.14 / 2, 0, 3.14 / 2, 3.14]))
fig.sp.yticks(CArray([-1, 0, +1]))

fig.show()
-
ylabel
(self, label, *args, **kwargs)[source]¶ Set a label for the y axis
- Parameters
- labelstring
Label’s text.
- *args, **kwargs
Same as
text
method.
See also
xlabel
Set a label for the x axis.
-
ylim
(self, bottom=None, top=None)[source]¶ Set axes y limits.
- Parameters
- bottomscalar
Starting value for the y axis.
- topscalar
Ending value for the y axis.
See also
xlim
Set x axis limits.
-
yscale
(self, scale_type, nonposy='mask', basey=10, **kwargs)[source]¶ Set scale for y axis.
- Parameters
- scale_type{‘linear’, ‘log’, ‘symlog’, ‘logit’}
Scale for y axis. Default ‘linear’.
- nonposy: [ ‘mask’ | ‘clip’ ], default ‘mask’
Non-positive values in y can be masked as invalid, or clipped to a very small positive number.
- baseyint
The base of the logarithm, must be higher than 1.
-
yticklabels
(self, labels, *args, **kwargs)[source]¶ Set the ytick labels.
- Parameters
- labelslist or CArray of string
Ytick labels.
- *args, **kwargs
Same as
text
method.
See also
xticklabels
Set the xtick labels.
-
class
secml.figure._plots.c_plot_classifier.
CPlotClassifier
(sp, default_params)[source]¶ Plot a classifier.
Custom plotting parameters can be specified.
- Current parameter defaults:
grid: False.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_clf
(self)Apply defined parameters to active subplot.
apply_params_fun
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_decision_regions
(self, clf[, …])Plot decision boundaries and regions for the given classifier.
plot_fgrads
(self, gradf[, n_grid_points, …])Plot function gradient directions.
plot_fun
(self, func[, multipoint, …])Plot a function (used for decision functions or boundaries).
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
plot_decision_regions
(self, clf, plot_background=True, levels=None, grid_limits=None, n_grid_points=30, cmap=None)[source]¶ Plot decision boundaries and regions for the given classifier.
- Parameters
- clfCClassifier
Classifier whose decision function should be plotted.
- plot_backgroundbool, optional
Specifies whether to color the decision regions in the background using a colorbar. Default True.
- levelslist or None, optional
List of levels to be plotted. If None, CArray.arange(0.5, clf.n_classes) will be plotted.
- grid_limitslist of tuple
List with a tuple of min/max limits for each axis. If None, [(0, 1), (0, 1)] limits will be used.
- n_grid_pointsint, optional
Number of grid points. Default 30.
- cmapstr or list or matplotlib.pyplot.cm or None, optional
Colormap to use. Could be a list of colors. If None and the number of dataset classes is <= 6, colors will be chosen from [‘blue’, ‘red’, ‘lightgreen’, ‘black’, ‘gray’, ‘cyan’]. Otherwise the ‘jet’ colormap will be used.
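A minimal sketch, assuming clf is an already-trained CClassifier on a 2-feature dataset ds (training and data-loading code are omitted here and are not part of this reference):
from secml.figure import CFigure

# `clf` and `ds` are assumed to be a trained classifier and its 2D dataset
fig = CFigure(fontsize=14)

# color the regions assigned to each class by `clf`
fig.sp.plot_decision_regions(clf, n_grid_points=100,
                             grid_limits=[(-2, 2), (-2, 2)])

# overlay the dataset samples (see CPlotDataset.plot_ds)
fig.sp.plot_ds(ds)

fig.show()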
-
class
secml.figure._plots.c_plot_constraint.
CPlotConstraint
(sp, default_params)[source]¶ Plot constraint on bi-dimensional feature spaces.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_fun
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_constraint
(self, constraint[, …])Plot constraint bound.
plot_fgrads
(self, gradf[, n_grid_points, …])Plot function gradient directions.
plot_fun
(self, func[, multipoint, …])Plot a function (used for decision functions or boundaries).
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
plot_constraint
(self, constraint, grid_limits=None, n_grid_points=30)[source]¶ Plot constraint bound.
- Parameters
- constraintCConstraint
Constraint to be plotted.
- grid_limitslist of tuple
List with a tuple of min/max limits for each axis. If None, [(0, 1), (0, 1)] limits will be used.
- n_grid_pointsint, optional
Number of grid points. Default 30.
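A minimal sketch, assuming constraint is a CConstraint instance (e.g. an L2 ball in a 2-dimensional feature space) created elsewhere:
from secml.figure import CFigure

# `constraint` is assumed to be a CConstraint built elsewhere
fig = CFigure()
fig.sp.plot_constraint(constraint,
                       grid_limits=[(-2, 2), (-2, 2)],
                       n_grid_points=50)
fig.show()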
-
class
secml.figure._plots.c_plot_ds.
CPlotDataset
(sp, default_params)[source]¶ Plots a Dataset.
Custom plotting parameters can be specified.
- Current parameter defaults:
show_legend: True
grid: True
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_ds
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_ds
(self, dataset[, colors, markers])Plot patterns of each class with a different color/marker.
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
plot_ds
(self, dataset, colors=None, markers='o', *args, **kwargs)[source]¶ Plot patterns of each class with a different color/marker.
- Parameters
- datasetCDataset
Dataset containing the samples we want to plot.
- colorslist or None, optional
Color to be used for plotting each class. If a list, each color will be assigned to a dataset’s class, with repetitions if necessary. If None and the number of classes is 1, blue will be used. If None and the number of classes is 2, blue and red will be used. If None and the number of classes is > 2, ‘jet’ colormap is used.
- markerslist or str, optional
Marker to use for plotting. Default is ‘o’ (circle). If a string, the same specified marker will be used for each class. If a list, must specify one marker for each dataset’s class.
- args, kwargsany
Any optional argument for plots. If the number of classes is 2, a plot will be created. If the number of classes is > 2, a scatter plot will be created.
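A minimal sketch, following the CDLRandom-based scatter example in this reference (the CDLRandom keyword arguments mirror scikit-learn's make_classification and are an assumption here):
from secml.data.loader import CDLRandom
from secml.figure import CFigure

# random 2D dataset with 3 classes
ds = CDLRandom(n_features=2, n_redundant=0,
               n_classes=3, n_clusters_per_class=1).load()

fig = CFigure(fontsize=14)
fig.sp.plot_ds(ds)
fig.sp.legend()
fig.show()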
-
class
secml.figure._plots.c_plot_fun.
CPlotFunction
(sp, default_params)[source]¶ Plots a Function.
Custom plotting parameters can be specified.
- Current parameter defaults:
show_legend: True
grid: True
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_fun
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_fgrads
(self, gradf[, n_grid_points, …])Plot function gradient directions.
plot_fun
(self, func[, multipoint, …])Plot a function (used for decision functions or boundaries).
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
plot_fgrads
(self, gradf, n_grid_points=30, grid_limits=None, color='k', linestyle='-', linewidth=1.0, alpha=1.0, func_args=(), **func_kwargs)[source]¶ Plot function gradient directions.
- Parameters
- gradffunction
Function that computes gradient directions.
- n_grid_pointsint
Number of grid points.
- grid_limitslist of tuple
List with a tuple of min/max limits for each axis. If None, [(0, 1), (0, 1)] limits will be used.
- color :
Color of the gradient directions.
- linestylestr
[‘solid’ | ‘dashed’, ‘dashdot’, ‘dotted’ | (offset, on-off-dash-seq) | ‘-‘ | ‘--’ | ‘-.’ | ‘:’ | ‘None’ | ‘ ‘ | ‘’]
- linewidthfloat
Width of the line.
- alphafloat
Transparency factor of the directions.
- func_args, func_kwargsany
Other arguments or keyword arguments to pass to gradf.
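A minimal sketch, under the assumption that gradf is called on one grid point at a time (as a CArray) and returns the gradient at that point:
from secml.figure import CFigure

def grad_f(point):
    # gradient of f(x, y) = x**2 + y**2 is (2x, 2y); `point` is a CArray
    return point * 2

fig = CFigure()
fig.sp.plot_fgrads(grad_f, n_grid_points=20,
                   grid_limits=[(-1, 1), (-1, 1)])
fig.show()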
-
plot_fun
(self, func, multipoint=False, plot_background=True, plot_levels=True, levels=None, levels_color='k', levels_style=None, levels_linewidth=1.0, n_colors=50, cmap='jet', alpha=1.0, alpha_levels=1.0, vmin=None, vmax=None, colorbar=True, n_grid_points=30, grid_limits=None, func_args=(), **func_kwargs)[source]¶ Plot a function (used for decision functions or boundaries).
- Parameters
- funcunbound function
Function to be plotted.
- multipointbool, optional
If True, all grid points will be passed to the function. If False (default), function is iterated over each point of the grid.
- plot_backgroundbool, optional
Specifies whether to plot the value of func at each point in the background using a colorbar.
- plot_levelsbool, optional
Specify if function levels should be plotted (default True).
- levelslist or None, optional
List of levels to be plotted. If None, 0 (zero) level will be plotted.
- levels_colorstr or tuple or None, optional
If None, the colormap specified by cmap will be used. If a string, like ‘k’, all levels will be plotted in this color. If a tuple of colors (string, float, rgb, etc), different levels will be plotted in different colors in the order specified. Default ‘k’.
- levels_style[ None | ‘solid’ | ‘dashed’ | ‘dashdot’ | ‘dotted’ ]
If levels_style is None, the default is ‘solid’. levels_style can also be an iterable of the above strings specifying a set of levels_style to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary.
- levels_linewidthfloat or list of floats, optional
The line width of the contour lines. Default 1.0.
- n_colorsint, optional
Number of color levels of background plot. Default 50.
- cmapstr or list or matplotlib.pyplot.cm, optional
Colormap to use (default ‘jet’). Could be a list of colors.
- alphafloat, optional
The alpha blending value of the background. Default 1.0.
- alpha_levelsfloat, optional
The alpha blending value of the levels. Default 1.0.
- vmin, vmaxfloat or None, optional
Limits of the colors used for function plotting. If None, colors are determined by the colormap.
- colorbarbool, optional
True if colorbar should be displayed.
- n_grid_pointsint, optional
Number of grid points.
- grid_limitslist of tuple, optional
List with a tuple of min/max limits for each axis. If None, [(0, 1), (0, 1)] limits will be used.
- func_args, func_kwargs
Other arguments or keyword arguments to pass to func.
Examples
from secml.array import CArray
from secml.figure import CFigure

# we must define a function that takes an array and returns a value for every row
def f(array):
    res = CArray.zeros(array.shape[0])
    for r in range(array.shape[0]):
        x = array[r, 0]
        y = array[r, 1]
        res[r] = x + y
    return res

fig = CFigure()
fig.sp.plot_fun(f, levels=[.5, 1, 1.5])
fig.sp.grid()

fig.show()
-
class
secml.figure._plots.c_plot_sec_eval.
CPlotSecEval
(sp, default_params)[source]¶ Plots Classifier Security Evaluation results.
This class creates a figure plotting data from the .CSecEvalData class in a standard style.
Custom plotting parameters can be specified.
- Current parameter defaults:
show_legend: True. Set to False to hide the legend on the next plot.
grid: True.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_sec_eval
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
plot_sec_eval
(self, sec_eval_data[, metric, …])Plot the Security Evaluation Curve using desired metric.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
-
plot_sec_eval
(self, sec_eval_data, metric='accuracy', mean=False, percentage=False, show_average=False, label=None, linestyle='-', color=None, marker=None, metric_args=())[source]¶ Plot the Security Evaluation Curve using desired metric.
- Parameters
- sec_eval_dataCSecEvalData or list
A single CSecEvalData object or a list with multiple repetitions.
- metricstr or CMetric, optional
Metric to be evaluated. Default ‘accuracy’.
- meanbool, optional
If True, the mean of all sec eval repetitions will be computed. Default False.
- percentagebool, optional
If True, values will be displayed in percentage. Default False.
- show_averagebool, optional
If True, the average along the sec eval parameters will be shown in legend. Default False.
- labelstr, optional
Label of the sec eval curve. Default None.
- linestylestr, optional
Style of the curve. Default ‘-‘.
- colorstr or None, optional
Color of the curve. If None (default), the plot engine will choose the color.
- markerstr or None, optional
Style of the markers. Default None.
- metric_args
Any other argument for the metric.
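A minimal sketch, assuming sec_eval_data is a CSecEvalData object (or a list of them) produced by a previously run security evaluation:
from secml.figure import CFigure

# `sec_eval_data` is assumed to come from a completed security evaluation
fig = CFigure(fontsize=14)
fig.sp.plot_sec_eval(sec_eval_data, metric='accuracy',
                     percentage=True, label='classifier under attack')
fig.show()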
-
class
secml.figure._plots.c_plot_stats.
CPlotStats
(sp, default_params)[source]¶ Plots for statistical functions.
Custom plotting parameters can be specified.
- Current parameter defaults:
show_legend: True.
grid: True.
See also
CPlot
basic subplot functions.
CFigure
Creates and handles figures.
- Attributes
class_type
Defines class type.
logger
Logger for current object.
n_lines
Returns the number of lines inside current subplot.
verbose
Verbosity level of logger output.
Methods
apply_params_stats
(self)Apply defined parameters to active subplot.
bar
(self, left, height[, width, bottom])Bar plot.
barh
(self, bottom, width[, height, left])Horizontal bar plot.
boxplot
(self, x[, notch, sym, vert, whis, …])Make a box and whisker plot.
clabel
(self, contour, \*args, \*\*kwargs)Label a contour plot.
colorbar
(self, mappable[, ticks])Add colorbar to plot.
contour
(self, x, y, z, \*args, \*\*kwargs)Draw contour lines of a function.
contourf
(self, x, y, z, \*args, \*\*kwargs)Draw filled contour of a function.
copy
(self)Returns a shallow copy of current class.
create
([class_item])This method creates an instance of a class with given type.
deepcopy
(self)Returns a deep copy of current class.
errorbar
(self, x, y[, xerr, yerr])Plot with error deltas in yerr and xerr.
fill_between
(self, x, y1[, y2, where, …])Fill the area between two horizontal curves.
get_class_from_type
(class_type)Return the class associated with input type.
get_legend
(self)Returns the handler of current subplot legend.
get_legend_handles_labels
(self)Return handles and labels for legend contained by the subplot.
get_lines
(self)Return a list of lines contained by the subplot.
get_params
(self)Returns the dictionary of class parameters.
get_subclasses
()Get all the subclasses of the calling class.
get_xticks_idx
(self, xticks)Returns the position of markers to plot.
grid
(self[, grid_on, axis])Draw grid for current plot.
hist
(self, x, \*args, \*\*kwargs)Plot a histogram.
imshow
(self, img, \*args, \*\*kwargs)Plot image.
legend
(self, \*args, \*\*kwargs)Create legend for plot.
list_class_types
()This method lists all types of available subclasses of calling one.
load
(path)Loads class from pickle object.
loglog
(self, x[, y])Plot with log scaling on both the x and y axis.
matshow
(self, array, \*args, \*\*kwargs)Plot an array as a matrix.
merge
(self, sp)Merge input subplot to active subplot.
plot
(self, x[, y])Plot a line.
plot_path
(self, path[, path_style, …])Plot a path traversed by a point.
plot_prob_density
(self, scores, ts, \*\*params)Plot density estimation of benign and malicious class.
quiver
(self, U, V[, X, Y, color, linestyle, …])A quiver plot displays velocity vectors as arrows with components (u,v) at the points (x,y).
save
(self, path)Save class object using pickle.
scatter
(self, x, y[, s, c])Scatter plot of x vs y.
semilogx
(self, x[, y])Plot with log scaling on the x axis.
semilogy
(self, x[, y])Plot with log scaling on the y axis.
set
(self, param_name, param_value[, copy])Set a parameter that has a specific name to a specific value.
set_axisbelow
(self[, axisbelow])Set axis ticks and gridlines below most artists.
set_params
(self, params_dict[, copy])Set all parameters passed as a dictionary {key: value}.
text
(self, \*args, \*\*kwargs)Create a Text instance at x, y with string text.
tick_params
(self, \*args, \*\*kwargs)Change the appearance of ticks and tick labels.
timed
([msg])Timer decorator.
title
(self, text, \*args, \*\*kwargs)Set a title for subplot.
xlabel
(self, label, \*args, \*\*kwargs)Set a label for the x axis.
xlim
(self[, bottom, top])Set axes x limits.
xscale
(self, scale_type[, nonposx, basex])Set scale for x axis.
xticklabels
(self, labels, \*args, \*\*kwargs)Set the xtick labels.
xticks
(self, location_array, \*args, \*\*kwargs)Set the x-tick locations and labels.
ylabel
(self, label, \*args, \*\*kwargs)Set a label for the y axis
ylim
(self[, bottom, top])Set axes y limits.
yscale
(self, scale_type[, nonposy, basey])Set scale for y axis.
yticklabels
(self, labels, \*args, \*\*kwargs)Set the ytick labels.
yticks
(self, location_array, \*args, \*\*kwargs)Set the y-tick locations and labels.
secml.parallel¶
parfor¶
-
secml.parallel.parfor.
parfor
(task, processes, args)[source]¶ Parallel For.
Applies a function task to each argument in args, using a pool of concurrent processes.
- Parameters
- taskfunction
Function object that should process each element in args.
- processesint
Maximum number of concurrent processes to be used in the pool. If higher than multiprocessing.cpu_count(), all processor cores will be used.
- argsany
Iterable object, where each element is an argument for task.
- Returns
- outiterable
Iterable object containing the output of task(arg) for each arg in args.
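A minimal sketch (the import uses the fully qualified module documented here; on some platforms multiprocessing also requires the usual __main__ guard shown below):
from secml.parallel.parfor import parfor

def square(x):
    return x ** 2

if __name__ == '__main__':
    # run `square` on each element of the list using at most 2 processes
    out = parfor(square, 2, [1, 2, 3, 4])
    print(list(out))  # one result per element of args: 1, 4, 9, 16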
-
secml.parallel.parfor.
parfor2
(task, n_reps, processes, *args)[source]¶ Parallel For.
Run function task using each argument in args as input, using a pool of concurrent processes. The task should take the index of the parfor iteration as its first input.
- Parameters
- taskfunction
Function object that should process each element in args.
- n_repsint
Number of times the task should be run.
- processesint
Maximum number of concurrent processes to be used in the pool. If higher than multiprocessing.cpu_count(), all processor cores will be used.
- argsany, optional
Tuple with input arguments for task.
- Returns
- outlist
List with iteration output, sorted (rep1, rep2, …, repN).
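A minimal sketch (assumption: parfor2 prepends the repetition index to the arguments passed in args; the task and argument names are illustrative only):
from secml.parallel.parfor import parfor2

def task(rep_idx, scale):
    # first input is the repetition index added by parfor2
    return rep_idx * scale

if __name__ == '__main__':
    # 5 repetitions, at most 2 concurrent processes, scale=10 passed to each run
    out = parfor2(task, 5, 2, 10)
    print(out)  # one result per repetition, sorted by repetition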
secml.utils¶
CLog¶
-
class
secml.utils.c_log.
CLog
(level=None, logger_id=None, add_stream=True, file_handler=None, propagate=False)[source]¶ Bases:
object
Manager for logging and logfiles.
The logger can be used to save important runtime information to disk, instead of using the built-in ‘print’ function. Along with any print-like formatted string, the logger stores the full timestamp and the calling class name.
The default filename of a log file is logs.log. This will be placed in the same directory as the calling file.
- Logging levels currently available and their target purpose:
DISABLE - 100: disable all logging.
CRITICAL - 50: critical error messages.
ERROR - 40: standard error messages.
WARNING - 30: warning messages.
INFO - 20: general info logging.
DEBUG - 10: debug logging only.
Logger is fully integrated to the
CTimer
class in order to log performance of a desired method or routine.- Parameters
- levelLOG_LEVEL, int or None, optional
Initial logging level. Default is None, meaning that the current logging level will be preserved if the logger has already been created.
- logger_idstr or None, optional
Identifier of the logger. Default None. If None, creates a logger which is the root of the hierarchy.
- add_streambool, optional
If True, attach a stream handler to the logger. Default True. A stream handler prints to stdout the logged messages.
- file_handlerstr or None, optional
If a string, attach a file handler to the logger. Default None. A file handler stores to the specified path the logged messages.
- propagatebool, optional
If True, messages logged to this logger will be passed to the handlers of higher level (ancestor) loggers, in addition to any handler attached to this logger. Default False.
See also
CTimer
Manages performance monitoring and logging.
Notes
Unlike most of the Python logging modules, our implementation can be fully used inside parallelized code.
Examples
>>> from secml.array import CArray
>>> from secml.utils import CLog
>>> log = CLog().warning("{:}".format(CArray([1,2,3])))
... - WARNING - CArray([1 2 3])
- Attributes
Methods
attach_file
(self, filepath)Adds a file handler to the logger.
attach_stream
(self)Adds a stream handler to the logger.
catch_warnings
()A context manager that copies and restores the warnings filter upon exiting the context.
critical
(self, msg, \*args, \*\*kwargs)Logs a message with level CRITICAL on this logger.
debug
(self, msg, \*args, \*\*kwargs)Logs a message with level DEBUG on this logger.
error
(self, msg, \*args, \*\*kwargs)Logs a message with level ERROR on this logger.
filterwarnings
(action[, message, category, …])Insert an entry into the list of warnings filters (at the front).
get_child
(self, name)Return a child logger associated with ancestor.
info
(self, msg, \*args, \*\*kwargs)Logs a message with level INFO on this logger.
log
(self, level, msg, \*args, \*\*kwargs)Logs a message with specified level on this logger.
remove_handler_file
(self, filepath)Removes the file handler from the logger.
remove_handler_stream
(self)Removes the stream handler from the logger.
set_level
(self, level)Sets logging level of the logger.
timed
(self[, msg])Timer decorator.
timer
(self[, msg])Starts a timed codeblock.
warning
(self, msg, \*args, \*\*kwargs)Logs a message with level WARNING on this logger.
-
static
catch_warnings
()[source]¶ A context manager that copies and restores the warnings filter upon exiting the context.
Wrapper of warnings.catch_warnings.
-
critical
(self, msg, *args, **kwargs)[source]¶ Logs a message with level CRITICAL on this logger.
See CLog.log for details on args and kwargs.
-
debug
(self, msg, *args, **kwargs)[source]¶ Logs a message with level DEBUG on this logger.
See CLog.log for details on args and kwargs.
-
error
(self, msg, *args, **kwargs)[source]¶ Logs a message with level ERROR on this logger.
See CLog.log for details on args and kwargs.
-
static
filterwarnings
(action, message='', category=<class 'Warning'>, module='', lineno=0, append=False)[source]¶ Insert an entry into the list of warnings filters (at the front).
Wrapper of warnings.filterwarnings.
- Parameters
- actionstr
One of “error”, “ignore”, “always”, “default”, “module”, or “once”.
- messagestr, optional
A regex that the warning message must match.
- categoryclass, optional
A class that the warning must be a subclass of. Default Warning.
- modulestr, optional
A regex that the module name must match.
- linenoint, optional
An integer line number, 0 (default) matches all warnings.
- appendbool, optional
If true, append to the list of filters.
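As a quick sketch, the call below silences all DeprecationWarnings process-wide (any standard warning category could be used instead):
from secml.utils import CLog

# Ignore every DeprecationWarning raised from any module
CLog.filterwarnings("ignore", category=DeprecationWarning)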
-
get_child
(self, name)[source]¶ Return a child logger associated with ancestor.
- Parameters
- namestr-like
Identifier of the child logger. Can be any object safely convertible to string (int, float, etc.)
- Returns
- child_loggerlogger
Instance of the child logger.
-
info
(self, msg, *args, **kwargs)[source]¶ Logs a message with level INFO on this logger.
See CLog.log for details on args and kwargs.
-
property
level
¶ Return logging level.
-
log
(self, level, msg, *args, **kwargs)[source]¶ Logs a message with specified level on this logger.
The msg is the message format string, and the args are the arguments which are merged into msg using the string formatting operator.
There are two keyword arguments in kwargs which are inspected: exc_info which, if it does not evaluate as false, causes exception information to be added to the logging message. If an exception tuple (in the format returned by sys.exc_info()) is provided, it is used; otherwise, sys.exc_info() is called to get the exception information.
The second keyword argument is extra which can be used to pass a dictionary which is used to populate the __dict__ of the LogRecord created for the logging event with user-defined attributes.
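A minimal sketch of both special keyword arguments (levels 40 and 20 correspond to ERROR and INFO; the 'stage' field is a hypothetical user-defined attribute):
import sys
from secml.utils import CLog

log = CLog(level=10)  # DEBUG level; a stream handler is attached by default
try:
    1 / 0
except ZeroDivisionError:
    # exc_info attaches the current exception information to the message
    log.log(40, "division failed", exc_info=sys.exc_info())
# extra populates the LogRecord __dict__ with user-defined attributes
log.log(20, "step completed", extra={'stage': 'preprocessing'})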
-
property
logger_id
¶ Return identifier of the logger.
-
property
propagate
¶ If True, events logged will be passed to the handlers of higher level (ancestor) loggers.
-
timed
(self, msg=None)[source]¶ Timer decorator.
Returns a decorator that can be used to measure the execution time of any method. Performance data will be stored inside the calling logger. Messages will be logged using the DEBUG logging level. As this decorator accepts optional arguments, it must be called as a method. See examples.
- Parameters
- msgstr or None, optional
Custom message to display when entering the timed block. If None, “Entering timed block function_name…” will be printed.
Examples
>>> from secml.array import CArray
>>> from secml.utils import CLog
>>> log = CLog()
>>> log.set_level(10)
>>> @log.timed()
... def abc():
...     print("Hello world!")
>>> abc()
Hello world!
-
timer
(self, msg=None)[source]¶ Starts a timed codeblock.
Returns an instance of context manager
CTimer
. Performance data will be stored inside the calling logger. Messages will be logged using the DEBUG logging level.- Parameters
- msgstr or None, optional
Custom message to display when entering the timed block. If None, “Entering timed block…” will be printed.
Examples
>>> from secml.array import CArray
>>> from secml.utils import CLog
>>> log = CLog()
>>> log.set_level(10)
>>> with log.timer("Timing the instruction..."):
...     a = CArray([1,2,3])
2... - root - DEBUG - Timing the instruction...
2... - root - DEBUG - Elapsed time: ... ms
-
class
secml.utils.c_log.
CTimer
(log=None, msg=None)[source]¶ Bases:
object
Context manager for performance logging.
The code inside the specific context will be timed and performance data printed and/or logged.
This class fully integrates with
CLog
in order to store performance data to disk. When no logger is specified, data is printed to the console output. Times are always stored in milliseconds (ms).
- Parameters
- logCLog or None, optional
Instance of
CLog
class to be used as performance logger. If a logger is specified, timer data will not be printed on console.- msgstr or None, optional
Custom message to display when entering the timed block. If None, “Entering timed block function_name…” will be printed.
See also
CLog
Manages logging and stores runtime information to disk.
Examples
>>> from secml.array import CArray
>>> from secml.utils import CTimer
>>> with CTimer() as t:
...     a = CArray([1,2,3])
Entering timed block...
Elapsed time: ... ms
>>> with CTimer(msg="Timing the instruction...") as t:
...     a = CArray([1,2,3])
Timing the instruction...
Elapsed time: ... ms
>>> from secml.utils import CLog
>>> logger = CLog()
>>> logger.set_level(10)
>>> with CTimer(logger) as t:
...     a = CArray([1,2,3])
2... - root - DEBUG - Entering timed block...
2... - root - DEBUG - Elapsed time: ... ms
- Attributes
step
Return time elapsed from timer start (milliseconds).
Methods
timed
([log, msg])Timer decorator.
-
property
step
¶ Return time elapsed from timer start (milliseconds).
-
static
timed
(log=None, msg=None)[source]¶ Timer decorator.
Returns a decorator that can be used to measure the execution time of any method. As this decorator accepts optional arguments, it must be called as a method. See examples.
- Parameters
- logCLog or None, optional
Instance of
CLog
class to be used as performance logger. If a logger is specified, timer data will not be printed on console.- msgstr or None, optional
Custom message to display when entering the timed block. If None, “Entering timed block…” will be printed.
See also
CLog
Manages logging and stores runtime information to disk.
Examples
>>> from secml.array import CArray
>>> from secml.utils import CTimer
>>> @CTimer.timed()
... def abc():
...     print("Hello world!")
>>> abc()
Entering timed block `abc`...
Hello world!
Elapsed time: ... ms
c_file_manager¶
-
secml.utils.c_file_manager.
folder_exist
(folder_path)[source]¶ Test whether a folder exists.
Returns False for broken symbolic links.
-
secml.utils.c_file_manager.
file_exist
(file_path)[source]¶ Test whether a file exists.
Returns False for broken symbolic links.
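A quick sketch combining both checks (the paths are hypothetical):
import secml.utils.c_file_manager as fm

# Both checks return False if the path is missing or is a broken symlink
if not fm.folder_exist('data'):
    fm.make_folder('data')  # see `make_folder` below
print(fm.file_exist(fm.join('data', 'results.txt')))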
-
secml.utils.c_file_manager.
make_folder_incwd
(folder_name, mode=511)[source]¶ Create a directory named folder_name inside current working directory (cwd).
- Parameters
- folder_namestr
Desired name for the new folder.
- modeoct, optional
Octal literal representing the numeric mode to use. On some systems, mode is ignored. Where it is used, the current umask value is first masked out. If bits other than the last 9 (i.e. the last 3 digits of the octal representation of the mode) are set, their meaning is platform-dependent. On some platforms, they are ignored and you should call chmod() explicitly to set them. Default 0o777.
See also
make_folder
Create a directory given full path.
-
secml.utils.c_file_manager.
make_folder
(folder_path, mode=511)[source]¶ Create the directory folder_path with numeric mode ‘mode’.
All intermediate-level directories needed to contain the leaf directory will be recursively made.
- Parameters
- folder_pathstr
Desired path for the new folder.
- modeoct, optional
Octal literal representing the numeric mode to use. On some systems, mode is ignored. Where it is used, the current umask value is first masked out. If bits other than the last 9 (i.e. the last 3 digits of the octal representation of the mode) are set, their meaning is platform-dependent. On some platforms, they are ignored and you should call chmod() explicitly to set them. Default 0o777.
See also
make_folder_inpath
Create a directory inside a specific folder.
-
secml.utils.c_file_manager.
remove_folder
(folder_path, force=False)[source]¶ Remove (delete) the directory path.
Path must point to a directory (but not a symbolic link to a directory).
- Parameters
- folder_pathstr
Absolute or relative path to folder to remove.
- forcebool, optional
By default, if force is False, directory is removed only if empty, otherwise, OSError is raised. Set to True in order to remove the whole directory and its subdirectories.
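The two functions can be combined as in the sketch below (directory names are hypothetical):
import secml.utils.c_file_manager as fm

# Recursively create nested directories...
fm.make_folder(fm.join('tmp', 'run1', 'plots'))
# ...and remove the whole tree afterwards (force=True removes non-empty dirs)
fm.remove_folder('tmp', force=True)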
-
secml.utils.c_file_manager.
make_rand_folder
(folder_path=None, custom_name=None)[source]¶ Create a random named folder.
A random name will be selected as an integer in the range [1, 1000000).
- Parameters
- folder_pathstr, optional
Path where to create the new directory. If None, folder will be created inside calling file folder.
- custom_namestr, optional
Custom name to add before the random ID number. An underscore is placed between ID and custom_name.
- Returns
- target_pathstr
Absolute path of created directory.
Notes
There is a small chance that the randomly generated folder already exists. Just run the function again :)
-
secml.utils.c_file_manager.
abspath
(file_name)[source]¶ Return the absolute path to file.
File name, as well as directory separator, is not added to the end of the returned path.
Examples
>>> import secml.utils.c_file_manager as fm
>>> cur_file = fm.split(__file__)[1]  # Getting only the filename
>>> cur_file
'c_folder_manager.py'
>>> fm.abspath(cur_file)[-12:]
'/secml/utils'
-
secml.utils.c_file_manager.
normpath
(path)[source]¶ Normalize a pathname.
Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. This string manipulation may change the meaning of a path that contains symbolic links. On Windows, it converts forward slashes to backward slashes.
Examples
>>> import secml.utils.c_file_manager as fm
>>> cur_path = fm.split(__file__)[0]  # Getting only the directory path
>>> cur_path
'---/src/secml/utils'
>>> upper_path = fm.join(cur_path, '..', '..')
>>> upper_path
'---/src/secml/utils/../..'
>>> fm.normpath(upper_path)
'---/src'
-
secml.utils.c_file_manager.
join
(*paths)[source]¶ Join one or more path components intelligently.
The return value is the concatenation of path and any members of *paths with exactly one directory separator (os.sep) following each non-empty part except the last, meaning that the result will only end in a separator if the last part is empty. If a component is an absolute path, all previous components are thrown away and joining continues from the absolute path component.
See also
split
Split the pathname path into a pair (head, tail).
-
secml.utils.c_file_manager.
split
(path)[source]¶ Split the pathname path into a pair (head, tail).
Tail is the last pathname component and head is everything leading up to that. The tail part will never contain a slash; if path ends in a slash, tail will be empty. If there is no slash in path, head will be empty. If path is empty, both head and tail are empty. Trailing slashes are stripped from head unless it is the root (one or more slashes only). In all cases, join(head, tail) returns a path to the same location as path (but the strings may differ).
- Returns
- out_splittuple of str
A tuple of strings consisting of (head, tail), where tail is the last pathname component and head is everything leading up to that.
See also
join
Join one or more path components intelligently.
Examples
>>> import secml.utils.c_file_manager as fm
>>> path = fm.join('dir1','dir2','dir3')
>>> path
'dir1/dir2/dir3'
>>> print(fm.split(path))
('dir1/dir2', 'dir3')
-
secml.utils.c_file_manager.
expanduser
(path)[source]¶ Replace user path shortcut with real user path.
On Unix and Windows, return path with an initial ~ or ~user replaced by that user’s home directory.
On Unix, an initial ~ is replaced by the environment variable HOME if it is set; otherwise the current user’s home directory is looked up in the password directory through the built-in module pwd. An initial ~user is looked up directly in the password directory.
On Windows, HOME and USERPROFILE will be used if set, otherwise a combination of HOMEPATH and HOMEDRIVE will be used. An initial ~user is handled by stripping the last directory component from the created user path derived above.
If the expansion fails or if the path does not begin with a tilde, the path is returned unchanged.
Examples
>>> import secml.utils.c_file_manager as fm
>>> fm.expanduser('~')
'/home/username'
>>> fm.expanduser(fm.join('~','documents'))
'/home/username/documents'
-
secml.utils.c_file_manager.
dirsep
()[source]¶ The character used by the operating system to separate pathname components. This is ‘/’ for POSIX and ‘\\’ for Windows. Note that knowing this is not sufficient to be able to parse or concatenate pathnames; use CFileManager.split() and CFileManager.join() instead, but it is occasionally useful.
pickle_utils¶
-
secml.utils.pickle_utils.
save
(file_path, obj)[source]¶ Save object to file using cPickle.
This function stores a generic python object into a compressed gzip file (*.gz).
Saved objects can be loaded using .load.
- Parameters
- file_pathstr
Path to destination file.
- objobject
Any python object to save.
- Returns
- obj_pathstr
Full path to the stored object.
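A minimal save/load round trip might look as follows (a sketch: it assumes the companion load function mentioned above accepts the path returned by save):
import secml.utils.pickle_utils as pck
from secml.array import CArray

a = CArray([1, 2, 3])
# The object is pickled and stored inside a gzip-compressed file (*.gz)
obj_path = pck.save('array_backup', a)
# Load it back using the returned full path
b = pck.load(obj_path)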
download_utils¶
-
secml.utils.download_utils.
dl_file
(url, output_dir, user=None, chunk_size=1024, md5_digest=None)[source]¶ Download file from input url and store in output_dir.
- Parameters
- urlstr
Url of the file to download.
- output_dirstr
Path to the directory where the file should be stored. If the folder does not exist, it will be created.
- userstr or None, optional
String with the user[:password] if required for accessing url.
- chunk_sizeint, optional
Size of the data chunk to read from url in bytes. Default 1024.
- md5_digeststr or None, optional
Expected MD5 digest of the downloaded file. If a different digest is computed, the downloaded file will be removed and ValueError is raised.
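Sketch of a typical call (the URL and the MD5 digest are hypothetical placeholders):
from secml.utils.download_utils import dl_file

# The file is stored inside `downloads/`, created if missing; the MD5
# digest of the downloaded file is checked against the expected value
dl_file('https://example.com/datasets/sample.zip', 'downloads',
        md5_digest='0123456789abcdef0123456789abcdef')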
dict_utils¶
-
secml.utils.dict_utils.
load_dict
(file_path, values_dtype=<class 'str'>, encoding='ascii')[source]¶ Load dictionary from textfile.
Each line of the file should follow the <key: value> format.
- Parameters
- file_pathstr
Full path to the file to read.
- values_dtypedtype
Datatype of the values. Default str (string).
- encodingstr, optional
Encoding to use for reading the file. Default ‘ascii’.
- Returns
- dictionarydict
Loaded dictionary with one key for each line in the input text file.
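For instance, assuming a hypothetical params.txt containing lines such as 'alpha: 0.1' and 'max_iter: 100':
from secml.utils.dict_utils import load_dict

# Values are converted to float via `values_dtype`
params = load_dict('params.txt', values_dtype=float)
# params -> {'alpha': 0.1, 'max_iter': 100.0}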
-
secml.utils.dict_utils.
merge_dicts
(*dicts)[source]¶ Shallow copy and merge any number of input dicts.
Precedence goes to key value pairs in latter dicts.
- Parameters
- dictsdict1, dict2, …
Any sequence of dict objects to merge.
Examples
>>> from secml.utils import merge_dicts
>>> d1 = {'attr1': 100, 'attr2': 200}
>>> d2 = {'attr3': 300, 'attr1': 999}  # Redefining `attr1`
>>> merge_dicts(d1, d2)  # Value of `attr1` will be set according to `d2` dictionary
{'attr3': 300, 'attr2': 200, 'attr1': 999}
-
secml.utils.dict_utils.
invert_dict
(d)[source]¶ Returns a new dict with keys as values and values as keys.
- Parameters
- ddict
Input dictionary. If one value of the dictionary is a list or a tuple, each element of the sequence will be considered separately.
- Returns
- dict
The new dictionary with d keys as values and d values as keys. In the case of duplicated d values, the value of the resulting key of the new dictionary will be a list with all the corresponding d keys.
Examples
>>> from secml.utils.dict_utils import invert_dict
>>> a = {'k1': 2, 'k2': 2, 'k3': 1}
>>> print(invert_dict(a))
{1: 'k3', 2: ['k1', 'k2']}
>>> a = {'k1': 2, 'k2': [2,3,1], 'k3': 1}
>>> print(invert_dict(a))
{1: ['k2', 'k3'], 2: ['k1', 'k2'], 3: 'k2'}
-
class
secml.utils.dict_utils.
LastInDict
[source]¶ Bases:
collections.abc.MutableMapping
Last In Dictionary.
A standard dictionary that keeps in memory the key of the last set item. The setting behaviour is queue-like: a single element can be inserted in the dictionary each time.
The last key can be changed manually by calling LastInDict.lastitem_id = key.
Examples
>>> from secml.utils import LastInDict
>>> li = LastInDict()
>>> li['key1'] = 123
>>> li['key2'] = 102030
>>> li.lastin_key
'key2'
>>> li.lastin
102030
- Attributes
- lastin
- lastin_key
Methods
clear
(self)
get
(self, key[, default])
items
(self)
keys
(self)
pop
(self, key[, default])If key is not found, d is returned if given, otherwise KeyError is raised.
popitem
(self)Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.
setdefault
(self, key[, default])
update
(\*args, \*\*kwds)If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
values
(self)
-
property
lastin
¶
-
property
lastin_key
¶
-
class
secml.utils.dict_utils.
SubLevelsDict
(data)[source]¶ Bases:
collections.abc.MutableMapping
Sub-Levels Dictionary.
A standard dictionary that allows easy access to the attributes of contained objects, at any depth.
Examples
>>> from secml.utils import SubLevelsDict
>>> class Foo(object):
...     attr2 = 5
>>> li = SubLevelsDict({'attr1': Foo()})
>>> print(type(li['attr1']))
<class 'dict_utils.Foo'>
>>> print(li['attr1.attr2'])
5
>>> li['attr1.attr2'] = 10  # Subattributes can be set in the same way
>>> print(li['attr1.attr2'])
10
Methods
clear
(self)
get
(self, key[, default])
items
(self)
keys
(self)
pop
(self, key[, default])If key is not found, d is returned if given, otherwise KeyError is raised.
popitem
(self)Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.
setdefault
(self, key[, default])
update
(\*args, \*\*kwds)If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
values
(self)
list_utils¶
-
secml.utils.list_utils.
find_duplicates
(l)[source]¶ Find and return a python set with the duplicates of the input list.
- Parameters
- llist
List to examine.
- Returns
- duplicatesset
Python set with input list duplicates.
References
http://stackoverflow.com/questions/9835762/find-and-list-duplicates-in-python-list
Examples
>>> from secml.utils.list_utils import find_duplicates
>>> l = ['1', '1', 2, '3', 2]
>>> print(find_duplicates(l))
set(['1', 2])
mixed_utils¶
-
class
secml.utils.mixed_utils.
AverageMeter
[source]¶ Bases:
object
Computes and stores the average and current value.
- Attributes
- valfloat
Current value.
- avgfloat
Average.
- sumfloat
Cumulative sum of seen values.
- countint
Number of seen values.
Methods
update
(self, val[, n])Update average and current value.
reset
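A typical usage sketch, e.g. tracking a running loss over batches (values are hypothetical):
from secml.utils.mixed_utils import AverageMeter

meter = AverageMeter()
for batch_loss, batch_size in [(0.8, 32), (0.6, 32), (0.5, 16)]:
    # The optional second argument is typically the number of samples
    # contributing to the current value
    meter.update(batch_loss, batch_size)
print(meter.avg, meter.count)  # running average and number of seen values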
-
class
secml.utils.mixed_utils.
OrderedFlexibleClass
(*items)[source]¶ Bases:
object
A flexible class exposing its attributes in a specific order when iterated.
The order of the attributes inside the class follows the input sequence. Any attribute set after class initialization will be placed at the end of the attributes sequence (see examples).
- Parameters
- itemstuple1, tuple2, …
Any custom sequence of tuples with the attributes to set. Each tuple must be a (key, value) pair.
Examples
>>> from secml.utils import OrderedFlexibleClass
>>> c = OrderedFlexibleClass(('attr1', None), ('attr2', 5))
>>> print(tuple(attr for attr in c))
(None, 5)
>>> c.attr3 = 123
>>> print(tuple(attr for attr in c))
(None, 5, 123)
- Attributes
attr_order
Returns a list specifying the current attributes order.
-
property
attr_order
¶ Returns a list specifying the current attributes order.
-
secml.utils.mixed_utils.
check_is_fitted
(obj, attributes, msg=None, check_all=True)[source]¶ Check if the input object is trained (fitted).
Checks if the input object is fitted by verifying if all or any of the input attributes are not None.
- Parameters
- objobject
Instance of the class to check. Must implement .fit() method.
- attributesstr or list of str
Attribute or list of attributes to check. E.g.: [‘classes’, ‘n_features’, …] or ‘classes’.
- msgstr or None, optional
If None, the default error message is: “this {name} is not trained. Call .fit() first.”. For custom messages, if ‘{name}’ is present in the message string, it is substituted with the class name of the checked object.
- check_allbool, optional
Specify whether to check (True) if all of the given attributes are not None or (False) just any of them. Default True.
- Raises
- NotFittedError
If check_all is True and any of the attributes is None; if check_all is False and all of the attributes are None.
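A self-contained sketch using a hypothetical dummy object that implements fit():
from secml.utils.mixed_utils import check_is_fitted

class DummyClf(object):
    """Minimal stand-in for a trainable object (hypothetical)."""
    def __init__(self):
        self.classes = None
    def fit(self):
        self.classes = [0, 1]
        return self

clf = DummyClf()
clf.fit()
# Raises NotFittedError if `classes` is still None, i.e. fit() was not called
check_is_fitted(clf, 'classes')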
secml.settings¶
-
secml.settings.
SECML_CONFIG
= ['/home/docs/secml-data/secml.conf', '/home/docs/checkouts/readthedocs.org/user_builds/secml/envs/v0.8.1/lib/python3.7/site-packages/secml-0.8.1-py3.7.egg/secml/secml.conf']¶ Paths of the ‘secml.conf’ configuration files to be parsed.
By default, ‘secml.conf’ is looked for inside ‘SECML_HOME_DIR’ and inside the secml package directory.
-
secml.settings.
SECML_STORE_LOGS
= False¶ Whether logs should be stored to file.
Default False. When enabled, log files are stored by default inside: ‘SECML_HOME_DIR/logs’
CHANGELOG¶
v0.8.1 (05/09/2019)¶
This version does not contain any significant change.
Documentation (2 changes)¶
#523 Fixed documentation not compiling under Sphinx v2.2.
#529 Updated roadmap accordingly for v0.9 release.
v0.8 (06/08/2019)¶
First public release!
ROADMAP¶
SecML is still in alpha stage and the roadmap is subject to change at any time.
(Q4 2019) secml.explanations package
(Q4 2019) PyTorch library wrapper
(Q1 2020) Cleverhans library wrapper
(Q1 2020) Improved support for Tensorflow 2 library
(Q2 2020) Foolbox library wrapper
(Q2 2020) Keras library wrapper
For further details and the most up-to-date roadmap see: https://gitlab.com/secml/secml/milestones