1.2. Neural Networks with PyTorch

In this tutorial we show how to create a neural network using PyTorch and train it through our library wrapper (more usage examples of PyTorch are available here).

1.2.1. Classifying blobs

First, we need to define a neural network. We simply subclass PyTorch's nn.Module, exactly as in regular PyTorch code.

[1]:
import torch
from torch import nn

class Net(nn.Module):
    """
    Model with input size (-1, 5) for blobs dataset
    with 5 features
    """
    def __init__(self, n_features, n_classes):
        """Example network."""
        super(Net, self).__init__()
        self.fc1 = nn.Linear(n_features, 5)
        self.fc2 = nn.Linear(5, n_classes)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x
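
As a quick sanity check (not part of the original example), we can pass a random batch through an instance of Net and verify the output shape; the feature and class counts below are just placeholder values.

[ ]:
# illustrative sanity check: forward a random batch through the network
net_check = Net(n_features=2, n_classes=3)
dummy_batch = torch.randn(4, 2)  # 4 samples, 2 features each
out = net_check(dummy_batch)
print(out.shape)  # torch.Size([4, 3]): one score per class for each sample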

We will use a 2D dataset composed of 3 gaussians, with 4000 samples for training and 1000 for testing. The sets are divided into batches so that they can be processed by the network in small groups; we use a batch size of 20.

[2]:
# experiment parameters
n_classes = 3
n_features = 2
n_samples_tr = 4000  # number of training set samples
n_samples_ts = 1000  # number of testing set samples
batch_size = 20

# dataset creation
from secml.data.loader import CDLRandom
dataset = CDLRandom(n_samples=n_samples_tr + n_samples_ts,
                    n_classes=n_classes,
                    n_features=n_features, n_redundant=0,
                    n_clusters_per_class=1,
                    class_sep=1, random_state=0).load()

# Split in training and test
from secml.data.splitter import CTrainTestSplit
splitter = CTrainTestSplit(train_size=n_samples_tr,
                           test_size=n_samples_ts,
                           random_state=0)

tr, ts = splitter.split(dataset)

# Normalize the data
from secml.ml.features.normalization import CNormalizerMinMax
nmz = CNormalizerMinMax()
tr.X = nmz.fit_transform(tr.X)
ts.X = nmz.transform(ts.X)
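
Before training, we can optionally inspect the resulting sets; the following check assumes the standard CDataset/CArray attributes and methods (num_samples, X, min, max).

[ ]:
# optional check of the split and normalization (illustrative only)
print("Training samples:", tr.num_samples)  # expected: 4000
print("Test samples:", ts.num_samples)      # expected: 1000
# after min-max normalization the training features lie in [0, 1]
print("Feature range:", tr.X.min(), "-", tr.X.max())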

Now we can create an instance of the PyTorch model and then wrap it in CClassifierPyTorch, the class that links it to our library's functionalities.

[3]:
# Random seed
torch.manual_seed(0)

# torch model creation
net = Net(n_features=n_features, n_classes=n_classes)

from torch import optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(),
                      lr=0.001, momentum=0.9)

# wrap torch model in CClassifierPyTorch class
from secml.ml.classifiers import CClassifierPyTorch
clf = CClassifierPyTorch(model=net,
                         loss=criterion,
                         optimizer=optimizer,
                         epochs=10,
                         batch_size=batch_size,
                         input_shape=(n_features,),
                         random_state=0)

We can simply pass the loaded CDataset to the fit method. The wrapper will handle batch processing and train the network for the number of epochs specified in the wrapper constructor.

[4]:
# clf.verbose = 1  # Can be used to display training process output
print("Training started...")
clf.fit(tr)
print("Training completed!")
Training started...
Training completed!

Using the model in “predict” mode is just as easy. We call the predict method defined in our wrapper and pass in the test data. We can then evaluate the accuracy with the CMetric classes defined in our library.

[5]:
label_torch = clf.predict(ts.X, return_decision_function=False)

from secml.ml.peval.metrics import CMetric
acc_torch = CMetric.create('accuracy').performance_score(ts.Y, label_torch)

print("Model Accuracy: {}".format(acc_torch))
Model Accuracy: 0.991
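
If we also need the per-class scores, predict accepts return_decision_function=True and returns the scores along with the labels; the cell below is a small optional sketch of that usage.

[ ]:
# optional: retrieve the per-class decision scores along with the labels
labels, scores = clf.predict(ts.X, return_decision_function=True)
print(labels.shape)  # one predicted label per test sample
print(scores.shape)  # one score per class for each test sample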