Classification of the entire test set

In this tutorial we’re going to take the model we developed in the previous tutorial, run it on the entire MNIST testing set and calculate the overall classification accuracy.

Install PyGeNN wheel from Google Drive

Download wheel file

[ ]:
if "google.colab" in str(get_ipython()):
    !pip install gdown --upgrade
    !gdown 1V_GzXUDzcFz9QDIpxAD8QNEglcSipssW
    !pip install pygenn-5.0.0-cp310-cp310-linux_x86_64.whl
    %env CUDA_PATH=/usr/local/cuda
Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (5.1.0)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown) (4.12.3)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown) (3.13.1)
Requirement already satisfied: requests[socks] in /usr/local/lib/python3.10/dist-packages (from gdown) (2.31.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from gdown) (4.66.2)
Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown) (2.5)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2.0.7)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2024.2.2)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (1.7.1)
Downloading...
From: https://drive.google.com/uc?id=1V_GzXUDzcFz9QDIpxAD8QNEglcSipssW
To: /content/pygenn-5.0.0-cp310-cp310-linux_x86_64.whl
100% 8.29M/8.29M [00:00<00:00, 149MB/s]
Processing ./pygenn-5.0.0-cp310-cp310-linux_x86_64.whl
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (1.25.2)
Requirement already satisfied: deprecated in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (1.2.14)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (5.9.5)
Requirement already satisfied: wrapt<2,>=1.10 in /usr/local/lib/python3.10/dist-packages (from deprecated->pygenn==5.0.0) (1.14.1)
pygenn is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.
env: CUDA_PATH=/usr/local/cuda

Download pre-trained weights and MNIST test data

[ ]:
!gdown 1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc
!gdown 131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF
Downloading...
From: https://drive.google.com/uc?id=1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc
To: /content/weights_0_1.npy
100% 402k/402k [00:00<00:00, 127MB/s]
Downloading...
From: https://drive.google.com/uc?id=131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF
To: /content/weights_1_2.npy
100% 5.25k/5.25k [00:00<00:00, 23.6MB/s]

Install MNIST package

[ ]:
!pip install mnist
Collecting mnist
  Downloading mnist-0.2.2-py2.py3-none-any.whl (3.5 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from mnist) (1.25.2)
Installing collected packages: mnist
Successfully installed mnist-0.2.2

Build model

As well as the standard modules and the PyGeNN functions and classes we used in the first tutorial, we also import time.perf_counter for measuring the performance of our classifier and tqdm.tqdm for drawing progress bars.

[ ]:
import mnist
import numpy as np
import matplotlib.pyplot as plt
from pygenn import (create_neuron_model, create_current_source_model,
                    init_postsynaptic, init_weight_update, GeNNModel)
from time import perf_counter
from tqdm.auto import tqdm

As before, define some simulation parameters: each image will be presented for 100 timesteps of 1 ms, i.e. 100 ms of simulated time, and 8-bit pixel intensities will be scaled by 1/100 to convert them into input currents.

[ ]:
TIMESTEP = 1.0
PRESENT_TIMESTEPS = 100
INPUT_CURRENT_SCALE = 1.0 / 100.0

Create neuron and current source models very similar to those used in the first tutorial. However, to avoid having to download every spike and count them on the CPU, here we add an additional state variable SpikeCount to each neuron, which gets incremented in the reset code so that spikes are counted on the GPU.

[ ]:
# Very simple integrate-and-fire neuron model
if_model = create_neuron_model(
    "if_model",
    params=["Vthr"],
    vars=[("V", "scalar"), ("SpikeCount", "unsigned int")],
    sim_code="V += Isyn * dt;",
    reset_code="""
    V = 0.0;
    SpikeCount++;
    """,
    threshold_condition_code="V >= Vthr")

cs_model = create_current_source_model(
    "cs_model",
    vars=[("magnitude", "scalar")],
    injection_code="injectCurrent(magnitude);")
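
For reference, the sim_code, threshold_condition_code and reset_code above implement simple forward-Euler integration of a non-leaky integrate-and-fire neuron:

$$V \leftarrow V + I_{\mathrm{syn}}\,\Delta t, \qquad \text{and whenever } V \ge V_{\mathrm{thr}}: \quad V \leftarrow 0,\; \mathrm{SpikeCount} \leftarrow \mathrm{SpikeCount} + 1$$

With the threshold of 5 set below and a maximum pixel intensity of 255 scaled to an input current of 2.55, an input neuron driven by such a pixel reaches threshold every couple of timesteps, i.e. around 50 spikes per 100-timestep presentation.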

Build model, load weights and create neuron, synapse and current source populations as before

[ ]:
model = GeNNModel("float", "tutorial_2")
model.dt = TIMESTEP

# Load weights
weights_0_1 = np.load("weights_0_1.npy")
weights_1_2 = np.load("weights_1_2.npy")

if_params = {"Vthr": 5.0}
if_init = {"V": 0.0, "SpikeCount": 0}
neurons = [model.add_neuron_population("neuron0", weights_0_1.shape[0],
                                       if_model, if_params, if_init),
           model.add_neuron_population("neuron1", weights_0_1.shape[1],
                                       if_model, if_params, if_init),
           model.add_neuron_population("neuron2", weights_1_2.shape[1],
                                       if_model, if_params, if_init)]
model.add_synapse_population(
        "synapse_0_1", "DENSE",
        neurons[0], neurons[1],
        init_weight_update("StaticPulse", {}, {"g": weights_0_1.flatten()}),
        init_postsynaptic("DeltaCurr"))
model.add_synapse_population(
        "synapse_1_2", "DENSE",
        neurons[1], neurons[2],
        init_weight_update("StaticPulse", {}, {"g": weights_1_2.flatten()}),
        init_postsynaptic("DeltaCurr"))

current_input = model.add_current_source("current_input", cs_model,
                                         neurons[0], {}, {"magnitude": 0.0})
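
As a quick sanity check (a minimal sketch using the weight arrays already loaded above), the shapes of the dense weight matrices fully determine the layer sizes, so we can print and verify them:

[ ]:
# Rows of each dense weight matrix correspond to presynaptic neurons and
# columns to postsynaptic neurons, so adjacent dimensions must match
print(f"Layer sizes: {weights_0_1.shape[0]} -> {weights_0_1.shape[1]} -> {weights_1_2.shape[1]}")
assert weights_0_1.shape[1] == weights_1_2.shape[0]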

Run the code generator to generate simulation code for the model and load it into PyGeNN as before. However, here we don't want to record any spikes, so there is no need to specify a recording buffer size.

[ ]:
model.build()
model.load()

Just like in the previous tutorial, load testing images and labels and verify their dimensions

[ ]:
mnist.datasets_url = "https://storage.googleapis.com/cvdf-datasets/mnist/"
testing_images = mnist.test_images()
testing_labels = mnist.test_labels()

# Flatten each 28x28 image into a 784-element vector
testing_images = np.reshape(testing_images, (testing_images.shape[0], -1))

# Check that the flattened images match the input layer size
# and that the largest label fits the output layer
assert testing_images.shape[1] == weights_0_1.shape[0]
assert np.max(testing_labels) == (weights_1_2.shape[1] - 1)

Simulate model

In this tutorial we're not only going to inject current but also to access the new spike count variable in the output population and to reset the voltages throughout the model. Therefore, we need to create some additional memory views.

[ ]:
current_input_magnitude = current_input.vars["magnitude"]
output_spike_count = neurons[-1].vars["SpikeCount"]
neuron_voltages = [n.vars["V"] for n in neurons]
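
Note that the simulation loop below writes to these variables in two ways: assigning to values copies a whole new array into the host-side memory, while writing through view[:] modifies it in place. Either way, an explicit push_to_device() is required before the new values are visible to the GPU. A hypothetical illustration (based only on how these objects are used in this tutorial, not a complete description of the PyGeNN API):

[ ]:
# Both idioms zero the host-side array; neither touches GPU memory
# until it is explicitly pushed
output_spike_count.view[:] = 0                                      # in-place write via the view
output_spike_count.values = np.zeros_like(output_spike_count.view)  # or assign a fresh array
output_spike_count.push_to_device()                                 # upload to the GPU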

Now, we define our inference loop. We loop through all of the testing images and for each one:

  1. Copy the (scaled) image data into the current input memory view and copy it to the GPU

  2. Loop through all the neuron populations, zero their membrane voltages and copy these to the GPU

  3. Zero the output spike count and copy that to the GPU

  4. Simulate the model for PRESENT_TIMESTEPS

  5. Download the spike counts from the output layer

  6. If the highest spike count corresponds to the correct label, increment num_correct

[ ]:
# Simulate
num_correct = 0
start_time = perf_counter()
for i in tqdm(range(testing_images.shape[0])):
    current_input_magnitude.values = testing_images[i] * INPUT_CURRENT_SCALE
    current_input_magnitude.push_to_device()

    # Loop through all voltage variables
    for v in neuron_voltages:
        # Manually 'reset' voltage
        v.view[:] = 0.0

        # Upload
        v.push_to_device()

    # Zero spike count
    output_spike_count.view[:] = 0
    output_spike_count.push_to_device()

    for t in range(PRESENT_TIMESTEPS):
        model.step_time()

    # Download spike count from last layer
    output_spike_count.pull_from_device()

    # Find which neuron spiked the most to get prediction
    predicted_label = np.argmax(output_spike_count.values)
    true_label = testing_labels[i]

    if predicted_label == true_label:
        num_correct += 1

end_time = perf_counter()
print(f"\nAccuracy {((num_correct / float(testing_images.shape[0])) * 100.0)}%%")
print(f"Time {end_time - start_time} seconds")


Accuracy 97.44%
Time 11.930175114999997 seconds
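
Finally, dividing the wall-clock time by the number of test images gives the average per-image classification time, a little over a millisecond here (a rough sketch using the variables defined above; the exact figure will vary between GPUs):

[ ]:
# Average wall-clock classification time per image in milliseconds
time_per_image_ms = (end_time - start_time) / testing_images.shape[0] * 1000.0
print(f"{time_per_image_ms:.2f} ms per image")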