{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "lGa0_oLb61zz" }, "source": [ "# Classification of the entire test set\n", "In this tutorial we're going to take the model we developed in the previous tutorial, run it on the entire MNIST testing set and calculate the overall classification accuracy.\n", "\n", "## Install PyGeNN wheel from Google Drive\n", "Download wheel file" ] }, { "cell_type": "code", "source": [ "if \"google.colab\" in str(get_ipython()):\n", " #import IPython\n", " #IPython.core.magics.execution.ExecutionMagics.run.func_defaults[2] = lambda a: a\n", " #%run \"../install_collab.ipynb\"\n", " !pip install gdown --upgrade\n", " !gdown 1V_GzXUDzcFz9QDIpxAD8QNEglcSipssW\n", " !pip install pygenn-5.0.0-cp310-cp310-linux_x86_64.whl\n", " %env CUDA_PATH=/usr/local/cuda" ], "metadata": { "id": "Qqz__TiIdE9x", "outputId": "912641fe-072b-48d1-aa90-f911ab463cd3", "colab": { "base_uri": "https://localhost:8080/" } }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (5.1.0)\n", "Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown) (4.12.3)\n", "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown) (3.13.1)\n", "Requirement already satisfied: requests[socks] in /usr/local/lib/python3.10/dist-packages (from gdown) (2.31.0)\n", "Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from gdown) (4.66.2)\n", "Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown) (2.5)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.3.2)\n", "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (3.6)\n", "Requirement 
already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2.0.7)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (2024.2.2)\n", "Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown) (1.7.1)\n", "Downloading...\n", "From: https://drive.google.com/uc?id=1V_GzXUDzcFz9QDIpxAD8QNEglcSipssW\n", "To: /content/pygenn-5.0.0-cp310-cp310-linux_x86_64.whl\n", "100% 8.29M/8.29M [00:00<00:00, 149MB/s]\n", "Processing ./pygenn-5.0.0-cp310-cp310-linux_x86_64.whl\n", "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (1.25.2)\n", "Requirement already satisfied: deprecated in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (1.2.14)\n", "Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from pygenn==5.0.0) (5.9.5)\n", "Requirement already satisfied: wrapt<2,>=1.10 in /usr/local/lib/python3.10/dist-packages (from deprecated->pygenn==5.0.0) (1.14.1)\n", "pygenn is already installed with the same version as the provided wheel. 
Use --force-reinstall to force an installation of the wheel.\n", "env: CUDA_PATH=/usr/local/cuda\n" ] } ] }, { "cell_type": "markdown", "metadata": { "id": "8tqbF5GldF0o" }, "source": [ "## Download pre-trained weights and MNIST test data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "N-2PV7LcdFg_", "outputId": "1404acd1-ba2c-4c08-c620-c1ad71ece658" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Downloading...\n", "From: https://drive.google.com/uc?id=1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc\n", "To: /content/weights_0_1.npy\n", "100% 402k/402k [00:00<00:00, 127MB/s]\n", "Downloading...\n", "From: https://drive.google.com/uc?id=131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF\n", "To: /content/weights_1_2.npy\n", "100% 5.25k/5.25k [00:00<00:00, 23.6MB/s]\n" ] } ], "source": [ "!gdown 1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc\n", "!gdown 131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF" ] }, { "cell_type": "markdown", "source": [ "## Install MNIST package" ], "metadata": { "id": "KVRtXVzIg07T" } }, { "cell_type": "code", "source": [ "!pip install mnist" ], "metadata": { "id": "AikBc4sfg1b-", "outputId": "ddb641da-6ec7-459f-db01-5157d2a17f49", "colab": { "base_uri": "https://localhost:8080/" } }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Collecting mnist\n", " Downloading mnist-0.2.2-py2.py3-none-any.whl (3.5 kB)\n", "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from mnist) (1.25.2)\n", "Installing collected packages: mnist\n", "Successfully installed mnist-0.2.2\n" ] } ] }, { "cell_type": "markdown", "metadata": { "id": "l7UOIOeX1xeE" }, "source": [ "## Build model\n", "As well as the standard modules and the PyGeNN functions and classes we used in the first tutorial, we also import `time.perf_counter` for measuring the performance of our classifier and `tqdm.auto.tqdm` for drawing progress bars" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": { "id": "agqWFZjickfU" }, "outputs": [], "source": [ "import mnist\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from pygenn import (create_neuron_model, create_current_source_model,\n", " init_postsynaptic, init_weight_update, GeNNModel)\n", "from time import perf_counter\n", "from tqdm.auto import tqdm" ] }, { "cell_type": "markdown", "metadata": { "id": "FMBcXoyd4yS1" }, "source": [ "As before, define some simulation parameters" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "KqBx7iO_kApE" }, "outputs": [], "source": [ "TIMESTEP = 1.0\n", "PRESENT_TIMESTEPS = 100\n", "INPUT_CURRENT_SCALE = 1.0 / 100.0" ] }, { "cell_type": "markdown", "metadata": { "id": "2QlVBYQG431K" }, "source": [ "Create very similar neuron and current source models. However, to avoid having to download every spike and count them on the CPU, here, we add an additional state variable `SpikeCount` to each neuron which gets incremented in the reset code to count spikes." 
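, "\n", "\n", "Conceptually, each simulation timestep of this neuron behaves like the following rough pure-Python sketch (hypothetical names, not the PyGeNN API):\n", "\n", "```python\n", "def if_step(v, isyn, dt, vthr, spike_count):\n", "    # sim_code: integrate the synaptic input current\n", "    v += isyn * dt\n", "    # threshold_condition_code and reset_code: spike, reset and count\n", "    if v >= vthr:\n", "        v = 0.0\n", "        spike_count += 1\n", "    return v, spike_count\n", "```"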
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-7lzXzmQcgbt" }, "outputs": [], "source": [ "# Very simple integrate-and-fire neuron model\n", "if_model = create_neuron_model(\n", " \"if_model\",\n", " params=[\"Vthr\"],\n", " vars=[(\"V\", \"scalar\"), (\"SpikeCount\", \"unsigned int\")],\n", " sim_code=\"V += Isyn * dt;\",\n", " reset_code=\"\"\"\n", " V = 0.0;\n", " SpikeCount++;\n", " \"\"\",\n", " threshold_condition_code=\"V >= Vthr\")\n", "\n", "cs_model = create_current_source_model(\n", " \"cs_model\",\n", " vars=[(\"magnitude\", \"scalar\")],\n", " injection_code=\"injectCurrent(magnitude);\")" ] }, { "cell_type": "markdown", "metadata": { "id": "lWMtozHB3OrM" }, "source": [ "Build model, load weights and create neuron, synapse and current source populations as before" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Sx1VOU5udixG" }, "outputs": [], "source": [ "model = GeNNModel(\"float\", \"tutorial_2\")\n", "model.dt = TIMESTEP\n", "\n", "# Load weights\n", "weights_0_1 = np.load(\"weights_0_1.npy\")\n", "weights_1_2 = np.load(\"weights_1_2.npy\")\n", "\n", "if_params = {\"Vthr\": 5.0}\n", "if_init = {\"V\": 0.0, \"SpikeCount\":0}\n", "neurons = [model.add_neuron_population(\"neuron0\", weights_0_1.shape[0],\n", " if_model, if_params, if_init),\n", " model.add_neuron_population(\"neuron1\", weights_0_1.shape[1],\n", " if_model, if_params, if_init),\n", " model.add_neuron_population(\"neuron2\", weights_1_2.shape[1],\n", " if_model, if_params, if_init)]\n", "model.add_synapse_population(\n", " \"synapse_0_1\", \"DENSE\",\n", " neurons[0], neurons[1],\n", " init_weight_update(\"StaticPulse\", {}, {\"g\": weights_0_1.flatten()}),\n", " init_postsynaptic(\"DeltaCurr\"))\n", "model.add_synapse_population(\n", " \"synapse_1_2\", \"DENSE\",\n", " neurons[1], neurons[2],\n", " init_weight_update(\"StaticPulse\", {}, {\"g\": weights_1_2.flatten()}),\n", " init_postsynaptic(\"DeltaCurr\"));\n", "\n", 
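"# Optional sanity check (an addition for this write-up, not part of the\n", "# original tutorial): with \"DENSE\" connectivity and the static \"StaticPulse\"\n", "# weight update, the two matrices share the hidden layer, so shapes must agree\n", "assert weights_0_1.shape[1] == weights_1_2.shape[0]\n", "\n", 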
"current_input = model.add_current_source(\"current_input\", cs_model,\n", " neurons[0], {}, {\"magnitude\": 0.0})" ] }, { "cell_type": "markdown", "metadata": { "id": "jdggjUe13tT_" }, "source": [ "Run the code generator to generate simulation code for the model and load it into PyGeNN as before. Here, however, we don't want to record any spikes, so there is no need to specify a recording buffer size." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "K8kHbKMJ3kIY" }, "outputs": [], "source": [ "model.build()\n", "model.load()" ] }, { "cell_type": "markdown", "metadata": { "id": "rUxwsE323l37" }, "source": [ "Just like in the previous tutorial, load the testing images and labels and verify their dimensions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0Tf07KUOeP-X" }, "outputs": [], "source": [ "mnist.datasets_url = \"https://storage.googleapis.com/cvdf-datasets/mnist/\"\n", "testing_images = mnist.test_images()\n", "testing_labels = mnist.test_labels()\n", "\n", "testing_images = np.reshape(testing_images, (testing_images.shape[0], -1))\n", "assert testing_images.shape[1] == weights_0_1.shape[0]\n", "assert np.max(testing_labels) == (weights_1_2.shape[1] - 1)" ] }, { "cell_type": "markdown", "metadata": { "id": "r-TFULk_3i8z" }, "source": [ "## Simulate model\n", "In this tutorial we're going to not only inject current but also access the new spike count variable in the output population and reset the voltages throughout the model. 
Therefore we need to create some additional memory views" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3z1ccKHeejeB" }, "outputs": [], "source": [ "current_input_magnitude = current_input.vars[\"magnitude\"]\n", "output_spike_count = neurons[-1].vars[\"SpikeCount\"]\n", "neuron_voltages = [n.vars[\"V\"] for n in neurons]" ] }, { "cell_type": "markdown", "metadata": { "id": "JCDP_sTa4HTL" }, "source": [ "Now, we define our inference loop. We loop through all of the testing images and for each one:\n", "\n", "1. Copy the (scaled) image data into the current input memory view and copy it to the GPU\n", "2. Loop through all the neuron populations, zero their membrane voltages and copy these to the GPU\n", "3. Zero the output spike count and copy that to the GPU\n", "4. Simulate the model for `PRESENT_TIMESTEPS`\n", "5. Download the spike counts from the output layer\n", "6. If the highest spike count corresponds to the correct label, increment `num_correct`\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 101, "referenced_widgets": [ "e2a5b2d7a928414c921ef1945e969ff2", "3822016801b34d7c91f7973ce35b7918", "07ddfc013d83495fa87c3fc1df6a8870", "2659dd42699542deac2a7dadbe9eca61", "9f19ea2e563f40409e23389b5dc4a0a8", "0e560bce941d4b28847ce2e58bf19bff", "8c69c9171dae42d2a8fc3867c092ece1", "3d4d8e0017d648bfabd8fcf0c89f4ec8", "2e72f36403e6469c8670f67a3dffb5ef", "098d0a4e89024f8fa8c61ff3f5c477d5", "bb812d858cf746be9e33b71e67fc04f6" ] }, "id": "4qSoinT4etKq", "outputId": "01a98bc1-3bb6-4fab-b172-598e3f90fb2b" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ " 0%| | 0/10000 [00:00