GeNN  4.9.0
GPU enhanced Neuronal Networks (GeNN)
Best practices guide

GeNN generates code according to the network model defined by the user, as described in the User Manual. Here we provide guidelines for setting up GeNN and using the generated functions.

Simulating a network model

By enabling automatic copying, GeNN can be used in a simple mode where CUDA automatically transfers data between the GPU and CPU when required (see https://devblogs.nvidia.com/unified-memory-cuda-beginners/). However, copying data between the GPU and host memory is costly in terms of performance, and the automatic copying operates at a fairly coarse grain (pages are approximately 4 kilobytes). Therefore, in order to maximise performance, we recommend you do not use automatic copying and instead manually copy data between host and device only when required.

You can use the generated push functions to copy data from the host to the GPU. At the end of your simulation, if you want to access the variables, you need to copy them back from the device using the generated pull functions, either for the model state as a whole or for individual variables.
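In outline, a typical simulation loop pushes the initial state once, keeps all computation on the device, and pulls results only at the end. The push/pull functions below are illustrative stand-ins for the per-model functions GeNN generates (here stubbed out so the sketch is self-contained):

```cpp
#include <vector>

// Stand-ins for GeNN-generated state: host copy of a hypothetical variable
// "V" of a hypothetical population "Pop", and a pretend device-side copy.
static std::vector<float> VPop(4, -65.0f);     // host array
static std::vector<float> d_VPop(4, 0.0f);     // "device" array (stub)

// Stubs standing in for the copy functions GeNN generates for a real model.
void pushVPopToDevice()   { d_VPop = VPop; }   // host -> device
void pullVPopFromDevice() { VPop = d_VPop; }   // device -> host
void stepTime()           { for(auto &v : d_VPop) v += 1.0f; }  // stub update

void simulate(unsigned int numTimesteps) {
    pushVPopToDevice();                    // upload initial state once
    for(unsigned int t = 0; t < numTimesteps; t++) {
        stepTime();                        // all computation stays on the GPU
    }
    pullVPopFromDevice();                  // download results only at the end
}
```

Keeping the inner loop free of host-device copies is what the recommendation above amounts to in practice.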

Extra Global Parameters

If extra global parameters have a "scalar" type such as float, they can be set directly from simulation code by assigning to the corresponding generated global variable, for example an extra global parameter "reward" of a neuron population.
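For instance, assuming a model with a population "Pop" that has a scalar extra global parameter "reward", the code generator emits a plain global variable that simulation code assigns to directly. The declaration below is a stand-in for the generated one, so the sketch compiles on its own:

```cpp
// Stand-in for the global variable GeNN would generate for a scalar extra
// global parameter "reward" of a hypothetical population "Pop".
float rewardPop = 0.0f;

// In simulation code the parameter is set by plain assignment, e.g. once
// per trial before stepping the simulation.
void setReward(float r) { rewardPop = r; }
```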

Extra global parameters can also be used to provide additional data to snippets used for variable (see Variable initialisation) or sparse connectivity (see Sparse connectivity initialisation) initialisation.

Floating point precision

Double precision floating point numbers are supported by devices with compute capability 1.3 or higher. If you have an older GPU, you need to use single precision floating point in your models and simulation. Furthermore, GPUs generally perform better with single precision, whereas double precision is the standard on CPUs; keep this difference in mind when comparing performance.

Typically, variables in GeNN models are defined using the scalar type. This type is substituted with "float" or "double" during code generation, according to the model precision, which is specified when the model is defined.

Ambiguities can arise in arithmetic operations using literal numbers. Standard C compilers presume that a number written as "X" is an integer and a number written as "X.Y" is a double; a single-precision literal needs an "f" suffix, as in "X.Yf". Make sure to use literals of the same precision as your variables in order to avoid unintended double-precision arithmetic and the associated performance loss.
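The effect of the literal suffix can be seen in a small self-contained example: mixing a float variable with an unsuffixed literal silently promotes the whole expression to double precision:

```cpp
#include <type_traits>

float v = 1.0f;

// "0.5" is a double literal, so in "v * 0.5" the float operand is
// promoted and the multiplication is performed in double precision.
static_assert(std::is_same<decltype(v * 0.5), double>::value,
              "unsuffixed literal promotes the expression to double");

// "0.5f" is a float literal, so the expression stays single precision.
static_assert(std::is_same<decltype(v * 0.5f), float>::value,
              "suffixed literal keeps the expression in float");
```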

Working with variables in GeNN

Model variables

User-defined model variables originate from classes derived from the NeuronModels::Base, WeightUpdateModels::Base or PostsynapticModels::Base classes. The name of each model variable is defined in the model type itself.

For convenience, GeNN provides generated functions to copy each state variable from the device into host memory and vice versa. Alternatively, all state variables associated with a population can be copied using a single call. These conventions also apply to the variables of postsynaptic and weight update models.

Built-in Variables in GeNN

GeNN has no explicitly hard-coded synapse and neuron variables. Users are free to name the variables of their models as they want. However, there are some reserved variables that are used for intermediate calculations and communication between different parts of the generated code. They can be used in user-defined code, but no other variables should be defined with these names.

  • DT : Time step (typically in ms) for simulation; Neuron integration can be done in multiple sub-steps inside the neuron model for numerical stability (see Traub-Miles and Izhikevich neuron model variations in Neuron models).
Note
DT exceptionally does not require bracketing with $(.)
  • t : The current time as a floating point value, typically interpreted as in units of ms
  • id : The index of a neuron in a neuron population; can be used in the context of neurons, current sources, postsynaptic models, and pre- and postsynaptic weight update models
  • id_syn : The index of a synapse in a synapse population
  • id_pre : Used in a synapse context. The index of the pre-synaptic neuron in its population.
  • id_post : Used in a synapse context. The index of the post-synaptic neuron in its population.
  • inSyn : Used in the context of post_synaptic models. This is an intermediary synapse variable which contains the summed input into a postsynaptic neuron (originating from the $(addToInSyn, X) or $(addToInSynDelay, X, Y) functions of the weight update model used by incoming synapses).
  • Isyn : This is a local variable which contains the (summed) input current to a neuron. It is typically the sum of any explicit current input and all synaptic inputs. The way its value is calculated during the update of the postsynaptic neuron is defined by the code provided in the postsynaptic model. For example, the standard PostsynapticModels::ExpCond postsynaptic model defines apply-input code which implements a conductance-based synapse in which the postsynaptic current is given by $I_{\rm syn}= g*s*(V_{\rm rev}-V_{\rm post})$. The value of $(Isyn) resulting from the apply-input code can then be used in neuron sim code like so:
    $(V)+= (-$(V)+$(Isyn))*DT
  • sT : This is a neuron variable containing the spike time of each neuron and is automatically generated for pre- and postsynaptic neuron groups if they are connected using a synapse population with a weight update model that requires spike times.
  • prev_sT : This is a neuron variable containing the previous spike time of each neuron and is automatically generated for pre- and postsynaptic neuron groups if they are connected using a synapse population with a weight update model that requires previous spike times.

In addition to these variables, neuron variables can be referred to in synapse models as $(<neuronVarName>_pre) for the presynaptic neuron population and $(<neuronVarName>_post) for the postsynaptic population. For example, $(sT_pre), $(sT_post), $(V_pre), etc.

Spike Recording

Especially in models simulated with small timesteps, very few spikes may be emitted each timestep, making copying spikes from the GPU every timestep very inefficient. Instead, the spike recording system allows spikes and spike-like events emitted over a number of timesteps to be collected in GPU memory before being transferred to the host. Spike recording can be enabled on chosen neuron groups when the model is defined. Remaining GPU memory can then be allocated at runtime for spike recording, the recording data structures can be copied from the GPU to the host in a single operation, and the spikes and spike-like events emitted by a population can then be accessed from the host.
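A compact way to store such recordings, and the layout this sketch assumes, is a bitfield with one bit per neuron per timestep, padded to 32-bit words. The following self-contained helper (not part of the GeNN API) decodes such a buffer into (timestep, neuron id) pairs on the host:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Decode a spike-recording bitfield into (timestep, neuron id) pairs.
// Assumed layout: one bit per neuron per timestep, each timestep padded
// to a whole number of 32-bit words, timesteps stored consecutively.
std::vector<std::pair<unsigned int, unsigned int>> decodeSpikes(
    const uint32_t *buffer, unsigned int numTimesteps, unsigned int numNeurons)
{
    const unsigned int wordsPerTimestep = (numNeurons + 31) / 32;
    std::vector<std::pair<unsigned int, unsigned int>> spikes;
    for(unsigned int t = 0; t < numTimesteps; t++) {
        for(unsigned int w = 0; w < wordsPerTimestep; w++) {
            uint32_t word = buffer[(t * wordsPerTimestep) + w];
            // Scan the set bits of this word; bit b is neuron (w * 32) + b
            for(unsigned int b = 0; word != 0; b++, word >>= 1) {
                if(word & 1) {
                    spikes.emplace_back(t, (w * 32) + b);
                }
            }
        }
    }
    return spikes;
}
```

Because only one buffer transfer is needed per recording window, the per-timestep copy overhead described above is amortised over many timesteps.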

Debugging suggestions

