Custom models

One of the main things that makes GeNN different from other SNN simulators is that all the models and snippets used to describe the behaviour of your model (see Building networks) can be easily customised by the user using strings containing a C-like language called GeNNCode.

GeNNCode

GeNN model functionality is implemented using strings of C-like code which we call GeNNCode. This is essentially C99 (https://en.cppreference.com/w/c/language) with the following differences:

  • No preprocessor

  • Enough support for strings to printf debug messages, but not much more, i.e. no strstr etc.

  • Functions, typedefs and structures cannot be defined in user code

  • Structures are not supported at all

  • Some esoteric C99 language features like octal integer and hexadecimal floating point literals aren’t supported

  • The address-of (&) operator isn’t supported. On the GPU hardware GeNN targets, local variables are assumed to be stored in registers and are not addressable. The only time this is limiting is when dealing with extra global parameter arrays, as you can no longer do something like const int *egpSubset = &egp[offset]; and instead have to write const int *egpSubset = egp + offset;.

  • Like C++ (but unlike C99), function overloading is supported, so sin(30.0f) will resolve to the single-precision rather than the double-precision version.

  • Floating point literals like 30.0 without a suffix will be treated as scalar (i.e. the floating point type declared as the precision of the overall model), 30.0f will always be treated as float and 30.0d will always be treated as double.

  • An LP64 data model is used on all platforms, meaning that int is 32-bit and long is 64-bit.

  • Only the following standard library functions are supported: cos, sin, tan, acos, asin, atan, atan2, cosh, sinh, tanh, acosh, asinh, atanh, exp, expm1, exp2, pow, scalbn, log, log1p, log2, log10, ldexp, ilogb, sqrt, cbrt, hypot, ceil, floor, fmod, round, rint, trunc, nearbyint, nextafter, remainder, fabs, fdim, fmax, fmin, erf, erfc, tgamma, lgamma, copysign, fma, min, max, abs, printf
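
For example, the following illustrative fragment touches on several of these differences (here egp is assumed to be an extra global parameter array, offset an integer and tau a parameter):

// An unsuffixed literal is treated as scalar, i.e. the model precision
const scalar decay = exp(-dt / tau);
// Overload resolution picks the single-precision version of sin
const float s = sin(30.0f);
// Subsets of extra global parameters use pointer arithmetic, not address-of
const int *egpSubset = egp + offset;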

Random number generation

Random numbers are useful in many forms of custom model, for example as a source of noise or a probabilistic spiking mechanism. In GeNN this can be implemented by using the following functions within GeNNCode:

  • gennrand() returns a random 32-bit unsigned integer

  • gennrand_uniform() returns a number drawn uniformly from the interval \([0.0, 1.0]\)

  • gennrand_normal() returns a number drawn from a normal distribution with a mean of 0 and a standard deviation of 1.

  • gennrand_exponential() returns a number drawn from an exponential distribution with \(\lambda=1\).

  • gennrand_log_normal(mean, std) returns a number drawn from a log-normal distribution with the specified mean and standard deviation.

  • gennrand_gamma(alpha) returns a number drawn from a gamma distribution with the specified shape.

  • gennrand_binomial(n, p) returns a number drawn from a binomial distribution with the specified number of trials n and success probability p.
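
For example, one sketch of the probabilistic spiking mechanism mentioned above would be a neuron model whose threshold condition fires with a probability determined by a hypothetical rate parameter (in Hz, with dt in ms):

threshold_condition_code="gennrand_uniform() < rate * dt / 1000.0"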

Initialisation snippets

Initialisation snippets are GeNNCode used to initialise various parts of a GeNN model. They are configurable by the user with parameters, derived parameters and extra global parameters. Parameters have a homogeneous numeric value across the population being initialised. ‘Derived parameters’ are a mechanism for enhanced efficiency when running models. They allow constants used within the GeNNCode implementation of a model to be computed from more ‘user friendly’ parameters provided by the user. For example, a decay to apply each timestep could be computed from a time constant provided in a parameter called tau by passing the following keyword arguments to one of the snippet or model creation functions described below:

params=["tau"],
derived_params=[("ExpTC", lambda pars, dt: np.exp(-dt / pars["tau"]))])

Variable initialisation

New variable initialisation snippets can be defined by calling:

pygenn.create_var_init_snippet(class_name, params=None, derived_params=None, var_init_code=None, extra_global_params=None)

Creates a new variable initialisation snippet. Within the var_init_code, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

And, if the snippet is used to initialise a per-neuron variable:

  • id which represents a neuron’s index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

or, a per-synapse variable:

  • id_pre which represents the index of the presynaptic neuron (starting from zero)

  • id_post which represents the index of the postsynaptic neuron (starting from zero)

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Finally, the variable being initialised is represented by the write-only value variable.

Parameters:
  • class_name (str) – name of the new model (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • var_init_code (str | None) – string containing the code statements required to initialise the variable

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

For example, if we wanted to define a snippet to initialise variables by sampling from a normal distribution, redrawing if the value is negative (which could be useful to ensure delays remain causal):

normal_positive_model = pygenn.create_var_init_snippet(
    'normal_positive',
    params=['mean', 'sd'],
    var_init_code=
        """
        scalar normal;
        do {
            normal = mean + (gennrand_normal() * sd);
        } while (normal < 0.0);
        value = normal;
        """)

Sparse connectivity initialisation

Sparse connectivity initialisation snippets can be used to initialise connectivity when using SynapseMatrixType.SPARSE or SynapseMatrixType.BITMASK connectivity; and to generate connectivity on the fly when using SynapseMatrixType.PROCEDURAL connectivity. New sparse connectivity initialisation snippets can be defined by calling:

pygenn.create_sparse_connect_init_snippet(class_name, params=None, derived_params=None, row_build_code=None, col_build_code=None, calc_max_row_len_func=None, calc_max_col_len_func=None, calc_kernel_size_func=None, extra_global_params=None)

Creates a new sparse connectivity initialisation snippet. Within the code strings, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

  • thread which, when procedural connectivity is used with multiple threads per presynaptic neuron, represents the index of the current thread

and, in row_build_code:

  • id_pre represents the index of the presynaptic neuron (starting from zero)

  • id_post_begin which, when procedural connectivity is used with multiple threads per presynaptic neuron, represents the index of the first postsynaptic neuron to connect.

and, in col_build_code:

  • id_post which represents the index of the postsynaptic neuron (starting from zero).

Finally, the function addSynapse(x) can be used to add a new synapse to the connectivity where, in row_build_code, x is the index of the postsynaptic neuron to connect id_pre to and, in col_build_code, x is the index of the presynaptic neuron to connect to id_post.

Parameters:
  • class_name (str) – name of the snippet (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • row_build_code (str | None) – code for building connectivity row by row

  • col_build_code (str | None) – code for building connectivity column by column

  • calc_max_row_len_func (Callable | None) – used to calculate the maximum row length of the synaptic matrix created using this snippet

  • calc_max_col_len_func (Callable | None) – used to calculate the maximum column length of the synaptic matrix created using this snippet

  • calc_kernel_size_func (Callable | None) – used to calculate the size of the kernel if snippet requires one

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of snippet extra global parameters

For example, if we wanted to define a snippet to initialise connectivity where each presynaptic neuron targets a fixed number of postsynaptic neurons, sampled uniformly with replacement, we could define a snippet as follows:

from scipy.stats import binom

fixed_number_post = pygenn.create_sparse_connect_init_snippet(
    "fixed_number_post",
    params=[("num", "unsigned int")],
    row_build_code=
        """
        for(unsigned int c = num; c != 0; c--) {
            const unsigned int idPost = gennrand() % num_post;
            addSynapse(idPost + id_post_begin);
        }
        """,
    calc_max_row_len_func=lambda num_pre, num_post, pars: pars["num"],
    calc_max_col_len_func=lambda num_pre, num_post, pars: binom.ppf(0.9999 ** (1.0 / num_post),
                                                                    pars["num"] * num_pre,
                                                                    1.0 / num_post))

For full details of how maximum column lengths are calculated, you should refer to our paper [Knight2018] but, in short, the number of connections that end up in a column is binomially distributed with \(n=\text{num}\) and \(p=\frac{1}{\text{num_post}}\). Therefore, we can calculate the maximum column length from the inverse cumulative distribution function (CDF) of the binomial distribution, choosing the point where there is a 0.9999 chance of the bound being correct when drawing synapses from num_post columns.
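
As a sanity check, the same bound can be evaluated directly; for example, with hypothetical population sizes:

from scipy.stats import binom

# 1000 presynaptic neurons, each making 100 connections to 1000 postsynaptic neurons
num_pre, num_post, num = 1000, 1000, 100
max_col_len = binom.ppf(0.9999 ** (1.0 / num_post),
                        num * num_pre, 1.0 / num_post)
print(max_col_len)  # bound that holds across all num_post columns with probability 0.9999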

Toeplitz connectivity initialisation

Toeplitz connectivity initialisation snippets are used to generate convolution-like connectivity on the fly when using SynapseMatrixType.TOEPLITZ connectivity. New Toeplitz connectivity initialisation snippets can be defined by calling:

pygenn.create_toeplitz_connect_init_snippet(class_name, params=None, derived_params=None, diagonal_build_code=None, calc_max_row_len_func=None, calc_kernel_size_func=None, extra_global_params=None)

Creates a new Toeplitz connectivity initialisation snippet. Each diagonal of Toeplitz connectivity is initialised independently by running the snippet of code specified using the diagonal_build_code. Within the code strings, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

  • id_diag which represents the index of the diagonal being initialised (starting from zero)

Additionally, the function addSynapse(id_post, id_kern_0, id_kern_1, ..., id_kern_N) can be used to generate a new synapse to postsynaptic neuron id_post using N-dimensional kernel variables indexed with id_kern_0, id_kern_1, ..., id_kern_N. Finally the for_each_synapse{} construct can be used to loop through incoming spikes and, inside this, id_pre will represent the index of the spiking presynaptic neuron.

Parameters:
  • class_name (str) – name of the snippet (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • diagonal_build_code (str | None) – code for building connectivity diagonal by diagonal

  • calc_max_row_len_func (Callable | None) – used to calculate the maximum row length of synaptic matrix created using this snippet

  • calc_kernel_size_func (Callable | None) – used to calculate the size of the kernel

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of snippet extra global parameters

For example, the following Toeplitz connectivity initialisation snippet could be used to convolve a \(\text{kern_size} \times \text{kern_size}\) square kernel with the spikes from a population of \(\text{pop_dim} \times \text{pop_dim}\) neurons.

simple_conv2d_model = pygenn.create_toeplitz_connect_init_snippet(
    "simple_conv2d",
    params=[("kern_size", "int"), ("pop_dim", "int")],
    diagonal_build_code=
        """
        const int kernRow = id_diag / kern_size;
        const int kernCol = id_diag % kern_size;

        for_each_synapse {
            const int preRow = id_pre / pop_dim;
            const int preCol = id_pre % pop_dim;
            const int postRow = preRow + kernRow - 1;
            const int postCol = preCol + kernCol - 1;
            // If we haven't gone off the edge of the output
            if(postRow >= 0 && postCol >= 0 && postRow < pop_dim && postCol < pop_dim) {
                // Calculate postsynaptic index
                const int postInd = (postRow * pop_dim) + postCol;
                addSynapse(postInd, kernRow, kernCol);
            }
        }
        """,

    calc_max_row_len_func=lambda num_pre, num_post, pars: pars["kern_size"] * pars["kern_size"],
    calc_kernel_size_func=lambda pars: [pars["kern_size"], pars["kern_size"]])
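
This snippet could then be used when adding a synapse population; for example, a sketch assuming a 3×3 kernel (for which the -1 centring offset in the code above is correct) and a 28×28 population:

conn_init = pygenn.init_toeplitz_connectivity(simple_conv2d_model,
                                              {"kern_size": 3, "pop_dim": 28})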

For full details of how convolution-like connectivity is expressed in this way, please see our paper [Turner2022].

Models

Models extend the snippets described above by adding state. They are used to define the behaviour of neurons, synapses and custom updates.

Variable access

When defining custom models intended to work in batched simulations, it is important to consider the ‘variable access’ of state variables, which determines whether they can contain different values in each batch or whether the same values are shared between batches. Because batches are assumed to run in parallel, if variables are shared they must be read-only. Therefore the following modes are available for variables defined in neuron, weight update, current source and custom connectivity update models:

class pygenn.VarAccess(self: pygenn._genn.VarAccess, value: int)

Supported combinations of access mode and dimension for neuron and synapse variables

Members:

READ_WRITE : This variable can be read from and written to and stores separate values for each element and each batch

READ_ONLY : This variable can only be read from and stores separate values for each element but these are shared across batches

READ_ONLY_DUPLICATE : This variable can only be read from and stores separate values for each element and each batch

READ_ONLY_SHARED_NEURON : This variable can only be read from and stores separate values for each batch but these are shared across neurons
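
For example, a neuron model might combine these modes as in the following sketch, where NoiseScale is a hypothetical per-neuron constant that is shared across batches:

...
vars=[("V", "scalar", pygenn.VarAccess.READ_WRITE),
      ("NoiseScale", "scalar", pygenn.VarAccess.READ_ONLY)],
...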

The situation is further complicated when considering custom update models as not only do these support operations such as reductions but whether the update itself is batched or not depends on the types of variables it is attached to via its variable references. Therefore, so that custom update models can be re-used in different circumstances, their variables can have the following modes:

class pygenn.CustomUpdateVarAccess(self: pygenn._genn.CustomUpdateVarAccess, value: int)

Supported combinations of access mode and dimension for custom update variables. The axes are defined ‘subtractively’, i.e. VarAccessDim::BATCH indicates that this axis should be removed.

Members:

READ_WRITE : This variable can be read from and written to and has the same dimensions as whatever the custom update is attached to

READ_ONLY : This variable can only be read from and has the same dimensions as whatever the custom update is attached to

BROADCAST_DELAY : This variable has the same dimensions as whatever the custom update is attached to and writes to it get broadcast across delay slots

READ_ONLY_SHARED : This variable can only be read from and has the same dimensions as whatever the custom update is attached to, aside from being shared across batches

READ_ONLY_SHARED_NEURON : This variable can only be read from and has the same dimensions as whatever the custom update is attached to, aside from being shared across neurons

REDUCE_BATCH_SUM : This variable is a target for a reduction across batches using a sum operation

REDUCE_BATCH_MAX : This variable is a target for a reduction across batches using a max operation

REDUCE_NEURON_SUM : This variable is a target for a reduction across neurons using a sum operation

REDUCE_NEURON_MAX : This variable is a target for a reduction across neurons using a max operation

Neuron models

Neuron models define the dynamics and spiking behaviour of populations of neurons. New neuron models are defined by calling:

pygenn.create_neuron_model(class_name, params=None, vars=None, derived_params=None, sim_code=None, threshold_condition_code=None, reset_code=None, extra_global_params=None, additional_input_vars=None, auto_refractory_required=False)

Creates a new neuron model. Within all of the code strings, the variables, parameters, derived parameters, additional input variables and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt()).

  • Isyn which represents the total incoming synaptic input.

  • id which represents a neuron’s index within a population (starting from zero).

  • num_neurons which represents the number of neurons in the population.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of model variables

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • sim_code (str | None) – string containing the simulation code statements to be run every timestep

  • threshold_condition_code (str | None) – string containing a threshold condition expression to test whether a spike should be emitted

  • reset_code (str | None) – string containing the reset code statements to run after emitting a spike

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

  • additional_input_vars – list of tuples with names and types as strings and initial values of additional local input variables

  • auto_refractory_required (bool) – does this model require auto-refractory logic to be generated?

For example, we can define a leaky integrator \(\tau\frac{dV}{dt}= -V + I_{{\rm syn}}\) solved using Euler’s method:

leaky_integrator_model = pygenn.create_neuron_model(
    "leaky_integrator",

    sim_code=
        """
        V += (-V + Isyn) * (dt / tau);
        """,
    threshold_condition_code="V >= 1.0",
    reset_code=
        """
        V = 0.0;
        """,

    params=["tau"],
    vars=[("V", "scalar", pygenn.VarAccess.READ_WRITE)])

Additional input variables

Normally, neuron models receive the linear sum of the inputs coming from all of their synaptic inputs through the Isyn variable. However, neuron models can define additional input variables, allowing input from different synaptic inputs to be combined non-linearly. For example, if we wanted our leaky integrator to operate on the product of two input currents, we could modify our model as follows:

...
additional_input_vars=[("Isyn2", "scalar", 1.0)],
sim_code=
    """
    const scalar input = Isyn * Isyn2;
    sim_code="V += (-V + input) * (dt / tau);
    """,
...

Weight update models

Weight update models define the event-driven and time-driven behaviour of synapses and what output they deliver to postsynaptic (and presynaptic) neurons. New weight update models are defined by calling:

pygenn.create_weight_update_model(class_name, params=None, vars=None, pre_vars=None, post_vars=None, pre_neuron_var_refs=None, post_neuron_var_refs=None, derived_params=None, pre_spike_syn_code=None, pre_event_syn_code=None, post_event_syn_code=None, post_spike_syn_code=None, synapse_dynamics_code=None, pre_event_threshold_condition_code=None, post_event_threshold_condition_code=None, pre_spike_code=None, post_spike_code=None, pre_dynamics_code=None, post_dynamics_code=None, extra_global_params=None)

Creates a new weight update model. GeNN operates on the assumption that the postsynaptic output of synapses is added linearly at the postsynaptic neuron. Within all of the synaptic code strings (pre_spike_syn_code, pre_event_syn_code, post_event_syn_code, post_spike_syn_code and synapse_dynamics_code) these currents are delivered using the addToPost(inc) function. For example,

pre_spike_syn_code="addToPost(inc);"

where inc is the amount to add to the postsynapse model’s inSyn variable for each pre-synaptic spike. Dendritic delays can also be inserted between the synapse and the postsynaptic neuron by using the addToPostDelay(inc, delay) function. For example,

pre_spike_syn_code="addToPostDelay(inc, delay);"

where, once again, inc is the amount to add to the postsynaptic neuron’s inSyn variable and delay is the length of the dendritic delay in timesteps. Because delay is implemented as a weight update model variable, heterogeneous synaptic delays can be implemented; see weight_update_models.StaticPulseDendriticDelay() for an example of a simple synapse update model with heterogeneous dendritic delays. These delays can also be used to provide delayed access to post_vars and post_neuron_var_refs using [] syntax. For example,

pre_spike_syn_code="variable -= postVar[delay];"

where variable is a per-synapse variable; postVar is either a postsynaptic variable or postsynaptic variable reference; and delay is some sort of integer expression. When using dendritic delays, the maximum dendritic delay for a synapse population must be specified via the SynapseGroup.max_dendritic_delay_timesteps property. One can also define synaptic effects that occur in the reverse direction, i.e. terms that are added to a target variable in the presynaptic neuron, using the addToPre(inc) function. For example,

pre_spike_syn_code="addToPre(inc * V_post);"

would add inc * V_post to the presynaptic neuron’s input for each of its outgoing synapses. Similar to postsynaptic models, by default these inputs are accumulated in Isyn in the presynaptic neuron but they can also be directed to additional input variables by setting the SynapseGroup.pre_target_var property. Unlike normal forward synaptic actions, reverse synaptic actions with addToPre(inc) are not modulated through a postsynaptic model but added directly into the indicated presynaptic target input variable.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-synapse model variables

  • pre_vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-presynaptic neuron model variables

  • post_vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-postsynaptic neuron model variables

  • pre_neuron_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to presynaptic neuron variables

  • post_neuron_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • pre_spike_syn_code (str | None) – string with the presynaptic spike code

  • pre_event_syn_code (str | None) – string with the presynaptic event code

  • post_event_syn_code (str | None) – string with the postsynaptic event code

  • post_spike_syn_code (str | None) – string with the postsynaptic spike code

  • synapse_dynamics_code (str | None) – string with the synapse dynamics code

  • pre_event_threshold_condition_code (str | None) – string with the presynaptic event threshold condition code

  • post_event_threshold_condition_code (str | None) – string with the postsynaptic event threshold condition code

  • pre_spike_code (str | None) – string with the code run once per spiking presynaptic neuron. Only presynaptic variables and variable references can be referenced from this code.

  • post_spike_code (str | None) – string with the code run once per spiking postsynaptic neuron

  • pre_dynamics_code (str | None) – string with the code run every timestep on presynaptic neuron. Only presynaptic variables and variable references can be referenced from this code.

  • post_dynamics_code (str | None) – string with the code run every timestep on postsynaptic neuron. Only postsynaptic variables and variable references can be referenced from this code.

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

For example, we can define a simple additive STDP rule with nearest-neighbour spike pairing and the following time-dependence (equivalent to weight_update_models.STDP()):

\[\begin{split}\Delta w_{ij} & = \begin{cases} A_{+}\exp\left(-\frac{\Delta t}{\tau_{+}}\right) & \text{if } \Delta t>0\\ A_{-}\exp\left(\frac{\Delta t}{\tau_{-}}\right) & \text{if } \Delta t\leq0 \end{cases}\end{split}\]

in a fully event-driven manner as follows:

stdp_additive_model = pygenn.create_weight_update_model(
    "stdp_additive",
    params=["tauPlus", "tauMinus", "aPlus", "aMinus", "wMin", "wMax"],
    vars=[("g", "scalar")],

    pre_spike_syn_code=
        """
        addToPost(g);
        const scalar dt = t - st_post;
        if (dt > 0) {
            const scalar timing = exp(-dt / tauMinus);
            const scalar newWeight = g - (aMinus * timing);
            g = fmax(wMin, fmin(wMax, newWeight));
        }
        """,
    post_spike_syn_code=
        """
        const scalar dt = t - st_pre;
        if (dt > 0) {
            const scalar timing = exp(-dt / tauPlus);
            const scalar newWeight = g + (aPlus * timing);
            g = fmax(wMin, fmin(wMax, newWeight));
        }
        """)

Pre and postsynaptic dynamics

The memory required for synapse variables and the computational cost of updating them tend to grow as \(O(N^2)\) with the number of neurons \(N\). Therefore, where possible, it is a good idea to implement synapse variables on a per-neuron rather than per-synapse basis. The pre_vars and post_vars keyword arguments are used to define any pre or postsynaptic state variables. For example, using pre and postsynaptic variables, our event-driven STDP rule can be extended to use all-to-all spike pairing using pre and postsynaptic trace variables [Morrison2008] (here the tauPlusDecay and tauMinusDecay decay factors are computed as derived parameters):

stdp_additive_2_model = pygenn.create_weight_update_model(
    "stdp_additive_2",
    params=["tauPlus", "tauMinus", "aPlus", "aMinus", "wMin", "wMax"],
    derived_params=[("tauPlusDecay", lambda pars, dt: np.exp(-dt / pars["tauPlus"])),
                    ("tauMinusDecay", lambda pars, dt: np.exp(-dt / pars["tauMinus"]))],
    vars=[("g", "scalar")],
    pre_vars=[("preTrace", "scalar")],
    post_vars=[("postTrace", "scalar")],

    pre_spike_syn_code=
        """
        addToPost(g);
        const scalar dt = t - st_post;
        if(dt > 0) {
            const scalar newWeight = g - (aMinus * postTrace);
            g = fmin(wMax, fmax(wMin, newWeight));
        }
        """,
    post_spike_syn_code=
        """
        const scalar dt = t - st_pre;
        if(dt > 0) {
            const scalar newWeight = g + (aPlus * preTrace);
            g = fmin(wMax, fmax(wMin, newWeight));
        }
        """,

    pre_spike_code="preTrace += 1.0;",
    pre_dynamics_code="preTrace *= tauPlusDecay;",
    post_spike_code="postTrace += 1.0;",
    post_dynamics_code="postTrace *= tauMinusDecay;")

Synapse dynamics

Unlike the event-driven updates described previously, synapse dynamics code is run for every synapse on every timestep, i.e. it is time-driven. This can be used where synapses have internal variables and dynamics that are described in continuous time, e.g. by ODEs. However, using this mechanism is typically computationally very costly because of the large number of synapses in a typical network. By using the addToPost() and addToPostDelay() functions discussed in the context of pre_spike_syn_code, synapse dynamics can also be used to implement continuous synapses for rate-based models. For example, a continuous synapse which multiplies a presynaptic neuron variable by the weight could be added to a weight update model definition as follows:

pre_neuron_var_refs=[("V_pre", "scalar")],
synapse_dynamics_code="addToPost(g * V_pre);",

Spike-like events

As well as time-driven synapse dynamics and spike event-driven updates, GeNN weight update models also support “spike-like events”. These can be triggered by a threshold condition evaluated on the pre or postsynaptic neuron. This typically involves pre or postsynaptic weight update model variables or variable references respectively.

For example, to trigger a presynaptic spike-like event when the presynaptic neuron’s voltage is greater than -0.02, the following could be added to a weight update model definition:

pre_neuron_var_refs=[("V_pre", "scalar")],
pre_event_threshold_condition_code="V_pre > -0.02"

Whenever this expression evaluates to true, the event code in pre_event_syn_code will be executed.
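
Putting this together, a sketch of the relevant weight update model arguments (assuming a per-synapse weight g delivered on each event) might look like:

...
vars=[("g", "scalar")],
pre_neuron_var_refs=[("V_pre", "scalar")],
pre_event_threshold_condition_code="V_pre > -0.02",
pre_event_syn_code="addToPost(g);",
...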

Postsynaptic models

The postsynaptic model defines how synaptic input translates into an input current (or other input term for models that are not current based). They can contain equations defining dynamics that are applied to the (summed) synaptic activation, e.g. an exponential decay over time. New postsynaptic models are defined by calling:

pygenn.create_postsynaptic_model(class_name, params=None, vars=None, neuron_var_refs=None, derived_params=None, sim_code=None, extra_global_params=None)

Creates a new postsynaptic update model. Within all of the code strings, the variables, parameters, derived parameters and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables:

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • id which represents a neuron’s index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

  • inSyn which contains the summed input received from the weight update model through addToPost() or addToPostDelay()

Finally, the function injectCurrent(x) can be used to inject a current x into the postsynaptic neuron. The variable it goes into can be configured using the SynapseGroup.post_target_var property. By default it targets Isyn.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of model variables

  • neuron_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • sim_code (str | None) – string containing the simulation code statements to be run every timestep

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters
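
For example, a sketch of a postsynaptic model applying a simple exponential decay to the summed synaptic activation (similar in spirit to the built-in postsynaptic_models.ExpCurr(), assuming numpy has been imported as np) might look like:

exp_curr_model = pygenn.create_postsynaptic_model(
    "exp_curr_simple",
    params=["tau"],
    derived_params=[("expDecay", lambda pars, dt: np.exp(-dt / pars["tau"]))],
    sim_code=
        """
        injectCurrent(inSyn);
        inSyn *= expDecay;
        """)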

Current source models

Current source models allow input currents to be injected into neuron models. New current source models are defined by calling:

pygenn.create_current_source_model(class_name, params=None, vars=None, neuron_var_refs=None, derived_params=None, injection_code=None, extra_global_params=None)

Creates a new current source model. Within the injection_code code string, the variables, parameters, derived parameters, neuron variable references and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • id which represents a neuron’s index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

Finally, the function injectCurrent(x) can be used to inject a current x into the attached neuron. The variable it goes into can be configured using the CurrentSource.target_var property. It defaults to Isyn.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of model variables

  • neuron_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to variables in neuron population current source is attached to

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • injection_code (str | None) – string containing the simulation code statements to be run every timestep

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

For example, we can define a simple current source that injects uniformly-distributed noise as follows:

uniform_noise_model = pygenn.create_current_source_model(
    "uniform_noise",
    params=["magnitude"],
    injection_code="injectCurrent(gennrand_uniform() * magnitude);")

Custom update models

Custom update models define operations on model variables that can be triggered on demand by the user. New custom update models are defined by calling:

pygenn.create_custom_update_model(class_name, params=None, vars=None, derived_params=None, var_refs=None, update_code=None, extra_global_params=None, extra_global_param_refs=None)

Creates a new custom update model. Within the update_code code string, the variables, parameters, derived parameters, variable references, extra global parameters and extra global parameter references defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

And, if a custom update using this model is attached to per-neuron variables:

  • id which represents a neuron’s index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

or, to per-synapse variables:

  • id_pre which represents the index of the presynaptic neuron (starting from zero)

  • id_post which represents the index of the postsynaptic neuron (starting from zero)

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, CustomUpdateVarAccess]] | None) – names, types and optional variable access modifiers of model variables

  • var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to variables in population(s) custom update is attached to

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • update_code (str | None) – string containing the code statements to be run when custom update is launched

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

  • extra_global_param_refs – names and types of extra global parameter references

For example, we can define a custom update which will set a referenced variable to the value of a custom update model state variable:

reset_model = pygenn.create_custom_update_model(
    "reset",
    vars=[("v", "scalar", pygenn.CustomUpdateVarAccess.READ_ONLY)],
    var_refs=[("r", "scalar", pygenn.VarAccessMode.READ_WRITE)],
    update_code="r = v;")

When used in a model with batch size > 1, whether custom updates of this sort are batched or not depends on the variables their references point to. If any referenced variables have VarAccess.READ_ONLY_DUPLICATE or VarAccess.READ_WRITE access modes, then the update will be batched and any variables associated with the custom update with VarAccess.READ_ONLY_DUPLICATE or VarAccess.READ_WRITE access modes will be duplicated across the batches.
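
Attaching such a custom update might look like the following sketch, where model is assumed to be a GeNNModel, pop a neuron population with a variable "V" and "Reset" an arbitrary update group name:

reset_cu = model.add_custom_update(
    "reset_v", "Reset", reset_model,
    vars={"v": 0.0},
    var_refs={"r": pygenn.create_var_ref(pop, "V")})

# later, after building and loading the model
model.custom_update("Reset")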

Batch reduction

As well as the standard variable access modes described previously, custom updates support variables with ‘batch reduction’ access modes such as CustomUpdateVarAccess.REDUCE_BATCH_SUM and CustomUpdateVarAccess.REDUCE_BATCH_MAX. These access modes allow values read from variables duplicated across batches to be reduced into variables that are shared across batches. For example, in a gradient-based learning scenario, a model like this could be used to sum gradients from across all batches so they can be used as the input to a learning rule operating on shared synaptic weights:

reduce_model = pygenn.create_custom_update_model(
    "gradient_batch_reduce",
    vars=[("reducedGradient", "scalar", pygenn.CustomUpdateVarAccess.REDUCE_BATCH_SUM)],
    var_refs=[("gradient", "scalar", pygenn.VarAccessMode.READ_ONLY)],
    update_code=
        """
        reducedGradient = gradient;
        gradient = 0;
        """)

Batch reductions can also be performed into variable references with the VarAccessMode.REDUCE_SUM or VarAccessMode.REDUCE_MAX access modes.
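
For example, a sketch of the same reduction written to target a variable reference rather than a custom update variable:

reduce_ref_model = pygenn.create_custom_update_model(
    "gradient_batch_reduce_ref",
    var_refs=[("gradient", "scalar", pygenn.VarAccessMode.READ_ONLY),
              ("reducedGradient", "scalar", pygenn.VarAccessMode.REDUCE_SUM)],
    update_code="reducedGradient = gradient;")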

Neuron reduction

Similarly to the batch reduction modes discussed previously, custom updates also support variables with several ‘neuron reduction’ access modes such as CustomUpdateVarAccess.REDUCE_NEURON_SUM and CustomUpdateVarAccess.REDUCE_NEURON_MAX.

These access modes allow values read from per-neuron variables to be reduced into variables that are shared across neurons. For example, a model like this could be used to calculate the maximum value of a state variable in a population of neurons:

reduce_model = pygenn.create_custom_update_model(
    "neuron_reduce",
    vars=[("reduction", "scalar", pygenn.CustomUpdateVarAccess.REDUCE_NEURON_SUM)],
    var_refs=[("gradient", "scalar", pygenn.VarAccessMode.READ_ONLY)],
    update_code=
        """
        reduction = source;
        """)

Again, like batch reductions, neuron reductions can also be performed into variable references with the VarAccessMode.REDUCE_SUM or VarAccessMode.REDUCE_MAX access modes.

Custom connectivity update models

Custom connectivity update models define operations on model connectivity that can be triggered on demand by the user. New custom connectivity update models are defined by calling:

pygenn.create_custom_connectivity_update_model(class_name, params=None, vars=None, pre_vars=None, post_vars=None, derived_params=None, var_refs=None, pre_var_refs=None, post_var_refs=None, row_update_code=None, host_update_code=None, extra_global_params=None, extra_global_param_refs=None)

Creates a new custom connectivity update model.

Within host update code, you have full access to parameters, derived parameters, extra global parameters and pre and postsynaptic variables. By design, you do not have access to per-synapse variables or variable references and, currently, you cannot access pre and postsynaptic variable references as there are issues regarding delays. Each variable has an accompanying push and pull function to copy it to and from the device. For variables, these functions have no parameters, as illustrated in the example in Pushing and pulling, and for extra global parameters they have a single parameter specifying the size of the array.

Within the row update code, you have full access to parameters, derived parameters, extra global parameters, presynaptic variables and presynaptic variable references. Postsynaptic and synaptic variables and variable references can only be accessed from within one of the for_each_synapse loops illustrated below. Additionally, both the host and row update code can refer to the following built-in read-only variables:

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • row_stride which represents the maximum number of synapses which each presynaptic neuron can have (this can be increased via SynapseGroup.max_connections).

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Host code can also access the current number of synapses emanating from each presynaptic neuron using the row_length array whereas, in row-update code, this contains the number of synapses emanating from the current presynaptic neuron (identified by id_pre).

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Sequence[str | Tuple[str, str | ResolvedType]] | None) – name and optional types of model parameters

  • vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-synapse model variables

  • pre_vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-presynaptic neuron model variables

  • post_vars (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccess]] | None) – names, types and optional variable access modifiers of per-postsynaptic neuron model variables

  • derived_params (Sequence[Tuple[str, Callable, str | ResolvedType]] | None) – names, types and callables to calculate derived parameter values from params

  • var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to synaptic variables

  • pre_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to presynaptic neuron variables

  • post_var_refs (Sequence[Tuple[str, str | ResolvedType] | Tuple[str, str | ResolvedType, VarAccessMode]] | None) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • row_update_code (str | None) – string containing the code statements to be run when custom update is launched

  • host_update_code (str | None) – string containing the code statements to be run on CPU when custom connectivity update is launched

  • extra_global_params (Sequence[Tuple[str, str | ResolvedType]] | None) – names and types of model extra global parameters

  • extra_global_param_refs – names and types of extra global parameter references

Parallel synapse iteration and removal

The main GPU operation that custom connectivity updates expose is the ability to generate per-presynaptic neuron update code. This can be used to implement a very simple model which removes ‘diagonals’ from the connectivity matrix:

remove_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "remove_diagonal",
    row_update_code=
        """
        for_each_synapse {
            if(id_post == id_pre) {
                remove_synapse();
                break;
            }
        }
        """)

Parallel synapse creation

Similarly you could implement a custom connectivity model which adds diagonals back into the connection matrix like this:

add_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "add_diagonal",
    row_update_code=
        """
        add_synapse(id_pre);
        """)

One important issue here is that lots of other parts of the model (e.g. other custom connectivity updates or custom weight updates) might have state variables ‘attached’ to the same connectivity that the custom update is modifying. GeNN will automatically detect this and add and shuffle all these variables around accordingly, which is fine when removing synapses, but GeNN has no way of knowing what values to initialise the state of new synapses with. If you want new synapses to be created with state variables initialised to values other than zero, you need to use variable references to hook them up to the custom connectivity update. For example, if you wanted to be able to provide weights for your new synapses, you could update the previous example model like:

add_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "add_diagonal",
    var_refs=[("g", "scalar")],
    row_update_code=
        """
        add_synapse(id_pre, 1.0);
        """)

Host updates

Some common connectivity update scenarios involve computation which can’t easily be parallelized. If, for example, you wanted to determine on the host which elements to remove from each row, you can include host_update_code, which gets run before the row update code:

remove_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "remove_diagonal",
    pre_var_name_types=[("postInd", "unsigned int")],
    row_update_code=
        """
        for_each_synapse {
            if(id_post == postInd) {
                remove_synapse();
                break;
            }
        }
        """,
    host_update_code=
        """
        for(unsigned int i = 0; i < num_pre; i++) {
           postInd[i] = i;
        }
        pushpostIndToDevice();
        """)