GeNN  4.9.0
GPU enhanced Neuronal Networks (GeNN)
Defining a network model

  1. The name of the model must be defined using model.setName().
  2. Neuron populations (at least one) must be added (see Defining neuron populations). The user may add as many neuron populations as they wish and, before resource limits are reached, GeNN will make all necessary efforts in terms of block size optimisation to accommodate the defined models. All populations must have a unique name.
  3. Synapse populations (zero or more) can be added (see Defining synapse populations).
  4. Current sources (zero or more) can be added to neuron populations (see Defining current sources).
  5. Custom updates (zero or more) can be added (see Defining custom updates).
Note
If your model requires more resources than your GPU has available, GeNN will fail rather than issue a warning.
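
The steps above can be sketched as a minimal model-definition file, assuming GeNN 4's C++ API; the model name, population name and parameter values here are illustrative, not prescribed:

```cpp
// Minimal sketch of a GeNN model definition (names and values illustrative)
#include "modelSpec.h"

void modelDefinition(ModelSpec &model)
{
    // Simulation time step DT and model name (step 1)
    model.setDT(0.1);
    model.setName("MyModel");

    // At least one neuron population (step 2): 10 Izhikevich neurons
    NeuronModels::Izhikevich::ParamValues izkParams(
        0.02, 0.2, -65.0, 8.0);     // a, b, c, d
    NeuronModels::Izhikevich::VarValues izkInit(
        -65.0, -13.0);              // V, U
    model.addNeuronPopulation<NeuronModels::Izhikevich>(
        "Pop1", 10, izkParams, izkInit);

    // Synapse populations, current sources and custom updates (steps 3-5)
    // would be added here with the corresponding add* calls.
}
```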

Defining neuron populations

Neuron populations are added using the function

    model.addNeuronPopulation<NeuronModel>(name, num, paramValues, varInitialisers);

where the arguments are:

  • NeuronModel: The type of the neuron model (see Neuron models).
  • name: Unique name of the neuron population
  • num: Number of neurons in the population
  • paramValues: Parameters of this neuron type
  • varInitialisers: Initial values or initialisation snippets for variables of this neuron type (see Variable initialisation)

The user may add as many neuron populations as the model necessitates. They must all have unique names. The possible values for the arguments, predefined models and their parameters and initial values are detailed in Neuron models below.
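
For instance, a population of GeNN's built-in LIF neurons might be added like this (a sketch; the population name and parameter values are illustrative, assuming milliseconds and millivolts as model units):

```cpp
// Add a population of 100 built-in leaky integrate-and-fire neurons
NeuronModels::LIF::ParamValues lifParams(
    0.25,   // C - membrane capacitance
    10.0,   // TauM - membrane time constant [ms]
    -65.0,  // Vrest - resting potential [mV]
    -65.0,  // Vreset - reset potential [mV]
    -50.0,  // Vthresh - spike threshold [mV]
    0.0,    // Ioffset - constant offset current
    2.0);   // TauRefrac - refractory period [ms]
NeuronModels::LIF::VarValues lifInit(
    -65.0,  // V - initial membrane potential [mV]
    0.0);   // RefracTime - initial refractory timer
model.addNeuronPopulation<NeuronModels::LIF>("Neurons", 100, lifParams, lifInit);
```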

Defining synapse populations

Synapse populations are added with the function

    model.addSynapsePopulation<WeightUpdateModel, PostsynapticModel>(
        name, mtype, delay, src, trg,
        weightParamValues, weightVarInitialisers,
        weightPreVarInitialisers, weightPostVarInitialisers,
        postsynapticParamValues, postsynapticVarInitialisers,
        connectivityInitialiser);

where the arguments are

  • WeightUpdateModel: The type of the weight update model (see Weight update models).
  • PostsynapticModel: The type of the postsynaptic model (see Postsynaptic integration methods).
  • name: The name of the synapse population
  • mtype: How the synaptic matrix is stored. See Synaptic matrix types for available options.
  • delay: Homogeneous (axonal) delay for the synapse population (in terms of the simulation time step DT).
  • src: Name of the (existing!) presynaptic neuron population.
  • trg: Name of the (existing!) postsynaptic neuron population.
  • weightParamValues: Parameter values (common to all synapses of the population) for the weight update model.
  • weightVarInitialisers: Initial values or initialisation snippets for the weight update model's state variables (see Variable initialisation)
  • weightPreVarInitialisers: Initial values or initialisation snippets for the weight update model's presynaptic state variables (see Variable initialisation)
  • weightPostVarInitialisers: Initial values or initialisation snippets for the weight update model's postsynaptic state variables (see Variable initialisation)
  • postsynapticParamValues: Parameter values (common to all postsynaptic neurons) for the postsynaptic model.
  • postsynapticVarInitialisers: Initial values or initialisation snippets for the postsynaptic model's state variables (see Variable initialisation)
  • connectivityInitialiser: Optional argument specifying the initialisation snippet for the synapse population's sparse connectivity (see Sparse connectivity initialisation).

Note
If the synapse matrix uses one of the "GLOBALG" types, the global value of the synapse parameters is taken from the initial value provided in weightVarInitialisers; these must therefore be constant rather than sampled from a distribution etc.
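
As a hedged example, static pulse synapses with sparsely-initialised connectivity between two previously-added populations (here assumed to be named "Pre" and "Post") might look like this, using the shorter overload without presynaptic and postsynaptic weight update variables:

```cpp
// 10% fixed connection probability for the sparse connectivity initialiser
InitSparseConnectivitySnippet::FixedProbability::ParamValues fixedProb(0.1);

model.addSynapsePopulation<WeightUpdateModels::StaticPulse, PostsynapticModels::ExpCurr>(
    "PreToPost",                        // name of the synapse population
    SynapseMatrixType::SPARSE_GLOBALG,  // mtype - how the matrix is stored
    NO_DELAY,                           // delay (in units of DT)
    "Pre", "Post",                      // pre- and postsynaptic populations
    {},                                 // StaticPulse has no parameters
    WeightUpdateModels::StaticPulse::VarValues(0.1),  // g - synaptic weight
    PostsynapticModels::ExpCurr::ParamValues(5.0),    // tau - decay time [ms]
    {},                                 // ExpCurr has no state variables
    initConnectivity<InitSparseConnectivitySnippet::FixedProbability>(fixedProb));
```

Note that, because a GLOBALG matrix type is used here, the weight g is given as a constant, consistent with the note above.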

Defining current sources

Current sources are added with the function

    model.addCurrentSource<CurrentSourceModel>(name, targetPopulation, paramValues, varInitialisers);

where the arguments are

  • CurrentSourceModel: The type of the current source model (see Defining your own current source model).
  • name: The name of the current source
  • targetPopulation: Name of the (existing!) neuron population the current is injected into
  • paramValues: Parameter values (common to all current sources in the population) for the current source model.
  • varInitialisers: Initial values or initialisation snippets for the current source model's state variables (see Variable initialisation)
    Note
    The number of current sources in a population always equals the number of neurons in the population it injects current into.
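
For example, GeNN's built-in DC current source model could be used to inject a constant current into an existing population (assumed here to be named "Neurons"; the amplitude is in model units):

```cpp
// Constant-current injection into every neuron of population "Neurons"
CurrentSourceModels::DC::ParamValues dcParams(0.5);  // amp - current amplitude
model.addCurrentSource<CurrentSourceModels::DC>(
    "DCInput", "Neurons", dcParams, {});             // DC has no state variables
```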

Defining custom updates

Custom updates are added with the function

    model.addCustomUpdate<CustomUpdateModel>(name, updateGroupName, paramValues, varInitialisers, varReferences, egpReferences);

where the arguments are

  • CustomUpdateModel: The type of the custom update model (see Defining your own custom update model).
  • name: The name of the custom update
  • updateGroupName: The name of the group this custom update belongs to. Updates in each group can be launched as described in Simulating a network model.
  • paramValues: Parameter values (common to all custom updates in the population) for the custom update model.
  • varInitialisers: Initial values or initialisation snippets for the custom update model's state variables (see Variable initialisation)
  • varReferences: Variable references for the custom update model (see Variable references)
  • egpReferences: Extra global parameter references for the custom update model (see Extra Global Parameter references)
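
As a sketch, assume a hypothetical user-defined custom update model Reset (see Defining your own custom update model) with a single variable reference "V" and no parameters or state variables, and a NeuronGroup pointer pop returned by an earlier addNeuronPopulation call:

```cpp
// "Reset" is a hypothetical user-defined custom update model; "pop" is the
// NeuronGroup* returned by a previous addNeuronPopulation call.
Reset::VarReferences resetVarRefs(
    createVarRef(pop, "V"));    // reference the population's "V" variable

// All custom updates in group "ResetGroup" can later be launched together
model.addCustomUpdate<Reset>("ResetV", "ResetGroup", {}, {}, resetVarRefs);
```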

Batching

When running models on a GPU, smaller models may not fully occupy the device. In some scenarios, such as gradient-based training and parameter sweeping, this can be overcome by running multiple copies of the same model at the same time (batching in Machine Learning parlance). Batching can be enabled on a GeNN model with:

    model.setBatchSize(batchSize);

Model parameters and sparse connectivity are shared across all batches. Read-write state variables are duplicated for each batch and, by default, read-only state variables are shared across all batches (see section Neuron models for more details).

