pygenn package

class pygenn.CurrentSource

Bases: pybind11_object, CurrentSourceMixin

get_var_location(self: pygenn._genn.CurrentSource, arg0: str) → pygenn._genn.VarLocation

Get variable location for current source model state variable

property model

Current source model used for this source

property name

Unique name of current source

property params

Values of current source parameters

set_param_dynamic(self: pygenn._genn.CurrentSource, param_name: str, dynamic: bool = True) → None

Set whether a parameter is dynamic, i.e. whether it can be changed at runtime

set_var_location(self: pygenn._genn.CurrentSource, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of current source state variable. This is ignored for simulations on hardware with a single memory space.

class pygenn.CustomConnectivityUpdate

Bases: pybind11_object, CustomConnectivityUpdateMixin

get_post_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str) → pygenn._genn.VarLocation

Get variable location for postsynaptic state variable

get_pre_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str) → pygenn._genn.VarLocation

Get variable location for presynaptic state variable

get_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str) → pygenn._genn.VarLocation

Get variable location for synaptic state variable

property model

Custom connectivity update model used for this update

property name

Unique name of custom connectivity update

property params

Values of custom connectivity update parameters

set_param_dynamic(self: pygenn._genn.CustomConnectivityUpdate, param_name: str, dynamic: bool = True) → None

Set whether a parameter is dynamic, i.e. whether it can be changed at runtime

set_post_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of postsynaptic state variable. This is ignored for simulations on hardware with a single memory space

set_pre_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of presynaptic state variable. This is ignored for simulations on hardware with a single memory space

set_var_location(self: pygenn._genn.CustomConnectivityUpdate, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of synaptic state variable. This is ignored for simulations on hardware with a single memory space

property synapse_group

Synapse group this custom connectivity update is associated with

property update_group_name

Name of the update group this custom connectivity update is part of

class pygenn.CustomUpdate

Bases: CustomUpdateBase, CustomUpdateMixin

property num_neurons

Number of neurons the custom update operates over. This must be the same for all groups whose variables are referenced

class pygenn.CustomUpdateBase

Bases: pybind11_object

get_var_location(self: pygenn._genn.CustomUpdateBase, arg0: str) → pygenn._genn.VarLocation

Get variable location for custom update model state variable

property model

Custom update model used for this update

property name

Unique name of custom update

property params

Values of custom update parameters

set_param_dynamic(self: pygenn._genn.CustomUpdateBase, param_name: str, dynamic: bool = True) → None

Set whether a parameter is dynamic, i.e. whether it can be changed at runtime

set_var_location(self: pygenn._genn.CustomUpdateBase, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of state variable. This is ignored for simulations on hardware with a single memory space

property update_group_name

Name of the update group this custom update is part of

class pygenn.CustomUpdateVarAccess(self: pygenn._genn.CustomUpdateVarAccess, value: int)

Bases: pybind11_object

Supported combinations of access mode and dimension for custom update variables. The axes are defined ‘subtractively’, i.e. VarAccessDim::BATCH indicates that this axis should be removed.

Members:

READ_WRITE : This variable can be read from and written to and has the same dimensions as whatever the custom update is attached to

READ_ONLY : This variable can only be read from and has the same dimensions as whatever the custom update is attached to

READ_ONLY_SHARED : This variable can only be read from and has the same dimensions as whatever the custom update is attached to, aside from being shared across batches

READ_ONLY_SHARED_NEURON : This variable can only be read from and has the same dimensions as whatever the custom update is attached to, aside from being shared across neurons

REDUCE_BATCH_SUM : This variable is a target for a reduction across batches using a sum operation

REDUCE_BATCH_MAX : This variable is a target for a reduction across batches using a max operation

REDUCE_NEURON_SUM : This variable is a target for a reduction across neurons using a sum operation

REDUCE_NEURON_MAX : This variable is a target for a reduction across neurons using a max operation

READ_ONLY = <CustomUpdateVarAccess.READ_ONLY: 1>
READ_ONLY_SHARED = <CustomUpdateVarAccess.READ_ONLY_SHARED: 65>
READ_ONLY_SHARED_NEURON = <CustomUpdateVarAccess.READ_ONLY_SHARED_NEURON: 33>
READ_WRITE = <CustomUpdateVarAccess.READ_WRITE: 2>
REDUCE_BATCH_MAX = <CustomUpdateVarAccess.REDUCE_BATCH_MAX: 84>
REDUCE_BATCH_SUM = <CustomUpdateVarAccess.REDUCE_BATCH_SUM: 76>
REDUCE_NEURON_MAX = <CustomUpdateVarAccess.REDUCE_NEURON_MAX: 52>
REDUCE_NEURON_SUM = <CustomUpdateVarAccess.REDUCE_NEURON_SUM: 44>
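The numeric values above decompose into a VarAccessMode combined with the 'subtractive' VarAccessDim flags listed later in this reference (READ_ONLY = 1, REDUCE_SUM = 12, REDUCE_MAX = 20; ELEMENT = 32, BATCH = 64). This plain-integer sketch illustrates the composition; it is not part of the pygenn API:

```python
# VarAccessMode values (listed later in this reference)
READ_ONLY, REDUCE_SUM, REDUCE_MAX = 1, 12, 20
# VarAccessDim values; for custom updates a set bit means the axis is REMOVED
ELEMENT, BATCH = 32, 64

# "Shared" variants are read-only with the corresponding axis removed
assert READ_ONLY | BATCH == 65     # CustomUpdateVarAccess.READ_ONLY_SHARED
assert READ_ONLY | ELEMENT == 33   # CustomUpdateVarAccess.READ_ONLY_SHARED_NEURON
# Reductions remove the axis they reduce over
assert REDUCE_SUM | BATCH == 76    # REDUCE_BATCH_SUM
assert REDUCE_MAX | BATCH == 84    # REDUCE_BATCH_MAX
assert REDUCE_SUM | ELEMENT == 44  # REDUCE_NEURON_SUM
assert REDUCE_MAX | ELEMENT == 52  # REDUCE_NEURON_MAX
```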
property name
property value

class pygenn.CustomUpdateWU

Bases: CustomUpdateBase, CustomUpdateWUMixin

property synapse_group

class pygenn.GeNNModel(self: pygenn._genn.ModelSpec)

Bases: ModelSpec

This class provides an interface for defining, building and running models

Parameters:
  • precision (Union[str, ResolvedType]) – Data type to use for scalar variables

  • model_name (str) – Name of the model

  • backend (Optional[str]) – Name of backend module to use. Currently supported “single_threaded_cpu”, “cuda”. Defaults to automatically picking the ‘best’ backend for your system

  • time_precision (Optional[Union[str, ResolvedType]]) – data type to use for representing time

  • genn_log_level (PlogSeverity) – Log level for GeNN

  • code_gen_log_level (PlogSeverity) – Log level for GeNN code-generator

  • transpiler_log_level (PlogSeverity) – Log level for GeNN transpiler

  • runtime_log_level (PlogSeverity) – Log level for GeNN runtime

  • backend_log_level (PlogSeverity) – Log level for backend

  • preference_kwargs – Additional keyword arguments to set in backend preferences structure

add_current_source(cs_name, current_source_model, pop, params={}, vars={}, var_refs={})

Add a current source to the GeNN model

Parameters:
  • cs_name (str) – unique name

  • current_source_model (Union[CurrentSourceModelBase, str]) – current source model either as a string referencing a built-in model (see current_source_models) or an instance of CurrentSourceModelBase (for example returned by create_current_source_model())

  • pop (NeuronGroup) – neuron population to inject current into

  • params (Dict[str, Union[int, float]]) – parameter values for the current source model (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial variable values or initialisers for the current source model (see Variables)

  • var_refs (Dict[str, VarReference]) – variables references to neuron variables in pop, typically created using create_var_ref() (see Variables references)

Return type:

CurrentSource

For example, a current source to inject a Gaussian noise current can be added to a model as follows:

cs = model.add_current_source("noise", "GaussianNoise", pop,
                              {"mean": 0.0, "sd": 1.0})

where pop is a reference to a neuron population (as returned by GeNNModel.add_neuron_population())

add_custom_connectivity_update(cu_name, group_name, syn_group, custom_conn_update_model, params={}, vars={}, pre_vars={}, post_vars={}, var_refs={}, pre_var_refs={}, post_var_refs={}, egp_refs={})

Add a custom connectivity update to the GeNN model

Parameters:
  • cu_name (str) – unique name

  • group_name (str) – name of the ‘custom update group’ to include this update in. All custom updates in the same group are executed simultaneously.

  • syn_group (SynapseGroup) – Synapse group to attach custom connectivity update to

  • custom_conn_update_model (Union[CustomConnectivityUpdateModelBase, str]) – custom connectivity update model either as a string referencing a built-in model (see custom_connectivity_update_models) or an instance of CustomConnectivityUpdateModelBase (for example returned by create_custom_connectivity_update_model())

  • params (Dict[str, Union[int, float]]) – parameter values for the custom connectivity model (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial synaptic variable values or initialisers (see Variables)

  • pre_vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial presynaptic variable values or initialisers (see Variables)

  • post_vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial postsynaptic variable values or initialisers (see Variables)

  • var_refs (Dict[str, WUVarReference]) – references to synaptic variables, typically created using create_wu_var_ref() (see Variables references)

  • pre_var_refs (Dict[str, VarReference]) – references to presynaptic variables, typically created using create_var_ref() (see Variables references)

  • post_var_refs (Dict[str, VarReference]) – references to postsynaptic variables, typically created using create_var_ref() (see Variables references)

  • egp_refs (Dict[str, EGPReference]) – references to extra global parameters in other populations to access from this update, typically created using create_egp_ref() (see Extra global parameter references).

add_custom_update(cu_name, group_name, custom_update_model, params={}, vars={}, var_refs={}, egp_refs={})

Add a custom update to the GeNN model

Parameters:
  • cu_name (str) – unique name

  • group_name (str) – name of the ‘custom update group’ to include this update in. All custom updates in the same group are executed simultaneously.

  • custom_update_model (Union[CustomUpdateModelBase, str]) – custom update model either as a string referencing a built-in model (see custom_update_models) or an instance of CustomUpdateModelBase (for example returned by create_custom_update_model())

  • params (Dict[str, Union[int, float]]) – parameter values for the custom update model (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial variable values or initialisers for the custom update model (see Variables)

  • var_refs (Union[Dict[str, VarReference], Dict[str, WUVarReference]]) – references to variables in other populations to access from this update, typically created using either create_var_ref() or create_wu_var_ref() (see Variables references).

  • egp_refs (Dict[str, EGPReference]) – references to extra global parameters in other populations to access from this update, typically created using create_egp_ref() (see Extra global parameter references).

For example, a custom update to calculate transpose weights could be added to a model as follows:

cu = model.add_custom_update("transpose_pop", "transpose", "Transpose",
                             var_refs={"variable": create_wu_var_ref(fwd_sg, "g",
                                                                     back_sg, "g")})

where fwd_sg and back_sg are references to synapse populations (as returned by GeNNModel.add_synapse_population()). This update could then be triggered using the name of its update group with:

model.custom_update("transpose")

add_neuron_population(pop_name, num_neurons, neuron, params={}, vars={})

Add a neuron population to the GeNN model

Parameters:
  • pop_name (str) – unique name

  • num_neurons (int) – number of neurons

  • neuron (Union[NeuronModelBase, str]) – neuron model either as a string referencing a built-in model (see neuron_models) or an instance of NeuronModelBase (for example returned by create_neuron_model())

  • params (Dict[str, Union[int, float]]) – parameter values for the neuron model (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial variable values or initialisers for the neuron model (see Variables)

Return type:

NeuronGroup

For example, a population of 10 neurons using the built-in Izhikevich model and the standard set of ‘tonic spiking’ parameters could be added to a model as follows:

pop = model.add_neuron_population("pop", 10, "Izhikevich",
                                  {"a": 0.02, "b": 0.2, "c": -65.0, "d": 6.0},
                                  {"V": -65.0, "U": -20.0})

add_synapse_population(pop_name, matrix_type, source, target, weight_update_init, postsynaptic_init, connectivity_init=None)

Add a synapse population to the GeNN model

Parameters:
  • pop_name (str) – unique name

  • matrix_type (Union[SynapseMatrixType, str]) – type of synaptic matrix to use, either as a SynapseMatrixType or a string such as "SPARSE"

  • source (NeuronGroup) – presynaptic neuron population

  • target (NeuronGroup) – postsynaptic neuron population

  • weight_update_init – weight update model initialiser, typically created using init_weight_update()

  • postsynaptic_init – postsynaptic model initialiser, typically created using init_postsynaptic()

  • connectivity_init – connectivity initialiser, typically created using init_sparse_connectivity() or init_toeplitz_connectivity()

Return type:

SynapseGroup

For example, a neuron population src_pop could be connected to another called target_pop using sparse connectivity, static synapses and exponential shaped current inputs as follows:

pop = model.add_synapse_population("Syn", "SPARSE",
                                   src_pop, target_pop,
                                   init_weight_update("StaticPulseConstantWeight", {"g": 1.0}),
                                   init_postsynaptic("ExpCurr", {"tau": 5.0}),
                                   init_sparse_connectivity("FixedProbability", {"prob": 0.1}))

property backend_name: str

Name of the currently selected backend

build(path_to_model='./', always_rebuild=False, never_rebuild=False)

Finalize and build a GeNN model

Parameters:
  • path_to_model (str) – path where to place the generated model code. Defaults to the local directory.

  • always_rebuild (bool) – should model be rebuilt even if it doesn’t appear to be required

  • never_rebuild (bool) – should model never be rebuilt even if it appears to need it. This should only ever be used to prevent file overwriting when performing parallel runs

custom_update(name)

Perform custom update

Parameters:

name (str) – Name of custom update. Corresponds to the group_name parameter passed to add_custom_update() and add_custom_connectivity_update().

property dT

get_custom_update_remap_time(name)

Get time in seconds spent in remap custom update. Only available if ModelSpec.timing_enabled is set.

Parameters:

name (str) – Name of custom update

Return type:

float

get_custom_update_time(name)

Get time in seconds spent in custom update. Only available if ModelSpec.timing_enabled is set.

Parameters:

name (str) – Name of custom update

Return type:

float

get_custom_update_transpose_time(name)

Get time in seconds spent in transpose custom update. Only available if ModelSpec.timing_enabled is set.

Parameters:

name (str) – Name of custom update

Return type:

float

property init_sparse_time: float

Time in seconds spent in sparse initialisation kernel. Only available if ModelSpec.timing_enabled is set

property init_time: float

Time in seconds spent in the initialisation kernel. Only available if ModelSpec.timing_enabled is set

load(num_recording_timesteps=None)

Load the previously built model into memory

Parameters:

num_recording_timesteps (Optional[int]) – Number of timesteps to record spikes for. pull_recording_buffers_from_device() must be called after this number of timesteps
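The contract between load(), step_time() and pull_recording_buffers_from_device() is easiest to see as a loop. The stub class below only mimics GeNNModel's call order (it is not pygenn code); with a real model the same sequence applies:

```python
class RecordingModelStub:
    """Minimal stand-in mimicking GeNNModel's spike-recording call order."""
    def __init__(self):
        self.recording_steps = None
        self.steps_since_pull = 0

    def load(self, num_recording_timesteps=None):
        # Sizes the recording buffers, as GeNNModel.load does
        self.recording_steps = num_recording_timesteps

    def step_time(self):
        # One simulation step fills one row of the recording buffer
        self.steps_since_pull += 1

    def pull_recording_buffers_from_device(self):
        # Buffers hold exactly num_recording_timesteps of data, so they
        # must be pulled after that many steps
        assert self.steps_since_pull == self.recording_steps
        self.steps_since_pull = 0

model = RecordingModelStub()
model.load(num_recording_timesteps=1000)
for _ in range(1000):
    model.step_time()
model.pull_recording_buffers_from_device()
```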

property neuron_update_time: float

Time in seconds spent in neuron update kernel. Only available if ModelSpec.timing_enabled is set

property postsynaptic_update_time: float

Time in seconds spent in postsynaptic update kernel. Only available if ModelSpec.timing_enabled is set

property presynaptic_update_time: float

Time in seconds spent in presynaptic update kernel. Only available if ModelSpec.timing_enabled is set

pull_recording_buffers_from_device()

Pull recording buffers from device

step_time()

Make one simulation step

property synapse_dynamics_time: float

Time in seconds spent in synapse dynamics kernel. Only available if ModelSpec.timing_enabled is set

property t: float

Simulation time in ms

property timestep: int

Simulation time step
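t and timestep track the same clock at different granularities: assuming the standard relationship, the simulation time in ms is the number of elapsed timesteps multiplied by the model's dt. A plain-Python illustration (no pygenn calls):

```python
dt = 0.5           # ModelSpec.dt: integration time step in ms
timestep = 200     # GeNNModel.timestep: elapsed simulation steps
t = timestep * dt  # GeNNModel.t: simulation time in ms
assert t == 100.0
```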

unload()

Unload a previously loaded model, freeing all memory

class pygenn.ModelSpec(self: pygenn._genn.ModelSpec)

Bases: pybind11_object

property batch_size

Batch size of this model, allowing the model to be duplicated efficiently

property default_narrow_sparse_ind_enabled

Should ‘narrow’, i.e. less than 32-bit, types be used to store postsynaptic neuron indices in SynapseMatrixConnectivity::SPARSE connectivity? If this is true and the postsynaptic population has fewer than 256 neurons, 8-bit indices will be used and, if it has fewer than 65536 neurons, 16-bit indices will be used.
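The narrowing rule above can be mirrored in plain Python. This helper is purely illustrative (GeNN applies the rule internally when the flag is enabled):

```python
def sparse_index_bits(num_post_neurons):
    """Index width implied by the narrowing rule described above."""
    if num_post_neurons < 256:
        return 8       # indices fit in 8 bits
    elif num_post_neurons < 65536:
        return 16      # indices fit in 16 bits
    else:
        return 32      # fall back to full 32-bit indices

assert sparse_index_bits(200) == 8
assert sparse_index_bits(1000) == 16
assert sparse_index_bits(100000) == 32
```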

property default_sparse_connectivity_location

The default location for sparse synaptic connectivity

property default_var_location

The default location for model state variables

property dt

The integration time step of the model

property fuse_postsynaptic_models

Should compatible postsynaptic models and dendritic delay buffers be fused? This can significantly reduce the cost of updating neuron populations but means that per-synapse-group inSyn arrays cannot be retrieved

property fuse_pre_post_weight_update_models

Should compatible pre and postsynaptic weight update model variables and updates be fused? This can significantly reduce the cost of updating neuron populations but means that per-synapse-group pre and postsynaptic variables cannot be retrieved

property name

Name of the network model

property num_neurons

How many neurons make up the entire model

property precision

Type of floating point variables used for ‘scalar’ types

property seed

RNG seed

property time_precision

Type of floating point variables used for ‘timepoint’ types

property timing_enabled

Whether timing code should be inserted into model

class pygenn.NeuronGroup

Bases: pybind11_object, NeuronGroupMixin

get_var_location(self: pygenn._genn.NeuronGroup, arg0: str) → pygenn._genn.VarLocation

Get location of neuron model state variable by name

property model

Neuron model used for this group

property name

Unique name of neuron group

property num_neurons

Number of neurons in group

property params

Values of neuron parameters

property prev_spike_time_location

Location of previous spike times. This is ignored for simulations on hardware with a single memory space

property recording_zero_copy_enabled

Should zero-copy memory (if available) be used for spike and spike-like event recording?

set_param_dynamic(self: pygenn._genn.NeuronGroup, param_name: str, dynamic: bool = True) → None

Set whether a parameter is dynamic, i.e. whether it can be changed at runtime

set_var_location(self: pygenn._genn.NeuronGroup, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set variable location of neuron model state variable. This is ignored for simulations on hardware with a single memory space

property spike_event_recording_enabled

Is spike event recording enabled?

property spike_recording_enabled

Is spike recording enabled for this population?

property spike_time_location

Location of spike times from neuron group. This is ignored for simulations on hardware with a single memory space

class pygenn.ParallelismHint(self: pygenn._genn.ParallelismHint, value: int)

Bases: pybind11_object

Hints to backends as to what parallelism strategy to use for this synapse group

Members:

POSTSYNAPTIC : GPU threads loop over spikes and handle all connectivity associated with a postsynaptic neuron or column of sparse connectivity. Generally, this is the most efficient approach: memory accesses are coalesced and, while atomic operations are used, there should be minimal conflicts between them.

PRESYNAPTIC : GPU threads handle the connectivity associated with a presynaptic neuron or row of sparse connectivity. If spike rates are high, this can extract more parallelism but there is an overhead to launching numerous threads with no spike to process and this approach does not result in well-coalesced memory accesses.

WORD_PACKED_BITMASK : Rather than processing SynapseMatrixConnectivity::BITMASK connectivity using one thread per postsynaptic neuron and doing nothing when a zero is encountered, process 32 bits of bitmask using each thread. On the single-threaded CPU backend and when simulating models with significantly more neurons than the target GPU has threads, this is likely to improve performance.

POSTSYNAPTIC = <ParallelismHint.POSTSYNAPTIC: 0>
PRESYNAPTIC = <ParallelismHint.PRESYNAPTIC: 1>
WORD_PACKED_BITMASK = <ParallelismHint.WORD_PACKED_BITMASK: 2>
property name
property value

class pygenn.PlogSeverity(self: pygenn._genn.PlogSeverity, value: int)

Bases: pybind11_object

Members:

NONE

FATAL

ERROR

WARNING

INFO

DEBUG

VERBOSE

DEBUG = <PlogSeverity.DEBUG: 5>
ERROR = <PlogSeverity.ERROR: 2>
FATAL = <PlogSeverity.FATAL: 1>
INFO = <PlogSeverity.INFO: 4>
NONE = <PlogSeverity.NONE: 0>
VERBOSE = <PlogSeverity.VERBOSE: 6>
WARNING = <PlogSeverity.WARNING: 3>
property name
property value

class pygenn.SynapseGroup

Bases: pybind11_object, SynapseGroupMixin

property axonal_delay_steps

Global synaptic conductance delay for the group (in time steps)

property back_prop_delay_steps

Global backpropagation delay for postsynaptic spikes to synapse (in time steps)

property dendritic_delay_location

Location of this synapse group’s dendritic delay buffers. This is ignored for simulations on hardware with a single memory space

get_ps_var_location(self: pygenn._genn.SynapseGroup, arg0: str) → pygenn._genn.VarLocation

Get location of postsynaptic model state variable

get_wu_post_var_location(self: pygenn._genn.SynapseGroup, arg0: str) → pygenn._genn.VarLocation

Get location of weight update model postsynaptic state variable

get_wu_pre_var_location(self: pygenn._genn.SynapseGroup, arg0: str) → pygenn._genn.VarLocation

Get location of weight update model presynaptic state variable

get_wu_var_location(self: pygenn._genn.SynapseGroup, arg0: str) → pygenn._genn.VarLocation

Get location of weight update model synaptic state variable

property kernel_size

Kernel size

property matrix_type

Connectivity type of synapses

property max_connections

Maximum number of target neurons any source neuron can connect to

property max_dendritic_delay_timesteps

Maximum dendritic delay timesteps supported for synapses in this population

property max_source_connections

Maximum number of source neurons any target neuron can connect to

property name

Name of the synapse group

property narrow_sparse_ind_enabled

Should ‘narrow’, i.e. less than 32-bit, types be used for sparse matrix indices?

property num_threads_per_spike

How many threads the GPU implementation uses to process each spike when parallelised presynaptically

property output_location

Location of outputs from this synapse group e.g. outPre and outPost. This is ignored for simulations on hardware with a single memory space

property parallelism_hint

Hint as to how synapse group should be parallelised

property post_target_var

Name of neuron input variable postsynaptic model will target. This should either be ‘Isyn’ or the name of one of the postsynaptic neuron’s additional input variables.

property pre_target_var

Name of neuron input variable a presynaptic output specified with $(addToPre) will target. This will either be ‘Isyn’ or the name of one of the presynaptic neuron’s additional input variables.

property ps_initialiser

Initialiser used for creating postsynaptic update model

set_ps_param_dynamic(self: pygenn._genn.SynapseGroup, param_name: str, dynamic: bool = True) → None

Set whether a postsynaptic model parameter is dynamic, i.e. whether it can be changed at runtime

set_ps_var_location(self: pygenn._genn.SynapseGroup, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of postsynaptic model state variable. This is ignored for simulations on hardware with a single memory space

set_wu_param_dynamic(self: pygenn._genn.SynapseGroup, param_name: str, dynamic: bool = True) → None

Set whether a weight update model parameter is dynamic, i.e. whether it can be changed at runtime

set_wu_post_var_location(self: pygenn._genn.SynapseGroup, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of weight update model postsynaptic state variable. This is ignored for simulations on hardware with a single memory space

set_wu_pre_var_location(self: pygenn._genn.SynapseGroup, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of weight update model presynaptic state variable. This is ignored for simulations on hardware with a single memory space

set_wu_var_location(self: pygenn._genn.SynapseGroup, arg0: str, arg1: pygenn._genn.VarLocation) → None

Set location of weight update model synaptic state variable. This is ignored for simulations on hardware with a single memory space

property sparse_connectivity_initialiser

Initialiser used for creating sparse connectivity

property sparse_connectivity_location

Location of sparse connectivity. This is ignored for simulations on hardware with a single memory space

property toeplitz_connectivity_initialiser

Initialiser used for creating toeplitz connectivity

property wu_initialiser

Initialiser used for creating weight update model

class pygenn.SynapseMatrixConnectivity(self: pygenn._genn.SynapseMatrixConnectivity, value: int)

Bases: pybind11_object

Flags defining how synaptic connectivity is represented

Members:

DENSE : Connectivity is dense with a synapse between each pair of pre and postsynaptic neurons

BITMASK : Connectivity is sparse and stored using a bitmask.

SPARSE : Connectivity is sparse and stored using a compressed sparse row data structure

PROCEDURAL : Connectivity is generated on the fly using a sparse connectivity initialisation snippet

TOEPLITZ : Connectivity is generated on the fly using a Toeplitz connectivity initialisation snippet

BITMASK = <SynapseMatrixConnectivity.BITMASK: 2>
DENSE = <SynapseMatrixConnectivity.DENSE: 1>
PROCEDURAL = <SynapseMatrixConnectivity.PROCEDURAL: 8>
SPARSE = <SynapseMatrixConnectivity.SPARSE: 4>
TOEPLITZ = <SynapseMatrixConnectivity.TOEPLITZ: 16>
property name
property value

class pygenn.SynapseMatrixType(self: pygenn._genn.SynapseMatrixType, value: int)

Bases: pybind11_object

Members:

DENSE : Synaptic matrix is dense and synaptic state variables are stored individually in memory.

DENSE_PROCEDURALG : Synaptic matrix is dense and all synaptic state variables must either be constant or generated on the fly using their variable initialisation snippets.

BITMASK : Connectivity is stored as a bitmask.

For moderately sparse (>3%) connectivity, this uses the least memory. However, connectivity of this sort cannot have any accompanying state variables. Which algorithm is used for propagating spikes through BITMASK connectivity can be hinted via SynapseGroup::ParallelismHint.

SPARSE : Connectivity is stored using a compressed sparse row data structure and synaptic state variables are stored individually in memory.

This is the most efficient choice for very sparse unstructured connectivity or if synaptic state variables are required.

PROCEDURAL : Sparse synaptic connectivity is generated on the fly using a sparse connectivity initialisation snippet and all state variables must be either constant or generated on the fly using variable initialisation snippets.

Synaptic connectivity of this sort requires very little memory allowing extremely large models to be simulated on a single GPU.

PROCEDURAL_KERNELG : Sparse synaptic connectivity is generated on the fly using a sparse connectivity initialisation snippet and state variables are stored in a shared kernel.

TOEPLITZ : Sparse structured connectivity is generated on the fly using a Toeplitz connectivity initialisation snippet and state variables are stored in a shared kernel.

This is the most efficient choice for convolution-like connectivity

BITMASK = <SynapseMatrixType.BITMASK: 66>
DENSE = <SynapseMatrixType.DENSE: 65>
DENSE_PROCEDURALG = <SynapseMatrixType.DENSE_PROCEDURALG: 129>
PROCEDURAL = <SynapseMatrixType.PROCEDURAL: 136>
PROCEDURAL_KERNELG = <SynapseMatrixType.PROCEDURAL_KERNELG: 264>
SPARSE = <SynapseMatrixType.SPARSE: 68>
TOEPLITZ = <SynapseMatrixType.TOEPLITZ: 272>
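Each SynapseMatrixType value is the bitwise OR of one SynapseMatrixConnectivity flag and one SynapseMatrixWeight flag (both enumerations appear elsewhere in this reference), which explains the numeric values above. A plain-integer sketch, not pygenn code:

```python
# SynapseMatrixConnectivity flags
DENSE_C, BITMASK_C, SPARSE_C, PROCEDURAL_C, TOEPLITZ_C = 1, 2, 4, 8, 16
# SynapseMatrixWeight flags
INDIVIDUAL_W, PROCEDURAL_W, KERNEL_W = 64, 128, 256

assert INDIVIDUAL_W | DENSE_C == 65        # SynapseMatrixType.DENSE
assert INDIVIDUAL_W | BITMASK_C == 66      # SynapseMatrixType.BITMASK
assert INDIVIDUAL_W | SPARSE_C == 68       # SynapseMatrixType.SPARSE
assert PROCEDURAL_W | DENSE_C == 129       # SynapseMatrixType.DENSE_PROCEDURALG
assert PROCEDURAL_W | PROCEDURAL_C == 136  # SynapseMatrixType.PROCEDURAL
assert KERNEL_W | PROCEDURAL_C == 264      # SynapseMatrixType.PROCEDURAL_KERNELG
assert KERNEL_W | TOEPLITZ_C == 272        # SynapseMatrixType.TOEPLITZ
```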
property name
property value

class pygenn.SynapseMatrixWeight(self: pygenn._genn.SynapseMatrixWeight, value: int)

Bases: pybind11_object

Flags defining how synaptic state variables are stored

Members:

INDIVIDUAL : Synaptic state variables are stored individually in memory

PROCEDURAL : Synaptic state is generated on the fly using a sparse connectivity initialisation snippet

KERNEL : Synaptic state variables are stored in a kernel which is shared between synapses in a manner defined by either a Toeplitz or sparse connectivity initialisation snippet

INDIVIDUAL = <SynapseMatrixWeight.INDIVIDUAL: 64>
KERNEL = <SynapseMatrixWeight.KERNEL: 256>
PROCEDURAL = <SynapseMatrixWeight.PROCEDURAL: 128>
property name
property value

class pygenn.VarAccess(self: pygenn._genn.VarAccess, value: int)

Bases: pybind11_object

Supported combinations of access mode and dimension for neuron and synapse variables

Members:

READ_WRITE : This variable can be read from and written to and stores separate values for each element and each batch

READ_ONLY : This variable can only be read from and stores separate values for each element but these are shared across batches

READ_ONLY_DUPLICATE : This variable can only be read from and stores separate values for each element and each batch

READ_ONLY_SHARED_NEURON : This variable can only be read from and stores separate values for each batch but these are shared across neurons

READ_ONLY = <VarAccess.READ_ONLY: 33>
READ_ONLY_DUPLICATE = <VarAccess.READ_ONLY_DUPLICATE: 97>
READ_ONLY_SHARED_NEURON = <VarAccess.READ_ONLY_SHARED_NEURON: 65>
READ_WRITE = <VarAccess.READ_WRITE: 98>
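These values are the OR of a VarAccessMode with VarAccessDim flags; here a set bit means the variable HAS that dimension (unlike the subtractive CustomUpdateVarAccess). A plain-integer sketch, not pygenn code:

```python
READ_ONLY_M, READ_WRITE_M = 1, 2  # VarAccessMode values
ELEMENT, BATCH = 32, 64           # VarAccessDim values

assert READ_WRITE_M | ELEMENT | BATCH == 98  # VarAccess.READ_WRITE
assert READ_ONLY_M | ELEMENT == 33           # READ_ONLY (shared across batches)
assert READ_ONLY_M | ELEMENT | BATCH == 97   # READ_ONLY_DUPLICATE
assert READ_ONLY_M | BATCH == 65             # READ_ONLY_SHARED_NEURON
```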
property name
property value

class pygenn.VarAccessDim(self: pygenn._genn.VarAccessDim, value: int)

Bases: pybind11_object

Flags defining the dimensions a variable has

Members:

ELEMENT : This variable stores separate values for each element i.e. neuron or synapse

BATCH : This variable stores separate values for each batch

BATCH = <VarAccessDim.BATCH: 64>
ELEMENT = <VarAccessDim.ELEMENT: 32>
property name
property value

class pygenn.VarAccessMode(self: pygenn._genn.VarAccessMode, value: int)

Bases: pybind11_object

Members:

READ_WRITE : This variable can be read from or written to

READ_ONLY : This variable can only be read from

REDUCE_SUM : This variable is a target for a reduction with a sum operation

REDUCE_MAX : This variable is a target for a reduction with a max operation

READ_ONLY = <VarAccessMode.READ_ONLY: 1>
READ_WRITE = <VarAccessMode.READ_WRITE: 2>
REDUCE_MAX = <VarAccessMode.REDUCE_MAX: 20>
REDUCE_SUM = <VarAccessMode.REDUCE_SUM: 12>
property name
property value

class pygenn.VarAccessModeAttribute(self: pygenn._genn.VarAccessModeAttribute, value: int)

Bases: pybind11_object

Flags defining attributes of var access modes. Read-only and read-write are separate flags, rather than read and write, so you can test mode & VarAccessMode::READ_ONLY

Members:

READ_ONLY : This variable can only be read from

READ_WRITE : This variable can be read from or written to

REDUCE : This variable is a reduction target

SUM : This variable’s reduction operation is a summation

MAX : This variable’s reduction operation is a maximum

MAX = <VarAccessModeAttribute.MAX: 16>
READ_ONLY = <VarAccessModeAttribute.READ_ONLY: 1>
READ_WRITE = <VarAccessModeAttribute.READ_WRITE: 2>
REDUCE = <VarAccessModeAttribute.REDUCE: 4>
SUM = <VarAccessModeAttribute.SUM: 8>
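The attribute values above explain why the reduction modes of VarAccessMode are 12 and 20, and how the mode & READ_ONLY test suggested in the docstring works. A plain-integer sketch, not pygenn code:

```python
# VarAccessModeAttribute flags
READ_ONLY, READ_WRITE, REDUCE, SUM, MAX = 1, 2, 4, 8, 16

assert REDUCE | SUM == 12  # VarAccessMode.REDUCE_SUM
assert REDUCE | MAX == 20  # VarAccessMode.REDUCE_MAX

mode = REDUCE | SUM
assert mode & REDUCE           # this mode is a reduction target
assert not (mode & READ_ONLY)  # and it is not read-only
```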
property name
property value

class pygenn.VarLocation(self: pygenn._genn.VarLocation, value: int)

Bases: pybind11_object

Supported combinations of VarLocationAttribute

Members:

DEVICE : Variable is only located on device. This can be used to save host memory.

HOST_DEVICE : Variable is located on both host and device. This is the default.

HOST_DEVICE_ZERO_COPY : Variable is shared between host and device using zero copy memory.

This can improve performance if data is frequently copied between host and device but, on non-cache-coherent architectures such as Jetson, can also reduce access speed.

DEVICE = <VarLocation.DEVICE: 2>
HOST_DEVICE = <VarLocation.HOST_DEVICE: 3>
HOST_DEVICE_ZERO_COPY = <VarLocation.HOST_DEVICE_ZERO_COPY: 7>
property name
property value
class pygenn.VarLocationAttribute(self: pygenn._genn.VarLocationAttribute, value: int)

Bases: pybind11_object

Flags defining attributes of var locations

Members:

HOST : Variable is located on the host

DEVICE : Variable is located on the device

ZERO_COPY : Variable is located in zero-copy memory

DEVICE = <VarLocationAttribute.DEVICE: 2>
HOST = <VarLocationAttribute.HOST: 1>
ZERO_COPY = <VarLocationAttribute.ZERO_COPY: 4>
property name
property value
pygenn.create_current_source_model(class_name, params=None, vars=None, neuron_var_refs=None, derived_params=None, injection_code=None, extra_global_params=None)

Creates a new current source model. Within the injection_code code string, the variables, parameters, derived parameters, neuron variable references and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • id which represents a neuron's index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

Finally, the function injectCurrent(x) can be used to inject a current x into the attached neuron. The variable this current is added to can be configured using CurrentSource.target_var; it defaults to Isyn.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of model variables

  • neuron_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to variables in neuron population current source is attached to

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • injection_code (Optional[str]) – string containing the simulation code statements to be run every timestep

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

For example, we can define a simple current source that injects uniformly-distributed noise as follows:

uniform_noise_model = pygenn.create_current_source_model(
    "uniform_noise",
    params=["magnitude"],
    injection_code="injectCurrent(gennrand_uniform() * magnitude);")
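The per-neuron, per-timestep effect of this injection code can be sketched in plain Python, with random.random standing in for gennrand_uniform (an illustrative sketch, not part of the pygenn API):

```python
import random

def inject_uniform_noise(magnitude):
    # Stand-in for: injectCurrent(gennrand_uniform() * magnitude);
    # gennrand_uniform() draws from U[0, 1), so the injected current
    # always lies in [0, magnitude).
    return random.random() * magnitude

currents = [inject_uniform_noise(0.5) for _ in range(1000)]
assert all(0.0 <= c < 0.5 for c in currents)
```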
pygenn.create_custom_connectivity_update_model(class_name, params=None, vars=None, pre_vars=None, post_vars=None, derived_params=None, var_refs=None, pre_var_refs=None, post_var_refs=None, row_update_code=None, host_update_code=None, extra_global_params=None, extra_global_param_refs=None)

Creates a new custom connectivity update model.

Within host update code, you have full access to parameters, derived parameters, extra global parameters and pre and postsynaptic variables. By design, you do not have access to per-synapse variables or variable references and, currently, you cannot access pre and postsynaptic variable references as there are issues regarding delays. Each variable has an accompanying push and pull function to copy it to and from the device. For variables, these functions have no parameters, as illustrated in the example in Pushing and pulling; for extra global parameters, they have a single parameter specifying the size of the array.

Within the row update code, you have full access to parameters, derived parameters, extra global parameters, presynaptic variables and presynaptic variable references. Postsynaptic and synaptic variables and variable references can only be accessed from within one of the for_each_synapse loops illustrated below. Additionally, both the host and row update code can refer to the following built-in read-only variables:

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • row_stride which represents the maximum number of synapses which each presynaptic neuron can have (this can be increased via SynapseGroup.max_connections).

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Host code can also access the current number of synapses emanating from each presynaptic neuron using the row_length array whereas, in row-update code, this contains the number of synapses emanating from the current presynaptic neuron (identified by id_pre).

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-synapse model variables

  • pre_vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-presynaptic neuron model variables

  • post_vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-postsynaptic neuron model variables

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to synaptic variables

  • pre_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to presynaptic neuron variables

  • post_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • row_update_code (Optional[str]) – string containing the code statements to be run when custom update is launched

  • host_update_code (Optional[str]) – string containing the code statements to be run on CPU when custom connectivity update is launched

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

  • extra_global_param_refs – names and types of extra global parameter references

The main GPU operation that custom connectivity updates expose is the ability to generate per-presynaptic neuron update code. This can be used to implement a very simple model which removes ‘diagonals’ from the connectivity matrix:

remove_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "remove_diagonal",
    row_update_code=
        """
        for_each_synapse {
            if(id_post == id_pre) {
                remove_synapse();
                break;
            }
        }
        """)

Similarly you could implement a custom connectivity model which adds diagonals back into the connection matrix like this:

add_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "add_diagonal",
    row_update_code=
        """
        add_synapse(id_pre);
        """)

One important issue here is that many other parts of the model (e.g. other custom connectivity updates or custom weight updates) might have state variables ‘attached’ to the same connectivity that the custom update is modifying. GeNN will automatically detect this and add and shuffle these variables around accordingly. This is fine for removing synapses but, when adding synapses, GeNN has no way of knowing what values their state variables should be given. If you want new synapses to be created with state variables initialised to values other than zero, you need to use variable references to hook them up to the custom connectivity update. For example, if you wanted to be able to provide a weight for each new synapse, you could update the previous example model like:

add_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "add_diagonal",
    var_refs=[("g", "scalar")],
    row_update_code=
        """
        add_synapse(id_pre, 1.0);
        """)

Some common connectivity update scenarios involve computation which can’t be easily parallelized. If, for example, you wanted to determine on the host which elements to remove from each row, you can include host_update_code, which is run before the row update code:

remove_diagonal_model = pygenn.create_custom_connectivity_update_model(
    "remove_diagonal",
    pre_vars=[("postInd", "unsigned int")],
    row_update_code=
        """
        for_each_synapse {
            if(id_post == postInd) {
                remove_synapse();
                break;
            }
        }
        """,
    host_update_code=
        """
        for(unsigned int i = 0; i < num_pre; i++) {
           postInd[i] = i;
        }
        pushpostIndToDevice();
        """)
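The effect of the remove_diagonal row update can be illustrated outside GeNN with a plain-Python sketch over a ragged-row view of the connectivity (a hypothetical helper, not GeNN's internal storage format):

```python
def remove_diagonal(rows):
    """Mimic the remove_diagonal row update on a ragged-row view of
    sparse connectivity: rows[i] lists the postsynaptic indices of
    presynaptic neuron i. At most one synapse is removed per row."""
    for id_pre, row in enumerate(rows):
        for j, id_post in enumerate(row):
            if id_post == id_pre:   # if(id_post == id_pre)
                del row[j]          # remove_synapse();
                break               # break;
    return rows

rows = [[0, 1], [0, 1, 2], [2]]
assert remove_diagonal(rows) == [[1], [0, 2], []]
```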
pygenn.create_custom_update_model(class_name, params=None, vars=None, derived_params=None, var_refs=None, update_code=None, extra_global_params=None, extra_global_param_refs=None)

Creates a new custom update model. Within the update_code code string, the variables, parameters, derived parameters, variable references, extra global parameters and extra global parameter references defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

And, if a custom update using this model is attached to per-neuron variables:

  • id which represents a neuron's index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

or, to per-synapse variables:

  • id_pre which represents the index of the presynaptic neuron (starting from zero)

  • id_post which represents the index of the postsynaptic neuron (starting from zero)

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], CustomUpdateVarAccess]]]]) – names, types and optional variable access modifiers of model variables

  • var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to variables in population(s) custom update is attached to

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • update_code (Optional[str]) – string containing the code statements to be run when custom update is launched

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

  • extra_global_param_refs – names and types of extra global parameter references

For example, we can define a custom update which will set a referenced variable to the value of a custom update model state variable:

reset_model = pygenn.create_custom_update_model(
    "reset",
    vars=[("v", "scalar", pygenn.CustomUpdateVarAccess.READ_ONLY)],
    var_refs=[("r", "scalar", pygenn.VarAccessMode.READ_WRITE)],
    update_code="r = v;")

When used in a model with batch size > 1, whether custom updates of this sort are batched or not depends on the variables their references point to. If any referenced variables have VarAccess.READ_ONLY_DUPLICATE or VarAccess.READ_WRITE access modes, then the update will be batched and any variables associated with the custom update with VarAccess.READ_ONLY_DUPLICATE or VarAccess.READ_WRITE access modes will be duplicated across the batches.

As well as the standard variable access modes described previously, custom updates support variables with ‘batch reduction’ access modes such as CustomUpdateVarAccess.REDUCE_BATCH_SUM and CustomUpdateVarAccess.REDUCE_BATCH_MAX. These access modes allow values read from variables duplicated across batches to be reduced into variables that are shared across batches. For example, in a gradient-based learning scenario, a model like this could be used to sum gradients from across all batches so they can be used as the input to a learning rule operating on shared synaptic weights:

reduce_model = pygenn.create_custom_update_model(
    "gradient_batch_reduce",
    vars=[("reducedGradient", "scalar", pygenn.CustomUpdateVarAccess.REDUCE_BATCH_SUM)],
    var_refs=[("gradient", "scalar", pygenn.VarAccessMode.READ_ONLY)],
    update_code=
        """
        reducedGradient = gradient;
        gradient = 0;
        """)
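For a single synapse, the effect of this batch reduction can be sketched in plain Python (an illustrative stand-in for what the generated update does across batch duplicates):

```python
def batch_reduce_sum(gradients):
    """Mimic CustomUpdateVarAccess.REDUCE_BATCH_SUM for one synapse:
    the per-batch duplicates of 'gradient' are summed into the shared
    'reducedGradient' variable, then each duplicate is zeroed."""
    reduced = sum(gradients)                # reducedGradient = gradient;
    gradients[:] = [0.0] * len(gradients)   # gradient = 0;
    return reduced

grads = [0.5, -0.25, 1.0, 0.75]   # one value per batch
total = batch_reduce_sum(grads)
assert total == 2.0 and grads == [0.0, 0.0, 0.0, 0.0]
```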

Batch reductions can also be performed into variable references with the VarAccessMode.REDUCE_SUM or VarAccessMode.REDUCE_MAX access modes.

Similarly to the batch reduction modes discussed previously, custom updates also support variables with several ‘neuron reduction’ access modes such as CustomUpdateVarAccess.REDUCE_NEURON_SUM and CustomUpdateVarAccess.REDUCE_NEURON_MAX.

These access modes allow values read from per-neuron variables to be reduced into variables that are shared across neurons. For example, a model like this could be used to calculate the maximum value of a state variable in a population of neurons:

reduce_model = pygenn.create_custom_update_model(
    "neuron_reduce",
    vars=[("reduction", "scalar", pygenn.CustomUpdateVarAccess.REDUCE_NEURON_MAX)],
    var_refs=[("source", "scalar", pygenn.VarAccessMode.READ_ONLY)],
    update_code=
        """
        reduction = source;
        """)

Again, like batch reductions, neuron reductions can also be performed into variable references with the VarAccessMode.REDUCE_SUM or VarAccessMode.REDUCE_MAX access modes.

pygenn.create_egp_ref(*args, **kwargs)

Overloaded function.

  1. create_egp_ref(arg0: GeNN::NeuronGroup, arg1: str) -> GeNN::Models::EGPReference

Creates a reference to a neuron group extra global parameter.

  2. create_egp_ref(arg0: GeNN::CurrentSource, arg1: str) -> GeNN::Models::EGPReference

Creates a reference to a current source extra global parameter.

  3. create_egp_ref(arg0: GeNN::CustomUpdate, arg1: str) -> GeNN::Models::EGPReference

Creates a reference to a custom update extra global parameter.

  4. create_egp_ref(arg0: GeNN::CustomUpdateWU, arg1: str) -> GeNN::Models::EGPReference

Creates a reference to a custom weight update extra global parameter.

  5. create_egp_ref(arg0: GeNN::CustomConnectivityUpdate, arg1: str) -> GeNN::Models::EGPReference

Creates a reference to a custom connectivity update extra global parameter.

pygenn.create_neuron_model(class_name, params=None, vars=None, derived_params=None, sim_code=None, threshold_condition_code=None, reset_code=None, extra_global_params=None, additional_input_vars=None, auto_refractory_required=False)

Creates a new neuron model. Within all of the code strings, the variables, parameters, derived parameters, additional input variables and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt()).

  • Isyn which represents the total incoming synaptic input.

  • id which represents a neuron's index within a population (starting from zero).

  • num_neurons which represents the number of neurons in the population.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of model variables

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • sim_code (Optional[str]) – string containing the simulation code statements to be run every timestep

  • threshold_condition_code (Optional[str]) – string containing a threshold condition expression to test whether a spike should be emitted

  • reset_code (Optional[str]) – string containing the reset code statements to run after emitting a spike

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

  • additional_input_vars – list of tuples with names and types as strings and initial values of additional local input variables

  • auto_refractory_required (bool) – does this model require auto-refractory logic to be generated?

For example, we can define a leaky integrator \(\tau\frac{dV}{dt}= -V + I_{{\rm syn}}\) solved using Euler’s method:

leaky_integrator_model = pygenn.create_neuron_model(
    "leaky_integrator",

    sim_code=
        """
        V += (-V + Isyn) * (dt / tau);
        """,
    threshold_condition_code="V >= 1.0",
    reset_code=
        """
        V = 0.0;
        """,

    params=["tau"],
    vars=[("V", "scalar", pygenn.VarAccess.READ_WRITE)])
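Before building this into a GeNN model, the update logic can be sanity-checked with an equivalent plain-Python simulation (an illustrative sketch, not part of the pygenn API):

```python
def simulate_leaky_integrator(i_syn, tau, dt, steps):
    """Plain-Python equivalent of the leaky_integrator model above."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v += (-v + i_syn) * (dt / tau)   # sim_code
        if v >= 1.0:                     # threshold_condition_code
            spikes.append(t)
            v = 0.0                      # reset_code
    return v, spikes

# With constant input of 2.0 and tau = 10, V charges towards 2.0 and
# crosses threshold every 7 timesteps
v, spikes = simulate_leaky_integrator(i_syn=2.0, tau=10.0, dt=1.0, steps=50)
assert spikes == [6, 13, 20, 27, 34, 41, 48]
```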

Normally, neuron models receive the linear sum of the inputs coming from all of their synaptic inputs through the Isyn variable. However neuron models can define additional input variables, allowing input from different synaptic inputs to be combined non-linearly. For example, if we wanted our leaky integrator to operate on the product of two input currents, we could modify our model as follows:

...
additional_input_vars=[("Isyn2", "scalar", 1.0)],
sim_code=
    """
    const scalar input = Isyn * Isyn2;
    V += (-V + input) * (dt / tau);
    """,
...
pygenn.create_post_var_ref(arg0: GeNN::CustomConnectivityUpdate, arg1: str) GeNN::Models::VarReference

Creates a reference to a postsynaptic custom connectivity update variable.

pygenn.create_postsynaptic_model(class_name, params=None, vars=None, neuron_var_refs=None, derived_params=None, sim_code=None, extra_global_params=None)

Creates a new postsynaptic update model. Within all of the code strings, the variables, parameters, derived parameters and extra global parameters defined in this model can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables:

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • id which represents a neuron's index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

  • inSyn which contains the summed input received from the weight update model through addToPost() or addToPostDelay()

Finally, the function injectCurrent(x) can be used to inject a current x into the postsynaptic neuron. The variable it goes into can be configured using SynapseGroup.post_target_var; by default it targets Isyn.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of model variables

  • neuron_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • sim_code (Optional[str]) – string containing the simulation code statements to be run every timestep

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters
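As an illustration of the kind of sim_code a postsynaptic model typically contains, the per-timestep update of a simple exponential-decay synapse can be sketched in plain Python (a simplified sketch; GeNN's built-in exponential model also scales the injected current by a derived parameter):

```python
import math

def exp_decay_psm_step(in_syn, tau, dt):
    """One timestep of a simplified exponential-decay postsynaptic
    update: inject the accumulated input, then decay inSyn."""
    injected = in_syn                 # injectCurrent(inSyn);
    in_syn *= math.exp(-dt / tau)     # inSyn *= expDecay; (derived parameter)
    return injected, in_syn

injected, in_syn = exp_decay_psm_step(in_syn=1.0, tau=5.0, dt=1.0)
assert injected == 1.0
assert abs(in_syn - math.exp(-0.2)) < 1e-12
```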

pygenn.create_pre_var_ref(arg0: GeNN::CustomConnectivityUpdate, arg1: str) GeNN::Models::VarReference

Creates a reference to a presynaptic custom connectivity update variable.

pygenn.create_psm_egp_ref(arg0: GeNN::SynapseGroup, arg1: str) GeNN::Models::EGPReference

Creates a reference to a postsynaptic model extra global parameter.

pygenn.create_psm_var_ref(arg0: GeNN::SynapseGroup, arg1: str) GeNN::Models::VarReference

Creates a reference to a postsynaptic model variable.

pygenn.create_sparse_connect_init_snippet(class_name, params=None, derived_params=None, row_build_code=None, col_build_code=None, calc_max_row_len_func=None, calc_max_col_len_func=None, calc_kernel_size_func=None, extra_global_params=None)

Creates a new sparse connectivity initialisation snippet. Within the code strings, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

  • thread which, when procedural connectivity is used with multiple threads per presynaptic neuron, represents the index of the current thread

and, in row_build_code:

  • id_pre represents the index of the presynaptic neuron (starting from zero)

  • id_post_begin which, when procedural connectivity is used with multiple threads per presynaptic neuron, represents the index of the first postsynaptic neuron to connect.

and, in col_build_code:

  • id_post which represents the index of the postsynaptic neuron (starting from zero).

Finally, the function addSynapse(x) can be used to add a new synapse to the connectivity where, in row_build_code, x is the index of the postsynaptic neuron to connect id_pre to and, in col_build_code, x is the index of the presynaptic neuron to connect to id_post.

Parameters:
  • class_name (str) – name of the snippet (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • row_build_code (Optional[str]) – code for building connectivity row by row

  • col_build_code (Optional[str]) – code for building connectivity column by column

  • calc_max_row_len_func (Optional[Callable]) – used to calculate the maximum row length of the synaptic matrix created using this snippet

  • calc_max_col_len_func (Optional[Callable]) – used to calculate the maximum column length of the synaptic matrix created using this snippet

  • calc_kernel_size_func (Optional[Callable]) – used to calculate the size of the kernel if snippet requires one

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of snippet extra global parameters

For example, if we wanted to define a snippet to initialise connectivity where each presynaptic neuron targets a fixed number of postsynaptic neurons, sampled uniformly with replacement, we could define a snippet as follows:

from scipy.stats import binom

fixed_number_post = pygenn.create_sparse_connect_init_snippet(
    "fixed_number_post",
    params=[("num", "unsigned int")],
    row_build_code=
        """
        for(unsigned int c = num; c != 0; c--) {
            const unsigned int idPost = gennrand() % num_post;
            addSynapse(idPost + id_post_begin);
        }
        """,
    calc_max_row_len_func=lambda num_pre, num_post, pars: pars["num"],
    calc_max_col_len_func=lambda num_pre, num_post, pars: binom.ppf(0.9999 ** (1.0 / num_post),
                                                                    pars["num"] * num_pre,
                                                                    1.0 / num_post))

For full details of how maximum column lengths are calculated, you should refer to our paper [Knight2018] but, in short, the number of connections that end up in a column is binomially distributed with \(n=\text{num}\) and \(p=\frac{1}{\text{num_post}}\). Therefore, we can calculate the maximum column length from the inverse cumulative distribution function (CDF) of the binomial distribution, evaluated at the quantile where there is a 0.9999 chance of the bound holding simultaneously when drawing synapses for all num_post columns.
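If SciPy is unavailable, the same quantile can be computed with a stdlib-only inverse CDF for small n (binom_ppf is a hypothetical helper, not part of pygenn):

```python
from math import comb

def binom_ppf(q, n, p):
    """Smallest k with P(X <= k) >= q for X ~ Binomial(n, p).
    Stdlib-only inverse CDF; fine for small n, not optimised."""
    cdf = 0.0
    for k in range(n + 1):
        cdf += comb(n, k) * (p ** k) * ((1.0 - p) ** (n - k))
        if cdf >= q:
            return k
    return n

# Bound holding simultaneously across num_post columns with probability
# 0.9999; each column length is Binomial(num * num_pre, 1 / num_post)
num, num_pre, num_post = 10, 100, 100
max_col = binom_ppf(0.9999 ** (1.0 / num_post), num * num_pre, 1.0 / num_post)
assert num < max_col < 5 * num
```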

pygenn.create_toeplitz_connect_init_snippet(class_name, params=None, derived_params=None, diagonal_build_code=None, calc_max_row_len_func=None, calc_kernel_size_func=None, extra_global_params=None)

Creates a new Toeplitz connectivity initialisation snippet. Each diagonal of Toeplitz connectivity is initialised independently by running the snippet of code specified using the diagonal_build_code. Within the code strings, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

  • id_diag which represents the index of the diagonal currently being initialised

Additionally, the function addSynapse(id_post, id_kern_0, id_kern_1, ..., id_kern_N) can be used to generate a new synapse to postsynaptic neuron id_post using N-dimensional kernel variables indexed with id_kern_0, id_kern_1, ..., id_kern_N. Finally the for_each_synapse{} construct can be used to loop through incoming spikes and, inside this, id_pre will represent the index of the spiking presynaptic neuron.

Parameters:
  • class_name (str) – name of the snippet (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • diagonal_build_code (Optional[str]) – code for building connectivity row by row

  • calc_max_row_len_func (Optional[Callable]) – used to calculate the maximum row length of synaptic matrix created using this snippet

  • calc_kernel_size_func (Optional[Callable]) – used to calculate the size of the kernel

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of snippet extra global parameters

For example, the following Toeplitz connectivity initialisation snippet could be used to convolve a \(\text{kern_size} \times \text{kern_size}\) square kernel with the spikes from a population of \(\text{pop_dim} \times \text{pop_dim}\) neurons.

simple_conv2d_model = pygenn.create_toeplitz_connect_init_snippet(
    "simple_conv2d",
    params=[("kern_size", "int"), ("pop_dim", "int")],
    diagonal_build_code=
        """
        const int kernRow = id_diag / kern_size;
        const int kernCol = id_diag % kern_size;

        for_each_synapse {
            const int preRow = id_pre / pop_dim;
            const int preCol = id_pre % pop_dim;
            // If we haven't gone off edge of output
            const int postRow = preRow + kernRow - 1;
            const int postCol = preCol + kernCol - 1;
            if(postRow >= 0 && postCol >= 0 && postRow < pop_dim && postCol < pop_dim) {
                // Calculate postsynaptic index
                const int postInd = (postRow * pop_dim) + postCol;
                addSynapse(postInd,  kernRow, kernCol);
            }
        }
        """,

    calc_max_row_len_func=lambda num_pre, num_post, pars: pars["kern_size"] * pars["kern_size"],
    calc_kernel_size_func=lambda pars: [pars["kern_size"], pars["kern_size"]])
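The index arithmetic in the diagonal build code can be checked in plain Python (conv_target is a hypothetical helper mirroring the code above; the - 1 offset assumes a 3×3 kernel):

```python
def conv_target(id_pre, kern_row, kern_col, pop_dim):
    """Reproduce the postsynaptic index computation from the diagonal
    build code; returns None where the synapse falls off the grid."""
    pre_row, pre_col = divmod(id_pre, pop_dim)
    post_row = pre_row + kern_row - 1   # - 1 offset assumes a 3x3 kernel
    post_col = pre_col + kern_col - 1
    if 0 <= post_row < pop_dim and 0 <= post_col < pop_dim:
        return (post_row * pop_dim) + post_col
    return None

# The centre element of a 3x3 kernel maps each neuron onto itself
assert conv_target(5, 1, 1, 8) == 5
# The top-left element culls synapses from the first row/column
assert conv_target(0, 0, 0, 8) is None
```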

For full details of how convolution-like connectivity is expressed in this way, please see our paper [Turner2022].

pygenn.create_var_init_snippet(class_name, params=None, derived_params=None, var_init_code=None, extra_global_params=None)

Creates a new variable initialisation snippet. Within the var_init_code, the parameters, derived parameters and extra global parameters defined in this snippet can all be referred to by name. Additionally, the code may refer to the following built-in read-only variables

  • dt which represents the simulation time step (as specified via GeNNModel.dt())

And, if the snippet is used to initialise a per-neuron variable:

  • id which represents a neuron's index within a population (starting from zero)

  • num_neurons which represents the number of neurons in the population

or, a per-synapse variable:

  • id_pre which represents the index of the presynaptic neuron (starting from zero)

  • id_post which represents the index of the postsynaptic neuron (starting from zero)

  • num_pre which represents the number of presynaptic neurons

  • num_post which represents the number of postsynaptic neurons

Finally, the variable being initialised is represented by the write-only value variable.

Parameters:
  • class_name (str) – name of the new model (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • var_init_code (Optional[str]) – string containing the code statements required to initialise the variable

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

For example, if we wanted to define a snippet to initialise variables by sampling from a normal distribution, redrawing if the value is negative (which could be useful to ensure delays remain causal):

normal_positive_model = pygenn.create_var_init_snippet(
    'normal_positive',
    params=['mean', 'sd'],
    var_init_code=
        """
        scalar normal;
        do {
            normal = mean + (gennrand_normal() * sd);
        } while (normal < 0.0);
        value = normal;
        """)
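The same rejection-sampling logic can be prototyped in plain Python, with random.gauss standing in for gennrand_normal (an illustrative sketch, not part of the pygenn API):

```python
import random

def normal_positive(mean, sd):
    """Sample mean + N(0, 1) * sd, redrawing until the value is
    non-negative (mirrors the var_init_code above)."""
    while True:
        value = mean + (random.gauss(0.0, 1.0) * sd)
        if value >= 0.0:
            return value

samples = [normal_positive(1.0, 2.0) for _ in range(1000)]
assert all(s >= 0.0 for s in samples)
```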
pygenn.create_var_ref(*args, **kwargs)

Overloaded function.

  1. create_var_ref(arg0: GeNN::NeuronGroup, arg1: str) -> GeNN::Models::VarReference

Creates a reference to a neuron group variable.

  1. create_var_ref(arg0: GeNN::CurrentSource, arg1: str) -> GeNN::Models::VarReference

Creates a reference to a current source variable.

  3. create_var_ref(arg0: GeNN::CustomUpdate, arg1: str) -> GeNN::Models::VarReference

Creates a reference to a custom update variable.

pygenn.create_weight_update_model(class_name, params=None, vars=None, pre_vars=None, post_vars=None, pre_neuron_var_refs=None, post_neuron_var_refs=None, derived_params=None, pre_spike_syn_code=None, pre_event_syn_code=None, post_event_syn_code=None, post_spike_syn_code=None, synapse_dynamics_code=None, pre_event_threshold_condition_code=None, post_event_threshold_condition_code=None, pre_spike_code=None, post_spike_code=None, pre_dynamics_code=None, post_dynamics_code=None, extra_global_params=None)

Creates a new weight update model. GeNN operates on the assumption that the postsynaptic output of the synapses is added linearly at the postsynaptic neuron. Within all of the synaptic code strings (pre_spike_syn_code, pre_event_syn_code, post_event_syn_code, post_spike_syn_code and synapse_dynamics_code) these currents are delivered using the addToPost(inc) function. For example,

pre_spike_syn_code="addToPost(inc);"

where inc is the amount to add to the postsynapse model’s inSyn variable for each pre-synaptic spike. Dendritic delays can also be inserted between the synapse and the postsynaptic neuron by using the addToPostDelay(inc, delay) function. For example,

pre_spike_syn_code="addToPostDelay(inc, delay);"

where, once again, inc is the amount to add to the postsynaptic neuron’s inSyn variable and delay is the length of the dendritic delay in timesteps. By implementing delay as a weight update model variable, heterogeneous synaptic delays can be implemented. For an example, see WeightUpdateModels::StaticPulseDendriticDelay for a simple synapse update model with heterogeneous dendritic delays.

When using dendritic delays, the maximum dendritic delay for a synapse population must be specified via the SynapseGroup.max_dendritic_delay_timesteps property. One can also define synaptic effects that occur in the reverse direction, i.e. terms that are added to a target variable in the _presynaptic_ neuron using the addToPre(inc) function. For example,

pre_spike_syn_code="addToPre(inc * V_post);"

would add inc * V_post to the presynaptic neuron for each of its outgoing synapses. Similar to postsynaptic models, by default these inputs are accumulated in Isyn in the presynaptic neuron but they can also be directed to additional input variables by setting the SynapseGroup.pre_target_var property. Unlike normal forward synaptic actions, reverse synaptic actions with addToPre(inc) are not modulated through a postsynaptic model but added directly into the indicated presynaptic target input variable.

Parameters:
  • class_name (str) – name of the new class (only for debugging)

  • params (Optional[Sequence[Union[str, Tuple[str, Union[str, ResolvedType]]]]]) – name and optional types of model parameters

  • vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-synapse model variables

  • pre_vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-presynaptic neuron model variables

  • post_vars (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccess]]]]) – names, types and optional variable access modifiers of per-postsynaptic neuron model variables

  • pre_neuron_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to presynaptic neuron variables

  • post_neuron_var_refs (Optional[Sequence[Union[Tuple[str, Union[str, ResolvedType]], Tuple[str, Union[str, ResolvedType], VarAccessMode]]]]) – names, types and optional variable access of references to be assigned to postsynaptic neuron variables

  • derived_params (Optional[Sequence[Tuple[str, Callable, Union[str, ResolvedType]]]]) – names, types and callables to calculate derived parameter values from params

  • pre_spike_syn_code (Optional[str]) – string with the presynaptic spike code

  • pre_event_syn_code (Optional[str]) – string with the presynaptic event code

  • post_event_syn_code (Optional[str]) – string with the postsynaptic event code

  • post_spike_syn_code (Optional[str]) – string with the postsynaptic spike code

  • synapse_dynamics_code (Optional[str]) – string with the synapse dynamics code

  • pre_event_threshold_condition_code (Optional[str]) – string with the presynaptic event threshold condition code

  • post_event_threshold_condition_code (Optional[str]) – string with the postsynaptic event threshold condition code

  • pre_spike_code (Optional[str]) – string with the code run once per spiking presynaptic neuron. Only presynaptic variables and variable references can be referenced from this code.

  • post_spike_code (Optional[str]) – string with the code run once per spiking postsynaptic neuron

  • pre_dynamics_code (Optional[str]) – string with the code run every timestep on presynaptic neuron. Only presynaptic variables and variable references can be referenced from this code.

  • post_dynamics_code (Optional[str]) – string with the code run every timestep on postsynaptic neuron. Only postsynaptic variables and variable references can be referenced from this code.

  • extra_global_params (Optional[Sequence[Tuple[str, Union[str, ResolvedType]]]]) – names and types of model extra global parameters

For example, we can define a simple additive STDP rule with nearest-neighbour spike pairing and the following time-dependence (equivalent to weight_update_models.STDP()):

\[\begin{split}\Delta w_{ij} & = \begin{cases} A_{+}\exp\left(-\frac{\Delta t}{\tau_{+}}\right) & if\, \Delta t>0\\ A_{-}\exp\left(\frac{\Delta t}{\tau_{-}}\right) & if\, \Delta t\leq0 \end{cases}\end{split}\]

in a fully event-driven manner as follows:

stdp_additive_model = pygenn.create_weight_update_model(
    "stdp_additive",
    params=["tauPlus", "tauMinus", "aPlus", "aMinus", "wMin", "wMax"],
    vars=[("g", "scalar")],

    pre_spike_syn_code=
        """
        addToPost(g);
        const scalar dt = t - st_post;
        if (dt > 0) {
            const scalar timing = exp(-dt / tauMinus);
            const scalar newWeight = g - (aMinus * timing);
            g = fmax(wMin, fmin(wMax, newWeight));
        }
        """,
    post_spike_syn_code=
        """
        const scalar dt = t - st_pre;
        if (dt > 0) {
            const scalar timing = exp(-dt / tauPlus);
            const scalar newWeight = g + (aPlus * timing);
            g = fmax(wMin, fmin(wMax, newWeight));
        }
        """)

The memory required for synapse variables and the computational cost of updating them grow as \(O(N^2)\) with the number of neurons. Therefore, where possible, it is a good idea to implement synapse variables on a per-neuron rather than per-synapse basis. The pre_vars and post_vars keyword arguments are used to define any pre or postsynaptic state variables. For example, using pre and postsynaptic variables, our event-driven STDP rule can be extended to all-to-all spike pairing using pre and postsynaptic trace variables [Morrison2008]:

stdp_additive_2_model = pygenn.create_weight_update_model(
    "stdp_additive_2",
    params=["tauPlus", "tauMinus", "aPlus", "aMinus", "wMin", "wMax"],
    derived_params=[
        ("tauPlusDecay", lambda pars, dt: np.exp(-dt / pars["tauPlus"]), "scalar"),
        ("tauMinusDecay", lambda pars, dt: np.exp(-dt / pars["tauMinus"]), "scalar")],
    vars=[("g", "scalar")],
    pre_vars=[("preTrace", "scalar")],
    post_vars=[("postTrace", "scalar")],

    pre_spike_syn_code=
        """
        addToPost(g);
        const scalar dt = t - st_post;
        if(dt > 0) {
            const scalar newWeight = g - (aMinus * postTrace);
            g = fmin(wMax, fmax(wMin, newWeight));
        }
        """,
    post_spike_syn_code=
        """
        const scalar dt = t - st_pre;
        if(dt > 0) {
            const scalar newWeight = g + (aPlus * preTrace);
            g = fmin(wMax, fmax(wMin, newWeight));
        }
        """,

    pre_spike_code="preTrace += 1.0;",
    pre_dynamics_code="preTrace *= tauPlusDecay;",
    post_spike_code="postTrace += 1.0;",
    post_dynamics_code="postTrace *= tauMinusDecay;")

Unlike the event-driven updates previously described, synapse dynamics code is run for each synapse and each timestep, i.e. it is time-driven. This can be used where synapses have internal variables and dynamics that are described in continuous time, e.g. by ODEs. However, using this mechanism is typically computationally very costly because of the large number of synapses in a typical network. By using the addToPost() and addToPostDelay() functions discussed in the context of pre_spike_syn_code, the synapse dynamics can also be used to implement continuous synapses for rate-based models. For example, a continuous synapse which multiplies a presynaptic neuron variable by the weight could be added to a weight update model definition as follows:

pre_neuron_var_refs=[("V_pre", "scalar")],
synapse_dynamics_code="addToPost(g * V_pre);",

As well as time-driven synapse dynamics and spike event-driven updates, GeNN weight update models also support “spike-like events”. These can be triggered by a threshold condition evaluated on the pre or postsynaptic neuron. This typically involves pre or postsynaptic weight update model variables or variable references respectively.

For example, to trigger a presynaptic spike-like event when the presynaptic neuron’s voltage is greater than -0.02, the following could be added to a weight update model definition:

pre_neuron_var_refs=[("V_pre", "scalar")],
pre_event_threshold_condition_code="V_pre > -0.02"

Whenever this expression evaluates to true, the event code in pre_event_syn_code will be executed.

pygenn.create_wu_egp_ref(arg0: GeNN::SynapseGroup, arg1: str) GeNN::Models::EGPReference

Creates a reference to a weight update model extra global parameter.

pygenn.create_wu_post_var_ref(arg0: GeNN::SynapseGroup, arg1: str) GeNN::Models::VarReference

Creates a reference to a weight update model postsynaptic variable.

pygenn.create_wu_pre_var_ref(arg0: GeNN::SynapseGroup, arg1: str) GeNN::Models::VarReference

Creates a reference to a weight update model presynaptic variable.

pygenn.create_wu_var_ref(*args, **kwargs)

Overloaded function.

  1. create_wu_var_ref(sg: GeNN::SynapseGroup, var_name: str, transpose_sg: GeNN::SynapseGroup = None, transpose_var_name: str = '') -> GeNN::Models::WUVarReference

Creates a reference to a weight update model variable.

  2. create_wu_var_ref(arg0: GeNN::CustomUpdateWU, arg1: str) -> GeNN::Models::WUVarReference

Creates a reference to a custom weight update variable.

  3. create_wu_var_ref(arg0: GeNN::CustomConnectivityUpdate, arg1: str) -> GeNN::Models::WUVarReference

Creates a reference to a custom connectivity update variable.

pygenn.get_var_access_dim(*args, **kwargs)

Overloaded function.

  1. get_var_access_dim(arg0: pygenn._genn.VarAccess) -> pygenn._genn.VarAccessDim

Extract variable dimensions from its access enumeration

  2. get_var_access_dim(arg0: pygenn._genn.CustomUpdateVarAccess, arg1: pygenn._genn.VarAccessDim) -> pygenn._genn.VarAccessDim

Extract custom update variable dimensions from its access enumeration and dimensions of the custom update itself

pygenn.init_postsynaptic(snippet, params={}, vars={}, var_refs={})

Initialises a postsynaptic model with parameter values, variable initialisers and variable references

Parameters:
  • snippet (Union[PostsynapticModelBase, str]) – postsynaptic model either as a string referencing a built-in model (see postsynaptic_models) or an instance of PostsynapticModelBase (for example returned by create_postsynaptic_model())

  • params (Dict[str, Union[int, float]]) – parameter values for the postsynaptic model (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial synaptic variable values or initialisers for the postsynaptic model (see Variables)

  • var_refs (Dict[str, VarReference]) – references to postsynaptic neuron variables, typically created using create_var_ref() (see Variables references)

For example, the built-in conductance model with exponential current shaping could be initialised as follows:

postsynaptic_init = init_postsynaptic("ExpCond", {"tau": 1.0, "E": -80.0},
                                      var_refs={"V": create_var_ref(pop1, "V")})

where pop1 is a reference to the postsynaptic neuron population (as returned by GeNNModel.add_neuron_population())

pygenn.init_sparse_connectivity(snippet, params={})

Initialises a sparse connectivity initialisation snippet with parameter values

Parameters:
  • snippet (Union[InitSparseConnectivitySnippetBase, str]) – sparse connectivity init snippet, either as a string referencing a built-in snippet (see init_sparse_connectivity_snippets) or an instance of InitSparseConnectivitySnippetBase (for example returned by create_sparse_connect_init_snippet())

  • params (Dict[str, Union[int, float]]) – parameter values for the sparse connectivity init snippet (see Parameters)

For example, the built-in “FixedProbability” snippet could be used to generate connectivity where each pair of pre and postsynaptic neurons is connected with a probability of 0.1:

init = init_sparse_connectivity("FixedProbability", {"prob": 0.1})

pygenn.init_toeplitz_connectivity(init_toeplitz_connect_snippet, params={})

Initialises a toeplitz connectivity initialisation snippet with parameter values

Parameters:
  • init_toeplitz_connect_snippet (Union[InitToeplitzConnectivitySnippetBase, str]) – toeplitz connectivity init snippet, either as a string referencing a built-in snippet (see init_toeplitz_connectivity_snippets) or an instance of InitToeplitzConnectivitySnippetBase

  • params (Dict[str, Union[int, float]]) – parameter values for the toeplitz connectivity init snippet (see Parameters)

For example, the built-in “Conv2D” snippet could be used to generate 2D convolutional connectivity with a \(3 \times 3\) kernel, a \(64 \times 64 \times 1\) input and a \(62 \times 62 \times 1\) output:

params = {"conv_kh": 3, "conv_kw": 3,
          "conv_ih": 64, "conv_iw": 64, "conv_ic": 1,
          "conv_oh": 62, "conv_ow": 62, "conv_oc": 1}

init = init_toeplitz_connectivity("Conv2D", params))

Note

This should be used to connect a presynaptic neuron population with \(64 \times 64 \times 1 = 4096\) neurons to a postsynaptic neuron population with \(62 \times 62 \times 1 = 3844\) neurons.
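The dimension bookkeeping in this note can be checked with a few lines of plain Python (a sketch, independent of GeNN):

```python
def conv2d_output_size(ih, iw, kh, kw):
    """Output (height, width) of a 'valid' 2D convolution: stride 1, no padding."""
    return ih - kh + 1, iw - kw + 1

# a 64x64 input with a 3x3 kernel gives a 62x62 output
assert conv2d_output_size(64, 64, 3, 3) == (62, 62)
# so the populations must have 64*64*1 = 4096 and 62*62*1 = 3844 neurons
assert 64 * 64 * 1 == 4096 and 62 * 62 * 1 == 3844
```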

pygenn.init_var(snippet, params={})

Initialises a variable initialisation snippet with parameter values

Parameters:
  • snippet (Union[InitVarSnippetBase, str]) – variable init snippet, either as a string referencing a built-in snippet (see init_var_snippets) or an instance of InitVarSnippetBase (for example returned by create_var_init_snippet())

  • params (Dict[str, Union[int, float]]) – parameter values for the variable init snippet (see Parameters)

For example, the built-in model “Normal” could be used to initialise a variable by sampling from the normal distribution with a mean of 0 and a standard deviation of 1:

init = init_var("Normal", {"mean": 0.0, "sd": 1.0})

pygenn.init_weight_update(snippet, params={}, vars={}, pre_vars={}, post_vars={}, pre_var_refs={}, post_var_refs={})

Initialises a weight update model with parameter values, variable initialisers and variable references.

Parameters:
  • snippet – weight update model either as a string referencing a built-in model (see weight_update_models) or an instance of WeightUpdateModelBase (for example returned by create_weight_update_model())

  • params (Dict[str, Union[int, float]]) – parameter values (see Parameters)

  • vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial synaptic variable values or initialisers (see Variables)

  • pre_vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial presynaptic variable values or initialisers (see Variables)

  • post_vars (Dict[str, Union[VarInit, int, float, ndarray, Sequence]]) – initial postsynaptic variable values or initialisers (see Variables)

  • pre_var_refs (Dict[str, VarReference]) – references to presynaptic neuron variables, typically created using create_var_ref() (see Variables references)

  • post_var_refs (Dict[str, VarReference]) – references to postsynaptic neuron variables, typically created using create_var_ref() (see Variables references)

For example, the built-in static pulse model with constant weights could be initialised as follows:

weight_init = init_weight_update("StaticPulseConstantWeight", {"g": 1.0})

Submodules

pygenn.cuda_backend module

class pygenn.cuda_backend.BlockSizeSelect(self: pygenn.cuda_backend.BlockSizeSelect, value: int)

Bases: pybind11_object

Methods for selecting CUDA kernel block size

Members:

OCCUPANCY : Pick optimal blocksize for each kernel based on occupancy

MANUAL : Use block sizes specified by user

MANUAL = <BlockSizeSelect.MANUAL: 1>
OCCUPANCY = <BlockSizeSelect.OCCUPANCY: 0>
property name
property value
class pygenn.cuda_backend.DeviceSelect(self: pygenn.cuda_backend.DeviceSelect, value: int)

Bases: pybind11_object

Methods for selecting CUDA device

Members:

OPTIMAL : Pick optimal device based on how well kernels can be simultaneously simulated and occupancy

MOST_MEMORY : Pick device with most global memory

MANUAL : Use device specified by user

MANUAL = <DeviceSelect.MANUAL: 2>
MOST_MEMORY = <DeviceSelect.MOST_MEMORY: 1>
OPTIMAL = <DeviceSelect.OPTIMAL: 0>
property name
property value
class pygenn.cuda_backend.Preferences(self: pygenn.cuda_backend.Preferences)

Bases: PreferencesBase

Preferences for CUDA backend

property block_size_select_method

How to select CUDA blocksize

property constant_cache_overhead

How much constant cache is already used and therefore can’t be used by GeNN? Each of the modules which include CUDA headers (neuronUpdate, synapseUpdate, customUpdate, init and runner) takes 72 bytes of constant memory for a lookup table used by cuRAND. If your application requires additional constant cache, increase this

property device_select_method

How to select GPU device

property enable_nccl_reductions

Generate corresponding NCCL batch reductions

property generate_line_info

Should line info be included in resultant executable for debugging/profiling purposes?

property manual_block_sizes

If block size select method is set to BlockSizeSelect::MANUAL, block size to use for each kernel

property manual_device_id

If device select method is set to DeviceSelect::MANUAL, id of device to use

property show_ptx_info

Should PTX assembler information be displayed for each CUDA kernel during compilation?

pygenn.current_source_models module

pygenn.current_source_models.DC() pygenn._genn.CurrentSourceModelBase

DC source. It has a single parameter:

  • amp - amplitude of the current [nA]

pygenn.current_source_models.GaussianNoise() pygenn._genn.CurrentSourceModelBase

Noisy current source with noise drawn from normal distribution. It has 2 parameters:

  • mean - mean of the normal distribution [nA]

  • sd - standard deviation of the normal distribution [nA]

pygenn.current_source_models.PoissonExp() pygenn._genn.CurrentSourceModelBase

Current source for injecting a current equivalent to a population of Poisson spike sources, one-to-one connected with exponential synapses. It has 3 parameters:

  • weight - synaptic weight of the Poisson spikes [nA]

  • tauSyn - decay time constant [ms]

  • rate - mean firing rate [Hz]

pygenn.custom_connectivity_update_models module

pygenn.custom_update_models module

pygenn.custom_update_models.Transpose() pygenn._genn.CustomUpdateModelBase

Minimal custom update model for calculating transpose

pygenn.genn_groups module

class pygenn.genn_groups.CurrentSourceMixin

Bases: GroupMixin

Mixin added to current source objects

class pygenn.genn_groups.CustomConnectivityUpdateMixin

Bases: GroupMixin

Mixin added to custom connectivity update objects

class pygenn.genn_groups.CustomUpdateMixin

Bases: GroupMixin

Mixin added to custom update objects

class pygenn.genn_groups.CustomUpdateWUMixin

Bases: GroupMixin

Mixin added to custom update WU objects

class pygenn.genn_groups.GroupMixin

Bases: object

This is the base class for the mixins added to all types of groups. It provides basic functionality for handling variables, extra global parameters and dynamic parameters

set_dynamic_param_value(name, value)

Set the value of a dynamic parameter at runtime

Parameters:
  • name (str) – name of the parameter

  • value (Union[float, int]) – numeric value to assign to the parameter

class pygenn.genn_groups.NeuronGroupMixin

Bases: GroupMixin

Mixin added to neuron group objects. It provides additional functionality for recording spikes

property spike_recording_data: List[Tuple[ndarray, ndarray]]

Spike recording data associated with this neuron group.

Before accessing this property, GeNNModel.pull_recording_buffers_from_device() must be called to copy spike recording data from device

class pygenn.genn_groups.SynapseGroupMixin

Bases: GroupMixin

Mixin added to synapse group objects. It provides additional functionality for recording spike events and handling connectivity

get_sparse_post_inds()

Get postsynaptic indices of synapse group connections

Returns:

postsynaptic indices

Return type:

ndarray

get_sparse_pre_inds()

Get presynaptic indices of synapse group connections

Returns:

presynaptic indices

Return type:

ndarray

property post_spike_event_recording_data: List[Tuple[ndarray, ndarray]]

Postsynaptic spike-event recording data associated with this synapse group.

Before accessing this property, GeNNModel.pull_recording_buffers_from_device() must be called to copy spike recording data from device

property pre_spike_event_recording_data: List[Tuple[ndarray, ndarray]]

Presynaptic spike-event recording data associated with this synapse group.

Before accessing this property, GeNNModel.pull_recording_buffers_from_device() must be called to copy spike recording data from device

pull_connectivity_from_device()

Pull connectivity from device

push_connectivity_to_device()

Push connectivity to device

set_sparse_connections(pre_indices, post_indices)

Manually provide indices of sparse synapses between two groups of neurons

Parameters:
  • pre_indices (Union[Sequence[int], ndarray]) – presynaptic indices

  • post_indices (Union[Sequence[int], ndarray]) – postsynaptic indices

property synapse_group
property weight_update_var_size: int

Size of each weight update variable

pygenn.init_sparse_connectivity_snippets module

pygenn.init_sparse_connectivity_snippets.Conv2D() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises 2D convolutional connectivity. Row build state variables are used to convert the presynaptic neuron index to rows, columns and channels and, from these, to calculate the range of postsynaptic rows, columns and channels within which connections will be made. This sparse connectivity snippet does not support multiple threads per neuron. This snippet takes 12 parameters:

  • conv_kh - height of 2D convolution kernel.

  • conv_kw - width of 2D convolution kernel.

  • conv_sh - height of convolution stride

  • conv_sw - width of convolution stride

  • conv_padh - height of padding around input

  • conv_padw - width of padding around input

  • conv_ih - height of input to this convolution

  • conv_iw - width of input to this convolution

  • conv_ic - number of input channels to this convolution

  • conv_oh - height of output from this convolution

  • conv_ow - width of output from this convolution

  • conv_oc - number of output channels from this convolution

Note

conv_ih * conv_iw * conv_ic should equal the number of neurons in the presynaptic neuron population and conv_oh * conv_ow * conv_oc should equal the number of neurons in the postsynaptic neuron population.
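When stride (conv_sh/conv_sw) and padding (conv_padh/conv_padw) are used, the expected output dimensions follow standard convolution arithmetic. A sketch for checking parameter consistency before building a model (the helper name is illustrative):

```python
def conv2d_output_dim(i, k, stride=1, pad=0):
    """Output size along one dimension of a 2D convolution.

    i: input size, k: kernel size, stride: step between kernel positions,
    pad: zero-padding added to each side of the input.
    """
    return (i + 2 * pad - k) // stride + 1

# stride 1, no padding: a 64-wide input with a 3-wide kernel gives 62
assert conv2d_output_dim(64, 3) == 62
# 'same'-style padding of 1 keeps the size unchanged
assert conv2d_output_dim(64, 3, pad=1) == 64
```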

pygenn.init_sparse_connectivity_snippets.FixedNumberPostWithReplacement() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity with a fixed number of random synapses per row. The postsynaptic targets of the synapses can be initialised in parallel by sampling from the discrete uniform distribution. However, to sample connections in ascending order, we sample from the 1st order statistic of the uniform distribution – Beta[1, Npost] – essentially the next smallest value. In this special case this is equivalent to the exponential distribution which can be sampled in constant time using the inversion method. This snippet takes 1 parameter:

  • num - number of postsynaptic neurons to connect each presynaptic neuron to.

pygenn.init_sparse_connectivity_snippets.FixedNumberPreWithReplacement() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity with a fixed number of random synapses per column. There is no need for ordering here, so we sample directly from the uniform distribution. This snippet takes 1 parameter:

  • num - number of presynaptic neurons to connect each postsynaptic neuron to.

pygenn.init_sparse_connectivity_snippets.FixedNumberTotalWithReplacement() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity with a total number of random synapses. The first stage in using this connectivity is to determine how many of the total synapses end up in each row. This can be determined by sampling from the multinomial distribution. However, this operation cannot be efficiently parallelised so must be performed on the host and the result passed as an extra global parameter array. Once the length of each row is determined, the postsynaptic targets of the synapses can be initialised in parallel by sampling from the discrete uniform distribution. However, to sample connections in ascending order, we sample from the 1st order statistic of the uniform distribution – Beta[1, Npost] – essentially the next smallest value. In this special case this is equivalent to the exponential distribution which can be sampled in constant time using the inversion method. This snippet takes 1 parameter:

  • num - total number of synapses to distribute throughout synaptic matrix.
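The host-side step described above, distributing a fixed total number of synapses across rows, amounts to a multinomial draw. A sketch using NumPy (the helper name is illustrative):

```python
import numpy as np

def sample_row_lengths(total_synapses, num_pre, rng=None):
    """Distribute a fixed total number of synapses across presynaptic rows.

    Each synapse falls in any given row with equal probability 1 / num_pre,
    so the vector of row lengths follows a multinomial distribution. This is
    the host-side step that precedes parallel per-row initialisation.
    """
    rng = rng if rng is not None else np.random.default_rng()
    return rng.multinomial(total_synapses, np.full(num_pre, 1.0 / num_pre))

row_lengths = sample_row_lengths(10000, 100, np.random.default_rng(1234))
assert row_lengths.sum() == 10000  # every synapse lands in exactly one row
```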

pygenn.init_sparse_connectivity_snippets.FixedProbability() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity with a fixed probability of a synapse existing between a pair of pre and postsynaptic neurons. Whether a synapse exists between a pair of pre and postsynaptic neurons can be modelled using a Bernoulli distribution. While this COULD be sampled directly by repeatedly drawing from the uniform distribution, this is inefficient. Instead we sample from the geometric distribution which describes “the probability distribution of the number of Bernoulli trials needed to get one success” – essentially the distribution of the ‘gaps’ between synapses. We do this using the “inversion method” described by Devroye (1986) – essentially inverting the CDF of the equivalent continuous distribution (in this case the exponential distribution). This snippet takes 1 parameter:

  • prob - probability of connection in [0, 1]
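The gap-sampling described above can be sketched in plain Python; the inversion method draws each geometric gap in constant time (the function name is illustrative):

```python
import math
import random

def sample_connection_gap(prob, rng=random):
    """Sample the gap to the next synapse for connection probability prob.

    Inverts the CDF of the exponential distribution (the continuous analogue
    of the geometric distribution), as described by Devroye (1986), so each
    gap is drawn in O(1) rather than testing every potential synapse.
    """
    u = rng.random()  # uniform in [0, 1)
    return int(math.floor(math.log(1.0 - u) / math.log(1.0 - prob)))

# walk along a row of 1000 potential synapses, keeping only the sampled ones
rng = random.Random(42)
indices = []
i = sample_connection_gap(0.1, rng)
while i < 1000:
    indices.append(i)
    i += 1 + sample_connection_gap(0.1, rng)
```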

pygenn.init_sparse_connectivity_snippets.FixedProbabilityNoAutapse() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity with a fixed probability of a synapse existing between a pair of pre and postsynaptic neurons. This version ensures there are no autapses - connections between neurons with the same id - so should be used for recurrent connections. Whether a synapse exists between a pair of pre and postsynaptic neurons can be modelled using a Bernoulli distribution. While this COULD be sampled directly by repeatedly drawing from the uniform distribution, this is inefficient. Instead we sample from the geometric distribution which describes “the probability distribution of the number of Bernoulli trials needed to get one success” – essentially the distribution of the ‘gaps’ between synapses. We do this using the “inversion method” described by Devroye (1986) – essentially inverting the CDF of the equivalent continuous distribution (in this case the exponential distribution). This snippet takes 1 parameter:

  • prob - probability of connection in [0, 1]

pygenn.init_sparse_connectivity_snippets.OneToOne() pygenn._genn.InitSparseConnectivitySnippetBase

Initialises connectivity to a ‘one-to-one’ diagonal matrix. This snippet has no parameters

pygenn.init_sparse_connectivity_snippets.Uninitialised() pygenn._genn.InitSparseConnectivitySnippetBase

Used to mark connectivity as uninitialised - no initialisation code will be run

pygenn.init_toeplitz_connectivity_snippets module

pygenn.init_toeplitz_connectivity_snippets.AvgPoolConv2D() pygenn._genn.InitToeplitzConnectivitySnippetBase

Initialises 2D convolutional connectivity preceded by average pooling. Row build state variables are used to convert the presynaptic neuron index to rows, columns and channels and, from these, to calculate the range of postsynaptic rows, columns and channels within which connections will be made. This snippet takes 12 parameters:

  • conv_kh - height of 2D convolution kernel.

  • conv_kw - width of 2D convolution kernel.

  • pool_kh - height of 2D average pooling kernel.

  • pool_kw - width of 2D average pooling kernel.

  • pool_sh - height of average pooling stride

  • pool_sw - width of average pooling stride

  • pool_ih - height of input to the average pooling

  • pool_iw - width of input to the average pooling

  • pool_ic - number of input channels to the average pooling

  • conv_oh - height of output from the convolution

  • conv_ow - width of output from the convolution

  • conv_oc - number of output channels from the convolution

pygenn.init_toeplitz_connectivity_snippets.Conv2D() pygenn._genn.InitToeplitzConnectivitySnippetBase

Initialises 2D convolutional connectivity. Row build state variables are used to convert the presynaptic neuron index to rows, columns and channels and, from these, to calculate the range of postsynaptic rows, columns and channels within which connections will be made. This snippet takes 8 parameters:

  • conv_kh - height of 2D convolution kernel.

  • conv_kw - width of 2D convolution kernel.

  • conv_ih - height of input to this convolution

  • conv_iw - width of input to this convolution

  • conv_ic - number of input channels to this convolution

  • conv_oh - height of output from this convolution

  • conv_ow - width of output from this convolution

  • conv_oc - number of output channels from this convolution

pygenn.init_toeplitz_connectivity_snippets.Uninitialised() pygenn._genn.InitToeplitzConnectivitySnippetBase

Used to mark connectivity as uninitialised - no initialisation code will be run

pygenn.init_var_snippets module

pygenn.init_var_snippets.Binomial() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the binomial distribution. This snippet takes 2 parameters:

  • n - number of trials

  • p - success probability for each trial

pygenn.init_var_snippets.Constant() pygenn._genn.InitVarSnippetBase

Initialises variable to a constant value. This snippet takes 1 parameter:

  • value - The value to initialise the variable to

Note

This snippet type is seldom used directly - InitVarSnippet::Init has an implicit constructor that, internally, creates one of these snippets

pygenn.init_var_snippets.Exponential() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the exponential distribution. This snippet takes 1 parameter:

  • lambda - mean event rate (events per unit time/distance)

pygenn.init_var_snippets.Gamma() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the gamma distribution. This snippet takes 2 parameters:

  • a - distribution shape

  • b - distribution scale

pygenn.init_var_snippets.Kernel() pygenn._genn.InitVarSnippetBase

Used to initialise synapse variables from a kernel. This snippet type is used if you wish to initialise sparse connectivity using a sparse connectivity initialisation snippet with a kernel such as InitSparseConnectivitySnippet::Conv2D.

pygenn.init_var_snippets.Normal() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the normal distribution. This snippet takes 2 parameters:

  • mean - The mean

  • sd - The standard deviation

pygenn.init_var_snippets.NormalClipped() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the normal distribution, resampling if the value falls outside the range specified by min and max. This snippet takes 4 parameters:

  • mean - The mean

  • sd - The standard deviation

  • min - The minimum value

  • max - The maximum value
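
The resampling behaviour amounts to rejection sampling, sketched here in plain Python (illustrative; the generated code uses GeNN's own RNG):

```python
import random

def normal_clipped_sample(mean, sd, vmin, vmax, rng):
    # Resample from the normal distribution until the value
    # falls inside [vmin, vmax]
    while True:
        value = rng.gauss(mean, sd)
        if vmin <= value <= vmax:
            return value

rng = random.Random(7)
samples = [normal_clipped_sample(0.0, 1.0, -0.5, 0.5, rng) for _ in range(200)]
```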

pygenn.init_var_snippets.NormalClippedDelay() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the normal distribution, resampling if the value falls outside the range specified by min and max. This snippet is intended for initialising (dendritic) delay parameters, which are specified in ms but converted to timesteps. This snippet takes 4 parameters:

  • mean - The mean [ms]

  • sd - The standard deviation [ms]

  • min - The minimum value [ms]

  • max - The maximum value [ms]

pygenn.init_var_snippets.Uniform() pygenn._genn.InitVarSnippetBase

Initialises variable by sampling from the uniform distribution. This snippet takes 2 parameters:

  • min - The minimum value

  • max - The maximum value

pygenn.init_var_snippets.Uninitialised() pygenn._genn.InitVarSnippetBase

Used to mark variables as uninitialised - no initialisation code will be run

pygenn.model_preprocessor module

class pygenn.model_preprocessor.Array(variable_type, group)

Bases: ArrayBase

Array class used for exposing internal GeNN state

Parameters:

variable_type (Union[ResolvedType, UnresolvedType]) –

property view: ndarray

Memory view of array

class pygenn.model_preprocessor.ArrayBase(variable_type, group)

Bases: object

Base class for classes which access arrays of memory in running model

Parameters:
  • variable_type (Union[ResolvedType, UnresolvedType]) – data type of array elements

  • group – group array belongs to

pull_from_device()

Copy array from device to host

push_to_device()

Copy array from host to device

set_array(array, view_shape=None)

Assign an array obtained from runtime to object

Parameters:
  • array – array object obtained from runtime

  • view_shape – shape to reshape array with

class pygenn.model_preprocessor.ExtraGlobalParameter(variable_name, variable_type, group, init_values=None)

Bases: Array

Array class used for exposing GeNN extra global parameters

Parameters:
  • variable_name (str) – name of the extra global parameter

  • variable_type (Union[ResolvedType, UnresolvedType]) – data type of the extra global parameter

  • group – group extra global parameter belongs to

  • init_values – values to initialise extra global parameter with

set_init_values(init_values)

Set values extra global parameter is initialised with

Parameters:

init_values – values to initialise extra global parameter with

property values: ndarray

Copy of extra global parameter values

property view: ndarray

Memory view of extra global parameter

class pygenn.model_preprocessor.SynapseVariable(variable_name, variable_type, init_values, group)

Bases: VariableBase

Array class used for exposing per-synapse GeNN variables

Parameters:
  • variable_name (str) – name of the variable

  • variable_type (Union[ResolvedType, UnresolvedType]) – data type of the variable

  • init_values – values to initialise variable with

  • group – group variable belongs to

property current_values: ndarray

Copy of variable’s values written in last timestep

property current_view: ndarray

Memory view of variable’s values written in last timestep. This operation is not supported for variables associated with SynapseMatrixConnectivity.SPARSE connectivity.

property values: ndarray

Copy of variable’s values. Unlike view, this operation is supported for variables associated with SynapseMatrixConnectivity.SPARSE connectivity.

property view: ndarray

Memory view of variable. This operation is not supported for variables associated with SynapseMatrixConnectivity.SPARSE connectivity.

class pygenn.model_preprocessor.Variable(variable_name, variable_type, init_values, group)

Bases: VariableBase

Array class used for exposing per-neuron GeNN variables

Parameters:
  • variable_name (str) – name of the variable

  • variable_type (Union[ResolvedType, UnresolvedType]) – data type of the variable

  • init_values – values to initialise variable with

  • group – group variable belongs to

property current_values: ndarray

Copy of variable’s values written in last timestep

property current_view: ndarray

Memory view of variable’s values written in last timestep

property values: ndarray

Copy of entire variable. If variable is delayed this will contain multiple delayed values.

property view: ndarray

Memory view of entire variable. If variable is delayed this will contain multiple delayed values.

class pygenn.model_preprocessor.VariableBase(variable_name, variable_type, init_values, group)

Bases: ArrayBase

Base class for arrays used to expose GeNN variables

Parameters:
  • variable_name (str) – name of the variable

  • variable_type (Union[ResolvedType, UnresolvedType]) – data type of the variable

  • init_values – values to initialise variable with

  • group – group variable belongs to

set_array(array, view_shape, delay_group)

Assign an array obtained from runtime to object

Parameters:
  • array – array object obtained from runtime

  • view_shape – shape to reshape array with

  • delay_group – neuron group which defines this array’s delays

set_init_values(init_values)

Set values variable is initialised with

Parameters:

init_values – values to initialise variable with

pygenn.neuron_models module

pygenn.neuron_models.Izhikevich() pygenn._genn.NeuronModelBase

Izhikevich neuron with fixed parameters [Izhikevich2003]. It is usually described as

\begin{eqnarray*} \frac{dV}{dt} &=& 0.04 V^2 + 5 V + 140 - U + I, \\ \frac{dU}{dt} &=& a (bV-U), \end{eqnarray*}

I is an external input current; the voltage V is reset to parameter c and U is incremented by parameter d whenever V >= 30 mV. This is paired with a particular integration procedure of two 0.5 ms Euler time steps for the V equation followed by one 1 ms time step for the U equation. Because of its popularity we provide this model in this form here, even though, due to the details of the usual implementation, it is strictly speaking inconsistent with the displayed equations.

Variables are:

  • V - Membrane potential

  • U - Membrane recovery variable

Parameters are:

  • a - time scale of U

  • b - sensitivity of U

  • c - after-spike reset value of V

  • d - after-spike reset value of U
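
The integration procedure described above can be sketched in plain Python. This is a minimal illustration, not GeNN's generated code; in particular, the placement of the reset check after integration is a simplification.

```python
def izhikevich_step(v, u, i_ext, a=0.02, b=0.2, c=-65.0, d=8.0):
    # Two 0.5 ms Euler half-steps for the V equation
    for _ in range(2):
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
    # One 1 ms step for the U equation
    u += a * (b * v - u)
    spiked = v >= 30.0
    if spiked:
        v = c   # reset membrane potential
        u += d  # increment recovery variable
    return v, u, spiked
```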

pygenn.neuron_models.IzhikevichVariable() pygenn._genn.NeuronModelBase

Izhikevich neuron with variable parameters [Izhikevich2003]. This is the same model as NeuronModels::Izhikevich but parameters are defined as “variables” in order to allow users to provide individual values for each individual neuron instead of fixed values for all neurons across the population.

Accordingly, the model has the variables:

  • V - Membrane potential

  • U - Membrane recovery variable

  • a - time scale of U

  • b - sensitivity of U

  • c - after-spike reset value of V

  • d - after-spike reset value of U

and no parameters.

pygenn.neuron_models.LIF() pygenn._genn.NeuronModelBase
pygenn.neuron_models.Poisson() pygenn._genn.NeuronModelBase

Poisson neurons. This neuron model emits spikes according to the Poisson distribution with a mean firing rate as determined by its single parameter. It has 1 state variable:

  • timeStepToSpike - Number of timesteps to next spike

and 1 parameter:

  • rate - Mean firing rate (Hz)

Note

Internally this samples from the exponential distribution using the C++11 <random> library on the CPU and, on the GPU, by transforming the uniform distribution, generated using cuRAND, with a natural log.

Note

If you are connecting Poisson neurons one-to-one to another neuron population, it is more efficient to add a CurrentSourceModels::PoissonExp instead.
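
The mechanism described in the first note can be sketched in plain Python (illustrative only; the timestep bookkeeping is an assumption about the general scheme, and the generated GPU code differs):

```python
import math
import random

def poisson_spikes(rate_hz, dt_ms, n_steps, rng):
    # Mean inter-spike interval expressed in timesteps
    isi = 1000.0 / (rate_hz * dt_ms)

    def next_interval():
        # Exponential sample via a natural-log transform of a uniform variate
        return isi * -math.log(1.0 - rng.random())

    time_to_spike = next_interval()
    spikes = 0
    for _ in range(n_steps):
        if time_to_spike <= 0.0:
            spikes += 1
            time_to_spike += next_interval()
        time_to_spike -= 1.0
    return spikes

n = poisson_spikes(100.0, 1.0, 10000, random.Random(123))
```

Over 10 s of simulated time at 100 Hz, roughly 1000 spikes are expected.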

pygenn.neuron_models.RulkovMap() pygenn._genn.NeuronModelBase

Rulkov map neuron. The RulkovMap type is a map-based neuron model based on [Rulkov2002], but in the 1-dimensional map form used in [Nowotny2005]:

\begin{eqnarray*} V(t+\Delta t) &=& \left\{ \begin{array}{ll} V_{\rm spike} \Big(\frac{\alpha V_{\rm spike}}{V_{\rm spike}-V(t)-\beta I_{\rm syn}} + y \Big) & V(t) \leq 0 \\ V_{\rm spike} \big(\alpha+y\big) & V(t) \leq V_{\rm spike} \big(\alpha + y\big) \; \& \; V(t-\Delta t) \leq 0 \\ -V_{\rm spike} & {\rm otherwise} \end{array} \right. \end{eqnarray*}

Note

The RulkovMap type only works as intended for the single time step size of `DT`= 0.5.

The RulkovMap type has 2 variables:

  • V - the membrane potential

  • preV - the membrane potential at the previous time step

and it has 4 parameters:

  • Vspike - determines the amplitude of spikes, typically -60mV

  • alpha - determines the shape of the iteration function, typically \(\alpha = 3\)

  • y - “shift / excitation” parameter that also determines the iteration function; originally \(y = -2.468\)

  • beta - roughly speaking equivalent to the input resistance, i.e. it regulates the scale of the input into the neuron, typically \(\beta = 2.64 \, {\rm M}\Omega\).

Note

The initial values array for the RulkovMap type needs two entries for V and preV and the parameter array needs four entries for Vspike, alpha, y and beta, in that order.

pygenn.neuron_models.SpikeSourceArray() pygenn._genn.NeuronModelBase

Spike source array. A neuron which reads spike times from a global spikes array. It has 2 variables:

  • startSpike - Index of the next spike in the global array

  • endSpike - Index one past this neuron’s last spike in the global array

and 1 extra global parameter:

  • spikeTimes - Array with all spike times
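
The indexing scheme can be sketched as follows: each neuron owns the slice spikeTimes[startSpike:endSpike] and emits a spike whenever the time at the head of its slice has been reached (an illustrative reimplementation, not the generated code; spike times within each slice are assumed sorted):

```python
def spike_source_step(t, start_spike, end_spike, spike_times):
    spiked = []
    for i in range(len(start_spike)):
        # Emit a spike if this neuron still has queued spikes that are due,
        # then advance its startSpike index past the emitted spike
        if start_spike[i] < end_spike[i] and spike_times[start_spike[i]] <= t:
            spiked.append(i)
            start_spike[i] += 1
    return spiked
```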

pygenn.neuron_models.TraubMiles() pygenn._genn.NeuronModelBase

Hodgkin-Huxley neurons with Traub & Miles algorithm. This conductance-based model is taken from [Traub1991] and can be described by the equations:

\begin{eqnarray*} C \frac{d V}{dt} &=& -I_{{\rm Na}} -I_K-I_{{\rm leak}}-I_M-I_{i,DC}-I_{i,{\rm syn}}-I_i, \\ I_{{\rm Na}}(t) &=& g_{{\rm Na}} m_i(t)^3 h_i(t)(V_i(t)-E_{{\rm Na}}) \\ I_{{\rm K}}(t) &=& g_{{\rm K}} n_i(t)^4(V_i(t)-E_{{\rm K}}) \\ \frac{dy(t)}{dt} &=& \alpha_y (V(t))(1-y(t))-\beta_y(V(t)) y(t), \end{eqnarray*}

where \(y_i= m, h, n\), and

\begin{eqnarray*} \alpha_n&=& 0.032(-50-V)/\big(\exp((-50-V)/5)-1\big) \\ \beta_n &=& 0.5\exp((-55-V)/40) \\ \alpha_m &=& 0.32(-52-V)/\big(\exp((-52-V)/4)-1\big) \\ \beta_m &=& 0.28(25+V)/\big(\exp((25+V)/5)-1\big) \\ \alpha_h &=& 0.128\exp((-48-V)/18) \\ \beta_h &=& 4/\big(\exp((-25-V)/5)+1\big). \end{eqnarray*}

and typical parameters are \(C=0.143\) nF, \(g_{\rm leak}= 0.02672\) \(\mu\)S, \(E_{\rm leak}= -63.563\) mV, \(g_{\rm Na}=7.15\) \(\mu\)S, \(E_{\rm Na}= 50\) mV, \(g_{\rm K}=1.43\) \(\mu\)S, \(E_{\rm K}= -95\) mV.

It has 4 variables:

  • V - membrane potential E

  • m - probability for Na channel activation m

  • h - probability for not Na channel blocking h

  • n - probability for K channel activation n

and 7 parameters:

  • gNa - Na conductance in 1/(mOhms * cm^2)

  • ENa - Na equi potential in mV

  • gK - K conductance in 1/(mOhms * cm^2)

  • EK - K equi potential in mV

  • gl - Leak conductance in 1/(mOhms * cm^2)

  • El - Leak equi potential in mV

  • C - Membrane capacity density in muF/cm^2

Note

Internally, the ordinary differential equations defining the model are integrated with a linear Euler algorithm and GeNN integrates 25 internal time steps for each neuron for each network time step, i.e. if the network is simulated at DT = 0.1 ms, the neurons are integrated with a linear Euler algorithm with lDT = 0.004 ms. This variant uses IF statements to check for values at which a singularity would be hit; if so, the value calculated by L’Hôpital’s rule is used.
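
The sub-stepped integration and the singularity guard can be sketched in plain Python using the equations and typical parameters above. This is an illustration only: the L'Hôpital limit values are substituted whenever a denominator vanishes, and units follow the text (µS, nF, mV, nA, ms).

```python
import math

def traub_miles_step(v, m, h, n, i_inj, dt):
    # One network timestep of dt ms, integrated with 25 inner
    # linear Euler steps of lDT = dt / 25 ms each
    c, g_na, e_na = 0.143, 7.15, 50.0
    g_k, e_k = 1.43, -95.0
    g_leak, e_leak = 0.02672, -63.563

    def safe_rate(num, den, limit):
        # Guard the 0/0 singularities with the L'Hopital limit value
        return limit if abs(den) < 1e-6 else num / den

    ldt = dt / 25.0
    for _ in range(25):
        a_n = safe_rate(0.032 * (-50.0 - v),
                        math.exp((-50.0 - v) / 5.0) - 1.0, 0.16)
        b_n = 0.5 * math.exp((-55.0 - v) / 40.0)
        a_m = safe_rate(0.32 * (-52.0 - v),
                        math.exp((-52.0 - v) / 4.0) - 1.0, 1.28)
        b_m = safe_rate(0.28 * (25.0 + v),
                        math.exp((25.0 + v) / 5.0) - 1.0, 1.4)
        a_h = 0.128 * math.exp((-48.0 - v) / 18.0)
        b_h = 4.0 / (math.exp((-25.0 - v) / 5.0) + 1.0)

        i_na = g_na * m ** 3 * h * (v - e_na)
        i_k = g_k * n ** 4 * (v - e_k)
        i_leak = g_leak * (v - e_leak)

        v += ldt * (-(i_na + i_k + i_leak) + i_inj) / c
        m += ldt * (a_m * (1.0 - m) - b_m * m)
        h += ldt * (a_h * (1.0 - h) - b_h * h)
        n += ldt * (a_n * (1.0 - n) - b_n * n)
    return v, m, h, n
```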

pygenn.postsynaptic_models module

pygenn.postsynaptic_models.DeltaCurr() pygenn._genn.PostsynapticModelBase

Simple delta current synapse. Synaptic input provides a direct injection of instantaneous current

pygenn.postsynaptic_models.ExpCond() pygenn._genn.PostsynapticModelBase

Exponential decay with synaptic input treated as a conductance value. This model has no variables and two parameters:

  • tau - Decay time constant

  • E - Reversal potential

and a variable reference:

  • V - A reference to the neuron’s membrane voltage

tau is used by the derived parameter expdecay which returns expf(-dt/tau).
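
The decay dynamics can be sketched as follows, assuming timestep dt in ms (a minimal illustration; the relative order of injection and decay is a simplification of the generated code):

```python
import math

def exp_cond_step(in_syn, v_post, tau, e_rev, dt):
    # Current injected this timestep: conductance times driving force
    i_inj = in_syn * (e_rev - v_post)
    # Conductance decays by the derived parameter expdecay = exp(-dt / tau)
    in_syn *= math.exp(-dt / tau)
    return in_syn, i_inj
```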

pygenn.postsynaptic_models.ExpCurr() pygenn._genn.PostsynapticModelBase

Exponential decay with synaptic input treated as a current value. This model has no variables and a single parameter:

  • tau - Decay time constant

pygenn.single_threaded_cpu_backend module

class pygenn.single_threaded_cpu_backend.Preferences(self: pygenn.single_threaded_cpu_backend.Preferences)

Bases: PreferencesBase

pygenn.types module

pygenn.weight_update_models module

pygenn.weight_update_models.STDP() pygenn._genn.WeightUpdateModelBase

Simple asymmetrical STDP rule. This rule makes purely additive weight updates within hard bounds and uses nearest-neighbour spike pairing with the following time-dependence:

\[\begin{split}\Delta w_{ij} = \ \begin{cases} A_{+}\exp\left(-\frac{\Delta t}{\tau_{+}}\right) & if\, \Delta t>0\\ A_{-}\exp\left(\frac{\Delta t}{\tau_{-}}\right) & if\, \Delta t\leq0 \end{cases}\end{split}\]

The model has 1 variable:

  • g - synaptic weight

and 6 parameters:

  • tauPlus - Potentiation time constant (ms)

  • tauMinus - Depression time constant (ms)

  • Aplus - Rate of potentiation

  • Aminus - Rate of depression

  • Wmin - Minimum weight

  • Wmax - Maximum weight
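
The time-dependence above can be written directly in Python. The displayed equation is taken literally here, so depression corresponds to a negative A−; GeNN's own sign convention for Aminus may differ, so treat this as a sketch of the weight change for one nearest-neighbour spike pair with the hard bounds applied additively.

```python
import math

def stdp_delta_w(dt, a_plus, a_minus, tau_plus, tau_minus):
    # dt = t_post - t_pre for a nearest-neighbour spike pair
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return a_minus * math.exp(dt / tau_minus)

def apply_stdp(g, dw, w_min, w_max):
    # Purely additive update, clipped to the hard bounds
    return min(w_max, max(w_min, g + dw))
```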

pygenn.weight_update_models.StaticGraded() pygenn._genn.WeightUpdateModelBase

Graded-potential, static synapse. In a graded synapse, the conductance is updated gradually with the rule:

\[gSyn = g \tanh((V - E_{pre}) / V_{slope})\]

whenever the membrane potential \(V\) is larger than the threshold \(E_{pre}\). The model has 1 variable:

  • g - synaptic weight

The model also has 1 presynaptic neuron variable reference:

  • V - Presynaptic membrane potential

The parameters are:

  • Epre - Presynaptic threshold potential

  • Vslope - Activation slope of graded release
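
The rule can be sketched in plain Python (illustrative; the thresholding follows the description above):

```python
import math

def graded_input(g, v_pre, e_pre, v_slope):
    # Conductance contribution whenever the presynaptic membrane
    # potential exceeds the threshold Epre; zero otherwise
    if v_pre > e_pre:
        return g * math.tanh((v_pre - e_pre) / v_slope)
    return 0.0
```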

pygenn.weight_update_models.StaticPulse() pygenn._genn.WeightUpdateModelBase

Pulse-coupled, static synapse with heterogeneous weight. No learning rule is applied to the synapse; for each presynaptic spike, the synaptic conductance is simply added to the postsynaptic input variable. The model has 1 variable:

  • g - synaptic weight

and no other parameters.

pygenn.weight_update_models.StaticPulseConstantWeight() pygenn._genn.WeightUpdateModelBase

Pulse-coupled, static synapse with homogeneous weight. No learning rule is applied to the synapse; for each presynaptic spike, the synaptic conductance is simply added to the postsynaptic input variable. The model has 1 parameter:

  • g - synaptic weight

and no other variables.

pygenn.weight_update_models.StaticPulseDendriticDelay() pygenn._genn.WeightUpdateModelBase

Pulse-coupled, static synapse with heterogeneous weight and dendritic delays. No learning rule is applied to the synapse; for each presynaptic spike, the synaptic conductance is simply added to the postsynaptic input variable. The model has 2 variables:

  • g - synaptic weight

  • d - dendritic delay in timesteps

and no other parameters.
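
The effect of the delay variable can be sketched with a dendritic-delay ring buffer (a simplified illustration of how dendritic delays behave, not GeNN's generated code):

```python
def propagate_spike(weights, delays, ring_buffer, cur_slot):
    # Each synapse deposits its weight g into the postsynaptic
    # ring buffer, d timesteps in the future
    n_slots = len(ring_buffer)
    for g, d in zip(weights, delays):
        ring_buffer[(cur_slot + d) % n_slots] += g

buf = [0.0] * 4
propagate_spike([0.5, 0.25], [2, 0], buf, 0)
```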