Simulating networks
Once you have built a network using the GeNNModel API described in Building networks, you need to run the GeNN code generator using GeNNModel.build() and then load the model into memory using GeNNModel.load() before you can simulate it.
Code generation is ‘lazy’ so if your model hasn’t changed, code generation will be almost instantaneous.
If no errors are reported, the simplest simulation looks like the following:
...
model.build()
model.load()
while model.timestep < 100:
    model.step_time()
As well as the integer timestep, the current time in ms can be accessed with GeNNModel.t.
On GPU platforms like CUDA, the above simulation runs asynchronously: the loop launches the kernels that simulate each timestep but never synchronises with the CPU.
Spike recording
Because recording spikes and spike-like events is a common requirement and their sparse nature can make them inefficient to access,
GeNN has a dedicated event recording system which collects events, emitted over a number of timesteps, in GPU memory before transferring them to the host.
Spike recording can be enabled on chosen neuron groups by setting the NeuronGroup.spike_recording_enabled and NeuronGroup.spike_event_recording_enabled properties to True.
Memory can then be allocated at runtime for spike recording by using the num_recording_timesteps keyword argument to GeNNModel.load().
Spikes can then be copied from the GPU to the host using the GeNNModel.pull_recording_buffers_from_device() method and the spikes emitted by a population
can be accessed via the NeuronGroupMixin.spike_recording_data property. Similarly, pre and postsynaptic spike-like events used by a synapse group
can be accessed via the SynapseGroupMixin.pre_spike_event_recording_data and SynapseGroupMixin.post_spike_event_recording_data properties, respectively.
For example, the previous example could be extended to record spikes from a NeuronGroup pop as follows:
...
pop.spike_recording_enabled = True
model.build()
model.load(num_recording_timesteps=100)
while model.timestep < 100:
    model.step_time()
model.pull_recording_buffers_from_device()
spike_times, spike_ids = pop.spike_recording_data[0]
If batching is enabled, spike recording data from batch b can be accessed with e.g. pop.spike_recording_data[b].
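Once pulled, the (spike_times, spike_ids) arrays are ordinary NumPy arrays and can be post-processed with standard tools. As a sketch, the following computes per-neuron firing rates from hand-made data in the same format (the population size and recording duration here are made-up values, standing in for your model's):

```python
import numpy as np

# Stand-in for: spike_times, spike_ids = pop.spike_recording_data[0]
spike_times = np.asarray([0.1, 0.5, 1.2, 3.0, 3.0])  # ms
spike_ids = np.asarray([0, 2, 0, 1, 2])

num_neurons = 4       # hypothetical population size
duration_ms = 100.0   # hypothetical recording duration

# Count spikes per neuron and convert to rates in Hz
counts = np.bincount(spike_ids, minlength=num_neurons)
rates_hz = counts / (duration_ms / 1000.0)
print(rates_hz)  # per-neuron rates: 20, 10, 20 and 0 Hz
```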
Variables
In real simulations, as well as spikes, you often want to interact with model state variables as the simulation runs.
State variables are encapsulated in pygenn.model_preprocessor.VariableBase objects and all populations own dictionaries of these, accessible by variable name.
For example, all groups have GroupMixin.vars whereas synapse groups additionally have SynapseGroupMixin.pre_vars and SynapseGroupMixin.post_vars.
By default, copies of GeNN variables are allocated both on the GPU device and the host from where they can be accessed from Python.
However, if a variable’s location is set to VarLocation.DEVICE, it cannot be accessed from Python.
Pushing and pulling
The contents of the host copy of a variable can be ‘pushed’ to the GPU device by calling pygenn.model_preprocessor.ArrayBase.push_to_device()
and ‘pulled’ from the GPU device into the host copy by calling pygenn.model_preprocessor.ArrayBase.pull_from_device().
For example,
pop.vars["V"].push_to_device()
pushes the host copy of the variable “V” in population pop to GPU memory, and
pop.vars["V"].pull_from_device()
to make the reverse transfer.
When using the single-threaded CPU backend, these operations do nothing, but we recommend leaving them in place so models work transparently across all backends.
Values and views
To access the data associated with a variable, you can use the current_values property. For example, to save the current values of a variable:
np.save("values.npy", pop.vars["V"].current_values)
This will make a copy of the data owned by GeNN and apply any processing required to transform it into a user-friendly format.
For example, state variables associated with sparse matrices will be re-ordered into the same order as the indices used to construct the matrix
and the values from the current delay step will be extracted for per-neuron variables which are accessed from synapse groups with delays.
If you wish to access the values across all delay steps, the values property can be used.
Additionally, you can directly access the memory owned by GeNN using a ‘memory view’. For example, to set all elements of a variable:
pop.vars["V"].current_view[:] = 1.0
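The difference between current_values and current_view mirrors standard NumPy copy-versus-view semantics, which can be sketched with a plain array standing in for GeNN-owned memory:

```python
import numpy as np

backing = np.zeros(5)     # stand-in for memory owned by GeNN

values = backing.copy()   # like current_values: an independent copy
view = backing[:]         # like current_view: aliases the same memory

view[:] = 1.0             # writing through the view changes the backing array
values[:] = 2.0           # writing to the copy leaves the backing array alone

print(backing)  # [1. 1. 1. 1. 1.]
```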
Note
The memory access is always to the host memory space (unless it is the same as the backend memory space, as with the “single_threaded_cpu” backend, or when using zero-copy memory). Therefore, memory access would typically look like
pop.vars["V"].pull_from_device()
np.save("values.npy", pop.vars["V"].current_values)
and similarly,
pop.vars["V"].current_view[:] = 1.0
pop.vars["V"].push_to_device()
Extra global parameters
Extra global parameters behave very much like variables.
They are encapsulated in pygenn.model_preprocessor.ExtraGlobalParameter objects which are derived from the same
pygenn.model_preprocessor.ArrayBase base class and thus share much of the functionality described above.
Populations also own dictionaries of extra global parameters, accessible by name.
For example, NeuronGroup has NeuronGroup.extra_global_params whereas SynapseGroup has
SynapseGroup.extra_global_params to hold extra global parameters associated with the weight update model and
SynapseGroup.psm_extra_global_params to hold extra global parameters associated with the postsynaptic model.
One very important difference between extra global parameters and variables is that extra global parameters need to be allocated and provided with initial contents before the model is loaded. For example, to allocate an extra global parameter called “X” to hold 100 elements which are initially all zero you could do the following:
...
pop.extra_global_params["X"].set_init_values(np.zeros(100))
model.build()
model.load()
After allocation, extra global parameters can be accessed just like variables, for example:
pop.extra_global_params["X"].current_view[:] = 1.0
pop.extra_global_params["X"].push_to_device()
Performance profiling
GeNN provides timers to profile the performance of different simulation phases.
This is useful for identifying bottlenecks and optimizing your models.
To enable timing measurements, set ModelSpec.timing_enabled to True before building the model:
model = GeNNModel("float", "profiled_model")
model.timing_enabled = True
# ... add populations and synapses ...
model.build()
model.load()
Once timing is enabled, you can access timing counters after running the simulation:
# Run simulation
for i in range(1000):
    model.step_time()
# Access timing counters (all return time in seconds)
print(f"Neuron update time: {model.neuron_update_time:.6f}s")
print(f"Presynaptic update time: {model.presynaptic_update_time:.6f}s")
print(f"Postsynaptic update time: {model.postsynaptic_update_time:.6f}s")
print(f"Synapse dynamics time: {model.synapse_dynamics_time:.6f}s")
Available timing properties
The following timing counters are available on GeNNModel:
- property GeNNModel.neuron_update_time: float
  Time in seconds spent in the neuron update kernel. Only available if ModelSpec.timing_enabled is set
- property GeNNModel.init_time: float
  Time in seconds spent in the initialisation kernel. Only available if ModelSpec.timing_enabled is set
- property GeNNModel.init_sparse_time: float
  Time in seconds spent in the sparse initialisation kernel. Only available if ModelSpec.timing_enabled is set
- property GeNNModel.presynaptic_update_time: float
  Time in seconds spent in the presynaptic update kernel. Only available if ModelSpec.timing_enabled is set
- property GeNNModel.postsynaptic_update_time: float
  Time in seconds spent in the postsynaptic update kernel. Only available if ModelSpec.timing_enabled is set
- property GeNNModel.synapse_dynamics_time: float
  Time in seconds spent in the synapse dynamics kernel. Only available if ModelSpec.timing_enabled is set
All timing values accumulate over the lifetime of the model and are returned in seconds.
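Because the counters accumulate, dividing by the number of timesteps simulated gives an average per-timestep cost. A minimal sketch, using a made-up accumulated value in place of model.neuron_update_time:

```python
neuron_update_time = 0.25  # stand-in for model.neuron_update_time (seconds)
num_timesteps = 1000

avg_ms_per_step = (neuron_update_time / num_timesteps) * 1000.0
print(f"{avg_ms_per_step:.3f} ms per timestep")  # prints "0.250 ms per timestep"
```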
Note
Enabling timing adds synchronisation overhead on GPU backends, which may affect
performance. Timing counters are only meaningful when timing_enabled is set to
True before building the model.
Dynamic parameters
As discussed previously, when building a model, parameters can be made dynamic e.g. by calling pygenn.NeuronGroup.set_param_dynamic() on a NeuronGroup.
The values of these parameters can then be set at runtime using the pygenn.genn_groups.GroupMixin.set_dynamic_param_value() method. For example, to increase the value of a
parameter called “tau” on a population pop, you could do the following:
...
pop.set_param_dynamic("tau")
model.build()
model.load()
tau = np.arange(0, 100, 10)
while model.timestep < 100:
    if (model.timestep % 10) == 0:
        pop.set_dynamic_param_value("tau", tau[model.timestep // 10])
    model.step_time()