GeNN 4.9.0
GPU enhanced Neuronal Networks (GeNN)
Release Notes

Release Notes for GeNN 4.9.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.8.1 release. It is intended as the last release for GeNN 4.X.X. Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 5.

User Side Changes

  1. Implemented pygenn.GeNNModel.unload to manually unload GeNN models, giving finer control in scenarios such as parameter sweeping where multiple PyGeNN models need to be instantiated (see the sketch after this list).
  2. Added Extra Global Parameter references to custom updates (see Defining custom updates, Defining your own custom update model and Extra Global Parameter references).
  3. Exposed $(num_pre), $(num_post) and $(num_batches) to all user code strings.
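
A minimal sketch of how pygenn.GeNNModel.unload might be used in a parameter sweep; the population, parameter values and "sweep" model name are illustrative:

    from pygenn.genn_model import GeNNModel

    lif_init = {"V": -65.0, "RefracTime": 0.0}
    for tau_m in [5.0, 10.0, 20.0]:
        model = GeNNModel("float", "sweep")
        lif_params = {"C": 1.0, "TauM": tau_m, "Vrest": -65.0, "Vreset": -65.0,
                      "Vthresh": -50.0, "Ioffset": 1.0, "TauRefrac": 2.0}
        model.add_neuron_population("pop", 100, "LIF", lif_params, lif_init)
        model.build()
        model.load()
        while model.t < 100.0:
            model.step_time()
        model.unload()  # release backend state before building the next model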

Bug fixes

  1. Fixed handling of indices specified as sequence types other than numpy arrays in pygenn.SynapseGroup.set_sparse_connections.
  2. Fixed bug in CUDA constant cache estimation which could cause nvlink errors in models with learning rules that require previous spike times.
  3. Fixed longstanding issue with setuptools that meant pygenn sometimes had to be built twice to obtain a functional version. Massive thanks to Enrico Trombetta for contributing this fix.

Optimisations

  1. Reduced the number of layers and generally optimised Docker image. Massive thanks to Benjamin Evans for his work on this.

Release Notes for GeNN v4.8.1

This release fixes a number of issues found in the 4.8.0 release and also includes some optimisations which can be very beneficial for some classes of model.

Bug fixes

  1. Fixed bug relating to merging populations with variable references pointing to variables with different access duplication modes.
  2. Fixed infinite loop that could occur in the code generator if a bracket was missed when calling a GeNN function in a code snippet.
  3. Fixed bug that meant batched models which required previous spike times failed to compile.
  4. Fixed bug with DLL-searching logic on Windows which meant CUDA backend failed to load on some systems.
  5. Fixed a number of corner cases in the handling of VarAccessDuplication::SHARED_NEURON variables.

Optimisations

  1. When building models with large numbers of populations using the CUDA backend, compile times could be very long. This was at least in part due to over-verbose error-handling code being generated. CodeGenerator::CUDA::Preferences::generateSimpleErrorHandling enables the generation of much more minimal error-handling code and can speed up compilation by up to 10x (see the sketch after this list).
  2. Turned on multi-processor compilation option in Visual Studio solutions which speeds up compilation of GeNN by a significant amount.
  3. Fusing of postsynaptic models was previously overly conservative, meaning that large, highly-connected models using a postsynaptic model with additional state variables would perform poorly. These checks have been relaxed and brought into line with those used for fusing pre and postsynaptic updates coming from weight update models.
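
A minimal sketch of enabling this from PyGeNN, assuming the preference is forwarded as a keyword argument in the same way as other preference fields (see the 4.2.0 notes below); from C++, the field would be set on the code generator preferences directly:

    from pygenn.genn_model import GeNNModel

    # generate minimal CUDA error-handling code to cut compile times
    # for models with large numbers of populations
    model = GeNNModel("float", "big_model", generateSimpleErrorHandling=True)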

Release Notes for GeNN 4.8.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.7.1 release.

User Side Changes

  1. Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables.
  2. Custom updates extended to perform reduction operations across neurons as well as batches (see Neuron reductions and the sketch after this list).
  3. PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env.
  4. GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to Edward Stevinson, James Turner and Benjamin Evans for their help on this (see the README for more information).
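
A minimal sketch of a neuron reduction, assuming the VarAccess_REDUCE_NEURON_SUM constant is exposed via pygenn.genn_wrapper.Models and that pop is an existing neuron population in model:

    from pygenn import genn_model
    from pygenn.genn_wrapper.Models import VarAccess_REDUCE_NEURON_SUM

    # custom update model which sums a referenced per-neuron variable
    # into a single reduction variable
    reduce_model = genn_model.create_custom_custom_update_class(
        "neuron_sum",
        var_name_types=[("Sum", "scalar", VarAccess_REDUCE_NEURON_SUM)],
        var_refs=[("V", "scalar")],
        update_code="$(Sum) = $(V);\n")

    model.add_custom_update("sum_v", "Reduce", reduce_model, {}, {"Sum": 0.0},
                            {"V": genn_model.create_var_ref(pop, "V")})

Once the model is built and loaded, model.custom_update("Reduce") launches all custom updates in the "Reduce" group.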

Bug fixes

  1. Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates.
  2. Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so postsynaptic models with extra global parameters can be created.
  3. Correctly substitute 0 for $(batch) when using single-threaded CPU backend.
  4. Fixed issues building PyGeNN with Visual Studio 2017.
  5. Fixed bug where model might not be rebuilt if sparse connectivity initialisation snippet was changed.
  6. Fixed longstanding bug in the gen_input_structured tool – used by some userprojects – where data was written outside of array bounds.
  7. Fixed issue with debug mode of genn-buildmodel.bat when used with single-threaded CPU backend.
  8. Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated.

Release Notes for GeNN v4.7.1

This release fixes a plethora of issues found in the 4.7.0 release and also includes an optimisation which could be very beneficial for some classes of model.

Bug fixes

  1. Fixed issue meaning that manual changes to max synaptic row length (via SynapseGroup::setMaxConnections) were not detected and the model might not be rebuilt. Additionally, reduced the strictness of checks in SynapseGroup::setMaxConnections and SynapseGroup::setMaxSourceConnections so maximum synaptic row and column lengths can be overridden when sparse connectivity initialisation snippets are in use, as long as the overriding values are larger than those provided by the snippet.
  2. Fixed issue preventing PyGeNN being built on Python 2.7.
  3. Fixed issue meaning that inSyn, denDelayInSyn and revInSynOutSyn variables were not properly zeroed during initialisation (or reinitialisation) of batched models.
  4. Fixed issue where initialization code for synapse groups could be incorrectly merged.
  5. Fixed issue when using custom updates on batched neuron group variables.
  6. Fixed issue in spike recording system where some permutations of kernel and neuron population size would result in memory corruption.
  7. Fixed (long-standing) issue where LLDB wasn't correctly invoked when running genn-buildmodel.sh -d on Mac.
  8. Fixed issue where sparse initialisation kernels weren't correctly generated if they were only required to initialise custom updates.

Optimisations

  1. Using synapse dynamics with sparse connectivity previously had very high memory requirements and poor performance. Both issues have been solved with a new algorithm.

Release Notes for GeNN v4.7.0

This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.6.0 release.

User Side Changes

  1. While a wide range of convolutional-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (see Toeplitz connectivity initialisation and the sketch after this list).
  2. Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (see Synaptic matrix types).
  3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic. These updates can now be made using the $(addToPre,...) function from presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (see Defining a new weight update model)
  4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL.
  5. Neuron code can now sample the binomial distribution using $(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (see Random number generation and Variable initialisation)
  6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths.
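
A minimal sketch of Toeplitz convolutional connectivity, assuming model already contains 32x32 source and target populations pre and post (the parameter names follow the Conv2D snippet):

    import numpy as np
    from pygenn import genn_model

    kernel = np.random.normal(0.0, 0.1, 9)  # flattened 3x3 kernel
    conv_params = {"conv_kh": 3, "conv_kw": 3,
                   "conv_ih": 32, "conv_iw": 32, "conv_ic": 1,
                   "conv_oh": 32, "conv_ow": 32, "conv_oc": 1}

    model.add_synapse_population(
        "conv", "TOEPLITZ_KERNELG", 0, pre, post,
        "StaticPulse", {}, {"g": kernel}, {}, {},
        "DeltaCurr", {}, {},
        genn_model.init_toeplitz_connectivity("Conv2D", conv_params))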

Bug fixes:

  1. Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1 which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity.
  2. Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created.
  3. Fixed issue where precision wasn't being correctly applied to neuron additional input variable and sparse connectivity row build state variable initialisation, meaning double-precision code could unintentionally be generated.

Release Notes for GeNN v4.6.0

This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN. It also includes a number of bug fixes that have been identified since the 4.5.1 release.

User Side Changes

  1. As well as performing arbitrary updates and calculating transposes of weight update model variables, custom updates can now be used to implement 'reductions' so, for example, duplicated variables can be summed across model batches (see Batch reduction).
  2. Previously, to connect a synapse group to a postsynaptic neuron's additional input variable, a custom postsynaptic model had to be used. SynapseGroup::setPSTargetVar and pygenn.SynapseGroup.ps_target_var can now be used to set the target variable of any synapse group (see the sketch after this list).
  3. Previously, weight update model pre and postsynaptic updates and variables got duplicated in the neuron kernel. This was very inefficient and these can now be 'fused' together by calling ModelSpec::setFusePrePostWeightUpdateModels.
  4. PyGeNN now shares a version with GeNN itself and this will be accessible via pygenn.__version__.
  5. The names of populations and variables are now validated to prevent code with invalid variable names being generated.
  6. As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set.
  7. Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device; and accessed via pygenn.NeuronGroup.current_spike_events.
  8. Added additional error handling to prevent properties of pygenn.GeNNModel that can only be set before the model is built from being set afterwards.
  9. Variable references can now reference custom update variables (see Variable references).
  10. Updated the default parameters used in the MBody1 example to be more sensible.
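
A minimal sketch of retargeting postsynaptic input using pygenn.SynapseGroup.ps_target_var; the neuron model is illustrative and syn is assumed to be an existing synapse group:

    from pygenn import genn_model

    # neuron model with a second input current, declared as an additional
    # input variable named "Isyn2" and initialised to 0.0
    two_input = genn_model.create_custom_neuron_class(
        "two_input",
        var_name_types=[("V", "scalar")],
        additional_input_vars=[("Isyn2", "scalar", 0.0)],
        sim_code="$(V) += ($(Isyn) + 2.0 * $(Isyn2)) * DT;\n")

    # route this synapse group's current into Isyn2 rather than Isyn
    syn.ps_target_var = "Isyn2"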

Bug fixes:

  1. Fixed an issue that was preventing genn-buildmodel.sh from correctly handling paths with spaces.
  2. Fixed multiple issues with sparse synapse index narrowing.
  3. Fixed issue where, if GeNN is run in a locale where ',' is used as the decimal separator, some generated code was incorrectly formatted.
  4. Fixed several small issues preventing GeNN from building on GCC 5 and Visual C++ 2017.

Release Notes for GeNN v4.5.1 (PyGeNN 0.4.6)

This release fixes several small issues found in the 4.5.0 release.

Bug fixes:

  1. Fixed cause of the warnings about memory leaks which were generated when sparse connectivity initialisation snippets were defined in PyGeNN.
  2. Fixed bug in model change detection which resulted in memory usage estimate increasing every time the model subsequently changed.
  3. Fixed several bugs affecting the implementation of custom update models in CUDA and OpenCL.

Release Notes for GeNN v4.5.0 (PyGeNN 0.4.5)

This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN. It also includes a number of bug fixes that have been identified since the 4.4.0 release.

User Side Changes

  1. When performing inference on datasets, batching helps fill the GPU and improve performance. This could previously be achieved using "master" and "slave" synapse populations but this didn't scale well. Models can now be automatically batched using ModelSpec::setBatchSize or pygenn.GeNNModel.batch_size (see the sketch after this list).
  2. As well as more typical neuron, weight update, postsynaptic and current source models, you can now define custom update models which define a process which can be applied to any variable in the model. These can be used for e.g. resetting state variables or implementing optimisers for gradient-based learning (see Defining custom updates).
  3. Model compilation and CUDA block size optimisation could be rather slow in previous versions. More work is still required in this area but code will now only be re-generated if the model has actually changed and block sizes will only be re-optimised for modules which have changed. Rebuilding can be forced with the -f flag to genn-buildmodel or the force_rebuild flag to pygenn.GeNNModel.build.
  4. Binary PyGeNN wheels are now always built with Python 3.
  5. To aid debugging, debug versions of PyGeNN can now be built (see Debugging suggestions).
  6. OpenCL performance on AMD devices is improved - this has only been tested on a Radeon RX 5700 XT so any feedback from users with other devices would be much appreciated.
  7. Exceptions raised by GeNN are now correctly passed through PyGeNN to Python.
  8. Spike times (and spike-like event times) can now be accessed, pushed and pulled from PyGeNN (see pygenn.NeuronGroup.spike_times, pygenn.NeuronGroup.push_spike_times_to_device and pygenn.NeuronGroup.pull_spike_times_from_device )
  9. On models where postsynaptic merging isn't enabled, the postsynaptic input current from a synapse group can now be accessed from PyGeNN via pygenn.SynapseGroup.in_syn; and pushed and pulled with pygenn.SynapseGroup.push_in_syn_to_device and pygenn.SynapseGroup.pull_in_syn_from_device respectively.
  10. Accessing extra global parameters from PyGeNN was previously rather cumbersome. Now, you don't need to manually pass a size to e.g. pygenn.NeuronGroup.pull_extra_global_param_from_device and, if you are using non-pointer extra global parameters, you no longer need to call e.g. pygenn.NeuronGroup.set_extra_global_param before loading your model.
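
A minimal sketch combining batching with a custom update which resets a state variable (names and parameter values are illustrative):

    from pygenn import genn_model
    from pygenn.genn_model import GeNNModel

    model = GeNNModel("float", "batched")
    model.batch_size = 32  # simulate 32 copies of the model in parallel

    pop = model.add_neuron_population(
        "pop", 100, "LIF",
        {"C": 1.0, "TauM": 10.0, "Vrest": -65.0, "Vreset": -65.0,
         "Vthresh": -50.0, "Ioffset": 1.0, "TauRefrac": 2.0},
        {"V": -65.0, "RefracTime": 0.0})

    # custom update which resets V; once the model is built and loaded,
    # model.custom_update("Reset") applies it across all batches
    reset_model = genn_model.create_custom_custom_update_class(
        "reset_v",
        var_refs=[("V", "scalar")],
        update_code="$(V) = -65.0;\n")
    model.add_custom_update("reset_pop_v", "Reset", reset_model, {}, {},
                            {"V": genn_model.create_var_ref(pop, "V")})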

Bug fixes:

  1. cudaFree was incorrectly called twice on zero-copy variables, causing crashes on exit
  2. Built-in Izhikevich neurons incorrectly used the auto-refractory mechanism, limiting their maximum firing rate.
  3. On Windows, 64-bit version of compiler is now always used
  4. Fixed issues with CUDA 9.0 and 9.1 introduced in v4.4.0 release
  5. Fixed race condition relating to accessing previous spike times
  6. Fixed bug in column-wise connectivity initialisation
  7. Fixed issue with binomialInverseCDF function (used for calculating the maximum row length of probabilistic connectivity) which could fail when using some parameter combinations

Release Notes for GeNN v4.4.0 (PyGeNN 0.4.4)

This release adds a number of significant new features to GeNN and expands the documentation to cover using GeNN from Python with PyGeNN. It also includes a number of bug fixes that have been identified since the 4.3.3 release.

User Side Changes

  1. New system for efficiently recording spikes from multiple timesteps into GPU memory (see Spike Recording and the sketch after this list).
  2. Connectivity can now be initialised using column-wise as well as row-wise sparse connectivity initialisation snippets (see Defining a new sparse connectivity snippet).
  3. Support for 'kernel-based' connectivity, allowing efficient support for connectivity such as convolutions (see Kernel-based connectivity).
  4. Improved access to spike times from weight update models - previous spike times can now be accessed via $(prev_sT_pre) and $(prev_sT_post) (see Defining a new weight update model).
  5. Added support for accessing spike-like-event times from weight update models via $(seT_pre) and $(prev_seT_pre) variables (see Spike-like events)
  6. Added support for continuous as well as spike-driven dynamics for pre and postsynaptic weight update model variables (see Defining a new weight update model).
  7. Added experimental OpenCL backend - there are still issues outstanding but any feedback would be much appreciated.
  8. Improved suppression of irrelevant NVCC warnings when optimizing block sizes.
  9. Added support for SM 8.0 and 8.6 architectures to CUDA backend.
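
A minimal sketch of the recording system from PyGeNN, assuming pop is a neuron population in model:

    # enable recording before the model is built
    pop.spike_recording_enabled = True

    model.build()
    model.load(num_recording_timesteps=1000)

    for _ in range(1000):
        model.step_time()

    # download the recording buffer and unpack (spike times, neuron ids)
    model.pull_recording_buffers_from_device()
    spike_times, spike_ids = pop.spike_recording_data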

Bug fixes:

  1. Fixed support for CUDA 8 and older.
  2. Replaced deprecated __linux macro with __linux__ making GeNN compatible with compiler on POWER9 Linux.
  3. Fixed bug where the initialisation of neuron groups which are identical apart from one needing an RNG could be incorrectly merged.

Release Notes for GeNN v4.3.3 (PyGeNN 0.4.3)

This release fixes several small issues found in the 4.3.2 release.

Bug fixes:

  1. Fixed bug in bitmask connectivity and procedural connectivity kernels.
  2. Fixed issues with setting model precision in PyGeNN. Time precision can now be set separately using the time_precision option to the pygenn.GeNNModel constructor.

Release Notes for GeNN v4.3.2 (PyGeNN 0.4.2)

This release fixes several small issues found in the 4.3.1 release.

Bug fixes:

  1. Fixed bug when simulating models with very small timesteps.
  2. Fixed bug in code generator where synapse groups with procedural variables could be incorrectly merged.
  3. Fixed bug in code generator where references to parameters in sparse connectivity initialisation snippet row build state variables were not found.
  4. Fixed issue with PyGeNN and custom sparse connectivity init snippets that led to segfaults.
  5. Fixed bug that prevented PyGeNN being used with procedural connectivity.
  6. In models with a large number of populations, the CUDA constant cache could overflow. This has been fixed and, if your application has additional constant cache requirements, these can be added to GENN_PREFERENCES.constantCacheOverhead.

Release Notes for GeNN v4.3.1 (PyGeNN 0.4.1)

This release fixes several small issues found in the 4.3.0 release.

Bug fixes:

  1. Fixed reference-counting bugs in PyGeNN that prevented multiple models from being instantiated.
  2. Fixed PyGeNN interface for downloading connectivity from GPU.
  3. Fixed host initialisation of sparse connectivity initialisation snippet extra global parameters using the CPU backend.
  4. Upgraded third-party logging library to fix issues compiling SpineML generator with Visual Studio 2019.
  5. Fixed bug in new code generator that didn't disambiguate between pre or postsynaptic neuron parameters and synapse parameters with the same name.

Release Notes for GeNN v4.3.0 (PyGeNN 0.4.0)

This release adds a number of significant new features to GeNN as well as making small improvements to PyGeNN. It also includes a number of bug fixes that have been identified since the 4.2.1 release.

User Side Changes

  1. Previously GeNN performed poorly with large numbers of populations. This version includes a new code generator which effectively solves this problem (see [3]).
  2. InitSparseConnectivitySnippet::Base row build state and NeuronModels::Base additional input variables could previously only be initialised with a numeric value. Now they can be initialised with a code string supporting substitutions etc.
  3. Added GeNN implementation of cortical microcircuit model [6] to userprojects (discussed further in [2]). Also demonstrates how to dynamically load GeNN models rather than linking against them.
  4. Previously, states and spikes were pushed to and from the device in PyGeNN using model-level methods like pygenn.GeNNModel.push_current_spikes_to_device, which was somewhat cumbersome. These have now been wrapped in population-level methods like pygenn.NeuronGroup.push_current_spikes_to_device.
  5. The CodeGenerator::generateAll function now returns memory estimates which are, in turn, returned from pygenn.GeNNModel.build.
  6. To better support batching of inputs into multiple instances of the same model, added ModelSpec::addSlaveSynapsePopulation to add synapse populations which share per-synapse state with a 'master' synapse group.
  7. Added extra global parameters to variable initialisation snippets - these can be used for lookup-table style functionality (see the sketch after this list).
  8. Added support for host initialisation of sparse connectivity initialisation snippet extra global parameters. This allows host-based initialisation to be encapsulated within an InitSparseConnectivitySnippet::Base class.
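
A minimal sketch of lookup-table style variable initialisation using an extra global parameter; $(value) is the variable being initialised and $(id) the index of the element:

    from pygenn import genn_model

    # snippet which initialises each element from a user-provided table
    table_init = genn_model.create_custom_init_var_snippet_class(
        "table_init",
        extra_global_params=[("table", "scalar*")],
        var_init_code="$(value) = $(table)[$(id)];\n")

    v_init = genn_model.init_var(table_init, {})

The table array must then be allocated and populated from user code before the model is initialised.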

Bug fixes:

  1. Fixed issues preventing spike recorder classes from compiling with GCC 4.9, thanks to Christoph Ostrau for this one!
  2. Fixed issue where the initialisers for pre and postsynaptic weight update model variables were not searched for references to an RNG when determining whether a neuron group requires an initialisation RNG.
  3. Fixed issue with PyGeNN and custom var init snippets that led to segfaults.

Release Notes for GeNN v4.2.1 (PyGeNN 0.3.1)

This release fixes several small issues including several relating to Brian2GeNN compatibility.

User Side Changes

  1. Added -s option to genn-buildmodel.bat on Windows to turn off Visual C++ additional security checks (SDL), allowing Brian2GeNN libraries to be included in code generator.

Bug fixes:

  1. Fixed bug where $(sT_pre) and $(sT_post) were incorrect when accessed in weight update model pre and postsynaptic spike code respectively when using the single-threaded CPU backend.
  2. Fixed a corner case where valid models might result in compiler errors about Isyn not being defined.
  3. Fixed a bug preventing multiple include paths being passed to genn-buildmodel.bat on Windows.

Release Notes for GeNN v4.2.0 (PyGeNN 0.3)

This release adds a number of new features to GeNN and its Python interface as well as fixing a number of bugs that have been identified since the 4.1.0 release.

User Side Changes

  1. Kernel timings can now be enabled from Python with pygenn.GeNNModel.timing_enabled and subsequently accessed with pygenn.GeNNModel.neuron_update_time, pygenn.GeNNModel.init_time, pygenn.GeNNModel.presynaptic_update_time, pygenn.GeNNModel.postsynaptic_update_time, pygenn.GeNNModel.synapse_dynamics_time and pygenn.GeNNModel.init_sparse_time (see the sketch after this list).
  2. Backends now generate a getFreeDeviceMemBytes() function to allow free device memory to be queried from user simulation code. This is also exposed to Python via the pygenn.GeNNModel.free_device_mem_bytes property.
  3. GeNN preferences are now fully exposed to PyGeNN by passing kwargs to pygenn.GeNNModel.__init__.
  4. Logging level can now be separately specified for GeNN, the code generator, the SpineML generator and the backend, and is accessible from PyGeNN.
  5. CodeGenerator::PreferencesBase::enableBitmaskOptimisations flag enables an alternative algorithm for updating synaptic matrices implemented with SynapseMatrixConnectivity::BITMASK which performs better on smaller GPUs and CPUs. If you are manually initialising matrices this adds padding to align words to rows of the matrix.
  6. SynapseMatrixConnectivity::PROCEDURAL and SynapseMatrixWeight::PROCEDURAL allow connectivity and synaptic weights to be generated on the fly rather than stored in memory.
  7. CodeGenerator::PreferencesBase::automaticCopy flag allows models to be built without the need for explicitly copying data between host and device. For CUDA backend this uses unified memory (https://devblogs.nvidia.com/unified-memory-cuda-beginners/).
  8. Speed of code compilation can be improved by building using multiple threads. This is now done automatically wherever make or MSBuild is invoked.
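
A minimal sketch of kernel timing from PyGeNN, assuming a model that has already been defined:

    model.timing_enabled = True  # must be set before the model is built
    model.build()
    model.load()

    for _ in range(1000):
        model.step_time()

    # cumulative kernel timings
    print("Neuron update:", model.neuron_update_time)
    print("Presynaptic update:", model.presynaptic_update_time)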

Bug fixes:

  1. Fixed several bugs in extra global parameter implementation in PyGeNN.
  2. Floating-point min and max should be calculated with fmin and fmax in code snippets - fixed in several models and user projects.
  3. Fixed issues with version of numpy required in PyGeNN (previously held back by an issue with PyNN)

Release Notes for GeNN v4.1.0

This release adds a number of new features to GeNN and its SpineML interface as well as fixing a number of bugs that have been identified since the 4.0.2 release.

User Side Changes

  1. The SpineML simulator could previously only be used as a standalone application. This functionality is now provided by the spineml_simulator library and can be used via the SpineMLSimulator::Simulator class.
  2. When declaring a model's variables using SET_VAR, they can be marked as read-only by adding a third parameter set to VarAccess::READ_ONLY to enable further optimisations. See Defining your own neuron type for more details and the sketch after this list.
  3. Previously, unless models were very large or had very high spike rates, using SynapseGroup::SpanType::PRESYNAPTIC typically resulted in poor performance. When using the CUDA backend, SynapseGroup::setNumThreadsPerSpike can now be used to increase parallelism.
  4. There were useful helpers for recording spikes (SpikeRecorder) and timing (Timer, TimerAccumulate) in "userproject\include" which were not easily usable from user projects. genn-create-userproject.sh and genn-create-userproject.bat now have a "-u" option which puts these on the include path of the generated project.
  5. Timing information generated when ModelSpec::setTiming is enabled was not accessible to SpineML models. This is now exposed through the SpineMLSimulator::Simulator class.
  6. Neuron population state variables were not easily accessible if the populations had incoming or outgoing connections with synaptic delays. Additional helper functions are now generated. See Simulating a network model for more details.
  7. SpineML interface will now use heterogeneous dendritic delay system introduced in GeNN 3.2.0 if required.
  8. Added CodeGenerator::CUDA::Preferences::generateLineInfo option to output CUDA line info for profiling.
  9. The CUDA backend now supports the half datatype, allowing memory savings through reduced precision. Host C++ code does not support half-precision types so such state variables must have their location set to VarLocation::DEVICE.
  10. If ModelSpec::setDefaultNarrowSparseIndEnabled is set on a model or SynapseGroup::setNarrowSparseIndEnabled is set on an individual synapse population with sparse connectivity, 16-bit numbers will be used for postsynaptic indices, almost halving memory requirements.
  11. Manual selection of CUDA devices is now exposed to PyGeNN via the pygenn.GeNNModel.selected_gpu property.
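
A minimal sketch of a read-only variable, shown using the PyGeNN equivalent of SET_VAR's third parameter (the VarAccess_READ_ONLY constant is assumed to be exposed via pygenn.genn_wrapper.Models):

    from pygenn import genn_model
    from pygenn.genn_wrapper.Models import VarAccess_READ_ONLY

    # Vthresh is never written by the sim code so can be marked read-only,
    # allowing the code generator to optimise further
    neuron = genn_model.create_custom_neuron_class(
        "static_threshold",
        var_name_types=[("V", "scalar"),
                        ("Vthresh", "scalar", VarAccess_READ_ONLY)],
        sim_code="$(V) += $(Isyn) * DT;\n",
        threshold_condition_code="$(V) >= $(Vthresh)",
        reset_code="$(V) = 0.0;\n")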

Bug fixes:

  1. Fixed incompatibilities with GCC 4.9.
  2. Fixed bug that occurred if derived parameters were used in spike-like-event threshold conditions.
  3. Fixed bug that occurred when merging of postsynaptic models is enabled and GeNN decides to employ specific CUDA optimizations.
  4. Increased maximum supported CUDA kernel grid size - a bug was limiting this to 65536.
  5. Fixed bugs in timing system when used with synapse dynamics kernels.

Release Notes for GeNN v4.0.2

This release fixes several small issues with the generation of binary wheels for Python:

Bug fixes:

  1. There was a conflict between the versions of numpy used to build the wheels and the version required for the PyGeNN packages.
  2. Wheels were renamed to include the CUDA version which broke them.

Release Notes for GeNN v4.0.1

This release fixes several small bugs found in GeNN 4.0.0 and implements some small features:

User Side Changes

  1. Improved detection and handling of errors when specifying model parameters and values in PyGeNN.
  2. SpineML simulator is now implemented as a library which can be used directly from user applications as well as from command line tool.

Bug fixes:

  1. Fixed typo in pygenn.GeNNModel.push_var_to_device function in PyGeNN.
  2. Fixed broken support for Visual C++ 2013.
  3. Fixed zero-copy mode.
  4. Fixed typo in tutorial 2.

Release Notes for GeNN v4.0.0

This release is the result of a second round of fairly major refactoring which we hope will make GeNN easier to use and allow it to be extended more easily in future. However, especially if you have been using GeNN 2.XX syntax, it breaks backward compatibility.

User Side Changes

  1. Totally new build system - make install can be used to install GeNN to a system location on Linux and Mac and Windows projects work much better in the Visual Studio IDE.
  2. Python interface now supports Windows and can be installed using binary 'wheels' (see Python interface (PyGeNN) for more details).
  3. No need to call initGeNN() at start and model.finalize() at end of all models.
  4. Initialisation system simplified - if you specify a value or initialiser for a variable or sparse connectivity, it will be initialised by your chosen backend. If you mark it as uninitialised, it is up to you to initialize it in user code between the calls to initialize() and initializeSparse() (where it will be copied to device).
  5. genn-create-user-project helper scripts to create Makefiles or MSBuild projects for building user code
  6. State variables can now be pushed and pulled individually using the pull<var name><neuron or synapse name>FromDevice() and push<var name><neuron or synapse name>ToDevice() functions.
  7. Management of extra global parameter arrays has been somewhat automated (see Extra Global Parameters for more details).
  8. GENN_PREFERENCES is no longer a namespace - it's a global struct so members need to be accessed with . rather than ::.
  9. NeuronGroup, SynapseGroup, CurrentSource and NNmodel all previously exposed a lot of methods that the user wasn't supposed to call but could. These have now all been made protected and are exposed to GeNN internals using derived classes (NeuronGroupInternal, SynapseGroupInternal, CurrentSourceInternal, ModelSpecInternal) that make them public via using directives.
  10. Auto-refractory behaviour was controlled using GENN_PREFERENCES::autoRefractory, this is now controlled on a per-neuron-model basis using the SET_NEEDS_AUTO_REFRACTORY macro.
  11. The functions used for pushing and pulling have been unified somewhat: copyStateToDevice and copyStateFromDevice no longer copy spikes, and push<neuron or synapse name>SpikesToDevice and pull<neuron or synapse name>SpikesFromDevice no longer copy spike times or spike-like events.
  12. Standard models of leaky-integrate-and-fire neuron (NeuronModels::LIF) and of exponentially shaped postsynaptic current (PostsynapticModels::ExpCurr) have been added.
  13. When a model is built using the CUDA backend, the device it was built for is stored using its PCI bus ID so it will always use the same device.

Deprecations

  1. Yale-format sparse matrices are no longer supported.
  2. GeNN 2.X syntax for implementing neuron and synapse models is no longer supported.
  3. $(addtoinSyn) = X; $(updatelinsyn); idiom in weight update models has been replaced by function style $(addToInSyn, X);.

Release Notes for GeNN v3.3.0

This release is intended as the last service release for GeNN 3.X.X. Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 4.

User Side Changes

  1. Postsynaptic models can now have Extra Global Parameters.
  2. Gamma distribution can now be sampled using $(gennrand_gamma, a). This can be used to initialise variables using InitVarSnippet::Gamma (see the sketch after this list).
  3. Experimental Python interface - All features of GeNN are now exposed to Python through the pygenn module (see Python interface (PyGeNN) for more details).
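
A minimal sketch of gamma-distributed variable initialisation from PyGeNN, assuming InitVarSnippet::Gamma takes shape "a" and scale "b" parameters:

    from pygenn import genn_model

    # initialise synaptic weights from a gamma distribution
    g_init = genn_model.init_var("Gamma", {"a": 4.0, "b": 0.25})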

Bug fixes:

  1. Devices with Streaming Multiprocessor version 2.1 (compute capability 2.0) now work correctly in Windows.
  2. Seeding of on-device RNGs now works correctly.
  3. Improvements to accuracy of memory usage estimates provided by code generator.

Release Notes for GeNN v3.2.0

This release extends the initialisation system introduced in 3.1.0 to support the initialisation of sparse synaptic connectivity, adds support for networks with more sophisticated models of synaptic plasticity and delay as well as including several other small features, optimisations and bug fixes for certain system configurations. This release supports GCC >= 4.9.1 on Linux, Visual Studio >= 2013 on Windows and recent versions of Clang on Mac OS X.

User Side Changes

  1. Sparse synaptic connectivity can now be initialised using small snippets of code run either on GPU or CPU. This can save significant amounts of initialisation time for large models. See Sparse connectivity initialisation for more details and the sketch after this list.
  2. New 'ragged matrix' data structure for representing sparse synaptic connections – supports initialisation using the new sparse synaptic connectivity initialisation system and enables future optimisations. See Synaptic matrix types for more details.
  3. Added support for pre and postsynaptic state variables for weight update models to allow more efficient implementation of trace-based STDP rules. See Defining a new weight update model for more details.
  4. Added support for devices with Compute Capability 7.0 (Volta) to block-size optimizer.
  5. Added support for a new class of 'current source' model which allows non-synaptic input to be efficiently injected into neurons. See Current source models for more details.
  6. Added support for heterogeneous dendritic delays. See Defining a new weight update model for more details.
  7. Added support for (homogeneous) synaptic back propagation delays using SynapseGroup::setBackPropDelaySteps.
  8. For long simulations, using single precision to represent simulation time does not work well. Added NNmodel::setTimePrecision to allow data type used to represent time to be set independently.
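
A minimal sketch of on-device sparse connectivity initialisation, shown via the modern PyGeNN interface with the built-in FixedProbability snippet; pre and post are assumed to be existing populations in model:

    from pygenn import genn_model

    # connect pre to post with 10% probability; the sparse structure is
    # built during initialisation rather than uploaded from the host
    model.add_synapse_population(
        "syn", "SPARSE_GLOBALG", 0, pre, post,
        "StaticPulse", {}, {"g": 0.1}, {}, {},
        "DeltaCurr", {}, {},
        genn_model.init_connectivity("FixedProbability", {"prob": 0.1}))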

Optimisations

  1. GENN_PREFERENCES::mergePostsynapticModels flag can be used to enable the merging together of postsynaptic models from a neuron population's incoming synapse populations - improves performance and saves memory.
  2. On devices with compute capability > 3.5 GeNN now uses the read only cache to improve performance of postsynaptic learning kernel.

Bug fixes:

  1. Fixed bug enabling support for CUDA 9.1 and 9.2 on Windows.
  2. Fixed bug in SynDelay example where membrane voltage went to NaN.
  3. Fixed bug in code generation of SCALAR_MIN and SCALAR_MAX values.
  4. Fixed bug in substitution of transcendental functions with single-precision variants.
  5. Fixed various issues involving using spike times with delayed synapse projections.

Release Notes for GeNN v3.1.1

This release fixes several small bugs found in GeNN 3.1.0 and implements some small features:

User Side Changes

  1. Added new synapse matrix types SPARSE_GLOBALG_INDIVIDUAL_PSM, DENSE_GLOBALG_INDIVIDUAL_PSM and BITMASK_GLOBALG_INDIVIDUAL_PSM to handle case where synapses with no individual state have a postsynaptic model with state variables e.g. an alpha synapse. See Synaptic matrix types for more details.

Bug fixes

  1. Correctly handle aliases which refer to other aliases in SpineML models.
  2. Fixed issues with presynaptically parallelised synapse populations where the postsynaptic population is small enough for input to be accumulated in shared memory.

Release Notes for GeNN v3.1.0

This release builds on the changes made in 3.0.0 to further streamline the process of building models with GeNN and includes several bug fixes for certain system configurations.

User Side Changes

  1. Support for simulating models described using the SpineML model description language with GeNN (see SpineML and SpineCreator for more details).
  2. Neuron models can now sample from uniform, normal, exponential or log-normal distributions - these calls are translated to cuRAND when run on GPUs and calls to the C++11 <random> library when run on CPU. See Defining your own neuron type for more details.
  3. Model state variables can now be initialised using small snippets of code run either on GPU or CPU. This can save significant amounts of initialisation time for large models. See Defining a new variable initialisation snippet for more details and the sketch after this list.
  4. New MSBuild build system for Windows - makes developing user code from within Visual Studio much more streamlined. See Debugging suggestions for more details.
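
A minimal sketch of snippet-based variable initialisation using a built-in distribution, via the PyGeNN interface (lif_params is assumed to be a dictionary of LIF parameters as in the earlier sketches):

    from pygenn import genn_model

    # V is initialised on the device by sampling from a normal distribution
    v_init = genn_model.init_var("Normal", {"mean": -65.0, "sd": 5.0})
    model.add_neuron_population("pop", 1000, "LIF", lif_params,
                                {"V": v_init, "RefracTime": 0.0})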

Bug fixes:

  1. Workaround for bug found in Glibc 2.23 and 2.24 which causes poor performance on some 64-bit Linux systems (namely on Ubuntu 16.04 LTS).
  2. Fixed bug encountered when using extra global variables in weight updates.

Release Notes for GeNN v3.0.0

This release is the result of some fairly major refactoring of GeNN which we hope will make it more user-friendly and maintainable in the future.

User Side Changes

  1. Entirely new syntax for defining models - hopefully terser and less error-prone (see updated documentation and examples for details).
  2. Continuous integration testing using Jenkins - automated testing and code coverage calculation calculated automatically for Github pull requests etc.
  3. Support for using Zero-copy memory for model variables. Especially on devices such as NVIDIA Jetson TX1 with no physical GPU memory this can significantly improve performance when recording data or injecting it to the simulation from external sensors.

Release Notes for GeNN v2.2.3

This release includes minor new features and several bug fixes for certain system configurations.

User Side Changes

  1. Transitioned feature tests to use Google Test framework.
  2. Added support for CUDA shader model 6.X

Bug fixes:

  1. Fixed problem using GeNN on systems running 32-bit Linux kernels on a 64-bit architecture (Nvidia Jetson modules running old software for example).
  2. Fixed problem linking against CUDA on Mac OS X El Capitan due to SIP (System Integrity Protection).
  3. Fixed problems with support code relating to its scope and usage in spike-like event threshold code.
  4. Disabled use of C++ regular expressions on older versions of GCC.

Release Notes for GeNN v2.2.2

This release includes minor new features and several bug fixes for certain system configurations.

User Side Changes

  1. Added support for the new version (2.0) of the Brian simulation package for Python.
  2. Added a mechanism for setting user-defined flags for the C++ compiler and NVCC compiler, via GENN_PREFERENCES.

Bug fixes:

  1. Fixed a problem with atomicAdd() redefinitions on certain CUDA runtime versions and GPU configurations.
  2. Fixed an incorrect bracket placement bug in code generation for certain models.
  3. Fixed an incorrect neuron group indexing bug in the learning kernel, for certain models.
  4. The dry-run compile phase now stores temporary files in the current directory, rather than the temp directory, solving issues on some systems.
  5. The LINK_FLAGS and INCLUDE_FLAGS in the common Windows makefile include 'makefile_common_win.mk' are now appended to, rather than being overwritten, fixing issues with custom user makefiles on Windows.

Release Notes for GeNN v2.2.1

This bugfix release fixes some critical bugs which occur on certain system configurations.

Bug fixes:

  1. (important) Fixed a Windows-specific bug where the CL compiler terminates, incorrectly reporting that the nested scope limit has been exceeded, when a large number of device variables need to be initialised.
  2. (important) Fixed a bug where, in certain circumstances, outdated generateALL objects are used by the Makefiles, rather than being cleaned and replaced by up-to-date ones.
  3. (important) Fixed an 'atomicAdd' redeclared or missing bug, which happens on certain CUDA architectures when using the newest CUDA 8.0 RC toolkit.
  4. (minor) The SynDelay example project now correctly reports spike indexes for the input group.

Please refer to the full documentation for further details, tutorials and complete code documentation.


Release Notes for GeNN v2.2

This release includes minor new features, some core code improvements and several bug fixes on GeNN v2.1.

User Side Changes

  1. GeNN now analyses automatically which parameters each kernel needs access to and these and only these are passed in the kernel argument list in addition to the global time t. These parameters can be a combination of extraGlobalNeuronKernelParameters and extraGlobalSynapseKernelParameters in either neuron or synapse kernel. In the unlikely case that users wish to call kernels directly, the correct call can be found in the stepTimeGPU() function.
     Reflecting these changes, the predefined Poisson neurons now simply have two extraGlobalNeuronKernelParameters, rates and offset, which replace the previous custom pointer to the array of input rates and integer offset used to indicate the current input pattern. These extraGlobalNeuronKernelParameters are passed to the neuron kernel automatically, but the rates themselves within the array are of course not updated automatically (this is exactly as before with the specifically generated kernel arguments for Poisson neurons).
    The concept of "directInput" has been removed. Users can easily achieve the same functionality by adding an additional variable (if there are individual inputs to neurons), an extraGlobalNeuronParameter (if the input is homogeneous but time dependent) or, obviously, a simple parameter if it's homogeneous and constant.
    Note
    The global time variable "t" is now provided by GeNN; please make sure that you are not duplicating its definition or shadowing it. This could have severe consequences for simulation correctness (e.g. time not advancing in cases of over-shadowing).
  2. We introduced the namespace GENN_PREFERENCES which contains variables that determine the behaviour of GeNN.
  3. We introduced a new code snippet called "supportCode" for neuron models, weightupdate models and post-synaptic models. This code snippet is intended to contain user-defined functions that are used from the other code snippets. We advise where possible to define the support code functions with the CUDA keywords "__host__ __device__" so that they are available for both GPU and CPU versions. Alternatively one can define separate versions for host and device in the snippet. The snippets are automatically made available to the relevant code parts. This is regulated through namespaces so that name clashes between different models do not matter. An exception is hash defines. They can in principle be used in the supportCode snippet but need to be protected specifically using #ifndef. For example
     #ifndef clip
     #define clip(x) ((x) > 10.0 ? 10.0 : (x))
     #endif
    Note
    If there are conflicting definitions for hash defines, the one that appears first in the GeNN generated code will then prevail.
  4. The new convenience macros spikeCount_XX and spike_XX where "XX" is the name of the neuron group are now also available for events: spikeEventCount_XX and spikeEvent_XX. They access the values for the current time step even if there are synaptic delays and spikes events are stored in circular queues.
  5. The old buildmodel.[sh|bat] scripts have been superseded by new genn-buildmodel.[sh|bat] scripts. These scripts accept UNIX style option switches, allow both relative and absolute model file paths, and allow the user to specify the directory in which all output files are placed (-o <path>). Debug (-d), CPU-only (-c) and show help (-h) are also defined.
  6. We have introduced a CPU-only "-c" genn-buildmodel switch, which, if it's defined, will generate a GeNN version that is completely independent from CUDA and hence can be used on computers without CUDA installation or CUDA enabled hardware. Obviously, this then can also only run on CPU. CPU only mode can either be switched on by defining CPU_ONLY in the model description file or by passing appropriate parameters during the build, in particular
    genn-buildmodel.[sh|bat] \<modelfile\> -c
    make release CPU_ONLY=1
  7. The new genn-buildmodel "-o" switch allows the user to specify the output directory for all generated files - the default is the current directory. For example, a user project could be in '/home/genn_project', whilst the GeNN directory could be '/usr/local/genn'. The GeNN directory is kept clean, unless the user decides to build the sample projects inside of it without copying them elsewhere. This allows the deployment of GeNN to a read-only directory, like '/usr/local' or 'C:\Program Files'. It also allows multiple users - i.e. on a compute cluster - to use GeNN simultaneously, without overwriting each other's code-generation files, etcetera.
  8. The ARM architecture is now supported - e.g. the NVIDIA Jetson development platform.
  9. The NVIDIA CUDA SM_5* (Maxwell) architecture is now supported.
  10. An error is now thrown when the user tries to use double precision floating-point numbers on devices with architecture older than SM_13, since these devices do not support double precision.
  11. All GeNN helper functions and classes, such as toString() and NNmodel, are defined in the header files at genn/lib/include/, for example stringUtils.h and modelSpec.h, which should be individually included before the functions and classes may be used. The functions and classes are actually implemented in the static library genn\lib\lib\genn.lib (Windows) or genn/lib/lib/libgenn.a (Mac, Linux), which must be linked into the final executable if any GeNN functions or classes are used.
  12. In the modelDefinition() file, only the header file modelSpec.h should be included - i.e. not the source file modelSpec.cc. This is because the declaration and definition of NNmodel, and associated functions, has been separated into modelSpec.h and modelSpec.cc, respectively. This is to enable NNmodel code to be precompiled separately. Henceforth, only the header file modelSpec.h should be included in model definition files!
  13. In the modelDefinition() file, DT is now preferably defined using model.setDT(<val>);, rather than #define DT <val>, in order to prevent problems with DT macro redefinition. For backward-compatibility reasons, the old #define DT <val> method may still be used, however users are advised to adopt the new method.
  14. In preparation for multi-GPU support in GeNN, we have separated out the compilation of generated code from user-side code. This will eventually allow us to optimise and compile different parts of the model with different CUDA flags, depending on the CUDA device chosen to execute that particular part of the model. As such, we have had to use a header file definitions.h as the generated code interface, rather than the runner.cc file. In practice, this means that user-side code should include myModel_CODE/definitions.h, rather than myModel_CODE/runner.cc. Including runner.cc will likely result in pages of linking errors at best!

Developer Side Changes

  1. Blocksize optimization and device choice now obtain the ptxas information on memory usage from a CUDA driver API call rather than from parsing ptxas output of the nvcc compiler. This adds robustness to any change in the syntax of the compiler output.
  2. The information about device choice is now stored in variables in the namespace GENN_PREFERENCES. This includes chooseDevice, optimiseBlockSize, optimizeCode, debugCode, showPtxInfo, defaultDevice. asGoodAsZero has also been moved into this namespace.
  3. We have also introduced the namespace GENN_FLAGS that contains unsigned int variables that attach names to numeric flags that can be used within GeNN.
  4. The definitions of all generated variables and functions such as pullXXXStateFromDevice etc, are now generated into definitions.h. This is useful where one wants to compile separate object files that cannot all include the full definitions in e.g. "runnerGPU.cc". One example where this is useful is the brian2genn interface.
  5. A number of feature tests have been added that can be found in the featureTests directory. They can be run with the respective runTests.sh scripts. The cleanTests.sh scripts can be used to remove all generated code after testing.

Improvements

  1. Improved method of obtaining ptxas compiler information on register and shared memory usage and an improved algorithm for estimating shared memory usage requirements for different block sizes.
  2. Replaced pageable CPU-side memory with page-locked memory. This can significantly speed up simulations in which a lot of data is regularly copied to and from a CUDA device.
  3. GeNN library objects and the main generateALL binary objects are now compiled separately, and only when a change has been made to an object's source, rather than recompiling all software for a minor change in a single source file. This should speed up compilation in some instances.

Bug fixes:

  1. Fixed a minor bug with delayed synapses, where delaySlot is declared but not referenced.
  2. We fixed a bug where on rare occasions a synchronisation problem occurred in sparse synapse populations.
  3. We fixed a bug where the combined spike event condition from several synapse populations was not assembled correctly in the code generation phase (the parameter values of the first synapse population over-rode the values of all other populations in the combined condition).

Please refer to the full documentation for further details, tutorials and complete code documentation.


Release Notes for GeNN v2.1

This release includes some new features and several bug fixes on GeNN v2.0.

User Side Changes

  1. Block size debugging flag and the asGoodAsZero variables are moved into include/global.h.
  2. NGRADSYNAPSES dynamics have changed (See Bug fix #4) and this change is applied to the example projects. If you are using this synapse model, you may want to consider changing model parameters.
  3. The delay slots are now such that NO_DELAY is 0 delay slots (previously 1) and 1 means an actual delay of 1 time step.
  4. The convenience function convertProbabilityToRandomNumberThreshold(float *, uint64_t *, int) was changed so that it actually converts firing probability/timestep into a threshold value for the GeNN random number generator (as its name always suggested). The previous functionality of converting a rate in kHz into a firing threshold number for the GeNN random number generator is now provided with the name convertRateToRandomNumberThreshold(float *, uint64_t *, int)
  5. Every model definition function modelDefinition() now needs to end with calling NNmodel::finalize() for the defined network model. This will lock down the model and prevent any further changes to it by the supported methods. It also triggers necessary analysis of the model structure that should only be performed once. If the finalize() function is not called, GeNN will issue an error and exit before code generation.
  6. To be more consistent in function naming the pull\<SYNAPSENAME\>FromDevice and push\<SYNAPSENAME\>ToDevice have been renamed to pull\<SYNAPSENAME\>StateFromDevice and push\<SYNAPSENAME\>StateToDevice. The old versions are still supported through macro definitions to make the transition easier.
  7. New convenience macros are now provided to access the current spike numbers and identities of neurons that spiked. These are called spikeCount_XX and spike_XX where "XX" is the name of the neuron group. They access the values for the current time step even if there are synaptic delays and spikes are stored in circular queues.
  8. There is now a pre-defined neuron type "SPIKESOURCE" which is empty and can be used to define PyNN style spike source arrays.
  9. The macros FLOAT and DOUBLE were replaced with GENN_FLOAT and GENN_DOUBLE due to name clashes with typedefs in Windows that define FLOAT and DOUBLE.

Developer Side Changes

  1. We introduced a file definitions.h, which is generated and filled with useful macros such as spkQuePtrShift which tells users where in the circular spike queue their spikes start.

Improvements

  1. Improved debugging information for block size optimisation and device choice.
  2. Changed the device selection logic so that device occupancy has larger priority than device capability version.
  3. A new HH model called TRAUBMILES_PSTEP, where one can set the number of inner loops as a parameter, has been introduced. It uses the TRAUBMILES_SAFE method.
  4. An alternative method is added for the insect olfaction model in order to fix the number of connections to a maximum of 10K in order to avoid negative conductance tails.
  5. We introduced a preprocessor define directive for an "int_" function that translates floating points to integers.

Bug fixes:

  1. The atomicAdd replacement for old GPUs was used by mistake if the model ran in double precision.
  2. Timing of individual kernels is fixed and improved.
  3. More careful setting of maximum number of connections in sparse connectivity, covering mixed dense/sparse network scenarios.
  4. NGRADSYNAPSES was not scaling correctly with varying time step.
  5. Fixed a bug where learning kernel with sparse connectivity was going out of range in an array.
  6. Fixed synapse kernel name substitutions where the "dd_" prefix was omitted by mistake.

Please refer to the full documentation for further details, tutorials and complete code documentation.


Release Notes for GeNN v2.0

Version 2.0 of GeNN comes with a lot of improvements and added features, some of which have necessitated some changes to the structure of parameter arrays among others.

User Side Changes

  1. Users are now required to call initGeNN() in the model definition function before adding any populations to the neuronal network model.
  2. glbscnt is now called glbSpkCnt for consistency with glbSpkEvntCnt.
  3. There is no longer a privileged parameter Epre. Spike type events are now defined by a code string spkEvntThreshold, the same way proper spikes are. The only difference is that Spike type events are specific to a synapse type rather than a neuron type.
  4. The function setSynapseG has been deprecated. In a GLOBALG scenario, the variables of a synapse group are set to the initial values provided in the modelDefinition function.
  5. Due to the split of synaptic models into weightUpdateModel and postSynModel, the parameter arrays used during model definition need to be carefully split as well so that each side gets the right parameters. For example, previously
    float myPNKC_p[3]= {
    0.0, // 0 - Erev: Reversal potential
    -20.0, // 1 - Epre: Presynaptic threshold potential
    1.0 // 2 - tau_S: decay time constant for S [ms]
    };
    would define the parameter array of three parameters, Erev, Epre, and tau_S for a synapse of type NSYNAPSE. This now needs to be "split" into
    float *myPNKC_p= NULL;
    float postExpPNKC[2]={
    1.0, // 0 - tau_S: decay time constant for S [ms]
    0.0 // 1 - Erev: Reversal potential
    };
    i.e. parameters Erev and tau_S are moved to the post-synaptic model and its parameter array of two parameters. Epre is discontinued as a parameter for NSYNAPSE. As a consequence the weightupdate model of NSYNAPSE has no parameters and one can pass NULL for the parameter array in addSynapsePopulation. The correct parameter lists for all defined neuron and synapse model types are listed in the User Manual.
    Note
    If the parameters are not redefined appropriately this will lead to uncontrolled behaviour of models and likely to segmentation faults and crashes.
  6. Advanced users can now define variables as type scalar when introducing new neuron or synapse types. This will at the code generation stage be translated to the model's floating point type (ftype), float or double. This works for defining variables as well as in all code snippets. Users can also use the expressions SCALAR_MIN and SCALAR_MAX in place of FLT_MIN/DBL_MIN and FLT_MAX/DBL_MAX, respectively. Corresponding definitions of scalar, SCALAR_MIN and SCALAR_MAX are also available for user-side code whenever the code-generated file runner.cc has been included.
  7. The example projects have been re-organized so that wrapper scripts of the generate_run type are now all located together with the models they run instead of in a common tools directory. Generally the structure now is that each example project contains the wrapper script generate_run and a model subdirectory which contains the model description file and the user side code complete with Makefiles for Unix and Windows operating systems. The generated code will be deposited in the model subdirectory in its own modelname_CODE folder. Simulation results will always be deposited in a new sub-folder of the main project directory.
  8. The addSynapsePopulation(...) function now has more mandatory parameters relating to the introduction of separate weightupdate models (pre-synaptic models) and postsynaptic models. The correct syntax for addSynapsePopulation(...) can be found with detailed explanations in the User Manual.
  9. We have introduced a simple performance profiling method that users can employ to get an overview over the differential use of time by different kernels. To enable the timers in GeNN generated code, one needs to declare
    networkmodel.setTiming(TRUE);
     This will make available and operate GPU-side cudaEvent based timers whose cumulative value can be found in the double precision variables neuron_tme, synapse_tme and learning_tme. They measure the accumulated time that has been spent calculating the neuron kernel, synapse kernel and learning kernel, respectively. CPU-side timers for the simulation functions are also available and their cumulative values can be obtained through
    float x= sdkGetTimerValue(&neuron_timer);
    float y= sdkGetTimerValue(&synapse_timer);
    float z= sdkGetTimerValue(&learning_timer);
    The Insect olfaction model example shows how these can be used in the user-side code. To enable timing profiling in this example, simply enable it for GeNN:
    model.setTiming(TRUE);
    in MBody1.cc's modelDefinition function and define the macro TIMING in classol_sim.h
    #define TIMING
    This will have the effect that timing information is output into OUTNAME_output/OUTNAME.timingprofile.

Developer Side Changes

  1. allocateSparseArrays() has been changed to take the number of connections, connN, as an argument rather than expecting it to have been set in the Connection struct before the function is called, as was the arrangement previously.
  2. For the case of sparse connectivity, there is now a reverse mapping implemented with reverse index arrays and a remap array that points to the original positions of variable values in the forward array. By this mechanism, reverse lookups from post- to presynaptic indices are possible, but value changes in the sparse array values only need to be made once.
  3. SpkEvnt code is no longer generated whenever it is not actually used. That is also true on a somewhat finer granularity where variable queues for synapse delays are only maintained if the corresponding variables are used in synaptic code. True spikes on the other hand are always detected in case the user is interested in them.

Please refer to the full documentation for further details, tutorials and complete code documentation.

