
Conda packaging for GeNN


📂 Project Repository

🔗 Conda-package-GeNN
This repository contains all the code, packaging recipes, and documentation developed during my Google Summer of Code project.


🌍 Google Summer of Code (GSoC)

Google Summer of Code (GSoC) is an annual global program focused on bringing new contributors into open source software development.
Contributors work with open source organizations under the guidance of mentors to learn, code, and make impactful contributions during the summer.

📊 GSoC 2025 Highlights

  • 15,240 applicants from 130 countries submitted 23,559 proposals
  • 185 mentoring organizations selected 1,272 contributors from 68 countries
  • 66.3% of contributors had no prior open source experience, showing GSoC's accessibility
  • A three-week Community Bonding period helps contributors and mentors plan and get oriented before coding

🔗 Read more on the official announcement

🧠 About INCF

The International Neuroinformatics Coordinating Facility (INCF) is an open and FAIR (Findable, Accessible, Interoperable, and Reusable) neuroscience standards organization.
Launched in 2005 through a proposal from the OECD Global Science Forum, INCF's mission is to make neuroscience data and knowledge globally shareable and reusable.

🌍 Impact on Society

By developing community-driven standards and tools for data sharing, analysis, modeling, and simulation, INCF:
- Promotes collaboration across international neuroscience communities
- Enables reproducible and scalable research
- Accelerates discoveries in brain science
- Supports better understanding of brain function in both health and disease

Through these efforts, INCF helps build a more open scientific ecosystem, ultimately contributing to advances in healthcare, mental health, and neurological research worldwide.

⚡ About GeNN

The GPU-enhanced Neuronal Networks (GeNN) project is a code generation framework designed to accelerate the simulation of spiking neural networks (SNNs) using GPUs.

🔬 Role in Neuroscience

GeNN plays a crucial role in computational neuroscience by:
- Enabling fast and efficient simulation of large-scale spiking neural networks
- Allowing researchers to prototype and test brain-inspired models at unprecedented scales
- Supporting reproducibility and standardization in neural simulations
- Bridging the gap between biological realism and computational efficiency

Through its GPU acceleration, GeNN empowers neuroscientists to explore complex models of brain function that would otherwise be computationally prohibitive.

❓ Problem Statement

GeNN is a C++ library that generates code for efficiently simulating Spiking Neural Networks (SNNs) using GPUs.
To compile the generated code, GeNN requires a C++ compiler and development versions of backend dependencies such as CUDA.

Currently, this means GeNN must be installed from source, which can be a barrier for many potential users:
- Researchers may not have the right compiler or CUDA version installed
- Installation errors can take hours to resolve
- New users may be discouraged before even running their first simulation

🎯 Project Goal

For this project, I aimed to develop a Conda-Forge package for GeNN which:
- Handles the installation of all required dependencies (C++, CUDA, libraries)
- Provides pre-built binaries for Linux, Windows, and macOS
- Makes installation as simple as:

```bash
conda install -c conda-forge pygenn-cpu    # CPU-only
conda install -c conda-forge pygenn-cuda   # CUDA-enabled
```

📦 Deliverables

  • ✅ Conda-Forge recipes for both CPU and CUDA variants of GeNN
  • ✅ User documentation and installation instructions

🎮 Rise of CUDA in Neural Simulations


The introduction of CUDA (Compute Unified Device Architecture) by NVIDIA revolutionized the way scientists and engineers simulate neural networks.

🚀 Why CUDA Matters

  • Provides massive parallelism by leveraging thousands of GPU cores
  • Accelerates matrix operations and synaptic updates critical for spiking neural networks
  • Reduces simulation times from hours or days to minutes or seconds
  • Allows scaling to millions of neurons and synapses in realistic brain models

🧩 Impact on Neuroscience

By harnessing CUDA, researchers can:
- Explore biologically detailed models of neural circuits
- Run real-time simulations for robotics and brain-inspired AI
- Investigate complex dynamics of the brain that were previously infeasible due to computational limits

In short, CUDA has been a key enabler in advancing computational neuroscience and the adoption of frameworks like GeNN.

📦 Why Conda (and not PyPI)

We chose Conda because our package is not just Python: it also includes a C++ backend and CUDA code.

  • Conda can package non-Python dependencies (C++, CUDA, compilers, system libraries), while PyPI packages cannot easily ship compilers, CUDA toolkits, or other system-level dependencies.
  • With Conda we can pin CUDA versions and compilers, ensuring compatibility across Linux, Windows, and macOS.
  • This makes Conda the better choice for distributing GPU-accelerated scientific software like GeNN, where reproducibility and native dependencies are critical (see the recipe sketch below).
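
To make this concrete, below is a simplified sketch of how a recipe's requirements section can declare a C++ compiler and modular CUDA components alongside Python dependencies. It is illustrative only; the exact package names and version pins in the published pygenn recipes may differ.

```yaml
# Hypothetical, simplified meta.yaml requirements section (not the published recipe)
requirements:
  build:
    - {{ compiler('cxx') }}     # conda-forge's pinned C++ toolchain
    - cuda-nvcc                 # NVIDIA compiler from the modular CUDA packages
  host:
    - python
    - numpy                     # built against a pinned NumPy ABI
    - cuda-cudart-dev           # CUDA runtime headers and libraries
  run:
    - python
    - {{ pin_compatible('numpy') }}
    - cuda-cudart
```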

🏗️ Package Architecture


We designed the package to provide two build variants of GeNN:

  1. CPU-only
     • Lightweight build that works without CUDA
     • Useful for users who want to experiment with spiking neural networks on any system

  2. CUDA-enabled
     • Full GPU acceleration using modular CUDA packages
     • Ideal for large-scale neuroscience simulations

📂 Structure

  • Separate Conda recipes: pygenn-cpu and pygenn-cuda
  • Each recipe pins Python, NumPy ABI, and (for CUDA builds) modular CUDA components like cuda-nvcc, cuda-cudart, and cuda-libraries
  • Shared test suite ensures both variants behave consistently

This dual-architecture approach makes GeNN more accessible and reproducible, whether on laptops or GPU clusters.
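
As a rough illustration of how the variants stay pinned and consistently tested, the sketch below shows the kind of version matrix and shared test block such recipes could use. The file contents here are assumptions for illustration, not copies of the published pygenn-cpu or pygenn-cuda recipes.

```yaml
# Hypothetical conda_build_config.yaml: the build matrix that pins the
# interpreter, NumPy, and (for the CUDA variant) the CUDA version
python:
  - "3.10"
  - "3.11"
numpy:
  - "1.26"
cuda_version:        # only consumed by the pygenn-cuda recipe
  - "12.0"

# Hypothetical test section, shared in spirit by both recipes so the CPU and
# CUDA variants are exercised the same way
test:
  imports:
    - pygenn
  commands:
    - python -c "import pygenn"
```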

🔗 Read more on the detailed package structure

⚔️ Challenges Faced and Solutions

🌀 Challenge 1: Transition from CUDA <12.x to CUDA ≥12.x

Initially, our package was built for CUDA 11.7, which used a monolithic toolkit package.

👉 Example: CUDA 11.7 recipe

However, starting with CUDA 12.x, Conda-Forge adopted a modular CUDA packaging system:

  • Instead of a single cudatoolkit package, CUDA is split into components like cuda-nvcc, cuda-cudart, cuda-libraries, cuda-libraries-dev, etc.

🔗 Detailed explanation: Pre-12 vs Post-12 CUDA packaging
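
In practice, the change roughly amounts to replacing one dependency with several explicitly pinned components. The comparison below is a hedged, simplified sketch; version numbers and section placement are illustrative, not taken from either recipe.

```yaml
# Before (CUDA 11.x era): one monolithic toolkit dependency
requirements:
  host:
    - cudatoolkit =11.7

# After (CUDA >=12.x): individual, explicitly pinned components
requirements:
  build:
    - cuda-nvcc
  host:
    - cuda-version ==12.0
    - cuda-cudart-dev
    - cuda-libraries-dev
```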

✅ Our Solution

  • Migrated the recipe to modular CUDA dependencies in meta.yaml
  • Explicitly pinned the CUDA version with:

```yaml
- cuda-version =={{ cuda_version }}
- cuda-nvcc {{ cuda_nvcc }}
- cuda-cudart {{ cuda_cudart }}
- cuda-libraries {{ cuda_libraries }}
- cuda-libraries-dev {{ cuda_libraries_dev }}
```

  • Ensured compatibility across Linux, Windows, and macOS by adjusting the build matrix and using Conda's modular CUDA toolchain.

This transition was essential to keep the package future-proof and aligned with Conda-Forgeโ€™s evolving CUDA ecosystem.

⚔️ Challenge 2: Setting CUDA_PATH After Installation

During testing, we discovered that after installing the CUDA-enabled package, the CUDA_PATH environment variable was not automatically set in the Conda environment.

  • This caused issues on both Linux and Windows, where users needed CUDA_PATH for compiling and running GeNN models.
  • Without it, the CUDA backend could not be located properly by the build system.

🔗 Reference: post-link script design

✅ Our Solution

  • Added post-link.sh (Linux/macOS) and post-link.bat (Windows) scripts to the recipe.
  • These scripts:
    • Notify users that they must export or set CUDA_PATH in their shell session
    • Provide clear guidance on how to configure it (export CUDA_PATH=$CONDA_PREFIX on Linux/macOS, set CUDA_PATH=%CONDA_PREFIX%\Library on Windows)

Example post-link.sh Script

```bash
#!/bin/bash
echo ""
echo "============================================"
echo "PyGeNN CUDA backend installed successfully!"
echo ""
echo "To enable CUDA support, set the environment variable:"
echo "    export CUDA_PATH=$CONDA_PREFIX"
echo ""
echo "Alternatively, if you have a system-wide CUDA installation:"
echo "    export CUDA_PATH=/usr/local/cuda-12.x"
echo ""
echo "PyGeNN will automatically use CUDA_PATH if set; otherwise, you may"
echo "need to manually configure it for certain use cases."
echo "============================================"
echo ""
```

This ensures users are explicitly informed about the required step, making the installation process clearer and less error-prone.

⚔️ Challenge 3: Moving Windows Build to NMake + MSBuild

Originally, the Windows build system relied only on MSBuild, which was insufficient to support the Conda package's requirement for runtime code compilation of GeNN models.

✅ Our Solution

  • Migrated the Windows backend to a hybrid NMake + MSBuild system.
  • Benefits of this change:
    • Enabled runtime compilation of CUDA kernels on Windows
    • Added robust CUDA path management, ensuring builds work with Conda's modular CUDA layout
    • Standardized the use of CUDA_LIBRARY_PATH across Windows environments for consistency

This migration improved reliability and made the Windows build much closer to Linux in flexibility, while also aligning with Conda's CUDA packaging best practices.

🔗 My Pull Request #705 – robust CUDA lib path resolution for Conda & system installs

⚔️ Challenge 4: Fixing macOS .dylib Handling in pygenn-cpu

When building the CPU-only PyGeNN package on macOS, we encountered an issue where
the required dynamic libraries (.dylib) were not being copied correctly into the installed package directory.
This caused runtime errors where Python could not locate GeNN's backend libraries.

✅ Our Solution (My PR 🔧)

I submitted PR #707 to fix the macOS library handling in setup.py.
Key technical improvements included:

  • Dynamic Library Discovery
    • Updated setup.py to explicitly find GeNN's .dylib artifacts generated during the build process.
    • Ensured both the core libgenn_dynamic.dylib and the CPU backend libraries were properly detected.

  • Correct Copy into site-packages
    • Added logic to copy these .dylib files into the final pygenn installation directory under site-packages.
    • This guarantees the Python extension modules can locate their linked dynamic libraries at runtime.

  • macOS Loader Path Fixes
    • Adjusted the install_name handling so that macOS's runtime linker resolves the .dylib files correctly.
    • Prevented the "image not found" errors that occurred when relocating the package to a Conda environment.

🔬 Impact

  • Resolved import-time failures on macOS for the pygenn-cpu package.
  • Improved cross-platform parity, since Linux .so handling was already stable.
  • Made the CPU-only build truly portable across Conda environments on macOS.

🔗 My Pull Request #707 – macOS .dylib fix in setup.py

📦 Conda-Forge Packages

After resolving build system and packaging challenges, we contributed to the official Conda-Forge recipes for PyGeNN.

🚀 Published Packages

  • pygenn-cuda → staged-recipes PR #30899
    • GPU-accelerated build with modular CUDA support
    • Targets Linux and Windows with reproducible CUDA environments
  • pygenn-cpu → staged-recipes PR #30907
    • Lightweight CPU-only build
    • Cross-platform support (Linux, Windows, macOS) without CUDA dependency

🌍 Impact

  • Brought PyGeNN to the Conda-Forge ecosystem, making installation as simple as:

```bash
conda install -c conda-forge pygenn-cpu    # CPU-only
conda install -c conda-forge pygenn-cuda   # CUDA-enabled
```

  • Improved discoverability, reproducibility, and accessibility for neuroscience researchers and developers worldwide.

🌟 Impact of the Package

Before our Conda-Forge packages, users had to install GeNN from source:
- Clone the repository
- Configure compilers and CUDA toolchains manually
- Build the C++ backend
- Troubleshoot platform-specific errors (Linux, Windows, macOS)

This process was time-consuming and error-prone, often taking hours for new users.

🚀 Improvements with Conda Packages

  • Installation reduced to a single command:

```bash
conda install -c conda-forge pygenn-cpu    # CPU-only
conda install -c conda-forge pygenn-cuda   # CUDA-enabled
```

  • No manual compilation needed: all binaries are pre-built for the target platform
  • Cross-platform availability: Linux, Windows, and macOS
  • Pinned toolchains and CUDA versions ensure reproducibility and stability
  • Eliminates setup barriers, letting researchers focus on science, not build systems

🔬 Impact on Researchers

  • Decreased installation time from hours → minutes
  • Made GeNN accessible to a wider audience, including those without deep build/DevOps expertise
  • Strengthened the reliability of neuroscience workflows by providing reproducible environments

In short, this packaging effort turned GeNN from a complex source-based project into an accessible plug-and-play library for the neuroscience community!