Norse

Deep learning with spiking neural networks (SNNs) in PyTorch.
A library to do deep learning with spiking neural networks.


This library aims to exploit the advantages of bio-inspired neural components, which are sparse and event-driven - a fundamental difference from artificial neural networks. Norse expands PyTorch with primitives for bio-inspired neural components, bringing you two advantages: a modern and proven infrastructure based on PyTorch and deep learning-compatible spiking neural network components.

Documentation: https://norse.ai/docs/

1. Getting started

To try Norse, the best option is to run one of the Jupyter notebooks on Google Colab.

Alternatively, you can install Norse and run one of the included tasks such as MNIST:

python -m norse.task.mnist

2. Using Norse

Norse is generally meant as a library for customized use in specific deep learning tasks. This has been detailed in our documentation: working with Norse.

Here we briefly explain how to install Norse and start to apply it in your own work.

2.1. Installation

Note that we assume you are using Python version 3.7+, are in a terminal-friendly environment, and have installed the necessary requirements. More detailed installation instructions are available in the documentation.

Method        Instructions                                          Prerequisites
From PyPI     pip install norse                                     Pip
From Conda    conda install -c norse norse                          Anaconda or Miniconda
From source   pip install -qU git+https://github.com/norse/norse    Pip, PyTorch
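
Once installed, a quick smoke test is to import the library and push a tensor through one of its modules. The snippet below is a minimal sketch reusing the LSNNLayer and LSNNCell modules from the example in section 2.3; it assumes the layer initialises its own state when none is passed.

import torch
from norse.torch.module import LSNNLayer, LSNNCell

# Build a small layer (2 input neurons, 10 output neurons) and run a
# single timestep of zeros through it to confirm the installation works.
layer = LSNNLayer(LSNNCell, 2, 10)
output, state = layer(torch.zeros(1, 8, 2))  # 1 timestep, batch of 8, 2 input neurons
print(output.shape)  # expected: torch.Size([1, 8, 10])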

2.2. Running examples

Norse is bundled with a number of example experiments, serving as short, self-contained, correct examples (SSCCE). They can be run by invoking the norse module from the base directory. More information and tasks are available in our documentation and in your console by typing: python -m norse.task.<task> --help, where <task> is one of the task names.

  • To train an MNIST classification network, invoke
    python -m norse.task.mnist
  • To train a CIFAR classification network, invoke
    python -m norse.task.cifar10
  • To train the cartpole balancing task with policy gradients, invoke
    python -m norse.task.cartpole

2.3. Example on using the library: Long short-term spiking neural networks

The long short-term spiking neural networks (LSNNs) from the paper by G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass (2018) are one interesting way to apply Norse:

import torch
from norse.torch.module import LSNNLayer, LSNNCell

# LSNNCell with 2 input neurons and 10 output neurons
layer = LSNNLayer(LSNNCell, 2, 10)
# Generate data: 20 timesteps with 8 datapoints per batch for 2 neurons
data = torch.zeros(20, 8, 2)
# Tuple of (output spikes of shape (20, 8, 10), layer state);
# the state is initialised automatically when none is passed
output, new_state = layer(data)
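
To show how such a layer fits into an ordinary PyTorch training loop, here is a minimal sketch reusing the layer and data defined above; the readout layer, target labels, and hyperparameters are illustrative placeholders, not part of Norse.

import torch

# Continuing from the example above: `layer` is LSNNLayer(LSNNCell, 2, 10)
# and `data` is the (20, 8, 2) input tensor.
readout = torch.nn.Linear(10, 2)  # hypothetical readout for a 2-class problem
optimizer = torch.optim.Adam(
    list(layer.parameters()) + list(readout.parameters()), lr=1e-3
)
target = torch.zeros(8, dtype=torch.long)  # dummy labels for the batch of 8

for epoch in range(10):
    optimizer.zero_grad()
    spikes, _ = layer(data)               # (20, 8, 10) output spikes
    prediction = readout(spikes.mean(0))  # average over time, then read out
    loss = torch.nn.functional.cross_entropy(prediction, target)
    loss.backward()                       # gradients flow through the spikes via surrogates
    optimizer.step()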

3. Why Norse?

Norse was created for two reasons: 1) to apply findings from decades of research in practical settings, and 2) to accelerate our own research within bio-inspired learning.

A number of projects attempt to leverage the strengths of bio-inspired neural networks; however, only a few of them integrate with modern machine-learning libraries such as PyTorch or TensorFlow, and many are no longer actively developed.

We are passionate about Norse and believe it has significant potential outside our own research. First, we strive to follow best practices and promise to maintain this library for the simple reason that we depend on it ourselves. Second, we have implemented a number of neuron models, synapse dynamics, encoding and decoding algorithms, dataset integrations, tasks, and examples. While we are far from the comprehensive coverage of simulators used in computational neuroscience such as Brian, NEST, or NEURON, we expect to close this gap as we continue to develop the library.

Finally, we are working to keep Norse as performant as possible. Preliminary benchmarks suggest that Norse achieves excellent performance on small networks of up to ~10,000 neurons. We aim to create a library that scales from a single laptop to several nodes on an HPC cluster. We expect to be significantly helped in that endeavour by the preexisting investment in scalable training and inference with PyTorch.

Read more about Norse in our documentation.

4. Similar work

The list of projects below serves to illustrate the state of the art, while explaining our own incentives to create and use Norse.

  • BindsNET also builds on PyTorch and is explicitly targeted at machine learning tasks. It implements a Network abstraction with the typical 'node' and 'connection' notions common in spiking neural network simulators like NEST.
  • cuSNN is a C++ GPU-accelerated simulator for large-scale networks. The library focuses on CUDA and includes spike-timing-dependent plasticity (STDP) learning rules.
  • decolle implements an online learning algorithm described in the paper "Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)" by J. Kaiser, H. Mostafa, and E. Neftci.
  • Long short-term memory Spiking Neural Networks (LSNN) is a tool from the University of Graz for modelling LSNN cells in TensorFlow. The library focuses on a single neuron and gradient model.
  • Nengo is a neuron simulator, and Nengo-DL is a deep learning network simulator that optimises spike-based neural networks based on an approximation method suggested by Hunsberger and Eliasmith (2016). This approach maps to, but does not build on, the deep learning framework TensorFlow, which is fundamentally different from incorporating the spiking constructs into the framework itself. In turn, this requires manual translations into each individual backend, which limits portability.
  • Neuron Simulation Toolkit (NEST) constructs and evaluates highly detailed simulations of spiking neural networks. This is useful in a medical/biological sense but maps poorly to large datasets and deep learning.
  • PyNN is a Python interface that allows you to define and simulate spiking neural network models on different backends (both software simulators and neuromorphic hardware). It does not currently provide mechanisms for optimisation or arbitrary synaptic plasticity.
  • PySNN is a PyTorch extension similar to Norse. Its approach to model building differs slightly from Norse's in that its neurons are stateful.
  • Rockpool is a Python package developed by SynSense for training, simulating and deploying spiking neural networks. It offers both JAX and PyTorch primitives.
  • SlayerPyTorch is a Spike LAYer Error Reassignment (SLAYER) library that focuses on solutions to the temporal credit-assignment problem of spiking neurons and a probabilistic approach to backpropagating errors. It includes support for the Loihi chip.
  • SNN toolbox automates the conversion of pre-trained analog neural networks to spiking neural networks. The tool works only with already-trained networks and omits the (possibly platform-specific) training.
  • SpyTorch presents a set of tutorials for training SNNs with the surrogate gradient approach SuperSpike by F. Zenke and S. Ganguli (2017). Norse implements SuperSpike but allows for other surrogate gradients and training approaches; a minimal sketch of the surrogate-gradient idea follows this list.
  • s2net is based on the implementation presented in SpyTorch, but also implements convolutional layers. It additionally contains a demonstration of how to use those primitives to train a model on the Google Speech Commands dataset.
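
To make the surrogate-gradient idea mentioned in the SpyTorch entry concrete, here is a minimal, self-contained sketch of a SuperSpike-style spike function: a Heaviside step in the forward pass and a fast-sigmoid derivative in the backward pass. It illustrates the technique only and is not Norse's internal implementation; the scale constant is an illustrative choice.

import torch

class SuperSpike(torch.autograd.Function):
    # Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass.
    scale = 100.0  # steepness of the surrogate derivative (illustrative value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).to(v.dtype)  # spike when the membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of the fast sigmoid: 1 / (1 + scale * |v|)^2
        return grad_output / (1.0 + SuperSpike.scale * v.abs()) ** 2

spikes = SuperSpike.apply(torch.randn(8, 10, requires_grad=True))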

5. Contributing

Contributions are warmly encouraged and always welcome. However, we also have high expectations for the code base, so if you wish to contribute, please refer to our contribution guidelines.

6. Credits

Norse is created by Christian Pehle and Jens Egholm Pedersen.

More information about Norse can be found in our documentation. The research has received funding from the EC Horizon 2020 Framework Programme under Grant Agreements 785907 and 945539 (HBP).

7. License

LGPLv3. See LICENSE for license details.
