Applications


NEST (www.nest-simulator.org) is a widely used, publicly available simulation software for spiking neural network models, scaling up to the full size of Petascale computers.


NEST focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. The internal dynamics of model neurons in NEST are simple, described by a small number of linear ordinary differential equations, and neurons communicate via discrete events (spikes).
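To make this concrete, the following is an illustrative sketch (not NEST's actual implementation) of the kind of point-neuron dynamics described above: a leaky integrate-and-fire neuron governed by a single linear ODE, emitting a discrete spike event whenever the membrane potential crosses a threshold.

```python
# Leaky integrate-and-fire (LIF) neuron, forward-Euler integration:
#   tau_m * dV/dt = -(V - V_rest) + R * I
# A discrete spike event is emitted when V crosses the threshold.
# Parameter names and values are illustrative only.

def simulate_lif(i_ext, dt=0.1, tau_m=10.0, v_rest=-70.0, v_th=-55.0,
                 v_reset=-70.0, r_m=10.0):
    """Integrate one LIF neuron; returns spike times in ms."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_ext):
        dv = (-(v - v_rest) + r_m * i) / tau_m
        v += dt * dv
        if v >= v_th:                 # threshold crossing = discrete event
            spikes.append(step * dt)  # record spike time
            v = v_reset               # reset membrane potential
    return spikes

spikes = simulate_lif([2.0] * 1000)   # constant suprathreshold drive, 100 ms
print(len(spikes), "spikes")
```

A real NEST simulation would integrate many thousands of such neurons and route their spike events through the network's connectivity.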


There is growing scientific demand to use NEST also for networks of neurons with more complex, e.g. highly non-linear, internal dynamics. This requires much greater computing power. Another challenge is the very large volume of data generated by large-scale network simulations: today this data is usually written to file and analysed offline, an approach that is often inefficient.


Significant progress in theory and simulation today allows detailed, biophysically grounded predictions of local field potential (LFP) signals in brain models. In the DEEP-EST project we use a workflow in which spikes are streamed directly from NEST point-neuron simulations into compartmental neuron simulations with the Arbor simulation package. Arbor is currently under development in a collaboration between the SimLab Neuroscience at Jülich Supercomputing Centre (JSC) and CSCS, the Swiss National Supercomputing Centre, as part of their activities in the Human Brain Project.
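The coupling step can be sketched as follows. This is not the actual DEEP-EST implementation: it only illustrates the data handover, where spike events recorded from a point-neuron simulation, as (sender_gid, time_ms) pairs, are regrouped into per-cell spike trains that a compartmental simulator can replay, e.g. as explicit-schedule spike sources.

```python
# Regroup a flat stream of spike events into per-sender spike trains.
# The event format (gid, time) is a simplifying assumption for this sketch.

from collections import defaultdict

def group_spikes_by_sender(events):
    """Turn a flat event stream into {gid: sorted list of spike times}."""
    trains = defaultdict(list)
    for gid, t in events:
        trains[gid].append(t)
    for train in trains.values():
        train.sort()
    return dict(trains)

events = [(3, 12.5), (1, 4.0), (3, 7.1), (2, 9.9), (1, 15.2)]
trains = group_spikes_by_sender(events)
print(trains[3])  # [7.1, 12.5]
```

In the streamed workflow this regrouping happens on the fly rather than via intermediate files, which is exactly what avoids the inefficient write-then-analyse pattern described above.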


Understanding the dynamics and function of large neuronal networks requires an analysis of their activity, which is represented by spike trains, i.e., the sequences of pulses with which neurons communicate. Statistical analysis of large numbers of spike trains obtained from many neurons in a network is essential for interpreting simulation results. The Elephant package is a standard toolkit for such analyses, developed by Forschungszentrum Jülich (FZJ) in collaboration with research partners in the Human Brain Project and elsewhere. The interaction between NEST (producing the spike trains) and Elephant is the second workflow used in the DEEP-EST project.
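Two of the most basic statistics in such an analysis are the firing rate and the regularity of a spike train. The sketch below computes them in plain Python for illustration; Elephant provides these and many more sophisticated measures through its own API.

```python
# Basic spike-train statistics, sketched in plain Python
# (Elephant's actual API is not used here).

def mean_firing_rate(spike_times_ms, duration_ms):
    """Spikes per second over the recording window."""
    return len(spike_times_ms) / (duration_ms / 1000.0)

def isi_cv(spike_times_ms):
    """Coefficient of variation of inter-spike intervals (ISIs).
    CV near 1 indicates Poisson-like irregular firing; near 0, regular."""
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return var ** 0.5 / mean

regular = [10.0 * i for i in range(1, 11)]   # perfectly regular train
print(mean_firing_rate(regular, 100.0))      # 100.0 spikes/s
print(isi_cv(regular))                       # 0.0
```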


Based on a detailed analysis, the workflows were distributed as follows on the DEEP-EST system:


Workflow of NEST, Arbor and Elephant


The simulation of the multi-area model with NEST runs on the Cluster Module (CM) using a hybrid parallelisation scheme that combines MPI and OpenMP threads. The CM suits NEST well because NEST's irregular memory access patterns perform best on CPUs with large, low-latency RAM.


The detailed compartmental simulation with Arbor then runs on the Extreme Scale Booster (ESB), because Arbor requires considerably more compute power relative to memory. Arbor benefits significantly from vectorisation (AVX2, AVX512) and from GPGPUs; it uses hybrid parallelisation combining MPI with C++11 threads or Intel TBB.


The analysis of the spike trains recorded from selected populations of the multi-area model is carried out by Elephant, which runs on the Data Analytics Module (DAM).


During the DEEP-EST project, several optimisations to the NEST simulation code were developed. One of the most important was the redesign of the spike-delivery algorithm:


Decrease in runtime as a result of the redesign of the spike-delivery algorithm in NEST for different numbers of neurons.


The spike-delivery algorithm was redesigned to facilitate better cache utilization when accessing the target neurons, which results in a significant decrease in runtime. The measurements shown here were obtained on the DEEP-EST CM using 2 MPI processes per compute node and 12 threads per MPI process.
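The locality idea behind such a redesign can be illustrated as follows. This hypothetical sketch is not NEST's actual algorithm: it only contrasts delivering spike events in arrival order (touching target neurons scattered across memory) with first grouping events by target, so that each neuron's state is updated in one sequential pass.

```python
# Hypothetical illustration of cache-friendly spike delivery:
# grouping events per target before delivery turns scattered memory
# accesses into a sequential pass over the targets.

from collections import defaultdict

def deliver_scattered(events, weighted_input):
    """Naive delivery: jumps between targets in arrival order."""
    for target, weight in events:
        weighted_input[target] += weight

def deliver_grouped(events, weighted_input):
    """Group events per target, then update each target exactly once."""
    per_target = defaultdict(float)
    for target, weight in events:
        per_target[target] += weight
    for target in sorted(per_target):   # sequential pass over targets
        weighted_input[target] += per_target[target]

events = [(2, 0.5), (0, 1.0), (2, 0.5), (1, 2.0)]
a = [0.0, 0.0, 0.0]
b = [0.0, 0.0, 0.0]
deliver_scattered(events, a)
deliver_grouped(events, b)
print(a == b)  # True: same result, different memory access pattern
```

Both variants compute the same input to each neuron; the benefit of grouping shows up only at scale, where sequential access to target-neuron data keeps cache lines warm.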