Applications


NEST (www.nest-simulator.org) is a widely used, publicly available simulation software for spiking neural network models, scaling up to the full size of petascale computers.

NEST focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. The internal dynamics of model neurons in NEST are simple, described by a small number of linear ordinary differential equations, and neurons communicate via discrete events (spikes).
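As a concrete illustration of such dynamics, the following minimal sketch simulates a leaky integrate-and-fire point neuron: a single linear ODE propagated exactly via its exponential solution, emitting discrete spike events at threshold crossings. The parameter values are purely illustrative and not taken from any particular NEST model.

```python
import math

def simulate_lif(input_current=1.5, dt=0.1, t_sim=100.0,
                 tau_m=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Integrate dV/dt = (-(V - v_rest) + I) / tau_m with exact
    exponential propagation per time step; return spike times (ms)."""
    decay = math.exp(-dt / tau_m)   # exact one-step solution of the linear ODE
    v = v_rest
    spikes = []
    for step in range(int(t_sim / dt)):
        # propagate the membrane potential one time step
        v = v_rest + (v - v_rest) * decay + input_current * (1.0 - decay)
        if v >= v_th:               # threshold crossing -> discrete spike event
            spikes.append(step * dt)
            v = v_reset             # reset after the spike
    return spikes

spikes = simulate_lif()
```

Because the subthreshold dynamics are linear, the update per time step is a closed-form expression rather than a numerical approximation; this is the property that keeps point-neuron simulations cheap.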


There is growing scientific demand to also use NEST to model networks of neurons with more complex, e.g. highly non-linear, internal dynamics. This will require much greater computing power. Another challenging aspect is the very large amount of data generated by large-scale network simulations. Today, this data is usually written to file and analysed offline later, but this approach is often inefficient.


Significant progress in theory and simulation allows us today to make detailed, biophysically correct predictions of local field potential (LFP) signals in brain models. In DEEP-EST we will use a workflow in which spikes are streamed directly from NEST point-neuron simulations into compartmental neuron simulations with the Arbor simulation package. Arbor is currently under development in a collaboration between the SimLab Neuroscience at Jülich Supercomputing Centre (JSC) and CSCS, the Swiss National Supercomputing Centre, as part of their activities in the Human Brain Project.
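The coupling idea can be sketched in a few lines of plain Python: spike events from the point-neuron simulation are streamed in time slices and regrouped into per-source spike-time lists, the form a compartmental simulator can replay as input spike generators. All names here (`stream_slice`, `to_spike_sources`, the event tuples) are hypothetical illustrations, not the real NEST or Arbor API.

```python
from collections import defaultdict

def stream_slice(point_sim_events, t_start, t_stop):
    """Select the (gid, time) spike events belonging to one time slice,
    standing in for a direct in-memory stream instead of file I/O."""
    return [(gid, t) for gid, t in point_sim_events if t_start <= t < t_stop]

def to_spike_sources(events):
    """Group streamed events by source neuron, yielding per-source
    spike-time lists that the compartmental simulation can replay."""
    sources = defaultdict(list)
    for gid, t in events:
        sources[gid].append(t)
    return dict(sources)

# toy spike data standing in for NEST output: (neuron gid, spike time in ms)
events = [(1, 0.4), (2, 0.9), (1, 1.6), (3, 2.1), (2, 2.8)]
slice_events = stream_slice(events, 0.0, 2.0)
sources = to_spike_sources(slice_events)   # {1: [0.4, 1.6], 2: [0.9]}
```

Streaming per time slice is what avoids the write-to-file-then-read-back detour criticised above: each slice is handed over while the next one is still being simulated.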


Understanding the dynamics and function of large neuronal networks requires an analysis of their activity, which is represented by spike trains, i.e., sequences of pulses with which neurons communicate. Statistical analysis of large numbers of spike trains obtained from many neurons in a network is essential for interpreting simulation results. The Elephant package is a standard toolkit for such analysis, developed by Forschungszentrum Jülich (FZJ) in collaboration with research partners in the Human Brain Project and elsewhere. The interaction between NEST (producing the spike trains) and Elephant forms the second workflow used in DEEP-EST.
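To make the kind of analysis concrete, here is a plain-Python sketch of two standard spike-train statistics of the sort Elephant provides: the mean firing rate, and the coefficient of variation (CV) of inter-spike intervals, a common measure of firing irregularity. This shows only the underlying maths; it is not Elephant's API.

```python
import statistics

def mean_firing_rate(spike_times, t_start, t_stop):
    """Number of spikes per unit time over the observation window."""
    return len(spike_times) / (t_stop - t_start)

def isi_cv(spike_times):
    """CV of inter-spike intervals: around 1.0 for Poisson-like firing,
    0.0 for perfectly regular firing."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

train = [10.0, 20.0, 30.0, 40.0, 50.0]      # perfectly regular train, in ms
rate = mean_firing_rate(train, 0.0, 50.0)   # 0.1 spikes/ms, i.e. 100 Hz
cv = isi_cv(train)                          # 0.0: perfectly regular
```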


After a detailed analysis it was decided to distribute the workflows on the DEEP-EST system in the following way:

[Figure (Workflow Tk 1.2): Workflow of NEST, Arbor and Elephant]

The simulation of the multi-area model with NEST is run on the Cluster Module (CM) using a hybrid parallelisation scheme combining MPI and OpenMP threads. The CM is well suited to NEST, because NEST's irregular memory access patterns perform best on CPUs with large, low-latency RAM, and because NEST does not benefit from vectorisation.
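In this hybrid scheme, neurons are distributed round-robin over virtual processes (one VP per MPI rank and thread). The sketch below illustrates the idea with simple modular arithmetic; it follows the general shape of NEST's distribution scheme, but the exact mapping in the real code base may differ in detail.

```python
def vp_of_neuron(gid, n_ranks, n_threads):
    """Virtual process owning a neuron, assigned round-robin by gid."""
    return gid % (n_ranks * n_threads)

def rank_and_thread(vp, n_ranks):
    """Decompose a virtual-process id into (MPI rank, thread id)."""
    return vp % n_ranks, vp // n_ranks

# e.g. 2 MPI ranks with 12 threads each -> 24 virtual processes
vp = vp_of_neuron(25, n_ranks=2, n_threads=12)   # 25 % 24 == 1
rank, thread = rank_and_thread(vp, n_ranks=2)    # (1, 0)
```

Round-robin assignment spreads neighbouring neurons across all ranks and threads, which balances load when activity is inhomogeneous across the network.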


The detailed compartmental simulation with Arbor is then run on the Extreme Scale Booster (ESB), because Arbor requires considerably more compute power relative to memory. Arbor benefits significantly from vectorisation using AVX2, AVX512, and GPGPUs; it uses hybrid parallelisation combining MPI with C++11 threads or Intel TBB.


Analysis of spike trains recorded from selected populations of the multi-area model is carried out by Elephant, which runs on the Data Analytics Module (DAM).


Several optimisations of the NEST simulation have been carried out during the DEEP-EST project so far. One of the most important was the redesign of the spike-delivery algorithm within NEST:

[Figure: Decrease in runtime as a result of the redesign of the spike-delivery algorithm in NEST, for different numbers of neurons per MPI process (NPP)]

The spike-delivery algorithm was redesigned to improve cache utilisation when accessing the target neurons, which results in a significant decrease in runtime. The measurements shown here were obtained on the DEEP-EST CM using 2 MPI processes per compute node and 12 threads per MPI process.
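The cache argument can be illustrated with a toy sketch (not NEST's actual implementation): if spike events are grouped by target before delivery, all updates to one neuron's state happen consecutively, turning scattered memory accesses into runs of accesses to already-warm cache lines.

```python
def deliver_sorted(spike_events, targets):
    """spike_events: list of (target_id, weight) pairs;
    targets: dict mapping target_id -> accumulated input.
    Sorting by target id groups all accesses to one neuron's memory."""
    for target_id, weight in sorted(spike_events, key=lambda e: e[0]):
        targets[target_id] += weight   # consecutive updates hit warm cache
    return targets

targets = {0: 0.0, 1: 0.0, 2: 0.0}
events = [(2, 0.5), (0, 1.0), (2, 0.5), (1, 0.25)]
deliver_sorted(events, targets)        # {0: 1.0, 1: 0.25, 2: 1.0}
```

The numerical result is of course identical with or without sorting; the gain is purely in memory-access locality, which is exactly the effect the runtime measurements above quantify.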