A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations
Journal Reference: Front. Neuroinform. 16:837549 (2022)
Format: Journal Article
Language: English
Published: 16-12-2021
Summary: Modern computational neuroscience strives to develop complex network models to explain the dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and the increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales, such as system-level learning, require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
DOI: 10.48550/arxiv.2112.09018
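The abstract's central idea, decomposing benchmarking into configuration, execution, and analysis modules while recording data and metadata in a unified way, can be pictured with a short sketch. The Python code below is purely illustrative and is not beNNch's actual interface: every name (`BenchmarkConfig`, `execute`, `analyze`, the dummy workload) is a hypothetical stand-in, and a real framework would submit jobs to an HPC scheduler and read simulator-reported timers instead of wall-clock timing a placeholder.

```python
# Hypothetical sketch of the configure -> execute -> analyze decomposition
# described in the abstract. All names here are illustrative stand-ins,
# not beNNch's actual API.
import json
import platform
import time
from dataclasses import asdict, dataclass


@dataclass
class BenchmarkConfig:
    """One benchmark instance: model, software revision, and resources."""
    model: str              # e.g. a network-model label (hypothetical)
    simulator_version: str  # e.g. a simulator revision tag (hypothetical)
    nodes: int              # number of compute nodes
    threads_per_node: int
    model_time_ms: float    # simulated biological time


def simulate_placeholder(config: BenchmarkConfig) -> None:
    # Dummy workload standing in for a neuronal network simulation.
    sum(i * i for i in range(100_000 * config.nodes))


def execute(config: BenchmarkConfig) -> dict:
    """Run one benchmark and record time-to-solution plus metadata."""
    start = time.perf_counter()
    simulate_placeholder(config)
    wall_clock_s = time.perf_counter() - start
    return {
        "config": asdict(config),
        # Metadata recorded the same way for every run, so results from
        # different hardware/software revisions stay comparable.
        "metadata": {
            "hostname": platform.node(),
            "python": platform.python_version(),
        },
        "time_to_solution_s": wall_clock_s,
    }


def analyze(results: list[dict]) -> None:
    """Compare time-to-solution across scaling configurations."""
    for r in sorted(results, key=lambda r: r["config"]["nodes"]):
        c = r["config"]
        print(f'{c["model"]} on {c["nodes"]:>3} nodes: '
              f'{r["time_to_solution_s"]:.3f} s')


if __name__ == "__main__":
    # Strong-scaling sweep: fixed model, increasing node counts.
    configs = [BenchmarkConfig("example-model", "simulator-x.y", n, 8, 1000.0)
               for n in (1, 2, 4, 8)]
    results = [execute(c) for c in configs]
    analyze(results)
    # Persist results and metadata together to foster reproducibility.
    with open("benchmark_results.json", "w") as f:
        json.dump(results, f, indent=2)
```

Keeping each stage behind its own small interface is what lets individual modules (a different simulator version, a different analysis) be swapped without invalidating the recorded results, which is the comparability problem the abstract highlights.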