Concepts overview

Arbor is a library for modelling neural networks with morphologically detailed cells; it then executes the resulting simulation on a variety of hardware. The execution can optionally be configured in fine detail, but comes with sensible defaults.

[Figure: a Simulation is built from a Recipe (describe the science), a Context (determine the hardware), and a Domain decomposition (map the recipe to the hardware).]
An overview of Arbor at the highest level. The layout reflects a division that is core to Arbor: the science is described entirely separately from the execution of the simulation. The recipe describes the network by declaring its constituent cells and their connections, as well as the stimuli and the data extraction. Execution is determined by mapping out the available hardware in a context: processes (via MPI) plus the threads and GPUs associated with each process. Finally, a domain decomposition maps the network onto those resources, taking into account the capabilities of each cell type.

To learn how to use Arbor, it is helpful to understand some of its concepts. Arbor’s design aims to enable scalability through abstraction. To achieve this, Arbor makes a distinction between the description of a model, and the execution of a model: a recipe describes a model, and a simulation is an executable instantiation of a model.

To simulate a model, three basic steps need to be taken:

  1. Describe the model by defining a recipe;

  2. Define the computational resources available to execute the model;

  3. Initiate and execute a simulation of the recipe on the chosen hardware resources.

For single cell models, the Python front-end further abstracts away some of these steps: users only need to describe the cell and the simulation, and the construction of the recipe and the computational resources is handled under the hood (see the sketch below). Generally speaking, though, these three steps are the building blocks of an Arbor application.
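As a rough illustration, the sketch below builds and runs a single-segment cell with Hodgkin-Huxley dynamics using the single cell front-end. It follows recent versions of Arbor's Python API; exact names, argument orders and units have changed between releases, so treat it as a sketch rather than a definitive reference.

```python
import arbor

# Single-segment "soma" morphology: a cylinder of radius 3 um.
tree = arbor.segment_tree()
tree.append(arbor.mnpos, arbor.mpoint(-3, 0, 0, 3), arbor.mpoint(3, 0, 0, 3), tag=1)

# Decorate the cell: Hodgkin-Huxley dynamics, a current clamp and a spike detector.
decor = arbor.decor()
decor.set_property(Vm=-40)
decor.paint('(all)', arbor.density('hh'))
decor.place('(location 0 0.5)', arbor.iclamp(10, 2, 0.8), 'iclamp')
decor.place('(location 0 0.5)', arbor.threshold_detector(-10), 'detector')

cell = arbor.cable_cell(tree, decor)

# single_cell_model hides the recipe, context and domain decomposition.
m = arbor.single_cell_model(cell)
m.probe('voltage', '(location 0 0.5)', frequency=10)  # sample the membrane voltage
m.run(tfinal=30)

print(m.spikes)  # spike times
print(m.traces)  # sampled voltage traces
```

Here arbor.single_cell_model wraps the cell in an implicit recipe, allocates default resources, and exposes the recorded spikes and voltage traces directly.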

[Figure: the contents of a Recipe.
Size: the number of cells, regardless of type.
Cells: a description of each cell.
Network: the list of incoming connections per cell.
In- and outputs: the lists of stimuli and probes.]
The recipe is the core abstraction for building networks. It declares the number of cells present in the network, and is then interrogated for the properties of each cell. By asking the recipe about each cell in isolation, Arbor can build the network in parallel while keeping memory consumption low.

Recipes represent a set of neuron constructions and connections, with mechanisms specifying ion channel and synapse dynamics, in a cell-oriented manner. This has the advantage that cell data can be initialized in parallel.

A cell is the smallest unit of computation and the smallest unit of work distributed across processes. Arbor has built-in support for several cell types; this set can be extended by adding new cell types via the C++ cell group interface.
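To make the recipe interface concrete, the following sketch outlines a hypothetical ring network recipe. It assumes a user-supplied make_cell function that builds a cable cell exposing a 'detector' and a 'synapse' label, and it follows the Python recipe interface of recent Arbor releases; some signatures, such as the probe constructors, differ between versions.

```python
import arbor

class ring_recipe(arbor.recipe):
    """Hypothetical ring of cable cells: cell gid receives input from cell gid - 1."""

    def __init__(self, ncells, make_cell):
        arbor.recipe.__init__(self)  # the base class constructor must be called
        self.ncells = ncells
        self.make_cell = make_cell   # user-supplied function returning a cable cell

    # Size: the number of cells, regardless of type.
    def num_cells(self):
        return self.ncells

    # Cells: queried per gid and in isolation, so construction can run in parallel.
    def cell_kind(self, gid):
        return arbor.cell_kind.cable

    def cell_description(self, gid):
        return self.make_cell(gid)

    # Network: incoming connections per cell; endpoints are (gid, label) pairs, and
    # the 'detector' and 'synapse' labels are assumed to be placed by make_cell.
    def connections_on(self, gid):
        src = (gid - 1) % self.ncells
        return [arbor.connection((src, 'detector'), 'synapse', 0.01, 5)]  # weight, delay (ms)

    # In- and outputs: probes per cell (newer Arbor versions also expect a tag here).
    def probes(self, gid):
        return [arbor.cable_probe_membrane_voltage('(location 0 0.5)')]

    # Default physical properties for the cable cells.
    def global_properties(self, kind):
        return arbor.neuron_cable_properties()
```

Because each method takes a single gid, Arbor can query cells independently and construct the network in parallel.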

A simulation manages the instantiation of the model, the scheduling of spike exchange, and the integration of each cell group. A cell group is a collection of cells of the same type that are computed together on a GPU or CPU. The partitioning into cell groups is provided by the domain decomposition, which describes the distribution of the model over the locally available computational resources.
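A minimal sketch of this execution pipeline follows, assuming the hypothetical ring_recipe and make_cell from above; note that the argument names and order for arbor.simulation have varied between Arbor versions.

```python
import arbor

# Context: the locally available hardware, e.g. 4 threads and no GPU.
# Under MPI, pass a communicator instead: arbor.context(mpi=arbor.mpi_comm()).
ctx = arbor.context(threads=4, gpu_id=None)

# Domain decomposition: partition the cells of the recipe into cell groups
# and assign them to the resources described by the context.
rec = ring_recipe(ncells=10, make_cell=make_cell)
dd = arbor.partition_load_balance(rec, ctx)

# Simulation: instantiate the model on the chosen resources and run for 100 ms.
sim = arbor.simulation(rec, context=ctx, domains=dd)
sim.run(100)
```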

To collect and inspect the detected spikes, a spike recorder can be used; to analyse Arbor's performance, a meter manager is available, as in the sketch below.
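Continuing the sketch above (reusing the assumed names rec, ctx and dd), spike recording and metering might look as follows; the metering calls follow the Python profiling API of recent Arbor releases.

```python
import arbor

# Meters: time the phases of model building and execution between checkpoints.
meters = arbor.meter_manager()
meters.start(ctx)

sim = arbor.simulation(rec, context=ctx, domains=dd)
meters.checkpoint('simulation-init', ctx)

# Spike recorder: keep every spike generated anywhere in the network.
sim.record(arbor.spike_recording.all)
sim.run(100)
meters.checkpoint('simulation-run', ctx)

print(sim.spikes())                     # each recorded spike carries its source and time
print(arbor.meter_report(meters, ctx))  # summary of the time spent per checkpoint
```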

The Probing and Sampling documentation shows how to extract data from simulations; a brief sketch follows.
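This sketch samples the probe declared by the hypothetical recipe above; samplers are attached to a freshly constructed simulation before it is run. The probe identifier convention ((gid, index) versus (gid, tag)) and the schedule units differ between Arbor versions.

```python
import arbor

# Attach a sampler to the probe declared on cell 0, sampling every 0.1 ms.
handle = sim.sample((0, 0), arbor.regular_schedule(0.1))

sim.run(100)

# Each entry pairs the sampled data (time/value columns) with metadata that
# records where on the cell the measurement was taken.
for data, meta in sim.samples(handle):
    print(meta, data)
```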