Hardware context#

An Arbor simulation requires a recipe, a (hardware) context, and a domain decomposition. The recipe contains the neuroscientific model, the hardware context describes the computational resources the simulation will execute on, and the domain decomposition describes how Arbor will use that hardware. Since the context and the domain decomposition may seem closely related at first, it is instructive to see how recipes are used by Arbor (see the sketch below and the figure that follows):
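A minimal sketch of how the three objects are assembled in the Python API; my_recipe is a hypothetical recipe class (a toy one is defined after the figure), the thread and GPU numbers are placeholders, and gpu_id=0 assumes a GPU-enabled build:

    import arbor

    rec = my_recipe()                             # the model (hypothetical, see below)
    ctx = arbor.context(threads=12, gpu_id=0)     # the hardware to execute on
    dec = arbor.partition_load_balance(rec, ctx)  # how cells map onto that hardware
    sim = arbor.simulation(rec, ctx, dec)         # ties the three together
    sim.run(100)                                  # simulate 100 ms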

(Figure: the simulation queries the recipe, e.g. "kind of gid=1?" → lif_cell, "kind of gid=37?" → cable_cell (morphology, ion channels, connection sites, labels), then requests full cell descriptions as needed. Guided by the domain decomposition (group-0: cable cells such as gid 37 on 1 GPU; group-1: LIF cells such as gid 1 on 12 threads), it builds one cell group per kind and advances each group by dt on the hardware described by the context.)
An illustration of the cell-specific portion of the recipe, and how it is used during the lifetime of the simulation: the simulation object will, depending on its configuration, query the recipe for the neuroscientific components it describes. This demonstration also shows why the recipe separates cell descriptions from cell kinds. The latter is, as you might expect, shorthand, and is used to allocate the cell to a particular cell group. A cell group implementation is a handler for a certain kind of cell, and Arbor comes with one for each of its built-in cell kinds. However, users can develop their own specialized cell group implementations. More on that in the internal developer documentation.
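The separation is visible in the recipe interface itself. A minimal sketch using the Python API, restricted to LIF cells; lif_population and its source/target labels are made-up names:

    import arbor

    class lif_population(arbor.recipe):
        # A toy recipe describing n unconnected LIF cells.
        def __init__(self, n=4):
            arbor.recipe.__init__(self)  # must run before other recipe calls
            self.n = n

        def num_cells(self):
            return self.n

        def cell_kind(self, gid):
            # Cheap query: lets the simulator sort gids into cell groups
            # without constructing any cell.
            return arbor.cell_kind.lif

        def cell_description(self, gid):
            # Called only when a cell group implementation needs the full cell.
            return arbor.lif_cell(f"src-{gid}", f"tgt-{gid}")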
Local resources are the locally available computational resources, specifically the number of hardware threads and the number of GPUs.
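A context built from such resources can be inspected afterwards. A sketch; os.cpu_count() is a crude stand-in for hardware discovery and may over-count on cgroup-limited nodes:

    import os
    import arbor

    threads = os.cpu_count() or 1         # locally available hardware threads
    ctx = arbor.context(threads=threads)  # CPU-only context
    print(ctx.threads, ctx.has_gpu, ctx.has_mpi)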

An allocation enumerates the computational resources to be used for a simulation, typically a subset of the resources available on a physical hardware node. It also contains flags to enable thread and process affinity. When asked to set affinity, Arbor will try to maximize the use of the available resources, i.e. it will spread out processes and threads such that each gets a maximal share of compute units and cache. Existing affinity settings will be honoured, so setting process affinity while an external mechanism (e.g. SLURM or OpenMPI) does the same is ill-advised. Threads cannot be managed externally, so requesting thread binding is generally safe and may yield significant performance improvements for CPU-only simulations and/or the model build phase. Affinity requires hwloc to be found at build time.
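A sketch of an allocation that requests thread binding but leaves process binding to the job launcher; the bind_procs/bind_threads keyword arguments are assumed from recent Arbor versions and require an hwloc-enabled build:

    import arbor

    # 8 threads, no GPU; pin Arbor's own threads, but let SLURM/OpenMPI
    # handle process placement.
    alloc = arbor.proc_allocation(threads=8, gpu_id=None,
                                  bind_procs=False, bind_threads=True)
    ctx = arbor.context(alloc)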

New users can find using contexts a little verbose. The design is deliberate: it allows fine-grained control over which computational resources an Arbor simulation should use. As a result, Arbor is much easier to integrate into workflows that run multiple applications or libraries on the same node, because Arbor has a direct API for using on-node resources (threads and GPU) and distributed resources (MPI) that have been partitioned between applications/libraries.
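For example, to share a node with another library in the same job, confine Arbor to an explicit slice of the resources (the numbers are placeholders):

    import arbor

    # Arbor gets 4 threads and no GPU; the remaining cores and the device
    # stay free for, e.g., an in-situ analysis library in the same process.
    ctx = arbor.context(threads=4, gpu_id=None)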

Execution context#

An execution context contains the local thread pool and, if available, the GPU state and MPI communicator. Users of the library configure contexts, which are passed to Arbor methods and types.
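A sketch of a distributed context, assuming an MPI-enabled Arbor build and mpi4py; a sub-communicator created with MPI.COMM_WORLD.Split(...) works the same way:

    import arbor
    from mpi4py import MPI

    comm = arbor.mpi_comm(MPI.COMM_WORLD)  # wrap an existing mpi4py communicator
    ctx = arbor.context(threads=4, mpi=comm)
    print(ctx.rank, ctx.ranks)             # this rank and the communicator size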

API#