HBP Validation Framework¶
Quick Overview¶
We discuss here some of the terminology pertaining to the validation framework.
- Model
A Model or Model description consists of all the information pertaining to a model excluding details of the source code (i.e. implementation). The model would specify metadata describing the model type and its domain of utility. The source code is specified via the model instance (see below).
- Model Instance
This defines a particular version of a model by specifying the location of the source code for the model. A model may have multiple versions (model instances) which could vary, for example, in values of their biophysical parameters. Improvements and updates to a model would be considered as different versions (instances) of that particular model.
- Test
A Test or Test definition consists of all the information pertaining to a test excluding details of the source code (i.e. implementation). The test would specify metadata defining its domain of utility along with other info such as the type of data it handles and the type of score it generates. The source code is specified via the test instance (see below).
- Test Instance
This defines a particular version of a test by specifying the location of the source code for executing the test. A test may have multiple versions (test instances) which could vary, for example, in the way the simulation is set up or how the score is evaluated. Improvements in the test code would be considered as different versions (instances) of that particular test.
- sciunit
A Python package that handles testing of models. For more, see: https://github.com/scidash/sciunit
- Result
The outcome of testing a specific model instance with a specific test instance. The result consists of a score, and possibly additional output files generated by the test.
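These concepts map naturally onto sciunit classes. The following is a minimal, self-contained sketch of the pattern; the capability, model and test defined here are illustrative toys, not part of the validation framework itself:

    import sciunit
    from sciunit.scores import BooleanScore

    class ProducesSpikeCount(sciunit.Capability):
        """Capability a model must implement to be testable."""
        def get_spike_count(self):
            raise NotImplementedError()

    class ToyModel(sciunit.Model, ProducesSpikeCount):
        """A 'model instance': one concrete, versioned implementation."""
        def __init__(self, n_spikes, name=None):
            self.n_spikes = n_spikes
            super().__init__(name=name, n_spikes=n_spikes)
        def get_spike_count(self):
            return self.n_spikes

    class SpikeCountTest(sciunit.Test):
        """A 'test instance': one concrete, versioned test implementation."""
        required_capabilities = (ProducesSpikeCount,)
        score_type = BooleanScore
        def generate_prediction(self, model):
            return model.get_spike_count()
        def compute_score(self, observation, prediction):
            return BooleanScore(observation["count"] == prediction)

    test = SpikeCountTest(observation={"count": 10})
    score = test.judge(ToyModel(10, name="toy-model-v1"))  # the 'Result'
    print(score)  # -> Pass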
General Info¶
As the descriptions above make clear, running a particular test for a model under the validation framework is more accurately described as running a specific test instance against a specific model instance.
When running a test, the test metadata and test instance information are typically retrieved from the validation framework. This involves authenticating with your HBP login credentials.
The model being tested can be registered on the Model Catalog beforehand, or after completion of the test by requesting automatic registration just before the result is registered on the validation framework.
Registration of the model and its test results also requires authenticating your HBP login credentials.
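Put together, a typical run looks roughly like the sketch below. It assumes the hbp_validation_framework Python client described later in this document; the exact method names and signatures should be verified against the client documentation:

    from hbp_validation_framework import ModelCatalog, TestLibrary

    # Both clients authenticate with your HBP login credentials
    test_library = TestLibrary(username="myHBPusername")
    model_catalog = ModelCatalog(username="myHBPusername")

    # Retrieve the test metadata and test instance (a sciunit test object)
    test = test_library.get_validation_test(test_id="...", instance_id="...")

    # `model` is a sciunit model wrapping your registered model instance
    score = test.judge(model)

    # Register the outcome (the 'Result') on the validation framework
    test_library.register_result(score)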
It should be noted that an HBP account can be created even by non-HBP users. For more information, please visit: https://services.humanbrainproject.eu/oidc/account/request
Collabs on the HBP Collaboratory can be either public or private. Public Collabs can be accessed by all registered users, whereas private Collabs require the user to be granted permission for access.
The Model Catalog and the Validation Framework apps can be added to any Collab. A Collab may have multiple instances of these apps. The apps must be configured by setting the provided filters appropriately before they can be used. These filters restrict the type of data displayed in that particular instance of the app.
All tests are public, i.e. every test registered on the Validation Framework can be seen by all users.
Models are created inside specific Collab instances of the Model Catalog app. The particular app inside which a model was created is termed its host app. Similarly, the Collab containing the host app is termed the host Collab.
Models can be set as public or private. If public, the model and its associated results are available to all users. If private, it can only be seen by users who have access to the host Collab.
No information can be deleted from the Model Catalog and Validation Framework apps. In the future, an option to hide data will be implemented. This will offer users a functionality similar to deleting, while retaining the data in the database back-end.
Models, model instances, tests and test instances can be edited as long as there are no results associated with them. Results can never be edited!
Python Client¶
- The Validation Framework has a Python Client. This can be downloaded at:
- The documentation is available at:
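Assuming the client is the package published on PyPI as hbp_validation_framework (an assumption; follow the links above for the authoritative source), installation is typically:

    pip install hbp_validation_framework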
REST APIs¶
The Validation Framework offers REST APIs for accessing its features and functionality.
- The documentation is available at:
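A hypothetical sketch of querying the API with Python's requests package is shown below; the base URL and endpoint path are placeholders, so consult the API documentation for the real ones:

    import requests

    BASE_URL = "https://validation.example.org/api"  # placeholder, see docs
    headers = {"Authorization": "Bearer <your-HBP-access-token>"}

    # e.g. list the registered tests (endpoint name is an assumption)
    response = requests.get(BASE_URL + "/tests/", headers=headers)
    response.raise_for_status()
    print(response.json())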
Validation UseCases¶
We currently have the following Use Cases on the Collaboratory:
- 1) Validation Framework Demo (Walkthrough)
This is designed to demonstrate the workings of the HBP Validation Framework by validating a cerebellar Purkinje cell model against experimental data.
- 2) Hippocampus Single Cell Model Validation
This use case takes as input a BluePyOpt optimized output file. The validation tests run on the hoc template specified as “best_cell” in the meta.json file. There are a total of six different tests in this use case:
Somatic Features Test - UCL data set: (for pyramidal cells and interneurons)
This test evaluates the model against various eFEL features under somatic current injections of varying amplitudes. The experimental dataset used for validation was obtained from UCL. This test can be used for both pyramidal cells and interneurons.
Somatic Features Test - JMakara data set: (for pyramidal cells)
This test evaluates the model against various eFEL features under somatic current injections of varying amplitudes. The experimental dataset used for validation was obtained from Judit Makara. This test can be used only for pyramidal cells.
Depolarization Block Test: (for pyramidal cells)
The Depolarization Block Test aims to determine whether the model enters depolarization block in response to a prolonged, high-intensity somatic current stimulus. It compares the current intensity at which the model fires the maximum number of action potentials, the current intensity just before the model enters depolarization block (the two should be equal), and the equilibrium potential during depolarization block against the experimental data of Bianchi et al. 2012 (http://dx.doi.org/10.1007/s10827-012-0383-y).
Back-Propagating AP Test: (for pyramidal cells)
The Back-Propagating AP Test evaluates the mode and efficacy of action potential back-propagation along the apical trunk at locations at different distances from the soma. The amplitudes of the first and last APs of an approximately 15 Hz spike train are compared to experimental data from Golding et al. 2001 (https://doi.org/10.1152/jn.2001.86.6.2998).
PSP Attenuation Test: (for pyramidal cells)
The PSP Attenuation Test evaluates how much the postsynaptic potential (using an EPSC stimulus) attenuates from the dendrite (at different distances) to the soma. The dendrite-to-soma attenuation is compared to data from Magee & Cook 2000 (http://dx.doi.org/10.1038/78800).
Oblique Integration Test: (for pyramidal cells)
This test evaluates signal integration in oblique dendrites for increasing numbers of synchronous and asynchronous synaptic inputs. The experimental data are obtained from Losonczy and Magee 2006 (https://doi.org/10.1016/j.neuron.2006.03.016).
The results are registered on the HBP Validation Framework app. If instances of the Model Catalog and Validation Framework apps are not found in the current Collab, they are created. Additionally, a test report is generated which can be viewed within the Jupyter notebook.
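As a rough illustration, the hoc template to be validated might be located as follows (the exact archive layout is an assumption; only the "best_cell" key is taken from the description above):

    import json

    # BluePyOpt optimized output: meta.json names the hoc template to test
    with open("meta.json") as f:
        meta = json.load(f)

    best_cell = meta["best_cell"]
    print("Running the validation tests on hoc template:", best_cell)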
- 3) Basal Ganglia Single Cell Validation
This use case takes as input a BluePyOpt optimized output file containing a hall_of_fame.json file that specifies a collection of parameter sets. The validation test then evaluates the model for all (or selected) parameter sets against various eFEL features. The results are registered on the HBP Validation Framework app. If instances of the Model Catalog and Validation Framework apps are not found in the current Collab, they are created. Additionally, a test report is generated which can be viewed within the Jupyter notebook, or downloaded.
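A sketch of the evaluation loop is given below. It assumes hall_of_fame.json holds a list of parameter sets; run_simulation is a hypothetical helper standing in for the actual model run, while efel.getFeatureValues is the standard eFEL entry point:

    import json
    import efel

    with open("hall_of_fame.json") as f:
        parameter_sets = json.load(f)

    for params in parameter_sets:
        # hypothetical helper: simulate the model, return time/voltage arrays
        t, v = run_simulation(params)
        trace = {"T": t, "V": v, "stim_start": [100.0], "stim_end": [600.0]}
        features = efel.getFeatureValues([trace],
                                         ["AP_amplitude", "mean_frequency"])
        print(features[0])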
- 4) BluePyOpt Optimized Model Validation
This use case takes as input a BluePyOpt optimized output file. The validation test then evaluates the model for all parameter sets against various eFEL features. It should be noted that the reference data used is that stored within the model itself, so this test can be considered a quantification of the model's goodness of fit. The results are registered on the HBP Validation Framework app. If instances of the Model Catalog and Validation Framework apps are not found in the current Collab, they are created. Additionally, a test report is generated which can be viewed within the Jupyter notebook, or downloaded.
- 5) Basal Ganglia Morphology Validation
This use case takes as input a directory containing neuronal morphologies. Feature extraction is carried out using NeuroM (https://github.com/BlueBrain/NeuroM), so the test currently supports only NeuroM-compatible formats.
The user decides whether to run the validations for all available morphologies or only a subset of them. The validation test evaluates each morphology in two stages:
Hard Constraints: here we evaluate the integrity of the neuronal reconstruction in order to determine whether it is appropriate for further evaluation. The checks are sub-divided into the following NeuroM features (apps):
- morph_check
- cut_plane_detection
Soft Constraints [currently only available for Fast-Spiking Interneurons]: neuronal reconstructions that pass the Hard Constraints are evaluated here for their morphometric features. The features are extracted using NeuroM's morph_stats app wherever possible, either directly or as a combination of multiple features. These are then compared against experimentally obtained data, as determined by the particular validation test being executed.
Some of the features currently included are the soma diameter, the maximal branch order of the dendrites, the number of trunk sections, the X, Y and Z extents, the field diameter, and the total path length of both the axon and the dendrites.
Note: currently only Striatum Fast-Spiking Interneurons (FSI) can be validated, since observation data is missing for other neuron types.
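The sketch below shows how a few of the morphometric features named above can be extracted with NeuroM's get API (feature names and loader functions differ slightly between NeuroM versions, so treat these as illustrative):

    import neurom as nm

    morph = nm.load_neuron("fsi_cell.swc")  # any NeuroM-compatible format

    soma_radii = nm.get("soma_radii", morph)
    branch_orders = nm.get("section_branch_orders", morph)
    path_lengths = nm.get("section_path_distances", morph)

    print("max branch order:", max(branch_orders))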
- 6) Basal Ganglia Population Morphology Validation
The average morphometrics of a population of (digitally reconstructed) Fast-Spiking Interneurons (FSI) in the striatum are validated against experimental data. Additional plots are provided to visualize statistics derived from the morphometrics of the individual cells, e.g. linear regression analyses, histograms and kernel density estimates (KDEs) for single features, and two-dimensional joint KDEs for pairs of uncorrelated features.
This use case takes as input a directory containing neuronal morphologies. Feature extraction is carried out using NeuroM (https://github.com/BlueBrain/NeuroM), so the test currently supports only NeuroM-compatible formats.
The user decides whether to run the validations for all available morphologies or only a subset of them. The validation test evaluates each morphology in two stages:
Hard Constraints: here we evaluate the integrity of the neuronal reconstruction in order to determine whether it is appropriate for further evaluation. The checks are sub-divided into the following NeuroM features (apps):
- morph_check
- cut_plane_detection
Soft Constraints [currently only available for Fast-Spiking Interneurons]: neuronal reconstructions that pass the Hard Constraints are evaluated here for their morphometric features. The features are extracted using NeuroM's morph_stats app wherever possible, either directly or as a combination of multiple features. The average morphometrics of the population of neurons are then computed, and these mean values are compared against experimentally obtained data, as determined by the particular validation test being executed.
Some of the features currently included are the soma diameter, the maximal branch order of the dendrites, the number of trunk sections, the X, Y and Z extents, the field diameter, and the total path length of both the axon and the dendrites.
Note: currently only Striatum Fast-Spiking Interneurons (FSI) can be validated, since observation data is missing for other neuron types.
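A minimal sketch of the population-level statistics described above, using scipy for the single-feature KDE (file names and the choice of feature are illustrative):

    import numpy as np
    from scipy.stats import gaussian_kde
    import neurom as nm

    files = ["fsi_01.swc", "fsi_02.swc", "fsi_03.swc"]  # illustrative
    soma_radii = [float(np.mean(nm.get("soma_radii", nm.load_neuron(f))))
                  for f in files]

    # population mean, to be compared against experimental data by the test
    print("population mean soma radius:", np.mean(soma_radii))

    # kernel density estimate for a single feature
    kde = gaussian_kde(soma_radii)
    grid = np.linspace(min(soma_radii), max(soma_radii), 100)
    density = kde(grid)  # values to plot against `grid`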
Notes¶
Access to the validation tools and services requires HBP SGA2 accreditation. Non-HBP members should contact support@humanbrainproject.eu for access.
The validation use cases are compatible with Python 3 only! Python 2 support has been dropped, as Python 2 reached end-of-life on January 1st, 2020.