Abstract
Magnetoencephalography (MEG) is a non-invasive functional imaging modality for
mapping cerebral electromagnetic activity from measurements of the weak magnetic
field that it generates. It is well known that the MEG inverse problem, i.e., the problem of
identifying electric currents from the induced magnetic fields, is severely
underdetermined and, without complementary prior information, admits no unique
solution. Many regularization techniques have been proposed in the literature.
In particular, optimization-based methods tend to explain the data by superficial sources
even when the activity is deep in the brain. One way to facilitate the identification of
deep focal sources is depth weighting.
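For concreteness, and in standard notation that is our own rather than the abstract's, the
discretized forward model can be written as

    $b = A x + \varepsilon$, with $A \in \mathbb{R}^{m \times n}$ and $m \ll n$,

where $b$ collects the sensor readings, $x$ is the discretized current density, and
$\varepsilon$ is the measurement noise. Because the number of sensors $m$ is far smaller
than the number of source parameters $n$, the lead field matrix $A$ has a large null space,
which is why the data alone cannot single out a solution. Depth weighting rescales the
columns of $A$, or equivalently the prior variances, so that deep dipoles, whose lead field
columns have small norm, are not systematically suppressed in favor of superficial ones.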
We revisit the MEG inverse problem, regularization and depth weighting from a
Bayesian point of view through hierarchical models: the primary unknown is the discretized
current density inside the head, and we postulate a conditionally Gaussian anatomical
prior model. In this model, each current element, or dipole, has a preferred, albeit not
fixed, direction that is extracted from the anatomical data of the subject. The variance of
each dipole is not fixed a priori, but is itself modeled as a random variable described by its
hyperprior density. The hypermodel is then used to build a fast iterative algorithm with
the novel feature that its parameters are determined using an empirical Bayes
approach. The hypermodel provides a very natural Bayesian interpretation of sensitivity
weighting, and the parameters of the hyperprior provide a tool for controlling the focality
of the solution, thus leading to a flexible algorithm that can handle both sparse and
distributed sources.
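As a minimal sketch of this two-level construction, written for scalar dipole amplitudes
(the anatomical direction structure is omitted) and with a hyperprior that is our
illustrative assumption rather than a choice stated in the abstract, the conditionally
Gaussian prior and hypermodel take the form

    $\pi(x \mid \theta) \propto \prod_{j=1}^{n} \theta_j^{-1/2} \exp\left(-\frac{x_j^2}{2\theta_j}\right)$, with $\theta_j \sim \pi_{\mathrm{hyper}}(\theta_j)$,

and an estimate of the pair $(x, \theta)$ can be computed by alternating two steps: with
$\theta$ fixed, $x$ solves a Tikhonov-type least squares problem; with $x$ fixed, each
$\theta_j$ is updated, in closed form when the hyperprior is conjugate, e.g. of gamma or
inverse gamma type. In such a scheme the hyperprior parameters tune the tail behavior of
the prior, and hence whether the estimates come out focal or distributed.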
To demonstrate the effects of different parameter selections under optimal conditions,
we test the algorithm on synthetic but realistic data. The tests show that hierarchical
Bayesian models combined with linear algebraic methods provide a versatile framework
for developing robust and flexible numerical methods, and can overcome some of
the limitations of standard regularization techniques, for instance in deep source
localization. The proposed algorithm is computationally efficient, gives direct control over
how well the computed estimates satisfy the data, and is designed to easily
accommodate different types of prior information.
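As an illustration of such an alternating scheme, here is a generic numerical sketch under
our own assumptions (known Gaussian noise level, scalar dipole amplitudes, and an
inverse gamma hyperprior on the variances); it is not the algorithm of the paper:

    import numpy as np

    def hierarchical_map(A, b, sigma=0.05, alpha=2.0, beta=1e-4, n_iter=50):
        # Illustrative hierarchical MAP iteration. Model assumed here:
        #   b = A x + noise,        noise ~ N(0, sigma^2 I)
        #   x_j | theta_j ~ N(0, theta_j)
        #   theta_j ~ InvGamma(alpha, beta)   (hyperprior on dipole variances)
        m, n = A.shape
        theta = np.full(n, beta / (alpha + 1.0))  # start at the hyperprior mode
        for _ in range(n_iter):
            # x-update: Tikhonov-type least squares with the current variances,
            # i.e. solve (A^T A / sigma^2 + diag(1/theta)) x = A^T b / sigma^2
            H = A.T @ A / sigma**2 + np.diag(1.0 / theta)
            x = np.linalg.solve(H, A.T @ b / sigma**2)
            # theta-update: closed-form minimizer of the joint negative log
            # posterior; small alpha, beta give a heavier tail and hence
            # more focal estimates
            theta = (beta + 0.5 * x**2) / (alpha + 1.5)
        return x, theta

Small values of alpha and beta push the variances toward zero unless the data demand
otherwise, mimicking the focality control described above, and the residual $b - A x$ can
be monitored against the noise level to check how well the estimate satisfies the data.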
Year: 2015
IAC Authors:
Publication type:
Other Authors: Calvetti D., Pascarella A., Pitolli F., Somersalo E., Vantaggi B.