Michael Rempe, Applied Mathematics, Northwestern University
Efficient computational strategies for simulating neural activity on branched structures
Ever since Hodgkin and Huxley developed their foundational mathematical model of neuronal activity, the field of computational neuroscience has been growing rapidly. This system of differential equations has been used to simulate the electrical activity of many different types of cells. Since only the simplest forms of the equations can be solved analytically, numerical methods are typically used instead. Today, computer simulation environments employ these numerical methods to model neural systems as small as individual ion channels and as large as networks of 10,000 morphologically accurate model cells.
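For readers who want to see the equations in action, the sketch below integrates the classic single-compartment Hodgkin-Huxley model with forward Euler, using the standard squid-axon parameters. All function and variable names are illustrative and not drawn from any particular simulator.

```python
import numpy as np

# Standard Hodgkin-Huxley gating rate functions (squid axon, V in mV).
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate_hh(T=50.0, dt=0.01, I_ext=10.0):
    """Forward-Euler integration of the single-compartment HH model.

    T     : total time (ms)
    dt    : time step (ms); explicit Euler needs a small step for stability
    I_ext : constant injected current (uA/cm^2)
    """
    C_m = 1.0                            # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV
    V = -65.0
    n, m, h = 0.317, 0.053, 0.596        # gating variables near rest
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return np.array(trace)
```

With this level of sustained stimulus the model fires repetitively; the very small time step an explicit scheme requires is one motivation for the implicit methods discussed next.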
Implicit numerical methods are the most widely used for these simulations because of their favorable numerical stability properties. However, a drawback of these methods is that the voltage update is global in scope, meaning that computational effort cannot be focused on the regions of the cell that are most active. The result is an unnecessary slow-down in neural simulations when activity is localized to a small region of the cell.
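To see why an implicit update is global in scope, consider backward Euler applied to a passive discretized cable: every compartment appears as an unknown in one linear system, so even compartments that are at rest participate in the solve. A minimal sketch (the parameter values and names here are arbitrary, chosen only for illustration):

```python
import numpy as np

def backward_euler_cable_step(V, dt, D=1.0, dx=0.1, g_leak=0.1):
    """One implicit (backward Euler) step of a passive discretized cable.

    Solving (I + dt*L) V_new = V_old couples *every* compartment through a
    single global linear system -- this is why the implicit update cannot
    be restricted to just the active part of the cell.
    """
    N = len(V)
    r = D * dt / dx**2
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 1 + 2 * r + dt * g_leak
        if i > 0:
            A[i, i - 1] = -r
        if i < N - 1:
            A[i, i + 1] = -r
    # Sealed (no-flux) ends: fold the ghost compartments back in.
    A[0, 0] = 1 + r + dt * g_leak
    A[-1, -1] = 1 + r + dt * g_leak
    return np.linalg.solve(A, V)
```

Even if only the first compartment is depolarized, the linear solve involves all N unknowns; in practice the tridiagonal structure is exploited, but the solve still spans the entire cell.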
I will present a predictor-corrector numerical method we developed that effectively decouples all of the branches from each other. By splitting the computation apart, the algorithm provides a framework for spatial adaptivity: active regions are detected and computational effort is focused there, while computations are saved in regions of the cell that remain at rest. As a result, the computational cost of a simulation scales with activity rather than with the physical size of the system. I will show several simulations that illustrate this idea, including reproductions of recent experimental results from our lab.
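The details of the method are the subject of the talk; purely to illustrate the decoupling idea, here is a toy version of a predictor-corrector step at a single branch point. The explicit junction predictor and every name below are my own simplifying assumptions, not the speaker's scheme:

```python
import numpy as np

def solve_branch(V, r, V_junc):
    """Implicit step on one branch, with the (predicted) junction voltage
    supplied as known boundary data. Each branch yields its own small,
    independent tridiagonal system."""
    N = len(V)
    A = np.zeros((N, N))
    b = V.copy()
    for i in range(N):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < N - 1:
            A[i, i + 1] = -r
    A[-1, -1] = 1 + r      # sealed distal end
    b[0] += r * V_junc     # predicted junction voltage enters as data
    return np.linalg.solve(A, b)

def predictor_corrector_step(branches, V_junc, r):
    """Predict the junction voltage with a cheap explicit update, then
    correct each branch implicitly and independently -- the branch solves
    can run in any order, or in parallel."""
    # Predictor: forward-Euler update of the junction from its neighbors.
    V_junc_new = V_junc + r * sum(br[0] - V_junc for br in branches)
    # Corrector: each branch solved on its own.
    new_branches = [solve_branch(br, r, V_junc_new) for br in branches]
    return new_branches, V_junc_new
```

Because each branch is solved separately, a branch that sits at rest can in principle be skipped entirely, which is what makes the decoupling a natural foundation for spatial adaptivity.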
Main Campus - Engineering Classroom Wing
1111 Engineering Dr
Name: Ian Cunningham