Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied. The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology.
As neuroscientists, we can often choose to talk about the brain at any of a number of levels: atoms/molecules, ion channels and other proteins, cell compartments, neurons, networks, columns, modules, systems, dynamic equations, and algorithms.
However, a description at too low a level might be too detailed, causing one to lose the forest for the trees. Alternatively, a description at too high a level might miss valuable information and is less likely to generalize to different situations.
For example, one might theorize that cars work by propelling gases from their exhaust pipes. Although this might be consistent with all of the observed data, by looking “under the hood” one would find evidence that this model of a car’s function is incorrect.
On the other hand, a model may be formulated at too low a level. For example, a description of the interactions between molecules of wood and atoms of metal is not essential for a complete, thorough understanding of how a door works.
Emergence
One particularly exciting aspect of multi-level research is when one synthesizes enough observations to move from one level to a higher one. Emergence is a term used to describe what occurs when simpler rules interact to form complex behavior. It’s when a particular combination of properties or (typically nonlinear) processes gives rise to something surprising and/or non-obvious. To give a basic example, hydrogen is flammable and oxygen supports combustion. Surprisingly, their combination — water — puts fires out and expands when frozen.
An Example of Emergence: The Neural Basis of Memory
A recent article by Raymond and Redman (Journal of Neurophysiology, 2002) takes a close look at three separate subcellular mechanisms that appear to support LTP (reminder: LTP, or long-term potentiation, is one of the best candidates to date for the neural basis of memory; http://www.neurevolution.net/2007/04/21/history%E2%80%99s-top-brain-computation-insights-day-20/).
Raymond and Redman replicate the earlier finding that longer bouts of electrical stimulation can cause LTP to be more powerful (resulting in larger postsynaptic responses) and to last longer. They demonstrated three different levels of LTP in their experiment by using stimulation trains of three different lengths. This stimulation-dependent property of LTP has been taken as the basis for synaptic modification rules used in neural network models (http://www.neurevolution.net/2007/03/15/neural-network-learning-rules/).
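The synaptic modification rules mentioned above are often Hebbian in flavor: a weight grows when pre- and postsynaptic activity coincide, mimicking an LTP-like change. Here is a minimal sketch of such a rule; the function name, learning rate, and toy activity vectors are illustrative choices, not anything from the Raymond and Redman paper.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One step of a simple Hebbian rule: each weight grows in
    proportion to the product of pre- and postsynaptic activity
    (an LTP-like change when both are active together)."""
    return weights + lr * np.outer(post, pre)

# Toy network: two presynaptic inputs, two postsynaptic units.
w = np.zeros((2, 2))
pre = np.array([1.0, 0.0])   # only the first input is active
post = np.array([1.0, 1.0])  # both outputs are active
w = hebbian_update(w, pre, post)
# Only the synapses from the active input are strengthened.
```

Repeating the update (analogous to a longer stimulation train) drives the co-active weights progressively higher, which is the network-level counterpart of stimulation-dependent LTP.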
Interestingly, the researchers then demonstrated that by blocking three different cellular mechanisms — ryanodine receptors, IP3 receptors and L-type VDCCs respectively — they were able to selectively block LTP from the shortest, intermediate or longest stimulation trains.
Taken together, these results suggest that the high-level phenomenon of LTP is actually composed of (at least) three separate underlying processes. These separate processes appear to cover different timespans, contributing to an exponential curve relating LTP to the time and strength of neuronal activity.
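One way to picture "LTP as (at least) three separate underlying processes" is as a sum of dissociable components with different decay timescales, where blocking a mechanism simply removes its component. The sketch below is purely illustrative: the amplitudes, time constants, and the mapping of each component to a receptor type are assumptions for demonstration, not measurements from the study.

```python
import math

# Illustrative parameters only: (amplitude, decay time constant in minutes).
# The pairing of each component with a mechanism is an assumption.
COMPONENTS = [
    (0.5, 15.0),   # short-lived component (e.g. ryanodine-receptor-dependent)
    (0.8, 90.0),   # intermediate component (e.g. IP3-receptor-dependent)
    (1.0, 600.0),  # long-lasting component (e.g. L-type-VDCC-dependent)
]

def potentiation(t, active=(True, True, True)):
    """Total LTP at time t (minutes): the sum of exponentially
    decaying components, counting only the unblocked ones."""
    return sum(a * math.exp(-t / tau)
               for (a, tau), on in zip(COMPONENTS, active) if on)

# "Blocking" one mechanism removes only its component:
full = potentiation(30.0)
short_blocked = potentiation(30.0, active=(False, True, True))
```

In this toy picture, the smooth curve relating LTP to stimulation reflects the superposition of the components, while selective blockade dissociates them, mirroring the paper's pharmacological result.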
Insight Gained
The study mentioned in this post contributes to the field by lending additional evidence to our current theoretical understanding of a mechanism that is likely to underpin memory. From a theoretical perspective, LTP appears to be a meaningful construct that emerges from multiple, dissociable subcellular processes.
More generally, the study is an excellent demonstration of emergence: three separate processes at a particular level (subcellular receptor proteins) appear to jointly support a single, more abstract process at a higher level (LTP in cellular electrophysiology). As a result, computational modelers can feel more comfortable assuming an LTP-like mechanism in their simulations.
A final thought is that this type of research also clearly highlights the importance of interdisciplinary research in the neurosciences.
P.L.