Category Archives: Computational Modeling
Grand Challenges of Neuroscience: Day 6
Topic 6: Causal Understanding Causal understanding is an important part of human cognition. How do we understand that a particular event or force has caused another event? How do we realize that inserting coins into a soda machine results in a cool beverage appearing below? And ultimately, how do we understand people’s reactions to events? The …
A Brief Introduction to Reinforcement Learning
Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist. This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity. In earlier posts, we outlined two computational models …
Continue reading “A Brief Introduction to Reinforcement Learning”
Levels of Analysis and Emergence: The Neural Basis of Memory
Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied. The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology. As neuroscientists, we can often choose to …
Continue reading “Levels of Analysis and Emergence: The Neural Basis of Memory”
Combining Simple Recurrent Networks and Eye-Movements to study Language Processing
Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. Several computational models of eye movement control have been developed to explain how these variables …
Grand Challenges of Neuroscience: Day 3
Topic 3: Spatial Knowledge Animal studies have shown that the hippocampus contains special cells called “place cells”. These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four …
Grand Challenges of Neuroscience: Day 2
Topic 2: Conflict and Cooperation Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior. Two behaviors that have important effects on the survival of humans are cooperation and conflict. According to the NSF committee convened last year, conflict and cooperation is an …
History’s Top Brain Computation Insights: Day 25
25) The dopamine system implements a reward prediction error algorithm (Schultz – 1996, Sutton – 1988) It used to be that the main thing anyone "knew" about the dopamine system was that it is important for motor control. Parkinson's disease, which visibly manifests itself as motor tremors, is caused by disruption of the dopamine …
Continue reading “History’s Top Brain Computation Insights: Day 25”
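For readers who want the idea in concrete form, here is a minimal sketch of temporal-difference (TD) learning. The chain of states, the reward, and the parameter values are illustrative choices of mine, not taken from Schultz's or Sutton's work:

```python
import numpy as np

# Temporal-difference (TD) learning: the value estimate V(s) is updated
# by the reward prediction error
#     delta = r + gamma * V(s') - V(s)
# which is the quantity dopamine neurons are proposed to signal.

alpha, gamma = 0.1, 0.9          # learning rate and discount (illustrative)
states = [0, 1, 2, 3]            # a fixed chain of states, reward at the end
V = np.zeros(len(states))

for _ in range(200):             # repeated trials
    for t, s in enumerate(states):
        if t < len(states) - 1:  # intermediate step: no reward yet
            delta = 0.0 + gamma * V[states[t + 1]] - V[s]
        else:                    # terminal step: reward of 1, no successor
            delta = 1.0 - V[s]
        V[s] += alpha * delta    # prediction error drives learning

print(np.round(V, 3))  # earlier states come to predict the delayed reward
```

After training, the prediction error at the reward itself shrinks toward zero while earlier states acquire value, mirroring how dopamine responses shift from the reward to the cues that predict it.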
History’s Top Brain Computation Insights: Day 22
22) Recurrent connectivity in neural networks can elicit learning and reproduction of temporal sequences (Jordan – 1986, Elman – 1990, Schneider – 1991) Powerful learning algorithms such as Hebbian learning, self-organizing maps, and backpropagation of error illustrated how categorization and stimulus-response mapping might be learned in the brain. However, it remained unclear how sequences and …
Continue reading “History’s Top Brain Computation Insights: Day 22”
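To make the architecture concrete, here is a rough sketch of the forward pass of an Elman-style simple recurrent network. The layer sizes and random weights are placeholders of my own, not values from any of the cited models:

```python
import numpy as np

# Elman-style simple recurrent network: the hidden state from the
# previous time step is fed back as "context" input, letting the
# network carry information about the sequence so far.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 4

W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def step(x, h_prev):
    """One time step: combine the current input with the previous hidden state."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    y = W_hy @ h
    return y, h

# Run a short one-hot input sequence through the network
h = np.zeros(n_hid)
for t in range(5):
    x = np.eye(n_in)[t % n_in]
    y, h = step(x, h)
    print(t, np.round(y, 2))
```

The feedback loop through W_hh is what distinguishes this from a feedforward network: identical inputs at different points in a sequence can produce different outputs.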
History’s Top Brain Computation Insights: Day 21
21) Parallel and distributed processing across many neuron-like units can lead to complex behaviors (Rumelhart & McClelland – 1986, O'Reilly – 1996) Pitts & McCulloch provided amazing insight into how brain computations take place. However, their two-layer perceptrons were limited. For instance, they could not implement the logic gate XOR (i.e., 'one but not both'). An …
Continue reading “History’s Top Brain Computation Insights: Day 21”
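The XOR limitation is easy to demonstrate in code: no single threshold unit can compute it, but one hidden layer makes it trivial. The weights below are one hand-picked solution of mine, not a learned one:

```python
import numpy as np

# XOR via a hidden layer: compute OR and AND of the inputs, then
# output "OR and not AND", which is exactly XOR.

def threshold(x):
    return (x > 0).astype(float)

def xor_net(a, b):
    x = np.array([a, b], dtype=float)
    W1 = np.array([[1.0, 1.0],    # hidden unit 1: OR(a, b)
                   [1.0, 1.0]])   # hidden unit 2: AND(a, b)
    b1 = np.array([-0.5, -1.5])
    h = threshold(W1 @ x + b1)
    w2 = np.array([1.0, -1.0])    # output: OR minus AND
    return threshold(w2 @ h - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b)))
```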
History’s Top Brain Computation Insights: Day 19
19) Neural networks can self-organize via competition (Grossberg – 1978, Kohonen – 1981) Hubel and Wiesel's work on the development of cortical columns (see previous post) hinted at the role of competition, but it wasn't until Grossberg and Kohonen built computational architectures explicitly exploring competition that its importance was made clear. Grossberg was the first to illustrate the …
Continue reading “History’s Top Brain Computation Insights: Day 19”
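Here is a toy winner-take-all sketch in the spirit of these competitive architectures. The data, unit count, and learning rate are my own illustrative choices, and a full Kohonen map would also update the winner's neighbors:

```python
import numpy as np

# Winner-take-all competitive learning: units compete to respond to
# each input, and only the winner moves its weights toward that input.

rng = np.random.default_rng(1)
n_units, dim = 3, 2
W = rng.random((n_units, dim))      # each row is one unit's weight vector

# Toy inputs drawn from three clusters
data = np.vstack([rng.normal(c, 0.05, size=(50, dim))
                  for c in ([0.1, 0.1], [0.9, 0.1], [0.5, 0.9])])

lr = 0.1
for x in rng.permutation(data):
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competition
    W[winner] += lr * (x - W[winner])                  # winner moves toward input

# Units typically end up near the cluster centers; a unit that never
# wins stays put, a classic limitation of pure winner-take-all rules.
print(np.round(W, 2))
```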
History’s Top Brain Computation Insights: Day 11
11) Action potentials, the electrical events underlying brain communication, are governed by ion concentrations and voltage differences mediated by ion channels (Hodgkin & Huxley – 1952) Hodgkin & Huxley developed the voltage clamp, which holds a neuron's membrane voltage constant so that the ionic currents flowing across the membrane can be measured. Using this device, they demonstrated changes in ion …
Continue reading “History’s Top Brain Computation Insights: Day 11”
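For the curious, here is a compact simulation of the Hodgkin-Huxley equations in their original form, with the membrane potential expressed as a deviation from rest in mV. The forward-Euler integration and the step current are my own simplifications:

```python
import numpy as np

# Hodgkin-Huxley model: three voltage-dependent gating variables
# (m, h for sodium; n for potassium) control ionic currents that
# together generate the action potential.

def rates(V):
    am = 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
    bm = 4 * np.exp(-V / 18)
    ah = 0.07 * np.exp(-V / 20)
    bh = 1 / (np.exp((30 - V) / 10) + 1)
    an = 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
    bn = 0.125 * np.exp(-V / 80)
    return am, bm, ah, bh, an, bn

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # capacitance and max conductances
ENa, EK, EL = 115.0, -12.0, 10.6         # reversal potentials (rest = 0 mV)

dt, T = 0.01, 50.0                       # time step and duration (ms)
V, m, h, n = 0.0, 0.05, 0.6, 0.32        # initial conditions near rest

for i, t in enumerate(np.arange(0, T, dt)):
    I_ext = 10.0 if 5 <= t < 40 else 0.0            # injected current
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)               # gating variables relax
    h += dt * (ah * (1 - h) - bh * h)               # toward voltage-dependent
    n += dt * (an * (1 - n) - bn * n)               # steady states
    I_Na = gNa * m**3 * h * (V - ENa)               # sodium current
    I_K = gK * n**4 * (V - EK)                      # potassium current
    I_L = gL * (V - EL)                             # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    if i % 500 == 0:
        print(f"t={t:5.1f} ms  V={V:7.2f} mV")
```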
History’s Top Brain Computation Insights: Day 10
10) The Hebbian learning rule: 'Neurons that fire together wire together' [plus corollaries] (Hebb – 1949) D. O. Hebb's most famous idea, that neurons with correlated activity increase their synaptic connection strength, was based on the more general concept of association of correlated ideas by philosopher David Hume (1739) and others. Hebb expanded on this …
Continue reading “History’s Top Brain Computation Insights: Day 10”
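In its simplest form, the rule says the weight change between two units is proportional to the product of their activities. A few lines make the point; the patterns and learning rate are illustrative:

```python
import numpy as np

# Hebbian learning in its simplest form: delta_w_ij = lr * y_i * x_j,
# so connections between co-active units are strengthened.

lr = 0.5
x = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity pattern
y = np.array([0.0, 1.0, 1.0])        # postsynaptic activity pattern

W = np.zeros((y.size, x.size))
for _ in range(4):                   # repeated pairings strengthen the weights
    W += lr * np.outer(y, x)

print(W)       # units that fired together are now wired together
print(W @ x)   # the input pattern now evokes a scaled copy of y
```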
History’s Top Brain Computation Insights: Day 9
9) Convergence and divergence between layers of neural units can perform abstract computations (Pitts & McCulloch – 1947) Pitts & McCulloch created the first artificial neurons and artificial neural network. In 1943 they showed that computational processing could be performed by a series of convergent and divergent connections among neuron-like units. In 1947 they demonstrated …
Continue reading “History’s Top Brain Computation Insights: Day 9”
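A McCulloch-Pitts unit is simple enough to write down directly: output 1 if the weighted sum of inputs reaches a threshold, 0 otherwise. With suitable weights (my example values below), single units compute basic logic gates, and layering them builds up more abstract computations:

```python
import numpy as np

# A McCulloch-Pitts unit: binary output determined by whether the
# weighted sum of inputs crosses a fixed threshold.

def mp_unit(inputs, weights, threshold):
    return int(np.dot(inputs, weights) >= threshold)

for a in (0, 1):
    for b in (0, 1):
        AND = mp_unit([a, b], [1, 1], 2)    # fires only if both inputs fire
        OR = mp_unit([a, b], [1, 1], 1)     # fires if either input fires
        NOT_A = mp_unit([a], [-1], 0)       # inhibitory weight inverts the input
        print(f"a={a} b={b}  AND={AND} OR={OR} NOT(a)={NOT_A}")
```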
A Popular but Problematic Learning Rule: “Backpropagation of Error”
Backpropagation of Error (or "backprop") is the most commonly used neural network training algorithm. Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity. Since backprop is so …
Continue reading “A Popular but Problematic Learning Rule: “Backpropagation of Error””
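As a concrete reference point, here is backprop in miniature: a one-hidden-layer network trained by gradient descent on squared error. The task (XOR), layer sizes, and learning rate are my own illustrative choices:

```python
import numpy as np

# Backpropagation: run a forward pass, then propagate the error
# derivative backward layer by layer to get weight gradients.

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output
lr = 1.0

for epoch in range(10000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: deltas at each layer
    dY = (Y - T) * Y * (1 - Y)          # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)      # error propagated to hidden layer
    # Gradient-descent updates
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # typically converges near [0, 1, 1, 0]
```

Note that each weight update requires the error signal to travel backward through the very weights used in the forward pass, one of the properties that makes backprop's biological plausibility contentious.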
Human Versus Non-Human Neuroscience
Most neuroscientists don't use human subjects, and many tend to forget this important point: All neuroscience with non-human subjects is theoretical. If the brain of a mouse is understood in exquisite detail, it is only relevant (outside veterinary medicine) in so far as it is relevant to human brains. Similarly, if a computational model can …