Author Archives: P.L.
Grand Challenges of Neuroscience: Day 6
Topic 6: Causal Understanding Causal understanding is an important part of human cognition. How do we understand that a particular event or force has caused another event? How do we realize that inserting coins into a soda machine results in a cool beverage appearing below? And ultimately, how do we understand people’s reactions to events? The …
A Brief Introduction to Reinforcement Learning
Computational models that are implemented, i.e., written out as equations or software, are an increasingly important tool for the cognitive neuroscientist. This is because implemented models are, effectively, hypotheses that have been worked out to the point where they make quantitative predictions about behavior and/or neural activity. In earlier posts, we outlined two computational models …
Continue reading “A Brief Introduction to Reinforcement Learning”
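As a concrete taste of the kind of model the post introduces, here is a minimal tabular Q-learning sketch in Python. The toy chain environment, parameter values, and names (Q, alpha, gamma, epsilon) are illustrative assumptions for this sketch, not details taken from the post itself.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative, not the post's own model).
# Q[s, a] estimates the long-run reward of taking action a in state s.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Toy environment: a deterministic chain with reward at the last state."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

rng = np.random.default_rng(0)
state = 0
for _ in range(1000):
    # epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # temporal-difference update: nudge Q toward reward + discounted future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = 0 if next_state == n_states - 1 else next_state  # restart at the goal
```

The key line is the temporal-difference update, which moves each value estimate toward the observed reward plus the discounted value of the best next action.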
Levels of Analysis and Emergence: The Neural Basis of Memory
Cognitive neuroscience constantly works to find the appropriate level of description (or, in the case of computational modeling, implementation) for the topic being studied. The goal of this post is to elaborate on this point a bit and then illustrate it with an interesting recent example from neurophysiology. As neuroscientists, we can often choose to …
Continue reading “Levels of Analysis and Emergence: The Neural Basis of Memory”
Combining Simple Recurrent Networks and Eye-Movements to study Language Processing
Modern technologies allow eye movements to be used as a tool for studying language processing during tasks such as natural reading. Saccadic eye movements during reading turn out to be highly sensitive to a number of linguistic variables. Several computational models of eye movement control have been developed to explain how these variables …
Grand Challenges of Neuroscience: Day 5
Topic 5: Language Everyday (spoken) language use involves the production and perception of sounds at a very fast rate. One of my favorite quotes on this subject is from “The Language Instinct” by Steven Pinker, on page 157: “Even with heroic training [on a task], people could not recognize the sounds at a rate faster than …
Grand Challenges of Neuroscience: Day 4
After a bit of a hiatus, I’m back with the last three installments of “Grand Challenges in Neuroscience”. Topic 4: Time Cognitive Science programs typically require students to take courses in Linguistics (as well as in the philosophy of language). Besides the obvious application of studying how the mind creates and uses language, an important …
Grand Challenges of Neuroscience: Day 3
Topic 3: Spatial Knowledge Animal studies have shown that the hippocampus contains special cells called “place cells”. These place cells are interesting because their activity seems to indicate not what the animal sees, but rather where the animal is in space as it runs around in a box or in a maze. (See the four …
Grand Challenges of Neuroscience: Day 2
Topic 2: Conflict and Cooperation Generally, cognitive neuroscience aims to explain how mental processes such as believing, knowing, and inferring arise in the brain and affect behavior. Two behaviors that have important effects on the survival of humans are cooperation and conflict. According to the NSF committee convened last year, conflict and cooperation is an …
Grand Challenges of Neuroscience: Day 1
Following up on MC's posts about the significant insights in the history of neuroscience, I'll now take Neurevolution for a short jaunt into neuroscience's potential future. In light of recent advances in technologies and methodologies applicable to neuroscience research, the National Science Foundation last summer released a document on the "Grand Challenges of Neuroscience". These …
A Popular but Problematic Learning Rule: “Backpropagation of Error”
Backpropagation of Error (or "backprop") is the most commonly used neural network training algorithm. Although fundamentally different from the less common Hebbian-like mechanism mentioned in my last post, it similarly specifies how the weights between the units in a network should be changed in response to various patterns of activity. Since backprop is so …
Continue reading “A Popular but Problematic Learning Rule: “Backpropagation of Error””
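For readers who want the gist before clicking through: here is a minimal sketch of the backprop weight update for a one-hidden-layer network with sigmoid units and squared error. The layer sizes, variable names (W1, W2, eta), and values are illustrative assumptions, not the post's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(3)             # input activations
t = np.array([1.0, 0.0])      # target output
W1 = rng.normal(size=(4, 3))  # input -> hidden weights
W2 = rng.normal(size=(2, 4))  # hidden -> output weights
eta = 0.5                     # learning rate

# forward pass
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# backward pass: propagate the error signal from output back to hidden layer
delta_out = (y - t) * y * (1 - y)             # output-layer error
delta_hid = (W2.T @ delta_out) * h * (1 - h)  # hidden-layer error

# gradient-descent weight changes
W2 -= eta * np.outer(delta_out, h)
W1 -= eta * np.outer(delta_hid, x)
```

The "backpropagation" is the second block: the output error is passed backward through the hidden-to-output weights to assign blame to each hidden unit before any weight is changed.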
Neural Network “Learning Rules”
Most neurocomputational models are not hard-wired to perform a task. Instead, they are typically equipped with some kind of learning process. In this post, I'll introduce some notions of how neural networks can learn. Understanding learning processes is important for cognitive neuroscience because they may underlie the development of cognitive ability. Let's begin with a …
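As a preview of the simplest learning rule in this family, here is a minimal Hebbian sketch in Python ("cells that fire together wire together"): the weight change is proportional to the product of pre- and postsynaptic activity. The network size, learning rate, and variable names are illustrative assumptions.

```python
import numpy as np

eta = 0.01               # learning rate
rng = np.random.default_rng(0)
w = 0.1 * rng.random(4)  # small random initial weights onto one postsynaptic unit

for _ in range(100):
    x = rng.random(4)    # presynaptic activities
    y = float(w @ x)     # postsynaptic activity (linear unit)
    w += eta * y * x     # Hebbian update: delta_w = eta * y * x

# Note: the pure Hebbian rule grows weights without bound; practical variants
# (e.g., Oja's rule) add a normalizing decay term to keep weights stable.
```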
Computational models of cognition in neural systems: WHY?
In my most recent post I gave an overview of the "simple recurrent network" (SRN), but I'd like to take a step back and talk about neuromodeling in general. In particular I'd like to talk about why neuromodeling is going to be instrumental in bringing about the cognitive revolution in neuroscience. A principal goal of …
Continue reading “Computational models of cognition in neural systems: WHY?”
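Since this post builds on the earlier SRN overview, here is a minimal sketch of one forward pass through an Elman-style simple recurrent network: the hidden layer receives the current input plus a copy of its own previous state (the "context" layer). The layer sizes and weight names are illustrative, not taken from the original post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2
W_in  = rng.normal(size=(n_hid, n_in))   # input -> hidden weights
W_ctx = rng.normal(size=(n_hid, n_hid))  # context (previous hidden) -> hidden
W_out = rng.normal(size=(n_out, n_hid))  # hidden -> output weights

context = np.zeros(n_hid)                # context starts empty
for x in rng.random((4, n_in)):          # a short sequence of inputs
    hidden = sigmoid(W_in @ x + W_ctx @ context)
    output = sigmoid(W_out @ hidden)
    context = hidden                     # carry hidden state to the next step
```

The recurrence through `context` is what lets the network's response depend on the sequence it has seen, not just the current input.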
Can a Neural Network be Free…
…from a knee-jerk reaction to its immediate input? Although one of the first things that a Neuroscience student learns about is "reflex reactions" such as the patellar reflex (also known as the knee-jerk reflex), the cognitive neuroscientist is interested in the kind of processing that might occur between inputs and outputs in mappings that are …
Neuroscience Blogs of Note, Part 2
I will follow up MC's recent post with a brief review of three other neuroscience-related blogs that are worth mentioning as we begin Neurevolution. Brain Waves (http://brainwaves.corante.com/) is a self-labeled "neurotechnology" blog. Written by Zack Lynch, it is a real-world look at the effects and benefits derived from neuroscience research with regard to society, culture …