

14 Sep 2012

 

This week’s reading was an excerpt from Daniel Kahneman’s book “Thinking, Fast and Slow.” In his book, Kahneman describes two distinct modes of cognition: fast cognition, which is rapid, automatic and involuntary, and slow cognition, which is effortful, operates on slower timescales and involves reasoning and conscious decision making. These fast and slow cognitive subsystems are often referred to as System 1 and System 2, respectively.
We began with a discussion of how theories of human choice-making have changed over time. Much of traditional economic theory is based upon rational choice theory, which assumes that humans make rational decisions in order to optimize their self-interest. However, beginning in the 1960s and 70s, researchers in the emerging field of behavioral economics began conducting psychological studies to investigate how people actually make economic decisions, and found that much of human decision making is, in fact, not rational. These ideas have received a great deal of attention in recent years with the publication of several popular books. The observation that humans do not make decisions in a completely rational way has important implications, particularly in that it calls into question the results of classical economic theory, which assumes that humans are rational actors.
Our focus then moved to a more detailed discussion of the notions of fast (System 1) and slow (System 2) cognition, and how these ideas relate to the broader questions of agency and will. We began by discussing system 1. System 1 cognition uses automatic heuristics for monitoring and reacting to the environment. It operates on fast timescales, and does not require voluntary control or conscious effort. Examples of tasks primarily involving system 1 include recognizing emotion in a facial expression, orienting to a sudden loud noise, driving on an empty road, walking at a comfortable pace, or recoiling from an unpleasant or painful stimulus. A question was brought up as to whether system 1 was philosophically deterministic, i.e. whether agency could play a role in the functioning of system 1. While we typically think of system 1 as operating automatically and outside of our conscious control (and lacking agency), a potential complication is that system 2 (which we associate with more “free will-like” behaviors) can alter the functioning of system 1 through training. The related, and more difficult, question of whether agency requires conscious awareness was also brought up.
In contrast to system 1, slow (system 2) cognition is characterized by conscious, deliberate activity which requires mental effort. Examples of tasks requiring system 2 cognition include complex arithmetic, long-term planning of actions, paying attention to a single speaker in a crowded and noisy environment, walking at a faster pace than normal, or monitoring the appropriateness of one’s behavior in an unfamiliar social context. In various circumstances, system 2 can effectively “override” system 1 when necessary, allowing for conscious control of actions that are typically automatic. For example, breathing typically occurs automatically and without conscious effort, but can be brought under conscious control. Similarly, experienced drivers will drive a car primarily using system 1, unless a challenging or unexpected traffic situation arises, at which point system 2 is engaged. Interestingly, in humans, the level of mental effort required for a task was found to correlate with the amount of pupil dilation. Consequently, pupil dilation has been used as an externally measurable indicator of the level of conscious mental effort required for a task.
An important aspect of system 2 thinking is that it is costly, both metabolically and in terms of allocating attention. First, the set of stimuli to which we can consciously attend is intrinsically limited, implying that attention is a finite resource. This is demonstrated in the classic “invisible gorilla” experiment, in which subjects watch a video of two basketball teams, one wearing black shirts and the other white shirts, passing basketballs. Halfway through the video, someone in a gorilla costume walks through the scene. When given the task of counting the number of passes made by the white team (a difficult, attention-demanding, system 2 task), roughly half the subjects fail to notice the gorilla. In addition to attentional costs, there is evidence that conscious mental effort is metabolically costly, and that performing cognitively demanding tasks can degrade one’s self-control and performance on subsequent tasks, a phenomenon known as “ego depletion” that has been linked to low blood glucose. One example of this mentioned in the reading was a study of a group of judges showing that the proportion of parole requests which were approved spikes shortly after lunch and then decreases as a function of the time since the judge’s most recent meal. The implication is that hungry or “ego depleted” judges tended to give less careful consideration to the cases, and defaulted to simply denying the parole requests.
This observation that System 2 is both metabolically costly and slow allows one to concoct possible evolutionary explanations for why it may be useful for some animals to possess the capacities for both fast, system 1 thinking and slow, system 2 thinking. System 1 cognition sacrifices adaptability and accuracy in unfamiliar contexts for speed and metabolic efficiency, whereas System 2 cognition sacrifices speed and metabolic resources for accuracy and flexibility in unfamiliar contexts. Thus one can imagine that in stable, familiar, and predictable contexts, system 1 may be preferred, while system 2 provides a way of dealing with unexpected events or unfamiliar and unpredictable environments. One should note that both slow and fast modes of cognition can involve the use of heuristics. This becomes apparent when considering a number of logical fallacies and cognitive biases that are commonly observed when people are presented with certain decision-making tasks.
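One well-known illustration (not spelled out in our discussion above, but a standard example in Kahneman’s book, originally from Shane Frederick’s Cognitive Reflection Test) is the bat-and-ball problem: a bat and a ball together cost $1.10, and the bat costs $1.00 more than the ball. How much does the ball cost? System 1 immediately suggests 10 cents, but a little system 2 arithmetic shows otherwise:

    bat + ball = $1.10
    bat = ball + $1.00
    ⇒ (ball + $1.00) + ball = $1.10
    ⇒ 2 × ball = $0.10
    ⇒ ball = $0.05 (and the bat costs $1.05), not the intuitive $0.10.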

-DL


Meeting 6.1 What’s at Stake?

31 Aug 2012

 

This was the inaugural meeting of the Fall 2012 semester. This semester we will be diving into a topic that has been tangentially addressed in many previous discussions, but has not had much direct focus: free will.

If you are new to this blog, let me (Sarah Bengston) first direct you to the syllabus and the glossary. In the glossary you will find a useful collection of working definitions for terms we often hear used during the discussions. For this semester’s discussion, the following terms are likely the most important, so I will highlight them here:

Physical Determinism: The future state of the system can be predicted without error from the present state of the system.
(Physical) Stochasticity: We can predict a range of outcomes and their probabilities but cannot specify exactly what will happen.
Philosophical Determinism: Human choices are sufficiently explained by circumstances. These may be initial conditions (e.g., neurological or genetic determinism) or external factors (e.g., fatalism or providence).
Agency: The ability to have done otherwise; choice with philosophical non-determinism built in.
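As a toy illustration of the distinction between physical determinism and stochasticity (the notation here is mine, not part of the glossary): in a deterministic system the next state is a fixed function of the current state, while in a stochastic system the next state is drawn from a probability distribution conditioned on the current state:

    x(t+1) = f(x(t))         (deterministic: the future follows without error from the present)
    x(t+1) ~ P( · | x(t))     (stochastic: only a range of outcomes and their probabilities can be given)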

Terms defined specifically for this semester are:

Free Will: Agency without constraint (though there was some debate about this).

Mind vs. Brain: The brain is the physical structure in our skull, while the mind is more of a Cartesian-type entity.

For more definitions about relevant topics, take a quick trip over to Lucas’s blog post about “determinism.”

A substantial proportion of the first hour was spent clarifying and defining these terms.

To begin the discussion of the readings, we asked who thought there was agency and who thought there was no agency. While there was a pretty even split, of those who thought there was agency, the majority thought that it is constrained.

The discussion then focused primarily on the Harris reading. There was, again, a split of opinions. Some felt that there were contradictory aspects to the text. For example, Harris promotes the idea that our actions are all predetermined for us at the level of our neurological framework, yet that we should strive to overcome this and act morally. How can we overcome a predetermined action?

It was also noted that certain aspects of neurobiology are ignored. For example, conditioning, such as training impulse control, can become an internalized process and actually change how our brains respond to stimuli. In this way conscious processes can shape our neural framework and change our behavior, even under the no-agency framework that Harris presents. However, it was pointed out that perhaps it was not the response to an impulse but the impulse itself that showed limited agency.

Consciousness and self-awareness:

“We are just watching what happens based on our predetermined choices.”- A suggested summation of Harris’ argument.

How does our consciousness impact our agency? Are we able to have agency because we are conscious?

While there was some confusion between self-awareness and consciousness, it was generally agreed that consciousness was simply the knowledge that you are a distinct entity from others.

The two main points of Harris’ argument are:

1)    Conscious choice is an illusion and the result of unconscious processes, and thus there is no conscious agency.

2)    Unconscious processes have no agency, and thus there is no unconscious agency.

Despite this, some felt that there was unconscious agency, and that while choices may not be consciously apparent, the ability to have done something else was present. See Lucas's argument presented in the previous post for a fuller commentary.

It was brought up that the level of our neurobiology seemed an arbitrary level at which to put the control of our behavior. After all, the synapses of our brain are a byproduct of chemical changes, which are a byproduct of electron changes. If there is no agency, at what level of analysis should we be looking at behavior? Agency in light of physics was discussed briefly; however, a longer discussion was promised for next week. With this in mind, I will reserve a summation of that discussion for the next meeting.

From a behavioral ecology standpoint, it was pointed out that the idea of limited or no agency is sometimes called limited behavioral plasticity. It was suggested that behaving sub-optimally could be evidence for agency, as evolution would have selected for individuals who always behaved in a predictable, optimal way. However, behavior can be limited through multiple mechanisms within an individual, allowing sub-optimal behavior to persist even in the face of selection. This supports the idea of constrained agency.

One proposed summary suggested:

Fatalism = no preference

Determinism = no agency

Nihilism = no meaning

Though we were quickly running out of time, it was proposed that you can have any combination of these concepts, though preference without agency presented difficulties. Given two alternatives, a mechanism can use heuristics to pick one, but is this the same as preference or value for one over the other? How is the ability to imagine a non-existent alternative related to concepts of preference?

-SEB


Our first discussion of the term centered around the question of free will presented in Sam Harris’ book by that name.  I (Lucas Mix) presented a summary of Harris’ argument from my perspective.  Rather than make this week’s blogger recapitulate that argument, I have set it down below.  It does not represent the consensus of the Forum.  A full and broader summary of our discussion will appear within the week.

 

Argument A

1) The portion of the brain that initiates action appears to fire before the portion of the brain that registers deliberation or choice.
    (See the Libet experiment, Libet et al. 1983 in Brain, and commentary by Daniel Wegner (2002) and Ebert and Wegner (2011).)
    ERGO (alpha)
2) All conscious choice is fully determined by prior unconscious causes.
    ERGO (beta)
3) There is no conscious agency.

[Statement 1 has been shown experimentally.  If 1 is true, and if we rule out the choice being made anywhere else in the brain or mind, 2 follows.  Harris sneaks in that conditional clause without defending it, but if we let it pass, then ERGO alpha appears valid.  Statement 3 does follow necessarily from statement 2 (ERGO beta).]

ARGUMENT B

4) Willing is necessarily conscious.
    ERGO (gamma)
5) There is no unconscious agency.

[Statement 4 appears to be an argument by definition.  It begs the question of how “willing” relates to “choice” and “agency.”  I’m not willing to buy Harris’ definition, but if you do, statement 5 follows necessarily (ERGO gamma).]

ARGUMENT C

3) There is no conscious agency.
5) There is no unconscious agency.
    ERGO (delta)
6) There is no agency.

This is a syllogism along the lines of

P(x: x = [a] ) = {NULL}
P(x: x = [~a] ) = {NULL}
————-
P(x) = {NULL}

and appears to be valid.
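Spelling the syllogism out in standard predicate-logic notation (the predicate letters are my own shorthand, not Harris’): let C(x) stand for “x is conscious” (the [a] above) and A(x) for “x involves agency.” Then Argument C reads

    ∀x ( C(x) → ¬A(x) )      [3: no conscious agency]
    ∀x ( ¬C(x) → ¬A(x) )     [5: no unconscious agency]
    ∴ ∀x ¬A(x)               [6: no agency]

which is valid so long as every candidate act is either conscious or not conscious, so that the two premises exhaust the cases.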

So, I’m agreeing with ERGO beta, gamma, and delta, but claim the argument fails because ERGO alpha requires a hidden assumption with which I do not agree. I agree with statement 1, but not with statement 4.  ERGO alpha and statement 4 failing, I don’t buy the argument.

A parallel argument:

ARGUMENT D

7) Physics admits of no agents.
    ERGO (epsilon)
6) There is no agency.

[Statement 7 is correct.  ERGO epsilon requires a hidden assumption, either that no non-physical evidence exists or that absence of evidence is evidence of absence.  I cannot agree with either and don’t buy the argument.]

LJM

 
