
Experimental and Behavioral Economics

Lecture 2: Bayesian updating and cognitive heuristics

Prof. Dr. Dorothea Kübler, Summer term 2019

Memos Thank you for sending them! Many good summaries, some very, very short… but I accepted all of them unless you received an email from me. A criticism voiced by one of you: early experiments were designed to support the theory, so the results are not surprising (e.g., regarding indifference curves). Many pointed out that early experiments laid the foundations in terms of methodology (paying subjects, etc.).

The famous „Linda Problem“ (Tversky and Kahneman, 1983)

„Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.“

Please rank the following statements by their probability of being true:

(1) Linda is a teacher in elementary school.

(2) Linda works in a book-store and takes Yoga-classes.

(3) Linda is active in the feminist movement.

(4) Linda is a psychiatric social worker.

(5) Linda is a member of the League of Women Voters.

(6) Linda is a bank teller.

(7) Linda is an insurance salesperson.

(8) Linda is a bank teller and is active in the feminist movement.

Do you believe that (8) is more likely than (3)? Almost no one does. Do you believe that (8) is more likely than (6)? About 90% of subjects do. In a sample of well-trained Stanford decision-science doctoral students, 85% do.

They all commit the conjunction fallacy. Why?

Conjunction fallacy. Conjunction law of probabilities: P(A&B) ≤ P(A)

A. Linda is a bank teller (6)
B. Linda is active in the feminist movement (3)
A&B. Linda is a bank teller and is active in the feminist movement (8)

From the conjunction law it follows that P(8) ≤ P(3) and that P(8) ≤ P(6).

Another example (early 1980s): subjects judged „the USSR invades Poland and the US breaks off diplomatic relations with the USSR in the following year“ to be more probable than the break in diplomatic relations alone.

Robustness: Charness, Karni, and Levin (2010) find much lower rates of the fallacy when choices are incentivized and when subjects decide in groups.
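The conjunction law itself is a one-line consequence of the definition of conditional probability (written here in LaTeX; not part of the original slide):

\[
P(A \cap B) = P(B \mid A)\,P(A) \;\le\; P(A), \qquad \text{since } 0 \le P(B \mid A) \le 1.
\]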

Another bias (Kahneman & Tversky, 1972)

„A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:

(a) 85% of the cabs in the city are Green and 15% are Blue.
(b) A witness said the cab was Blue.
(c) The court tested the reliability of the witness at night and found that the witness correctly identified each of the two colors 80% of the time and failed to do so 20% of the time.

What is the probability that the cab involved in the accident is Blue rather than Green?“

The median and modal response in experiments is 80%.

Base rate neglect

Updating the belief about X after learning M should follow Bayes’ rule:

P(X|M) = P(M|X) · P(X) / P(M)

X: the cab was Blue. M: the witness identified the cab as Blue.

P(Blue | ident. blue) = P(ident. blue | Blue) · P(Blue) / P(ident. blue)
= P(ident. blue | Blue) · P(Blue) / [P(ident. blue | Blue) · P(Blue) + P(ident. blue | not blue) · P(not blue)]
= (80% · 15%) / (80% · 15% + 20% · 85%)
≈ 41%

Given that the proportion of Blue cabs in the city is only 15%, the correct posterior is far below the typical answer of 80%: subjects do not update correctly because they neglect the base rate.
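A minimal numerical check of this computation (an illustrative Python sketch, not part of the original slides):

# Bayes' rule for the cab problem: P(Blue | witness says "blue")
prior_blue = 0.15              # base rate: 15% of cabs are Blue
prior_green = 0.85             # 85% of cabs are Green
p_blue_report_if_blue = 0.80   # witness correctly identifies the color 80% of the time
p_blue_report_if_green = 0.20  # witness misidentifies 20% of the time

# total probability that the witness reports "blue"
p_blue_report = (p_blue_report_if_blue * prior_blue
                 + p_blue_report_if_green * prior_green)

posterior_blue = p_blue_report_if_blue * prior_blue / p_blue_report
print(round(posterior_blue, 3))  # ≈ 0.414, far from the 80% most subjects report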

What do the two examples have in common?

Representativeness

Representativeness can be defined as „the degree to which an event: (i) is similar in essential characteristics to its parent population and (ii) reflects the salient features of the process by which it is generated“ (Kahneman & Tversky, 1982, p. 33).

Another violation of Bayes’ rule (Edwards, 1968)

Two urns, A and B, look identical from the outside. A contains 7 red and 3 blue balls; B contains 3 red and 7 blue balls. One urn is chosen at random, both equally likely. Suppose that 12 random draws with replacement from this urn yield 8 reds and 4 blues. What is the probability that the urn is A?

The typical reply is between 0.7 and 0.8. But Prob(A | 8 reds and 4 blues) = 0.967.
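A small sketch of the exact posterior (illustrative Python; the binomial coefficients cancel, so only the likelihoods of the observed draws matter):

# P(urn A | 8 red, 4 blue draws) with equal priors on A and B
p_red_A, p_red_B = 0.7, 0.3                 # probability of red per draw from A and B
like_A = p_red_A**8 * (1 - p_red_A)**4      # likelihood of the sample under urn A
like_B = p_red_B**8 * (1 - p_red_B)**4      # likelihood under urn B
posterior_A = like_A / (like_A + like_B)    # equal priors cancel out
print(round(posterior_A, 3))                # ≈ 0.967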

Conservatism - underweighting of likelihood information.

Conservatism versus base rate neglect: a contradiction?

Griffin and Tversky (1992): people overweight the strength of evidence and underweight its weight (e.g., the sample size).

Obstacles to learning: two more biases (Wason, 1968)

You are presented with four cards, labelled E, K, 4 and 7. Every card has a letter on one side and a number on the other side.

Hypothesis: „Every card with a vowel on one side has an even number on the other side.“

Which card(s) do you have to turn over in order to test whether this hypothesis is always true?

Right answer: E and 7. Why 7? People rarely think of turning over the 7. Turning over E can yield both supportive and contradicting evidence. Turning over 7 can never yield supportive evidence, but it can yield contradicting evidence.

Confirmatory Bias:

People tend to avoid pure falsification tests. Too little learning.

Self-fulfilling prophecies If people believe a hypothesis is true, their actions often produce a biased sample of evidence that reinforces their belief. Example: a waitress who holds the stereotype that young patrons do not tip well may display confirmation bias by remembering the times young patrons tipped badly and ignoring the times they tipped well. The stereotype may also become a self-fulfilling prophecy: she may talk differently to young patrons and wait on them less attentively, which in turn may make them tip less, thereby confirming the stereotype. Thus both confirmation bias and self-fulfilling prophecies inhibit learning.

Monty Hall problem

There are three doors. One door has a car behind it; behind each of the other doors is a goat. You choose a door. The host, who knows what is behind the doors, then opens one of the other two doors and reveals a goat. Would you like to change your choice?

Monty Hall problem Most people stick to their original choice, but this violates Bayes’ rule:

P(X|M) = P(M|X) · P(X) / P(M)

Suppose you chose door 1 and the host opened door 3, revealing a goat:

P(car behind 1 | opens 3, chose 1) = P(opens 3 | car 1, chose 1) · P(car 1 | chose 1) / P(opens 3 | chose 1) = (0.5 · 1/3) / 0.5 = 1/3

P(car behind 2 | opens 3, chose 1) = P(opens 3 | car 2, chose 1) · P(car 2 | chose 1) / P(opens 3 | chose 1) = (1 · 1/3) / 0.5 = 2/3

Switching therefore wins with probability 2/3; sticking wins only with probability 1/3.
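A quick Monte Carlo check of these posteriors (an illustrative Python sketch, not from the original slides):

import random

def monty_hall(switch, trials=100_000):
    # Simulate the game and return the empirical win rate.
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        choice = random.randrange(3)    # contestant's first pick
        # the host opens a door that hides a goat and was not chosen
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(monty_hall(switch=False))  # ≈ 1/3: sticking with the first choice
print(monty_hall(switch=True))   # ≈ 2/3: switching doors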

Monty Hall problem in markets (Kluger and Wyatt, 2004) Two types of assets in the experiment: with and without the right to „switch doors“. Prediction: the price of the asset with the right should be twice as high as the price of the asset without it. Market aggregation („market magic“): if at least two out of six participants are rational Bayesians, prices should be close to fundamental values. Result: only 25% of groups were close to rational prices.

Randomness Is randomness intuitive? Consider sequences 1 and 2 of roulette-wheel outcomes. Sequence 1: Red-red-red-red-red-red. Sequence 2: Red-red-black-red-black-black. Is sequence 1 less likely, more likely, or equally likely? The two sequences are equally likely, but most people think that sequence 1 is less likely.
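A quick check (assuming red and black are equally likely on each spin and ignoring the green zero; written in LaTeX, not part of the original slide):

\[
P(\text{RRRRRR}) \;=\; P(\text{RRBRBB}) \;=\; \left(\tfrac{1}{2}\right)^{6} \;=\; \tfrac{1}{64}.
\]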

Representativeness / Law of Small Numbers

1. Representativeness does not respect sample size.

2. Law of small numbers (Tversky and Kahneman, 1971): „All samples will closely resemble the process or populations that generated them.“

People tend to believe that each segment of a random sequence must exhibit the true relative frequencies of the events in question. If they see a pattern of repetitions of events, i.e. a segment that violates this „law“, they believe that the sequence is not randomly generated. The „hot hand“ belief is an example.

Only the law of large numbers is true. In reality, representativeness of random sequences holds only for infinite sequences („law of large numbers“). The shorter the sequence, the less it must represent the true frequencies of events inherent in the random process by which it was generated. An infinite sequence of coin flips must exhibit 50% Heads and 50% Tails, but this is not true for finite sequences.

Can people easily generate random sequences? People often generate sequences with negative autocorrelation (Wagenaar, 1972). This is a direct consequence of the law of small numbers.
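To see what negative autocorrelation means here, a small illustrative Python sketch (not from the slides) that estimates the lag-1 autocorrelation of a 0/1 sequence; sequences that alternate too much, as human-generated ones tend to do, produce negative values:

import random

def lag1_autocorr(seq):
    # Sample lag-1 autocorrelation of a sequence of 0s and 1s.
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq) / n
    cov = sum((seq[i] - mean) * (seq[i + 1] - mean) for i in range(n - 1)) / (n - 1)
    return cov / var

random_seq = [random.randint(0, 1) for _ in range(150)]   # true Bernoulli sequence
alternating = [i % 2 for i in range(150)]                  # extreme "too balanced" sequence

print(round(lag1_autocorr(random_seq), 2))   # close to 0
print(round(lag1_autocorr(alternating), 2))  # ≈ -1: maximal negative autocorrelation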

Who does better than average?

1. Mathematically sophisticated

2. Experienced (trained)

3. Children (Ross and Levy 1958)

Gambler’s Fallacy Another consequence of the law of small numbers. Examples: last week‘s winning lottery numbers were 2, 10, 1, 21 and 5; gamblers in the current week avoid betting on these numbers (Clotfelter and Cook, 1993). After Heads has appeared twice in a row in a fair coin flip, most people who know that the coin is fair would now bet on Tails.

Gambler’s Fallacy In reality, this week it is as likely (or unlikely) as last week that the number 2 (or 10 or 1 or 5) wins. And after „Head–Head“, „Head“ is as likely to appear as „Tail“. Reason: the events in question are independently distributed, so there is no negative correlation.
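A small simulation of the coin-flip claim (illustrative Python): the empirical frequency of Heads immediately after two Heads stays close to 1/2.

import random

flips = [random.choice("HT") for _ in range(1_000_000)]

# collect every flip that immediately follows two consecutive Heads
after_hh = [flips[i] for i in range(2, len(flips)) if flips[i - 2] == flips[i - 1] == "H"]

print(round(after_hh.count("H") / len(after_hh), 3))  # ≈ 0.5: no "correction" after a streak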

One more randomness-related bias Would you believe that the following sequence is random? Head, Tail, Head, Tail, Head, Tail… People think that repetitive patterns cannot be random, even though a random process can produce them. People expect randomness to produce an absence of repetitive patterns.

What are the consequences of the failure to detect randomness? • Overgeneralization from small samples • Too little search • Learning too quickly

Examples of possible consequences:
• Cancer clusters: taxpayer money is spent on investigations although almost all of these clusters are randomly generated.
• Belief in a just world / just-world bias: guilt ascriptions to victims (a cognitive bias referring to the common assumption that the outcomes of situations are caused or guided by some universal force of justice, order, stability, or desert, rather than by randomness).

Possible psychological reason for the failure to detect randomness: illusion of control. Psychological findings: people who accept the influence of randomness on their lives feel less in control and less secure, and are more often depressive or anxious. Believing that they have control over their own lives, people with an illusion of control ascribe the same amount of control to others and deny random causes of failures and successes.

Is the law of small numbers / representativeness bias interesting for economists?

Mixed Strategies

Playing mixed strategies implies the ability to randomize. Adults are not good at writing down or recognizing random sequences. Does this mean that they cannot play mixed strategies?

Can people randomize? (Rapoport & Budescu, 1992) Addresses criticism of previous experiments. Difference between explicit and implicit knowledge. Subjects have to play a repeated zero-sum game in which the unique Nash equilibrium is in mixed strategies. In the lab, the equilibrium will be played only if subjects are able to randomize.

The game: The game is repeated for N rounds. In each round, the equilibrium is to play Red or Black with probability ½.

The payoff matrix is of the matching-pennies type: one player wins a point (and the other loses one) when both cards have the same color, and the roles are reversed when the colors differ.

                    Player 2: Red    Player 2: Black
Player 1: Red           1, −1             −1, 1
Player 1: Black        −1, 1               1, −1

Treatment D („Dyad“) – Baseline treatment • Random matching of pairs • Each pair plays the game on the previous slide for 150 rounds. • The game is common knowledge. • Subjects are told that their job is to earn as many points as possible. • In each round, each subject makes a hidden choice between Red and Black. • At the end of each round, subjects are informed about the outcome of the game (and can therefore infer the choice of their opponent).

Treatment S („Single“)
• All 150 choices must be made beforehand.
• The treatment comparison D–S isolates the effects of the feedback (if any).

Treatment R („Randomization“)
• Traditional design: subjects do not play a game but have to write down a sequence that could have been generated by flipping a coin 150 times.
• The treatment comparison S–R isolates the effects of the interactive nature of the game, which requires implicit instead of explicit knowledge.

All subjects participated in treatment D and one of the treatments S or R. Procedure in treatment D • In the Instructions, the game was explained. • No detailed explanation of the research question (why?) • Instructions said that the experiment served to investigate decision behavior in a competitive situation

• Each subject got a package of 5 red and 5 black cards and an initial endowment of 20 points. • The pairs were seated opposite of each other. • Subjects made their choice by placing a card on the table, hiding the colored side. • Then, the experimenter turned both cards and determined the winner. • The loser of the round then had to transfer one of his points to the winner. • The subjects took their cards back, mixed them, and the next round started. • Communication was prohibited explicitly.

The game ended either after 150 rounds or after one of the players lost all of his 20 points. Procedures in treatments S and R • In treatments S and R, subjects were not placed opposite to each other but isolated. • They filled in decision sheets. • The experimenter collected the sheets. • Then, in treatment S only, the experimenter determined the winner for each round.

In treatment R, the subject whose sequence was closest to a previously computer-generated sequence (Bernoulli-process) was awarded a prize of 25 points. (This was known beforehand, too.) Main results: • In treatment R, the hypothesis that subjects generated Bernoulli sequences had to be rejected. • This effect vanished to a great extent in treatments D and S. • Moreover, subjects mixed with approximately 1/2 most of the time, as predicted.
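One standard way to test the hypothesis that a choice sequence is a Bernoulli sequence is a runs test (the paper uses a battery of randomness tests; the Python sketch below is only illustrative):

import math

def runs_test_z(seq):
    # Wald-Wolfowitz runs test for a two-symbol sequence (e.g. 'R'/'B' choices).
    # Returns the z-statistic; |z| > 1.96 rejects randomness at the 5% level.
    n1 = sum(1 for x in seq if x == seq[0])   # count of one symbol
    n2 = len(seq) - n1                        # count of the other symbol
    n = n1 + n2
    runs = 1 + sum(1 for i in range(1, n) if seq[i] != seq[i - 1])
    expected = 1 + 2 * n1 * n2 / n
    variance = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

# a subject who alternates too much produces too many runs -> large positive z (≈ 4.1 here)
print(round(runs_test_z("RBRBRBRBRBRBRBRBRBRB"), 2))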

The bottom line: • People can randomize if they have to do it for strategic reasons, interacting with others. • They cannot do it „only in their head“. Does this mean that misperception of randomness is not important for economists?

Gambler‘s Fallacy in high-stakes decisions (Chen, Moskowitz, Shue, 2016) Three settings: Refugee asylum court decisions in the United States: denying asylum is 3.3% more likely after a grant (which implies about 2% of decisions are reversed). (Administrative data.) Loan application reviews in India: 9% of incorrect decisions are due to negative autocorrelation under flat payment, slightly fewer under stronger incentive regimes. (Data from a field experiment varying the incentive regime.) Major League Baseball: a home-plate umpire is 1.5 percentage points less likely to call a pitch a strike if the previous pitch was called a strike; the effect doubles close to the edge of the strike zone.

Heuristics in dealing with probabilities Question: Is there a theory about how people who are not rational Bayesians deal with probabilities? People employ heuristics (Kahneman and Tversky): • Availability • Anchoring • Representativeness

The Availability heuristic Used when people estimate the probability of a given event of type T according to the number of type-T events they can recollect. Example: people overestimate the frequency of rare risks (much publicity) but underestimate the frequency of common ones (no publicity). Public vs private transport; smoking vs illegal drugs.

Why does the availability heuristic make sense? Normally, the more frequently type-T events occur, the more of them one can remember. But the ease with which we remember certain events is also influenced by other factors, e.g. their emotional content or salience. Since people do not correct for these other factors, the availability heuristic is biased. (Biased sampling.)

Anchoring Can be used whenever people start with an initial value that they update in order to reach a final value. If the final value is biased into the direction of the initial value, this is called „anchoring“ according to Kahneman & Tversky. Closely related to conservatism.

Anchoring examples An irrelevant message (the initial value) has an effect on the outcome. K&T (1982): a number between 1 and 100 drawn from a wheel of fortune influences subsequent estimates of the number of countries. Estimating 1·2·3·4·5·6·7·8 versus 8·7·6·5·4·3·2·1 under time pressure yields median estimates of about 512 versus 2,250, although the correct answer is 8! = 40,320. Wansink, Kent, and Hoch (1998): purchase limits on Campbell‘s soup. Beggs and Graddy (2009): art auctions.

The Representativeness heuristic / Law of small numbers Often leads to significant deviations from Bayesian rationality. As we have seen, the representativeness heuristic explains: • Gambler‘s fallacy • Conjunction fallacy • Base-rate neglect

Psychological versus economic experiments

1. Financial motivation (judgements versus decisions)
2. Everyday stories („vignettes“) versus neutral settings (random devices)
3. Initial response versus equilibrium