
Bernoulli Trials and Expected Value

CSCI 2824, Fall 2014

Assignments

•  Problem Set 4 due Friday
•  All lecture slides are posted at the course website
•  Final exam: Tues., Dec. 16, 10:30–1

Flipping a weighted coin

•  Suppose we’re going to flip a coin 4 times, but the probability of the coin coming up “heads” is 0.6 (not, as usual, 0.5). (Each of these independent repeated trials of a probabilistic event is called a Bernoulli trial.)

•  What’s the probability of getting exactly two heads and two tails?

Flipping a weighted coin

•  Let’s take a look at the probability of a particular sequence: say, HTTT

•  The probability of getting this exact sequence (by our “tree” method) is 0.6 * 0.4^3

•  By similar reasoning, the probability of getting, say, HTHT is 0.6^2 * 0.4^2

Flipping a weighted coin

•  Now, there are C(4, 2) = 6 ways of getting exactly two heads out of four flips; and each of these sequences has probability 0.6^2 * 0.4^2

•  So, the overall probability of getting two heads is 6 * (0.6^2 * 0.4^2) ≈ 0.346

•  As a check: this is a little bit less than the probability of getting two heads with four flips of a fair coin (0.375).
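To make the counting concrete, here is a minimal Python sketch (my addition, not from the slides) that enumerates all 2^4 sequences of four flips of the 0.6-heads coin and sums the probabilities of the sequences with exactly two heads:

```python
from itertools import product

P_HEADS = 0.6  # the weighted coin from the slide

# Enumerate every length-4 sequence of H/T and add up the probabilities
# of the sequences that contain exactly two heads.
total = 0.0
for seq in product("HT", repeat=4):
    prob = 1.0
    for flip in seq:
        prob *= P_HEADS if flip == "H" else (1 - P_HEADS)
    if seq.count("H") == 2:
        total += prob

print(total)  # 0.3456, i.e. about 0.346
```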

A general theorem about Bernoulli trials

•  Suppose each of our trials has a probability p of “success”.

•  Then the probability of getting exactly k successes out of n trials is just:

C(n, k) * p^k * (1-p)^(n-k)
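The formula is easy to evaluate directly; this short sketch (an illustration, using Python's math.comb for C(n, k)) reproduces the two-heads-out-of-four example for both the weighted and the fair coin:

```python
from math import comb

def bernoulli_prob(n: int, k: int, p: float) -> float:
    """Probability of exactly k successes in n Bernoulli trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(bernoulli_prob(4, 2, 0.6))  # 0.3456 (weighted coin)
print(bernoulli_prob(4, 2, 0.5))  # 0.375  (fair coin)
```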

A Problem in Human Intuition

Which of these sequences of coin tosses is more likely?

HHHHHHHH    or    HTTHTHTH

Expectation (or Expected Value)

•  Suppose we try the following Yahtzee-like gamble:

•  We roll three dice. If three of a kind show up, we get $10. If only two of a kind show up, we get nothing. If all three dice have distinct values, we pay $0.75. Is this a good game to play over time?

Here’s how to solve this problem:

•  Probability of getting three of a kind: 1/36
•  Probability of two of a kind: 90/216
•  Probability of all three distinct: 120/216
•  So our expected value for the game is: (1/36 * 10) + (90/216 * 0) + (120/216 * -0.75) ≈ -0.14

In other words, on average, we lose 14 cents per play of this game.
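As a rough check on that arithmetic, here is a small Monte Carlo sketch (my addition, not from the slides) that simulates many plays of the game and reports the average payoff; over enough plays it comes out near -$0.14:

```python
import random

def play_once() -> float:
    """One play of the dice game: $10 for three of a kind, $0 for a pair, -$0.75 otherwise."""
    dice = [random.randint(1, 6) for _ in range(3)]
    distinct = len(set(dice))
    if distinct == 1:
        return 10.0      # three of a kind
    elif distinct == 2:
        return 0.0       # exactly a pair
    else:
        return -0.75     # all three values distinct

n_plays = 1_000_000
average = sum(play_once() for _ in range(n_plays)) / n_plays
print(average)  # close to -0.139, i.e. about -14 cents per play
```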

What do we mean by “expected value”, anyway?

•  Suppose we have a “reward” R_i associated with each possible event i in our sample space. Then the expected value over our sample space is just the sum, over all events i, of R_i times the probability p_i of that event: Σ_i R_i * p_i
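In code, that definition is just a dot product of rewards and probabilities. The sketch below (the function name is mine, not from the slides) applies it to the dice game above:

```python
def expected_value(rewards, probs):
    """Sum of R_i * p_i over all events in the sample space."""
    return sum(r * p for r, p in zip(rewards, probs))

# The dice game: three of a kind, exactly a pair, all distinct.
rewards = [10.0, 0.0, -0.75]
probs = [6/216, 90/216, 120/216]
print(expected_value(rewards, probs))  # about -0.139
```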

Expected Value of N Bernoulli Trials…

•  Suppose we flip a fair coin eight times. What is the expected number of heads?

0 * C(8,0) * 0.5^8 + 1 * C(8,1) * (0.5^7 * 0.5) + 2 * C(8,2) * (0.5^6 * 0.5^2) + … + 8 * C(8,8) * 0.5^8

Expected Value of N Bernoulli Trials…

•  Suppose we flip a “0.6-heads” coin eight times. What is the expected number of heads?

0 * C(8,0) * 0.4^8 + 1 * C(8,1) * (0.4^7 * 0.6) + 2 * C(8,2) * (0.4^6 * 0.6^2) + … + 8 * C(8,8) * 0.6^8
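Evaluating those two sums directly is straightforward. This sketch (self-contained, so it redefines the binomial probability) shows they come out to exactly n*p in both cases:

```python
from math import comb

def expected_successes(n: int, p: float) -> float:
    """Sum of k * C(n, k) * p^k * (1-p)^(n-k) over k = 0..n."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

print(expected_successes(8, 0.5))  # 4.0 = 8 * 0.5 (fair coin)
print(expected_successes(8, 0.6))  # 4.8 = 8 * 0.6 (weighted coin, up to floating-point rounding)
```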

A Theorem We Can Prove By Induction…

•  In n Bernoulli trials with success probability p, the expected number of successes is just n*p.

•  To take an example: the expected number of heads after flipping a fair coin 100 times is 50.

•  The expected number of heads after flipping a 0.6-heads coin 100 times is 60.

How do we show this?

•  After 1 trial, the expected number of successes is just 1*p = p. (That’s what it means to have a probability p of success!)

•  So now, as our induction hypothesis, suppose that after m trials (for some m that is at least 1) we expect to have p*m successes.

What we’ve proved so far (probability p of winning in each trial)

0 * Prob[0 out of m] + 1 * Prob[1 out of m] + 2 * Prob[2 out of m] + … + m * Prob[m out of m] = p*m

After m+1 trials, the expected number of successes is

0 * Prob[0 out of m] * (1-p)
+ 1 * Prob[1 out of m] * (1-p)
+ 2 * Prob[2 out of m] * (1-p)
+ … + m * Prob[m out of m] * (1-p)
+ 1 * Prob[0 out of m] * p
+ 2 * Prob[1 out of m] * p
+ … + (m+1) * Prob[m out of m] * p

The first group of terms covers the case where trial m+1 fails (probability 1-p); by the induction hypothesis it sums to (1-p) * mp.

The second group covers the case where trial m+1 succeeds (probability p), adding one success to each count:

1 * Prob[0 out of m] * p + 2 * Prob[1 out of m] * p + … + (m+1) * Prob[m out of m] * p
= (1 + 0) * Prob[0 out of m] * p + (1 + 1) * Prob[1 out of m] * p + … + (1 + m) * Prob[m out of m] * p
= p * (Prob[0 out of m] + Prob[1 out of m] + … + Prob[m out of m]) + p * mp
= p + p*mp

(since the probabilities of getting 0 through m successes sum to 1).

Putting the two groups together:

(1-p)*mp + p + p*mp = mp + p = (m+1)*p
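The key fact used above is that each count of successes after m+1 trials splits into a “trial m+1 failed” case and a “trial m+1 succeeded” case. The sketch below (an illustration I've added, not from the slides) builds the distribution one trial at a time using exactly that split and checks that the expected count grows by p each time:

```python
def next_distribution(dist, p):
    """Prob[k out of m+1] = Prob[k out of m]*(1-p) + Prob[k-1 out of m]*p."""
    m = len(dist) - 1
    new = [0.0] * (m + 2)
    for k, prob in enumerate(dist):
        new[k] += prob * (1 - p)      # trial m+1 fails: count stays at k
        new[k + 1] += prob * p        # trial m+1 succeeds: count becomes k+1
    return new

p = 0.6
dist = [1 - p, p]                     # after 1 trial: Prob[0] = 1-p, Prob[1] = p
for m in range(1, 9):
    expected = sum(k * prob for k, prob in enumerate(dist))
    print(m, round(expected, 10))     # prints m * p each time
    dist = next_distribution(dist, p)
```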

A Famous Experiment on “Guessing” by Gerd Gigerenzer

We create a test on American cities (populations) with lots of questions of the form: “Which is bigger: SAN JOSE or SAN ANTONIO?” We then administer this test to a classroom of American students and a classroom of German students; the German students do better.

A Famous Experiment on “Guessing” by Gerd Gigerenzer

Now, we create a test on German cities (populations) with lots of questions of the form: “Which is bigger: DORTMUND or BREMEN?” We then administer this test to a classroom of American students and a classroom of German students; now the American students do better.

""Suppose we have to choose between pairs drawn from a list of 100. Further suppose:""a. When both objects are recognized, we have a 60 percent chance of getting the right answer. (E.g., is Munich a bigger city than Berlin?)""b. When both objects are unrecognized, we have a 50 percent chance. (Essentially, we’re just “flipping a coin”: is Dortmund bigger than Duisberg?)""c. When one object is unrecognized, we have an 80 percent chance of getting the right answer. (Is Munich bigger than Dortmund?)"

"

Three people take the test, which has 100 * 99 / 2 = 4950 questions (one per pair). One (person A) recognizes each and every object in the list. His expected score is:

0.6 * (100 * 99 / 2) = 2970

Person B doesn’t know a thing about the objects in the list. His expected score is:

0.5 * (100 * 99 / 2) = 2475

Person C recognizes half the list. His expected score is:

0.5 * (50 * 49 / 2) + 0.6 * (50 * 49 / 2) + 0.8 * (50 * 50) =

612.5 + 735 + 2000 = 3347.5
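The three scores are expected values again: the number of questions of each type times the probability of answering that type correctly. A small sketch, with the pair counts written out, reproduces them:

```python
# Pair counts for a 100-object list when person C recognizes exactly 50 objects.
both_known = 50 * 49 // 2      # 1225 pairs, answered correctly 60% of the time
both_unknown = 50 * 49 // 2    # 1225 pairs, 50% (a coin flip)
one_known = 50 * 50            # 2500 pairs, 80% (recognition helps)
all_pairs = 100 * 99 // 2      # 4950 questions in total

score_A = 0.6 * all_pairs                                        # recognizes everything
score_B = 0.5 * all_pairs                                        # recognizes nothing
score_C = 0.5 * both_unknown + 0.6 * both_known + 0.8 * one_known

print(score_A, score_B, score_C)  # 2970.0 2475.0 3347.5
```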

Moral: A little ignorance can sometimes help.

Another “probability effect”

•  Estimate the proportion of English words that begin with the letter “K” versus those that have “K” in the third position.

The Conjunction Effect

Bill is 34 years old. He is intelligent, but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities. Rank the following statements by how likely you think they are:

Bill is a doctor, and his hobby is playing poker.

Bill is an architect.

Bill is an accountant.

Bill plays jazz for a hobby.

Bill surfs for a hobby.

Bill is a reporter.

Bill is an accountant who plays jazz for a hobby.

Bill climbs mountains for a hobby.