Transcript of "Decision-Making and the Cognitive Architecture of Problem Solving," B. Chandrasekaran, Keynote Talk, First IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM 2007), Honolulu, HI, April 3, 2007.

Page 1:

Decision-Making and the Cognitive Architecture of Problem Solving

B. Chandrasekaran

Keynote Talk

First IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM 2007), Honolulu, HI

April 3, 2007

Copyright © 2007 by B. Chandrasekaran

Research supported through participation in the Advanced Decision Architectures Collaborative Technology Alliance sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAD19-01-2-0009.

Page 2:

Background for the Talk

• My long-standing interest in cognition and cognitive architectures

• Relatively recent interest & work in Multi-Criteria Decision-Making (MCDM)

• Puzzles about the relation between the general human situation in acting and deciding and the highly formal normative formulations of decision theory.

Page 3:

Decision-Making in the Wild (DMITW)

• DMITW is everyday DM, before the formalization that potentially makes the problem amenable to a "rational" solution, i.e., the formal version.
  – Often formulated as maximizing the utility (u), or satisfaction, of the agent.

• Ex: A man wins $20,000 in a lottery. Should he pay off some of his loans? Buy a new car? Repair the house? Maybe other things? The formal core may be MCDM, but:
  – Choices (alternatives, "alts") and criteria are not given. The DM has to figure them out.
  – In any case, this is one problem among many faced by the agent.

Page 4:

The Pre-Formal Process Is Itself Subject to Informal Rationality Judgments

• Rationality in DMITW is also to be judged by the degree to which the process results in maximizing utility for the agent, given the agent's knowledge.

• Many ways of falling short:
  – A less complete set of alts
    • "I could've had a V8." Failed to generate the V8 alt. Why would it have been more rational to have thought of the V8?
    • How to increase the likelihood that the V8 alt is generated?
  – Poor evaluation of alts
    • How to ensure that likely satisfaction is estimated correctly?
  – Choosing the "best" alt
    • Getting this right is the easy part. Once formalized, normative solutions are available.

Page 5:

My Focus

• I wish to focus on phenomena in this pre-formal region, esp. the role of the cognitive architecture in transforming the problem into the "formal" version.

  – The strengths & weaknesses of the cognitive architecture then become central.

  – The analysis may help us understand various "irrationalities" and how DSS's may be enhanced.

Page 6:

Bias & “Irrationality”

• Human decision-making is subject to various irrationalities. Examples:
  – Framing Bias
  – Transitivity issues
  – "Fairness" bias, …

• Extensive literature; I'll just touch on a few aspects.

• Understanding their origin can help in designing DSS's that help DMs avoid them.

Page 7:

Bias & “Irrationality” Examples: Framing Bias

• Subjects received a message indicating that they'd initially receive £50.

• They had to choose between a "sure" option and a "gamble" option. The "sure" option was presented in two different ways, or "frames," to different groups of subjects.

• The "sure" option was formulated as either "keep £20" (Gain frame) or "lose £30" (Loss frame). The "gamble" option was identical in both frames and presented as a pie chart depicting the probability of winning.

Page 8:

Bias & “Irrationality” Examples: Framing Bias

Page 9:

Morgenbesser's Pies

• Anecdote about philosopher Sidney Morgenbesser:
  – After finishing dinner, Sidney decides to order dessert.
  – The waitress tells him he has two choices: apple pie and blueberry pie. He orders the apple pie.
  – After a few minutes the waitress returns and says that they also have cherry pie, at which point Morgenbesser says:
  – "In that case I'll have the blueberry pie."

• This was supposedly a joke, but might the above behavior actually be rational?

Page 10:

Cognitive Architecture & Problem Solving

• Cognition can be analyzed architecturally at many layers.

• We focus on the problem-solving (PS) level of analysis, such as embodied in Soar:
  – Goals result in relevant knowledge being accessed from LTM, which sets up a problem space, which may result in subgoals; this is a recursive process, until all the subgoals are achieved, & the main goal is as well.
  – Open-ended search in problem spaces.
  – Heuristics & pre-compiled solutions can reduce search.
  – Aspects of the architecture may bias the search in various ways. (A minimal sketch of the recursive goal/subgoal process appears below.)
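To make the recursive goal/subgoal picture concrete, here is a toy sketch in Python. It is not Soar itself; the rule table standing in for LTM and all goal names are invented for illustration.

```python
# Toy illustration (not Soar) of the recursion described above: a goal
# retrieves relevant "knowledge" from a rule table standing in for LTM,
# which either achieves the goal directly or sets up subgoals that are
# pursued recursively until the main goal is achieved.

# Hypothetical rule table: goal -> subgoals ([] = directly achievable)
LTM_RULES = {
    "decide what to do with windfall": ["generate alts", "evaluate alts", "choose"],
    "generate alts": [],
    "evaluate alts": ["simulate consequences"],
    "simulate consequences": [],
    "choose": [],
}

def solve(goal, depth=0):
    """Achieve `goal` by recursively achieving its subgoals."""
    print("  " * depth + "goal: " + goal)
    subgoals = LTM_RULES.get(goal)
    if subgoals is None:
        # No relevant knowledge in LTM: in the real architecture such an
        # impasse would trigger open-ended search in a problem space.
        print("  " * depth + "(impasse: search needed)")
        return False
    return all(solve(sg, depth + 1) for sg in subgoals)

solve("decide what to do with windfall")
```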

Page 11:

Evaluating Alternatives by Problem Solving: $20,000 Windfall Example

• Knowledge is used to generate alts (not shown) and to evaluate their consequences.

Page 12:

Reducing Search

• Directly relevant knowledge, e.g., the result of learning from previous situations, can reduce search.

• Heuristics. Similar to the above, but less dependable.
  – Heuristics may directly assign "desirability" values based on elements of alt descriptions.

• Architectural biases:
  – Avoid loss, avoid danger, built-in notions of fairness
    • Set by evolution
  – These result in rapid evaluations of alts. (A sketch of such heuristic, bias-tinged scoring follows below.)
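A minimal sketch, with all feature names and weights assumed for illustration, of heuristics that read "desirability" directly off an alt's description; a hard-wired loss-avoidance bias is modeled as an extra penalty.

```python
# Minimal sketch (feature weights invented): heuristics assign a
# "desirability" score directly from words in an alt's description,
# avoiding deeper search. An architectural bias is modeled as an
# extra penalty whenever the description mentions a loss.

HEURISTIC_WEIGHTS = {        # description feature -> score contribution
    "keep": +1.0,
    "win":  +1.0,
    "lose": -1.0,
    "debt": -0.5,
}
LOSS_AVERSION_PENALTY = 1.0  # hard-wired bias: losses loom larger

def desirability(description: str) -> float:
    words = description.lower().split()
    score = sum(HEURISTIC_WEIGHTS.get(w, 0.0) for w in words)
    if "lose" in words:      # rapid, pre-compiled evaluation
        score -= LOSS_AVERSION_PENALTY
    return score

print(desirability("keep 20 pounds"))  # prints 1.0
print(desirability("lose 30 pounds"))  # prints -2.0
```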

Page 13:

Ideally, PS transforms a DMITW into a formal version with a normative solution.

• Given a decision problem, the DM simulates, imagines, calculates, processes, composes, i.e., problem solves, until she can get a sense of the possible alternatives and their consequences.

• One approach is to transform the problem into MCDM.
  – Proceed with simulating consequences until each alt is characterized in terms of similar dimensions, i.e., "criteria."
    • Not always possible, but when possible, the alts can be compared more readily.
  – But how does MCDM connect to utility in general?

Page 14:

Example: $20,000 windfall

• Knowledge used to generate alts:
  – Pay off debt (POD) vs. fix house (FH)
  – POD will reduce monthly interest payments, increase net income; FH …

[Diagram: the alts POD and FH, linked to evaluation dimensions such as increase in income, domestic peace, and conscience improvement.]

Page 15:

MCDM and Utility

• In MCDM, the DM is normally characterized as one with multiple objectives.
  – But we need a unification with the more general "DM with utility" model.

• Let Ci(dk) be the value alt dk has on the criterion Ci (w/o loss of generality, all criteria directions are such that higher is better), and u(dk) the utility of alt dk.

• The dimensions -- the criteria -- should be as informative about u as possible:
  – f(C1, C2, …, Cn) --> u_est (a minimal sketch of such an estimator follows below)
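To make f(C1, …, Cn) --> u_est concrete, here is a minimal sketch using the POD/FH alts from the windfall example. The weighted-sum form of f and all scores and weights are illustrative assumptions, not from the talk.

```python
# Minimal sketch: estimate utility from criteria via a weighted sum.
# The weighted-sum form of f, and all scores/weights, are invented;
# the talk only requires some f(C1, ..., Cn) -> u_est.

criteria = ["increase in income", "domestic peace", "conscience improvement"]
weights  = [0.5, 0.3, 0.2]     # assumed importance of each criterion

# Alts from the $20,000 windfall example, scored on each criterion
# (higher is better on every criterion; scores invented).
alts = {
    "POD": [0.8, 0.4, 0.9],    # pay off debt
    "FH":  [0.2, 0.9, 0.5],    # fix house
}

def u_est(scores):
    """f(C1, ..., Cn): one possible estimator of utility."""
    return sum(w * c for w, c in zip(weights, scores))

for name, scores in alts.items():
    print(name, round(u_est(scores), 3))
# POD 0.7, FH 0.47 -> POD preferred under these assumed weights
```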

Page 16:

Criteria & the u Function, contd.

• i. Ci(dk) > Ci(dl) --> Pr[u(dk) > u(dl) | Ci's] (call it pi) is greater than 1/2. Alts can be ordered with some confidence.

• ii. Ci(dk) > Ci(dl) & Cj(dk) > Cj(dl) --> Pr[u(dk) > u(dl) | Ci's & Cj's] (call it pij) is greater than or equal to pi. Alts can be ordered with more confidence.

• iii. If Ci(dk) > Ci(dl) & Cj(dk) < Cj(dl), the alts cannot be ordered, and more knowledge is needed.
  – A trade-off situation. Perhaps additional dimensions of comparison can be identified. (A sketch of this ordering logic follows below.)
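A sketch (data and function names mine) of the ordering logic in conditions i–iii: if every criterion that differs favors the same alt, the pair can be ordered; if criteria disagree, we are in a trade-off and more knowledge is needed.

```python
# Ordering logic of conditions i-iii over criteria vectors
# (higher is better on every criterion, as in the talk).

def compare(a, b):
    """Return 'a', 'b', 'trade-off', or 'tie' for criteria vectors a, b."""
    a_better = any(x > y for x, y in zip(a, b))
    b_better = any(x < y for x, y in zip(a, b))
    if a_better and not b_better:
        return "a"            # cases i/ii: order with (more) confidence
    if b_better and not a_better:
        return "b"
    if a_better and b_better:
        return "trade-off"    # case iii: more knowledge needed
    return "tie"

print(compare([0.8, 0.4], [0.2, 0.9]))   # trade-off (case iii)
print(compare([0.8, 0.9], [0.2, 0.5]))   # a (case ii)
```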

Page 17:

Trade-offs

• Trade-offs are normally described as the agent "accessing" her preferences. In fact, in DMITW, they typically require additional simulation, imagining, etc.
  – More descriptive dimensions, giving more information about the relative values of the u's.

Page 18:

Possible Sources of Error: Cognitive Architecture & Bounded Rationality

• Knowledge limitation: generation of alts and exploration of consequences may both suffer.
  – What knowledge is invoked depends on the terms in the problem description and on how the alts are described. Different descriptions, though logically identical, may invoke different knowledge. One source of Framing Bias.

• Architectural limits & biases:
  – STM limitations
  – Hard-wired architectural features might bias evaluations & choices
    • "Loss" and "danger" in alt descriptions may influence assessments of alts. Another source of Framing Bias.

• Time limitation: incomplete exploration, more dependence on heuristics and architectural biases.

Page 19:

Cognitive Architecture & Explanations of Biases

• Some of the “irrationalities” may be explained by the features of the cognitive architecture.

• Not everything that appears irrational may be so if viewed in the larger context.

Page 20:

Bias & “Irrationality” Examples: Framing Bias

• (Recall:) Subjects received a message indicating that they'd initially receive £50. They had to choose between a "sure" option and a "gamble" option. The "sure" option was presented in two different ways, or "frames," to different groups of subjects: it was formulated as either "keep £20" (Gain frame) or "lose £30" (Loss frame). The "gamble" option was identical in both frames and was presented as a pie chart depicting the probability of winning.

Page 21:

Architectural Bias against “Loss”

• Evaluating "Bet" requires a bit of problem solving.

• The issue (probably) does not arise if the choices are:
  – {Lose £30, Lose £20} vs. {Keep £20, Keep £30}
  – But how about {Lose £30} vs. {Keep £30}?

• u_est(Lose £30) < u_est(Keep £20): the same sure outcome gets a lower estimated utility under the loss description.

Page 22:

Possible Rationality of Prof. Morgenbesser’s Behavior

• Recall:
  – The waitress gives two choices: apple pie and blueberry pie. He orders the apple pie.
  – The waitress returns and says that they also have cherry pie, at which point Morgenbesser says: "In that case I'll have the blueberry pie."

• Can a problem-solving model explain this?
  – How about if the "cherry pie" alt triggered new knowledge that generated new dimensions of comparison that had not previously been taken into account?

Page 23:

Framing Bias Related to Knowledge Invocation

• Consider an example in sensor-allocation planning, where some of the criteria are "coverage," "timeliness," and "fuel expended." The last criterion can also be described as "fuel remaining"; one may be computed from the other.
  – When doing trade-offs between two COAs, the DM may have to simulate the alts further to resolve the trade-off.
  – The different descriptions can invoke different problem spaces: "fuel remaining" may bring up problem spaces about future missions, while "fuel expended" brings up spaces about the cost of the current mission. The two descriptions may result in different trade-off behavior.

• Depending on the larger context, both descriptions may be used to better effect, though it may appear redundant.

Page 24:

Bias & “Irrationality” Examples: Intransitivity

• Arrow's Paradox in voting (collective decision-making)
  – Versions exist in individual DM as well.
  – Less a result of hard-wired properties of the architecture, and more one that arises from discarding relevant information.
    • Counting positives & negatives.
    • A possible consequence of knowledge limitation and of discretization of evaluations. (A worked example of such a cycle follows below.)
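Here is a worked example, with invented criterion values, of how "counting positives & negatives", ordering each pair of alts by the number of criteria on which one beats the other, discards magnitude information and can produce an intransitive cycle, exactly as in the voting paradoxes.

```python
# Three alts scored on three criteria (values invented). Ordering each
# pair by majority of criteria, rather than by magnitudes, yields a
# cycle: A beats B, B beats C, yet C beats A.

alts = {"A": (3, 1, 2), "B": (2, 3, 1), "C": (1, 2, 3)}

def beats(x, y):
    """x 'beats' y if x is better on a majority of criteria."""
    wins = sum(a > b for a, b in zip(alts[x], alts[y]))
    return wins > len(alts[x]) / 2

print(beats("A", "B"))   # True
print(beats("B", "C"))   # True
print(beats("C", "A"))   # True  -> A > B > C > A: intransitive
```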

Page 25:

Bias & “Irrationality” Examples: Fairness Bias

• Players A & B. Both A & B know the following rules.

• A is given $100 and asked to share it with B. A can propose to give B any amount. If B accepts, he gets that amount, and A gets to keep the balance. If B declines, neither of them gets anything (A has to return the $100).

• By a significant margin, people asked to play B reject "unfair" offers, even though they'll end up getting nothing.

• This bias is likely architectural. (A toy model of the two responder policies follows below.)
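A toy model (the threshold and function names are mine) contrasting the normative responder, for whom any positive amount beats nothing, with a fairness-biased responder who rejects offers below some fraction of the total.

```python
# Normative B accepts any positive offer, since u(offer) > u(0);
# a fairness-biased B rejects "unfair" splits below a threshold.

def rational_B(offer: float) -> bool:
    return offer > 0                     # something beats nothing

def fairness_biased_B(offer: float, total: float = 100.0,
                      threshold: float = 0.3) -> bool:
    return offer / total >= threshold    # reject "unfair" splits

for offer in (1, 10, 20, 30, 50):
    print(offer, rational_B(offer), fairness_biased_B(offer))
```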

Page 26:

Architectural Biases in Moral Decisions

• Peter Singer, March 20, 2007, The Guardian (UK):

Greene ... studied how people respond to a set of imaginary dilemmas. In one, you are standing by a railroad track when you notice that a trolley, with no one aboard, is heading for a group of five people. They will all be killed if it continues on its current track. The only thing you can do to prevent these five deaths is to throw a switch that will divert the trolley on to a side track, where it will kill only one person. .... most people say you should divert the trolley on to the side track, thus saving a net four lives...

• In another dilemma, the trolley is about to kill five people. This time, you are standing on a footbridge above the track. You cannot divert the trolley. You consider jumping off the bridge, in front of the trolley, thus sacrificing yourself to save the people in danger, but you realise you are too light to stop the trolley. Standing next to you is a very large stranger. The only way you can prevent the trolley from killing five people is by pushing this stranger off the bridge into the path of the trolley. He will be killed, but you will save the other five. ... Most people say that it would be wrong to push the stranger....

• ...remarkable consistency despite differences in nationality, ethnicity, religion, age and sex....

• Greene found that people asked to make a moral judgment about "personal" violations, like pushing the stranger off the footbridge, showed increased activity in areas of the brain associated with emotions. This was not the case with people asked to make judgments about relatively "impersonal" violations like throwing a switch. Moreover, the minority of subjects who did consider that it would be right to push the stranger off the footbridge took longer to reach this judgment than those who said that doing so would be wrong....

Page 27:

What Does All This Mean for DSS's?

• A decision is only as good as the quality of the formal version of the problem, however "rational" the solution is w.r.t. the latter.

• Ultimately, all that the human DM has is hir cognitive architecture, which has its strengths & weaknesses.
  – Strengths:
    • Large, potentially open-ended amounts of knowledge, including knowledge about accessing external knowledge.
    • Ability to bring relevant knowledge, if very slowly, to bear on the task at hand.
  – Weaknesses:
    • Slow.
    • Evolutionarily hard-wired architectural constraints & biases (framing and fairness biases).
    • Access to knowledge depends on cues in descriptions & may miss relevant knowledge (framing bias).

Page 28:

What Does All this Mean for DSS’s?

• DSS design should be sensitive to possible DM biases arising from the architecture.
  – Framing Bias: investigate the operational context to make sure descriptions help the DM evoke all relevant knowledge.
    • DSS's can help here.

Page 29:

How DSS's Can Help

• Supporting exploration of problem representation & formulation

• Supporting alt-generation techniques humans are poor or slow at:
  – E.g., GA techniques for generation of design candidates, candidate-composition techniques (a minimal GA sketch follows below)
  – …

• Supporting evaluation techniques humans are slow or poor at:
  – E.g., simulation
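As a flavor of the GA idea mentioned above, here is a minimal sketch. The bit-string encoding of design candidates and the toy fitness function are assumptions for illustration, not anything from the talk.

```python
# Minimal GA sketch: generate candidate alternatives by recombining
# and mutating existing ones. Encoding and fitness are invented.

import random

GENOME_LEN = 8                      # candidate = 8 binary design choices

def fitness(candidate):
    return sum(candidate)           # toy stand-in for a real evaluator

def mutate(candidate, rate=0.1):
    return [1 - g if random.random() < rate else g for g in candidate]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(20)]
for _ in range(30):                 # evolve for 30 generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]              # keep the better half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

print(max(pop, key=fitness))        # best generated candidate
```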

Page 30:

Questions?