Artificial Intelligence and Lisp LiU Course TDDC65 Autumn Semester, 2010


Artificial Intelligence and Lisp

LiU Course TDDC65, Autumn Semester 2010

http://www.ida.liu.se/ext/TDDC65/

Your Resources in Course TDDC65

Course leader: Erik Sandewall

Lab assistants: Henrik Lundberg, John Olsson

Administrator: Anna Grabska Eklund

Webpage: http://www.ida.liu.se/ext/TDDC65/

All course materials, including PowerPoint slides, are made available under an open-access license. Please see the course webpage for details about the conditions of use.

Course Goals

Understand the character of major software techniques in artificial intelligence, including certain algorithms, the use of formal logic, and major types of software systems for AI.

Understand major aspects of list processing, in particular the elements of programming in Lisp, the use of S-expressions as a syntactic style for a variety of languages, and the relationship between Lisp/S-expressions and formal logic.

Have some knowledge of important applications of Artificial Intelligence and of Lisp-based software.

Have the understanding of formal logic that is required for achieving the above goals.

Teaching plan

In general (during period 1):

Wednesdays 10.15 – 12.00: Lectures

Tuesdays 13.15 – 15.00 or 15.15 – 17.00: Tutorials, in particular for the labs

Weeks 37 and 39, Fridays 8.15 – 10.00: Lectures on Formal Logic. These lectures are intended for those students who do not already know this material. Please check the lecture notes for this part before you decide whether to attend.

Next quarter: lectures Tuesdays 10.15 - 12.00, labs Mondays 13-17

Duration

Lectures end by December 1

Labs and exam study during December

Lab facilities close March 1, 2011

Lectures today and tomorrow

The concept of 'intelligence' and the goals of artificial intelligence as a research area and as a software technology

The concept of an autonomous intelligent agent

Principles of software technology in A.I.

The software resources for this course

These are followed by a lab tutorial session on Thursday of this week.

Intelligence is More/Less, not Yes/No

[Slide diagram: a scale ranging from Stupid to Intelligent. People (typically) sit near the Intelligent end; computers, i.e. software, sit near the Stupid end. The goal is to get software to be around where people are, or more.]

So What do we Mean by Intelligence in this Context?

I will first discuss some examples of intelligence in the behavior of a small child (aged 1–4 years)

Then discuss some examples of intelligence in the behavior of adults

Then describe some proposed software architectures for artificial intelligence systems

Then explain how the concept of intelligence is used in cognitive psychology, including both the standard view and alternative views

Also discuss how intelligence in this sense can be implemented in computer software.

First small child scenario

Task: get an icecream bar from the freezer in the kitchen

Solution: Move a chair to be in front of the refrigerator, then open the freezer door.


This required:

Being able to identify and use a 'tool'

Foresight for visualizing the use of the first plan

Understanding the problem with that plan

Ability to revise the plan

First scenario, modified

Task: get an icecream bar from the freezer in the kitchen

Solution: Move a chair to be in front of the freezer, then fail to open the freezer door, then move the chair to be in front of the refrigerator door, open the freezer door.

This required:

Being able to understand the problem with the first plan

Being able to extend the first plan accordingly

First scenario, modified again

Task: get an icecream bar from the freezer in the kitchen again, two days later

Solution: Move a chair to be in front of the refrigerator door, climb it, and open the freezer door.

This required:

Being able to remember and reuse a plan that worked at an earlier time, including adapting it

Being able to simplify the plan, and remove the cul-de-sac that appeared in it

Second small child scenario

Scenario: Ronnie is playing with two dolls and has one of them in her hands, Alice comes around and takes the other doll.


Solution: Ronnie goes and gets another thing that she knows Alice wants, and offers it to her in exchange for the second doll.

This required (one possible analysis):

A model of Alice's state of mind, in particular her likely reaction when she sees the alternative toy

The ability to make a plan for bringing the alternative toy to the attention of Alice

A model of the current state of the world, e.g. the location of the alternative toy

A certain level of self-control

Third small child scenario

Ronnie is playing with a doll, her father watching. She undresses the doll.

Father: are you going to give the doll a bath?
Ronnie: yes
Father: are you going to bring her to the bathroom then?
Ronnie: yes
(Pause)
Ronnie: come you also, I can not myself open the water


This required:

Ability to foresee the problem of reaching the tap

Ability to recruit father as an instrument

Ability to formulate the request phrase, including the non-standard phrase 'open the water'

Ability to understand that the phrase 'I can not myself open the water' is sufficient for explaining why she wants father to come along

Ability to understand that it is appropriate to explain why she wants him to come along

Summary of Necessary Capabilities

Have a model of the environment, including other persons or agents, with respect to physical properties as well as behaviors

Have a repertoire of actions that can be used for forming plans

Have the ability to identify goals, and to make and execute plans for achieving such goals

Have the ability to understand problems when executing a plan, and to modify the plan accordingly

Have the ability to foresee the effects of a plan

Have the ability to learn in the sense of preserving a previously used plan, and to modify and reuse it later

Have the ability to make and use analogies

First Grownup Scenario

Topic: Have a number of guests at home in order to mingle before going out for dinner or entertainment

Part I: Making the preparations for the event

Part II: Managing the event. This requires routine behavior (serving and mingling) but also intervention when something unexpected happens.

Part I requires prediction of both the routine situations and the possible intervention needs.

Compare child and grown-up scenarios

Similar sets of capabilities are used

The grown-up has a much larger world model, and a much larger repertoire of behaviors

Are there fundamental capabilities that differ between child and grown-up?

Implementing Intelligence in Software

Artificial intelligence aims at implementing capabilities such as those described in the previous slides.

More ambitious goals, such as 'implementing all aspects of human intelligence', are a very long-term research goal for some researchers, but not the mainstream goal.

The A.I. goal is to reproduce the behaviors, but there is no obligation to reproduce the exact mechanisms whereby they arise in people. The 'hardware' is extremely different.

Understanding how intelligence functions in people (or animals) is also an interesting topic of research, but a quite separate one.

Major Components of an A.I. System

Programs, which implement the necessary algorithms, protocols and interactive behaviors

A model of the system's environment, including things, persons, other A.I. systems (robots, agents); their physical properties and typical behaviors

The model may also contain a history of past events

The system's cognitive state: its goals, plans, etc.

An overall architecture. We shall discuss:
-- The BDI model (Belief/Desire/Intention)
-- The HTN model (Hierarchical Task Network)

The Belief/Desire/Intention Model (BDI)

Terminology:

Belief: An element in the system's model of the world

Desire: An objective that the system would like to bring about

Goal: A desire that has been adopted for active pursuit. The set of goals must be consistent.

Intention: Something that the system has decided to do: a goal and a way of doing it

Plan: A sequence of actions for achieving an intention

Event: Observed by sensors or generated internally. Events may update beliefs, trigger plans, or modify goals.

BDI Representations of the Child Scenarios

Events: Authorization to obtain icecream bar, other girl takes second doll, [internal event] get idea of giving the doll a bath

Beliefs: Knowledge about the situation and its constituents

Desires: Eat icecream, play with dolls (several instances of that desire), not having to interrupt play once started

Goals: Get icecream, get back the second doll, pour water on the doll

Plan: (As in the examples)

The Belief/Desire/Intention Model (BDI)

Main Loop:

initialize-state
repeat
    options := option-generator(event-queue)
    selected-options := deliberate(options)
    update-intentions(selected-options)
    execute()
    get-new-external-events()
    drop-unsuccessful-attitudes()
    drop-impossible-attitudes()
end repeat
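The loop above can be sketched as a minimal, runnable Python program. This is a hypothetical stand-in, not any particular BDI implementation: the component methods are trivial placeholders whose names simply follow the pseudocode.

```python
from collections import deque

class BDIAgent:
    """Minimal sketch of the BDI main loop; all component methods are placeholders."""

    def __init__(self):
        self.beliefs = set()      # elements of the system's world model
        self.intentions = []      # adopted (goal, object) pairs
        self.event_queue = deque()

    def option_generator(self):
        # Each pending event may suggest options (desires that could be pursued).
        return [("get", ev) for ev in self.event_queue]

    def deliberate(self, options):
        # Choose a consistent subset of the options; here, trivially, all of them.
        return options

    def update_intentions(self, selected):
        self.intentions.extend(selected)

    def execute(self):
        # Perform one step of some intention's plan; record the result as a belief.
        if self.intentions:
            goal, obj = self.intentions.pop(0)
            self.beliefs.add((goal, obj, "done"))

    def get_new_external_events(self, sensed=()):
        self.event_queue.clear()
        self.event_queue.extend(sensed)

    def drop_unsuccessful_attitudes(self):
        pass  # would remove intentions whose plans keep failing

    def drop_impossible_attitudes(self):
        pass  # would remove goals no longer achievable given the beliefs

    def step(self, sensed=()):
        """One iteration of the 'repeat' loop from the slide."""
        options = self.option_generator()
        selected = self.deliberate(options)
        self.update_intentions(selected)
        self.execute()
        self.get_new_external_events(sensed)
        self.drop_unsuccessful_attitudes()
        self.drop_impossible_attitudes()

agent = BDIAgent()
agent.get_new_external_events(["icecream-authorized"])
agent.step()
print(agent.beliefs)  # the executed intention is now reflected in the beliefs
```

The point of the structure is that deliberation (choosing among options) is separated from execution, and that both unsuccessful and impossible attitudes are pruned on every cycle.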

Hierarchical Task Networks (HTN)

Maintain a hierarchical plan as a data structure with links between successive actions

Annotate the plan with additional information, e.g. preconditions, what to do if an action fails, and more. This is the task network.

Have programs for executing such task networks, including basic subprograms for executing each action in a task network

Have facilities for making plans expressed as a task network, checking plans ahead of time, modifying plans, explaining plans, dealing with plan failure, and more.

What is then the 'program' for the agent? The task network? The set of subprograms for specific actions? or the program that executes plans and invokes subprograms?

In the first and second senses the agent can 'modify its own programs'; in the second sense it can perhaps 'learn behaviors'.
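As a rough illustration of these ideas (a sketch, not any particular HTN system; all names are invented), a task-network node can be an action annotated with a precondition, a recovery task, and a link to its successor. The executing program walks the links, checking each precondition before acting:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    """Hypothetical minimal task-network node."""
    name: str
    action: Callable[[dict], None]                 # executes against the world state
    precondition: Callable[[dict], bool] = lambda s: True
    on_failure: Optional["Task"] = None            # what to do if the precondition fails
    successor: Optional["Task"] = None             # next action in the plan

def execute_network(task: Optional[Task], state: dict) -> None:
    """Walk the linked plan, checking preconditions before each action."""
    while task is not None:
        if task.precondition(state):
            task.action(state)
            task = task.successor
        elif task.on_failure is not None:
            task = task.on_failure                 # recovery branch rejoins the plan
        else:
            raise RuntimeError(f"plan failed at {task.name}")

# The icecream plan: open the freezer; if the chair is in the wrong place,
# move it first and retry (the recovery path from the modified scenario).
open_freezer = Task("open-freezer",
                    lambda s: s.update(freezer="open"),
                    precondition=lambda s: s.get("chair") == "at-refrigerator")
move_chair = Task("move-chair",
                  lambda s: s.update(chair="at-refrigerator"),
                  successor=open_freezer)
open_freezer.on_failure = move_chair

state = {"chair": "elsewhere"}
execute_network(open_freezer, state)
print(state)  # {'chair': 'at-refrigerator', 'freezer': 'open'}
```

Under this view, "modifying its own program" means editing the `Task` objects, while `execute_network` plays the role of the fixed plan-executing program.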

Comparing BDI and HTN

BDI is more general and 'human-like'; HTN is more 'technical'

It is possible, if one wishes, to incorporate the use of desires and goals into an HTN system, as some of the facilities that operate on the current task network

The BDI model says almost nothing about the execution structure. One possible approach is to embed an HTN facility in it.

BDI vs Human Intelligence

Intelligence as a Measurable Quantity

Identify a number of specific, measurable cognitive abilities, such as drawing conclusions, identifying analogies, and geometric manipulation

Measure each of these in a representative population of persons

Notice the statistical co-occurrence of several of these abilities. This makes it reasonable to consider them as a group. The term 'intelligence' is used for characterizing this aggregate of specific abilities.

Intelligence, with this definition, is observed to be a useful concept: it is persistent during the development of a young person, and a good predictor of a number of success factors in a person's life.
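The co-occurrence step can be illustrated with a toy computation (the ability scores below are fabricated purely for illustration): high pairwise correlations between the measured abilities are what make it reasonable to group them into one aggregate.

```python
import statistics

# Fabricated scores of five persons on three measured abilities.
reasoning = [12, 15, 9, 18, 11]
analogies = [11, 16, 8, 17, 12]
geometry  = [10, 14, 9, 19, 10]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlations near 1 indicate co-occurrence: persons strong in one
# ability tend to be strong in the others, suggesting one aggregate factor.
print(round(pearson(reasoning, analogies), 2))
print(round(pearson(reasoning, geometry), 2))
```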

Consensus Definition of Intelligence

"Mainstream Science on Intelligence" (52 researchers):

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings -- “catching on”, “making sense” of things, or “figuring out” what to do.

Consensus Definition of Intelligence

Report by American Psychological Association, 1995:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, [and] to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.

Why do these capabilities occur and co-occur?

By the existence of a specific "organ"?

By genetically dictated, physiological factors?

By developmental physiological factors?

By social training and upbringing?

Psychology Definitions of Intelligence as Implementation Guidelines?

As a specification of how to do it: not useful

As a specification of capabilities that the resulting system should have: usable, but leaves very much to the interpretation of the reader

In any case, these definitions focus on capabilities; this can be related to the capabilities of actually implemented systems

Discrete Intelligence Criteria

Examples:

Ability to make and use tools

Ability to understand that other agents do not necessarily know all that one knows oneself

Ability to understand that the image in a mirror is a picture of oneself, and not an additional physical individual

In computational terms, such abilities are the direct result of the world model that the system uses. Moderately advanced versions of these capabilities can be hand-crafted into a computational system; this belongs to the software design.

What about the ability of a child to acquire these abilities?

One or many intelligences? Consider the second child scenario.

One view: The children show social competence. This is intelligent behavior, but of another kind than what is required for opening freezer doors or repairing physical devices. We should therefore distinguish between 'mechanical' and 'social' intelligence.

Other view: The basic paradigm of problem-solving behavior is the same in both examples. They merely differ with respect to which parts of the agent's entire world model are being used.

Other proposed terms: 'emotional intelligence', 'practical intelligence' [probably more coming]

Read the Wikipedia page on 'intelligence', and the discussion page attached to it, to get a flavor of the controversy.