CAN COMPUTERS DO WHAT HUMANS DO?
A Comparison Between Artificial Intelligence And Human Intelligence

Mikhael Bebela Missakabo

A thesis submitted in conformity with the requirements for the degree of Master of Arts
Graduate Department of Education
University of Toronto

© Copyright by Mikhael Bebela Missakabo 1998

"Body am I, and soul" - thus speaks the child. And why should one not speak like children? But the awakened and knowing say: body am I entirely, and nothing else; and soul is only a word for something about the body. The body is a great reason, a plurality with one sense, a war and a peace, a herd and a shepherd. An instrument of your body is also your little reason, my brother, which you call 'spirit' - a little instrument and toy of your great reason.... Behind your thoughts and feelings, my brother, there stands a mighty ruler, an unknown sage - whose name is self. In your body he dwells; he is your body. There is more reason in your body than in your best wisdom.

(Friedrich Nietzsche, Thus Spoke Zarathustra)


Can Computers Do What Humans Do? A Comparison Between Artificial Intelligence And Human Intelligence
Master of Arts, 1998
Mikhael Bebela Missakabo
Graduate Department of Education
University of Toronto

Is there a distinction between knowledge humans can acquire and/or generate and knowledge computers can acquire and/or generate? Unless taken in a metaphorical sense, in any contention about computers' "intelligence" there is an underlying assumption that computers somehow have "mind". The question is: what is the mind? We have to know what the (nature of) mind is before attributing intelligence to computing machines. That is why I will briefly try to examine the (im)possibility of knowing the nature of mind. Furthermore, I will attempt to answer the question of whether computers or any other artifact can acquire and/or generate knowledge the way(s) humans do. This will be done by comparing and contrasting human problem-solving capabilities and those of computers. In the end, I will try to emphasize that social interaction plays a significant role in the formation of the mind and the development of intelligence.


I extend my gratitude to Prof. John Eisenberg and Prof. Harold Troper for their vision and supervision. I cannot forget the assistance and suggestions for improvement over the years by Prof. George Moyal, Prof. Lutz Winckler, and Anke Winckler. To Julia Winckler, Shirin Khosravaneh, and Martine Giguère, I sincerely appreciate your everyday support and encouragement. I am also very much indebted to Prof. Neil Naiman, Prof. Fraser Cowley, Khoudia Camara, Patricia Pok Shin, Prof. Claude Gratton, and Prof. Pierre Belland. And a special thanks to everybody I mentioned above for putting up with my 'warped' notion of time.


Abstract
Acknowledgements
Table of Contents
Introduction
On the Nature of the Mind
Is the Brain the Mind?
Problem Solving: Humans and Machines
Social Interaction and the Mind
Conclusion
Bibliography


One more word, and I leave you. Always keep in mind that nature is not God; that a man is not a machine; that a hypothesis is not a fact: and rest assured that you will not have understood me wherever you think you see anything contrary to these principles.

Denis Diderot (1713-1784)

Even though strength and agility seemed to be the most important abilities for ancient people, they also relied on such faculties as vision, hearing, and smell. In the struggle to survive, all these factors played a major role in decisions on how, when, and where to apply physical skills. But there must have been 'something' that coordinated and commanded all those distinct abilities or faculties and accounted for judgment, creativity, memories, and more. This 'something' could be identified as mental power, wisdom, or intelligence. In The Republic, Plato gives a similar scheme in which lesser virtues are governed by the highest virtue, wisdom. In modern language the word 'wisdom', connoting a more contemplative approach to life, has been replaced with the word 'intelligence'. However, it should be noted that Plato had a quite different take on wisdom and intelligence.


Despite the fact that we are agnostic about what it is, it is still assumed that intelligence coordinates and commands our actions and decisions. And, supposedly, in order to improve life for all of us, scientists are trying to simulate intelligence by creating 'intelligent artifacts'. There is no doubt that these devices sometimes help us solve problems that require a lot of mental and/or physical power. Do 'intelligent' devices really simulate intelligence? It is difficult to give a correct answer to this question unless we know what the term 'intelligence' stands for.

In general, this discussion will revolve around artificial intelligence and human intelligence. I will attempt to answer the question whether there is a distinction between knowledge humans can acquire and/or generate and knowledge computers can acquire and/or generate. In other words, it will be an attempt to find out whether there is a fundamental distinction between human intelligence and artificial intelligence.

Proponents of artificial intelligence are convinced that (human) intelligence can be duplicated in computing machines. This kind of speculation cannot be empirically verified because we don't seem to know what intelligence is. However, we can dig out the assumptions underlying this belief. In this assertion it is obviously assumed that what we call (human) intelligence has ontological status. This means that intelligence has some physical and/or objective reality that could be expressed and replicated in artifacts. And this artificial intelligence could be equated with human or natural intelligence. In other words, there would be no difference in the way humans and computers think, come to know things, and/or solve problems. The problem is that, since we seem to be agnostic about the nature of intelligence, how could we draw the parallel between artificial intelligence and human intelligence?

Intelligence is generally viewed as the capacity of cognitively endowed beings, such as humans, to acquire and/or generate knowledge, and eventually to solve problems. For example, when a math student learns a method of factoring polynomials and uses it to solve a problem, that student could be said to have demonstrated some kind of intelligence. But what is meant by intelligence is not always clear. The term "intelligence" is used to describe various abilities. Charlie Parker was considered by some to be a genius or highly intelligent; so was Albert Einstein. However, the basis of what makes some of us believe that Charlie Parker is a genius is not interchangeable with what makes some of us believe that Albert Einstein is a genius. In each case, the criterion used to determine intelligence varies. Charlie Parker is considered a genius because of his outstanding talent as a jazz musician. Albert Einstein is considered a genius because of his outstanding insight in theoretical physics. Being an outstanding theoretical physicist doesn't make Einstein an outstanding jazz musician. I don't think that Charlie Parker's outstanding talent as a jazz musician would necessarily give him outstanding insights in theoretical physics. In both cases, it is arbitrarily assumed that intelligence refers to some mental capacity and/or activities. And artificial intelligence is striving to extend this capacity to machines. To illustrate this, chess playing would be a good example. In the field of artificial intelligence it is believed that, besides the real-world version of the game of chess,

the game of chess,

there is another version, one existing purely in the world of symbols and syntax (i.e., formal systems), and this version mirrors exactly the real world game we normally see. ... The connection between the two worlds lies in the interpretation of the elements of the formal system in terms of the objects and operations of the mathematical structures" (Casti;1985;p.125).

This means that it is possible to have a formal system that could play chess. There are already chess playing machines that beat grandmasters. The problem is: machines and humans don't play the same way. For example, Deep Blue plays positional chess while humans such as Gary Kasparov play mostly tactical or strategic chess. Deep Blue's positional chess consists of simply evaluating billions of moves, and choosing the most probable best move by referring to patterns stored in its database. But humans such as Kasparov look for meaningful configurations, and often rely on intuition or hunches. It is brute force that makes Deep Blue successful, because intelligence cannot be reduced simply to an evaluation based on probability.
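To make the brute-force point concrete, here is a minimal sketch of the kind of exhaustive, step-by-step evaluation described above. It is my own illustration, not Deep Blue's actual program: instead of chess it uses a toy 'subtraction game' (players alternately remove 1, 2 or 3 objects from a pile, and whoever takes the last object wins), so that the example stays small, self-contained and runnable. The point is only that every legal continuation is generated and scored, and the move with the best score is chosen; nothing resembling intuition or a hunch is involved.

    # A toy sketch of brute-force game-tree search (minimax); not Deep Blue's code.
    # Game: remove 1, 2 or 3 objects from a pile; whoever takes the last one wins.

    def legal_moves(pile):
        # Every move the rules allow from this position.
        return [m for m in (1, 2, 3) if m <= pile]

    def minimax(pile, maximizing):
        # Exhaustively score every continuation: +1 means the first player
        # wins with best play from here, -1 means the opponent does.
        if pile == 0:
            # The player who just moved took the last object and won.
            return -1 if maximizing else 1
        scores = [minimax(pile - m, not maximizing) for m in legal_moves(pile)]
        return max(scores) if maximizing else min(scores)

    def best_move(pile):
        # "Brute force": try every move, keep the one with the best outcome.
        return max(legal_moves(pile), key=lambda m: minimax(pile - m, False))

    print(best_move(10))  # prints 2: leaving a pile of 8 (a multiple of 4) wins

A chess program works on the same principle, only with a vastly larger move generator, a numerical evaluation function over positions, and a depth cut-off; the search itself remains a mechanical enumeration of possibilities.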

If, as said earlier, the term refers to mental activities, and intelligence presupposes mind, then I would suggest that the nature of the mind, the locus of intelligence, be determined before examining or comparing human intelligence and artificial intelligence.

On the question of the nature of mind, there seem to be two options: either the mind is simply a heuristic device which doesn't have an objective reality, or it is an ontological entity which must have some objective reality. If it is simply a heuristic device that helps us point out 'that which acquires and/or generates knowledge', then it might not have reality outside the "mind" in which it is envisaged. And we don't have to worry about its nature.

Since artificial intelligence generally presupposes an ontological mind, the discussion would mostly be on the second option, which is: the mind is an ontological 'entity'. John Haugeland thinks that this presupposition is "based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves" (1985;p.2). The fact that Artificial Intelligence presupposes an ontological mind could explain why computer scientists are endeavouring to build machines that emulate the human brain. Is the brain all that there is to the mind? The debate still rages on, and either answer (yes or no) raises more questions. That is why I intend to examine the question of whether the brain is the mind or, if it is not, the relation between mind and brain. More precisely, I will discuss the question of whether the ontological mind can be reduced to the brain. I will deal with claims such as Haugeland's in order to compare mental activities referred to as intelligence in humans and problem-solving capabilities referred to as intelligence in machines. Since creativity is often implied in problem solving strategies, I will try to show that creativity cannot be replicated in machines. And only a human being, or more precisely a person, a conscious 'entity' interacting with the world (which is comprised of people and things), could be creative. Thus I should also examine the role social interaction plays in problem solving processes. My contention is that social interaction plays a major role in the formation of the mind; consequently it plays a role in problem-solving, knowledge acquisition, and knowledge generation processes. There are many cases showing that children who didn't interact enough with the world cannot fully develop their mental capacities. They cannot articulate more than a few words, they don't seem to have developed logical patterns of thinking, and they have difficulty solving elementary problems. That is why I will end with an emphasis on the importance of social interaction in the formation of the mind.

Whether it really exists or not, what we call the mind seems to be at the core of human intelligence and/or behaviour. I would contend that if the mind exists, then it is not an 'entity' that is distinct from the body. Although I don't espouse the idea of the mind as a brain state or a process in the brain, I view the mind as being materially supported. There has to be a biological organism in order to have a mind. It seems difficult to deny the fact that without the brain and the central nervous system there cannot be a mind. Failure to see this is what Antonio Damasio (1995) calls Descartes' error.

Roughly, I will be arguing against philosophical materialism, which is a doctrine that claims that every mental phenomenon is totally explainable in physical terms. However, as said before, I am not excluding the possibility of mental phenomena being materially supported. This seems to be a dilemma. I have difficulty believing in an immaterial mind or soul, but I will not subscribe to the contention that a conscious mind can not only be described in physical terms but also be duplicated, even improved, by computing machines.

I will also try to give an historical perspective to my arguments, because my belief is that, in a particular society, the prevailing cultural and intellectual environment, as well as the socio-economic system by which the necessities of life are produced, contribute to the formation of paradigms around which intellectual activities revolve. The time and society in which we live influence and/or shape the way we see the world.


The comparison of human intelligence and artificial intelligence would not have been possible two hundred years ago, simply because there were no 'intelligent' artifacts around. But John Haugeland thinks that this comparison "has nothing to do with advanced technologies but with deep theoretical assumptions". He argues that, "according to a central tradition in Western philosophy, thinking (intellection) essentially is rational manipulation of mental symbols (viz., ideas)" (1985;p.4). Since computers also manipulate symbols, the comparison is possible. I agree with Haugeland, but I would say that advanced technologies should not be ignored as a factor. The most straightforward reason is that with no advanced technologies we would not have computers. And the comparison would not be possible. The socio-intellectual environment should not be overlooked. The idea of comparing human intelligence to artificial intelligence has emerged within a particular tradition (rationalism/materialism) in Western philosophy.

Artificial Intelligence is a branch of computer science which investigates the extent to which the mental behaviour of humans can be reproduced in and by machines. In other words, the goal of Artificial Intelligence is to make machines that think. Researchers in Artificial Intelligence argue that, even if computing machines lack cognition, they can still be described as intelligent by virtue of their ability to perform various complex tasks that have traditionally required human intelligence. "We give people tasks; on the basis of performance in a task we consider that some thought has taken place in reaching a solution to a problem. Similarly, we can give computers the same task; then, it would seem to me, that it is only some kind of vulgar prejudice if we refuse the accolade of intelligence to the computer" (1980, p.13), muses Herbert Simon, a pioneer in Artificial Intelligence. Compared to the popular notion of intelligence, Simon's definition is very narrow. It reduces intelligence to the ability to perform various complex tasks. But he fails to mention that this is done only in a linear way, following describable discrete steps. Besides, what Simon means by 'complex task' is not clear. There are 'complex tasks' involving the mental and the physical that a machine obviously cannot perform: learning progressively how to ride a bicycle, for example.

The notion of Artificial Intelligence has been applied to computer systems and programs capable of performing 'complex tasks' such as information processing, game-playing, pattern recognition, and medical diagnosis. However, not many scientists believe that true Artificial Intelligence can ever be developed, since the workings of the human mind are still little understood. How can a machine duplicate complex processes that are still little understood?

Nonetheless Artificial Intelligence "clearly delivers a product, whether it be an industrial robot, planes and tanks that 'think,' or models of how to solve problems. Something seems to be working - even though what is meant by 'working' is not clear" (Eisenberg;1992;p.23). But no 'intelligent' tank is capable of fighting a war without humans telling it what to do. Furthermore, would an 'intelligent' tank be aware of the fact that it is fighting a war? Could it suffer from such an elusive ailment as the 'Vietnam War Syndrome' or the 'Gulf War Syndrome'? Would it be sensitive to propaganda? Would it be capable of an act of bravery? Would it be capable of understanding what treason means? Would it be capable of empathy? I very much doubt it, because 'intelligent' devices lack consciousness, which is an essential factor in all the cases mentioned above.

Despite the fact that sometimes they alter the way we interact with them and the way we conceive and describe the world, 'intelligent' machines perform tasks but humans solve problems. Contrary to machines, humans are aware of the fact that they are solving problems1. Furthermore, much of the human knowledge generation and/or acquisition process (including commonsense knowledge) is not explainable or describable by/in algorithms, as in computers. Thus, Artificial Intelligence is still "four to 400 years" away, as John McCarthy, the field's namer, estimated three decades ago. McCarthy's prediction is somewhat paradoxical.

1 Even those on a production line who perform repetitive tasks are somehow aware of what they are doing.


What is meant by intelligence is still not clear, even to those who claim to be capable of measuring it2. The notion of intelligence seems to be full of paradoxes. I.Q. tests, which are believed to measure intelligence, are mostly based on reasoning skills and reading skills. In other words, psychometrists think that it is the combination of reading skills and reasoning skills that gives a picture of an individual's intelligence. There are cases in which either reading skills or reasoning skills are deficient but the person still shows signs of what is called intelligence. For example, a person who suffers from dyslexia, an impairment of the ability to read, could be successful in performing non-language-related tasks. This shows how the notion of intelligence, however useful, can be fuzzy.

2 For example, C. Murray and R. Herrnstein, authors of The Bell Curve, acknowledge that they don't know what intelligence is. But they "know" how to measure it, and they have faith in the method of measuring it (I.Q. tests). How can you measure something that you don't know?

Prior to the question of whether computers can be intrinsically intelligent there is an assumption that computers are more than tools: they are capable of acquiring and/or generating knowledge. Indubitably, these two activities presuppose and require intelligence. Unless taken in a metaphorical sense, in any contention about computer intelligence there is an underlying assumption that computers can have minds, because in our everyday language we tend to associate and/or conflate mind with intelligence. Having a sharp mind literally means being intelligent. However, I am not proposing to analyze the relationship between mind and intelligence. Neither am I going to propose another theory of mind. In chapter 1, I will simply look, from an historical point of view, at various attempts made to define the nature of the mind. I choose to add an historical perspective because, I think, it will help us understand not only the evolution of theories of mind but also the contexts in which they develop. But the focus will be on Descartes' attempt to define the mind as an immaterial substance that is distinct from and independent of the body. Descartes, often referred to as the father of modern philosophy, is the one whose theory of mind Gilbert Ryle spoke of "with deliberate abusiveness, as 'the dogma of the Ghost in the Machine'" (Ryle;1949;p.15). Ryle thinks that Descartes represents the mind as a ghost harnessed to a machine (the body)3. The body is considered a machine because bodily processes can be explained in physical terms. Since the mind, supposedly, has no physical properties, it cannot be described in physical terms. The two entities are different in nature, but their interaction is realized in the pineal gland. And I choose to focus on Descartes because, by separating the mind from the body and claiming that the pineal gland is the locus of the mind-body interaction, Descartes not only provided the basis for philosophical materialism but also foreshadowed many contemporary theories of mind. Many contemporary scholars (for example, Marvin Minsky and Jerry Fodor) have a tendency to reduce the mind to the brain. In other words, the mind is identical with the brain and its functions. Herbert Feigl boldly states that "a thought merely is - not arises from, not accompanies, but identically is - a particular spatio-temporal firing pattern of neurons in your brain" (Harth;1993;p.100). This kind of argument serves to make materialist theories more 'scientific' and to do away with the immaterial mind, but also to avoid falling into the trap in which Descartes fell4.

3 Most likely in support of some religious doctrine of immortality.

In this discussion about human intelligence versus artificial intelligence, the possibility of the mind being the brain5 is of particular interest. The main reason is that Artificial Intelligence presupposes an ontological mind which is the brain. Is the brain really the mind? This will be the topic of chapter 2. As a follow-up to chapter 1, the discussion will be on how progress and discoveries in Physiology have impacted and influenced the way the mind is viewed. Progress and discoveries in Physiology have allowed a reinterpretation of the concept of mind: an ontological reduction of the Cartesian (immaterial) substance to physical substance.

4 This trap is the question of mind/body interaction, for which Descartes could not offer a convincing argument.

5 This doesn't mean that I am committed to the idea of an ontological mind.

I would have difficulty believing that the brain is the mind. But I cannot deny, for example, the correlation between brain damage and deterioration of mental competence. I would also try to show how the 'the brain is the mind' assumption is a key component of any computational theory of mind. Thus my argument will be that in any computer model of the mind the underlying assumption is that the mind is the brain, because it is only if the mind is seen as having physical properties that its workings can be considered computational. For example, the computational theory of mind known as Connectionism or Parallel Distributed Processing is an artificial intelligence approach to cognition derived from the view of the brain as a network of interconnected neurons.
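As a deliberately small illustration of the connectionist picture just described (a sketch of my own, not any particular researcher's model), the following lines treat a 'neuron' as a unit that sums weighted inputs and fires when a threshold is passed. Three such units wired together compute the XOR function, something no single unit of this kind can compute on its own; the 'knowledge' resides in the pattern of connections. The weights here are hand-set for clarity, whereas connectionist models normally learn them from examples.

    # A minimal connectionist sketch: three threshold "neurons" wired into a
    # small network that computes XOR. Weights are hand-set for illustration.

    def unit(inputs, weights, threshold):
        # One artificial neuron: weighted sum of inputs, then all-or-none firing.
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def xor_network(x1, x2):
        h1 = unit((x1, x2), (1, 1), 1)     # hidden unit: fires if x1 OR x2
        h2 = unit((x1, x2), (1, 1), 2)     # hidden unit: fires if x1 AND x2
        return unit((h1, h2), (1, -1), 1)  # output: fires if OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_network(a, b))  # 0,0->0  0,1->1  1,0->1  1,1->0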

I will also maintain that reductionist/materialist assumptions are essential to any Computational Theory of Mind, because one cannot say that the mind is a computer if it (the mind) does not have physical properties. I will also argue that not only would there be a category-mistake in comparing such an elusive 'entity' as the mind with a computer but, as a colleague once said, "it is a methodological danger to treat models/metaphors literally". This approach to understanding cognition is a good example of the prevailing scientific trend: materialism. As said earlier, materialism is a doctrine that claims that every phenomenon is explainable in physical terms. According to Hogan, materialists believe that "any instantiation of any property by, or within, a human being is ultimately explainable in physical terms" (Hogan in Guttenplan;1995;p.472). And eliminative materialists "advocate the elimination of our mental vocabulary on the grounds that it is irredeemably unscientific and misleading" (Lyons;1995;p.lv). Matter is the ultimate reality, and the mind and/or a phenomenon such as consciousness is an attribute or effect of matter. And it could be explained by physiochemical changes in the nervous system. The materialist approach does away with the problematic Cartesian mind. It is so appealing because it seems to do away with the mysterious aspect of phenomena such as mind and/or consciousness. And to those who believe in it, it gives a false impression of power, because it is supposed that once a phenomenon is described in physical terms it can easily be controlled. Maybe it can easily be handled by the intellect, but not in the physical sense. Erich Harth sees "materialism as an outdated concept, rooted in the nineteenth-century belief that all phenomena could be explained as the mechanical interactions between many small indivisible and permanent material objects or elementary particles. Since then, the world of these supposedly indestructible units has been opened to reveal an immaterial confusion of fields, virtual states, and questionable causal relations"6 (Harth;1993;p.iix).

6 My italics.

Sergio Moravia asserts that this approach doesn't seem to go away:

despite the warnings sounded by more advanced contemporary thought, materialism, whether explicitly or implicitly, appears to be the general Weltanschauung underlying research programs which could quite well do without it. Why is this so? Is there some fear that if a materialist conception of the world is abandoned, then there is no choice but to embrace spiritualism? (1995;p.7).

But if someone doesn't believe in materialism, this doesn't necessarily mean that they are spiritualists or mystics. I should also add that by seeking to reduce every human phenomenon to a physical phenomenon, materialism becomes, as Hilary Putnam puts it, "one of the most dangerous contemporary intellectual tendencies" (1982b;p.147), which makes scientists believe in the possibility of predicting and controlling (human) behaviour. Because "scientists (and others) are spooked by that which they cannot control. To overcome this discomfort they must meet the perceived challenge to control what appears so elusive and difficult to control" (Eisenberg;1992;p.15). It would seem that materialism is more than an intellectual trend. It is a practical, ethical and political program that could spell danger. For example, intelligence is again being linked to genetic make-up and/or ethnic background and, in some way, to socioeconomic status. This has been done by P. Rushton, and more recently by C. Murray and R. Herrnstein.

In chapter 3, my intention is to scrutinize the process of problem solving in both humans and computers by comparing and/or contrasting them. David Rumelhart, for example, believes that it is possible to apply the concept of parallel distributed processing (interlinked and concurrent computer operations) to create networks of experimental computer chips (silicon neurons) that simulate or mimic the data-processing functions of brain cells. It is an attempt to reproduce human problem solving capabilities in machines. The question is: does the mind function and solve problems only in discrete steps? I don't think so. It is true that some human problem solving processes can be mapped out. But the mind sometimes solves problems in a non-linear fashion that cannot be explained or described by discrete steps.

capabilities involve more than a mere computation based on

deductive reasoning. Very often, creativity and other

psychological factors such as perseverance and courage play

significant but necessary roles in human problern solving

capabilities. As said earlier, "human problem solving

capabilities involves more than a mere computation based on

deductive reasoning". Because "the mind can be shown to operate

intuitively, to know certain truths without linear calculations"

(Eisenberg;~, 22; 1992) . For example, the mathematician Jacques Hadamard believed that the roots of creativity "lie not in

consciousness, but in the long unconscious work of incubation and

in the unconscious aesthetic selection of ideas that thereby pass

into consciousness'(Johnson-laird in Hadamard;l996;p.xiii). My

other contention was that 'Icreativity and other factors such as

perseverance and courage play significant roles in human problern

Page 25: CAN COMPUTERS DO Cornipariaon Xxltelligence · Encore un mot, et je te laisse. Aie toujours présent à Ifesprit que la nature n'est pas Dieu; qu'un homme n'est pas une machine; qu

18

solving capabilities". However 1 will not discuss "factors such

as perseverance and couragew7 and focus on creativity and/or

imagination. And the discussion will include the question of

whether creativity is uniquely human. Creativity plays a major

role in problem solving. So does imagination in knowledge

acquisition and problem solving. Very often, in order to

understand and/or solve a problem we have to imagine or picture

it. Imagination has helped Science make leaps and bounds.

Savinien de Cyrano de Bergerac, a 17th century French essayist

and philosopher, speculated about space travel. In the 19th

century, Jules Verne, a French novelist, could imagine

subrnarines. Today, space travel and undersea exploration are no

longer in the realm of fiction.

In chapter 4, the discussion will be on the role of social interaction and/or environment in the formation of the mind, because I believe that the outside world, or the environment in which we live, not only shapes our view of the world but also influences the way we see the world. Furthermore, the world seems to be the material with which knowledge is constructed. As David Olson states it: "any account of the cognitive processes of humans will therefore have to take into account the properties of these cultural artifacts as well as of the biological organs. Cognition rests as much on a cultural foundation as it does on a biological one" (1980;p.3). Even self-knowledge implies some kind of differentiation and interaction with the rest of the world.

7 Since these aptitudes have not been attributed to computing machines even by the most techno-optimistic and/or the most techno-enthusiastic.

In this discussion, I will try to put an emphasis on the human being as a whole, and not simply on the mind. The choice of this topic is based on the fact that I tend to believe that it is futile to inquire into the nature of the mind and/or the mind-body interaction since we don't really know what the term mind refers to. Furthermore, I believe that a human being or a person is a whole being, an indivisible entity that is not an embodied mind or spirit. A human being or a person is not a 'computer made out of meat' (Gardner in Penrose;1989;p.viii) either, as Marvin Minsky maintains. Computers do not have the ability to display images internally and to order those images in a process called thought. This is, as Damasio (1995) suggests, essential to having a mind. Furthermore, I doubt that computers can experience 'what it is like to be'. However, it seems plausible that what we call the mind could be materially supported. But this doesn't mean that the mind can be reduced to a physical substance. In this work, attempts will be made to compare and contrast the two images (of human beings) prevailing in contemporary philosophy and modern science: the image of a human being as a machine with physical components and properties, and that of a human being as a person, "producer of acts, symbols, and values connected essentially with his historical and cultural nature". As Gilbert Ryle once said: "Men are not machines, not even ghost-ridden machines. They are men - a tautology which is sometimes worth remembering" (1949). Instead of (or before) trying to find out what a human being is made of, we should ask ourselves what it is to be a person. Since a human being doesn't live in a vacuum, I prefer the word 'person' because it connotes a subjective social entity living in a particular time and space and being the inheritor of a particular history.


I shall say only that it is generally the ignorant who have given things their names, and so the names do not always fit the things with sufficient accuracy.

R. Descartes (Replies to the Fifth Set of Objections)

CHAPTER ONE

On the Nature of the Mind

As said in the introduction, my contention is that either the mind is simply a heuristic concept or it is an ontological entity. If it is simply a heuristic concept devised to help understand behaviour and solve problems, then we don't have to worry about its nature. But if it is an ontological entity, then it must have some reality; it is not simply a concept. Generally, materialist thinkers believe that the mind is an ontological entity. David K. Lewis affirms that "materialists must accept the identity theory as a matter of fact: every mental experience is identical with some physical state"8 (1966;p.63).

8 The question seems to be an ideological one. If I define myself as a materialist, then I must accept that every mental state is identical with some physical (neurophysiological) state. If I don't define myself as a materialist, then I may not accept that every mental state is identical with some physical (neurophysiological) state.

For an ontological mind there could be two primary categories of existence: the physical and the nonphysical. The two categories of existence give way to two major ontological alternatives on the nature of the mind. Tautologically, the nonphysical option is that the mind is an immaterial substance, therefore off limits to scientific inquiry; and the physical option would be that the mind has some physical reality. Many contemporary cognitive scientists opt for the physical option, sometimes called 'materialist ontology'. According to this doctrine, the mechanisms of the mind are implemented by the brain, because a well-functioning brain is the material seat of mental capacities. And mental capacities emerge from neurophysiological capacities. However, mental capacities do not reduce to neurophysiological capacities (see C. Lloyd Morgan, C. D. Broad, Samuel Alexander, and George H. Lewes).

In this chapter I propose to examine the (dualist) idea of an ontological mind distinct from the body, or the mind as an immaterial substance. More precisely, the discussion will be on the relation between the mind as an immaterial substance and the body. However, a full discussion on the nature of mind with a historical perspective would be too ambitious and beyond the scope of this work. Sergio Moravia thinks that "a systematic, historical study of the debate on the mind-body problem would certainly pose a fascinating challenge, but one would inevitably run the risk... of meshing such an investigation with the whole history of philosophy (or science) from ancient times to the present" (1995;p.1).

A central metaphysical problem in the philosophy of mind is the question of whether mental phenomena are also physical phenomena and, if not, how they relate to physical phenomena. "Ein Weltknoten, a world knot: this is how Arthur Schopenhauer once defined the problem of the relationship between mind and body. This 'knot' ambiguously binds together what are, or appear to be, the two fundamental dimensions of man" (Moravia;1995;p.1). But the nature of the mind and that of the body, "the two fundamental dimensions of man", should be determined before attempting to solve the problem of how the two dimensions relate, if they relate9. The nature of the physical dimension, or the body, is knowable and open to empirical verification. But the nature of the other dimension, the postulated one known as the mind, remains elusive, intractable and mysterious. And it highlights the limits of our capacity to understand the world.

9 Because, for example, you cannot be in the process of building a car and driving it at the same time.

As said in the introduction, I choose to examine mainly Descartes' take on the nature of the mind, because he is the one, in modern philosophy, who first claimed that the mind was distinct from the body. In the sixth meditation, he said that "the first observation I make... is that there is a great difference between the mind and the body, inasmuch as the body is by its very nature always divisible, while the mind is utterly indivisible" (Cottingham;1984;p.59). He also suggested that while the bodily processes could be explained in physical terms, the processes of the mind could not. Another important reason: despite the fact that the doctrine of mind and body as distinct entities is discussed throughout the history of philosophy, Descartes is viewed as the father of the modern mind-body problem. He is the one who, according to Gilbert Ryle, instituted the dogma of the Ghost in the Machine. Since, by separating the mind from the body, Descartes' approach raises more problems than it solves, I will try to include in this discussion some subsequent reactions and theories on the nature of the mind.

What is the mind? John Haugeland thinks that our "commonsense concept of 'the mind' as an immaterial entity is surprisingly recent. It arose during the seventeenth century, along with modern science and modern mathematics - which is no mere coincidence" (1985;p.15). Also, this concept seems to be a result of the medieval world view, which was mainly a Christian adaptation of ideas put forward by ancient Greek thinkers such as Plato and Aristotle. Haugeland argues that the origin of the modern mind could be found in the socio-historical (and intellectual) context created by the Copernican distinction between appearance and reality10. If Haugeland is right, the Copernican revolution did affect not only Astronomy but also the prevailing Weltanschauung! It influenced Galileo Galilei, whose discoveries11 were important factors in bringing about the ultimate triumph of the Copernican revolution. From Haugeland's perspective, the important point is that this revolution highlighted the distinction "... between how things seem and what they really are; that is, it further separates thought from the world" (1985;p.23). However, the consequences of the Copernican revolution came out with Descartes.

Descartes seems to have contributed immensely to our "commonsense concept of the mind". But Ryle also believes that Descartes was simply "reformulating already prevalent theological doctrines of the soul in the new syntax of Galileo" (1949;p.23). This was done by describing the mind as not having physical properties. Charles Morris also thinks that the fundamental features of the Cartesian views can be found in Newton and Galileo12.

10 However, I could argue that this distinction didn't start with Copernicus. Plato contended that there was a difference between things and their forms (which bestow existence on them).

11 Such as the telescope, the changing phases of Venus, the moons of Jupiter, and many more.

12 Galileo believed that 'the book of nature is written in the language of mathematics.'

As Bertrand Russell derides this approach:


Nature and Nature's laws lay hid in night: God said 'Let Newton be!', and all was light.

Presumably under the influence of Newton's and Galileo's discoveries in physics, Descartes postulated a mathematically describable materialistic universe structured by mechanistic principles. And Russell adds that

he regarded the bodies of men and animals as machines; animals he regarded as automata, governed entirely by the laws of physics, devoid of feelings or consciousness. Men are different: they have a soul, which resides in the pineal gland. There the soul comes in contact with the 'vital spirits', and through this contact there is an interaction between soul and body (Russell;1961;p.545).

However, in envisaging the universe (and its contents) as machines, it is not in the way our senses view machines. I would say that it is in the way machines are viewed on the drawing board and in engineers' minds. Descartes' universe could accommodate the human body, but it excluded the human mind. Since the human body is part of the universe, its processes could be explained in mechanical terms. However, the same could not be said for the mind, which is not part of the universe. That is why I could contend that the prevalent 17th century concept of mind had a Galilean-Cartesian-Newtonian flavour but with an unmistakable déjà vu aftertaste. This aftertaste is the doctrine of the immaterial soul that could be traced back to Socrates and Plato or earlier Eastern philosophy.

The Galilean-Cartesian-Newtonian world-view is, in part, influenced by classical ideas of Pythagoras and Plato. It regarded the world as a machine that could be described in mechanical and mathematical terms. Consequently humans, as 'things-in-the-world', could be and should be described in mechanical and mathematical terms. However, the Galilean-Cartesian-Newtonian world-view conceded that human beings have a dual nature. They are body and mind (soul). Physically, they are parts of the world-machine. Mentally, they are spectators of the world-machine, to which they are linked by the relation of knowledge.

Most likely influenced by his rationalist predecessors (i.e. Plato), Descartes explicitly founded his First Philosophy on the res cogitans (the mind/soul, the thinking thing), which is supposedly immaterial. However, even if the mind, when directed towards itself, doesn't perceive itself to be anything other than a thinking thing, nothing tells us that the mind is indeed a thinking thing. Or, from the mere fact of thinking, nothing tells us whether there is an immaterial 'entity' behind this activity or whether this activity is materially supported. Descartes goes further by claiming that not only is the mind totally different from the body, but it also operates independently of it.

Ryle asserts that

when Galileo showed that his methods of scientific discovery were competent to provide a mechanical theory which should cover every occupant of space, Descartes found in himself two conflicting motives. As a man of scientific genius he could not but endorse the claims of mechanics, yet as a religious and moral man he could not accept, as Hobbes accepted, the discouraging rider to those claims, namely that human nature differs only in degree of complexity from clockwork. The mental could not be just a variety of the mechanical (Ryle;1949;p.18).

The mind could not be an occupant of space (or a physical entity). If it were an occupant of space (a physical entity), it would be subject to the laws of physics. In Descartes' view, the mind is an immaterial substance that thinks. It is "a thing which doubts, understands, affirms, denies, wills, refuses, which also imagines and feels" (Works, I, 153). Here, Descartes' position on the status of the mind is quite ambiguous. Is the mind a "thing which thinks" or an activity of the soul (thinking)? It cannot be the entity that does the thinking and be the thinking at the same time. Descartes thinks that the difference resides only in terminology. Like his predecessors, he believes that the substance in which thought immediately resides is called mind. But the nature of the mind remains unclear except, for dualists, that it is immaterial. However, 'immateriality' seems to suggest a lack of physical properties. In this case, 'immateriality' seems to be simply a negation of materiality.

Descartes prefers to "use the term 'mindl rather than 'soul'

since the word 'soul' is ambiguous and is often applied to

Page 36: CAN COMPUTERS DO Cornipariaon Xxltelligence · Encore un mot, et je te laisse. Aie toujours présent à Ifesprit que la nature n'est pas Dieu; qu'un homme n'est pas une machine; qu

something corporeal" (Cottingham;l993; p.114). In the second

meditation, Descartes said that he "imagined [the soul] to be

sornething tenuous, like a wind or fise or ether, which permeated

[his] more solid part~"(26)'~. My objection might sound a little

bit anachronistic: saying that the soul is tenuous suggests that

it is somewhat material. And if the sou1 could be corporeal, it

would be perishable. Furthemore, what would be its

status/position in relation with the mind and/or the body? In Les

Passions de 1 'âme, he clearly states that "1 'âme est d'une n a t u r e

qui n'a aucun rapport à 1 'étendue ni aux dimensions ou a u t r e s

propriétés de l a mat ière dont l e corps est composé." ( 1 6 4 9 ; 1,

art.30). It cannot be corporeal and not have "aucun rapport à

1 ' é tendue n i aux dimensions ou autres propriétés de l a mat i è re

dont l e corps e s t c o m p ~ s é ' ~ . Descartes believed that the essence

of physical substances is extension in space. Since he believes

that mind/soul is not extended in space it is distinct from

physical substances. Therefore the mind/soul is imateria1l4 . Descartes' definition of 'immaterialityl seems to be simply a

negation of physical characteristics. However there is no proof

l3 However this refers to the prior beliefs that didnlt resist to Descartes' methodical doubt. Since most of his beliefs were reinstated there is no reason this particular one would be a exception.

1 4 For Descartes, the "immateriality" of the soul made freedom possible since the body or any other physical object is subject to deteministic physical laws. And freedom plays a major role in the Christian system of beliefs. Without freedom, one cannot sin (unless you are a Calvinist).

If we take another look at the above-mentioned quote, we can also say that Descartes is suggesting that physiological processes do not have any impact on mental activity. If so, then Descartes contradicted himself, in the first meditation, when he talked about madmen "whose brains are so damaged by the persistent vapours of melancholia that they firmly maintain they are kings when they are paupers..."15 (Cottingham;1993;p.13). If Descartes doesn't contradict himself, then he believes that madness is a physiological state that affects the mind. But if, as he claimed, the body is distinct from the soul/mind, how could it affect the soul/mind? For example, how would both entities interact when someone is drunk? In his letters to Queen Elizabeth (May 21 and June 28, 1643), Descartes talks about the 'union of mind and body' as a 'primitive notion'. He seems to be suggesting that, just as, for example, length is a property that belongs solely to the body, properties such as understanding or sensation belong to the mind insofar as one is an embodied consciousness. From the suggestion that the property known as length cannot be separated from the physical body it describes, Descartes, in the Sixth Set of Replies, states that "...the mind, even though it is in fact a substance, can nonetheless be said to be a quality of the body to which it is joined" (Cottingham;1984;p.297).

15 This could be interpreted as the assertion of a free mind. But it is not really, since 'the brain is damaged by the persistent vapours of melancholia'.

Page 38: CAN COMPUTERS DO Cornipariaon Xxltelligence · Encore un mot, et je te laisse. Aie toujours présent à Ifesprit que la nature n'est pas Dieu; qu'un homme n'est pas une machine; qu

which it is joined" (Cottingham; 1984; p.297). And, if "properties such as understanding or sensation belong to the mind insofar as one is an embodied consciousness", then the mind is materially supported. Thus the mind cannot be separated from "embodied consciousness". Despite the theory he developed in Les Passions de l'âme that the pineal gland in the brain is the 'seat of the soul', and all the physiological details, Descartes failed to give a convincing account of the mind-body interaction except the obvious ones such as: 'my hand rises when I decide to raise it'. He gives no reason why one should believe that the mind/soul is related to the brain. Descartes also argued that the mind is not the brain because the mind lacks spatial location. Whether the mind is ontological (that is, whether it really exists) is still an open question.

Descartes thought of the body as a machine driven by the soul. The underlying assumption is that it could not be otherwise, since Descartes, a Christian, believed in the survival of the soul after death. He also believed that the mind could carry out its operations independently of the body. This assertion does not follow from his assumptions, such as 'the soul (mind) and the body are two distinct entities', and it cannot be verified16. Even

16 Descartes' views on the nature of mind/soul have been derided by Voltaire in his works Micromégas (1752) and Dictionnaire philosophique (1764). Voltaire didn't believe in the possibility of knowing the (nature of) mind. He preferred John Locke's take on the nature of the mind. Locke contended that the nature of the mind was not open to our knowledge.


if it could, there would be another question to answer: can there be a mind in the absence of the body (more precisely, the brain and the central nervous system)? I doubt it. Unless you believe in ghosts and spirits, there seems to be no proof that the mind an sich exists17. Furthermore, the positing of the mind and the body as two ontologically distinct entities makes it difficult, if not impossible, to explain their interaction. Let us suppose that body and soul (mind) are like an automobile with its driver. Descartes would be right to say that the automobile and the driver are two distinct entities and the automobile cannot operate without a driver. And it is the driver who decides in which direction the automobile should go. However, the interaction between the automobile and its driver is not as mysterious as that between the body and the mind. My will (mind) can make my finger (body) move. This is an instance of the mind controlling the body. There seems to be, at the least, a 'causal' link between my will and my finger moving. It shows that the mind and the body are not independent from one another. After claiming that the mind is distinct from the body and failing to explain how body and mind would interact, Descartes is trying to avoid the logical conclusion of his dualist position. This conclusion

17 It should be noted that the brain and the central nervous system are essential for consciousness, which is closely associated with the mind. However, this doesn't mean that the brain and the central nervous system are the essence of the mind (consciousness). The question of whether the brain is the mind will be discussed in the next chapter.


should be that there is no interaction between the mind and the body. Or there must be a link that allows interaction. But then the question of the nature of that link would arise. That link has to be of a nature that allows interaction between a nonphysical entity and a physical entity. Instead, Descartes takes an untenable position by suggesting that the interaction between the mind and the body occurs through the pineal gland. And this would be a two-way psychophysical (causal) interaction. For example, raising one's arm is an interaction from the mental to the physical. Perceiving the redness of a red rose is an interaction from the physical to the mental. However, the nature of these interactions seems to be a mystery18. It is undeniable that certain brain states are very often accompanied by certain mental states. For example, an excess of the activity of the neurotransmitter dopamine is accompanied by schizophrenia, or diminished noradrenaline activity is accompanied by depression. But nothing tells us that one causes the other. They could simply be correlated. And correlation is not causation. One way out of this trap would be suggesting that these interactions are brute facts

18 Leibniz and Malebranche rejected the possibility of psychophysical causal interaction in either direction. They argued that even though some mental phenomena are accompanied by some physical phenomena, this constant conjunction never involves causal interaction. The two types of phenomena simply run in parallel according to God's will. The problem is that, first, the claim that God exists has to be verified or successfully defended. Then it should be demonstrated how God imposes her or his will on phenomena.



that cannot be explained. But this would be an indefensible act of faith. In short, the question of how states of an extended substance (the body) could be affected by states of an unextended substance, and/or the other way around, remains unanswered.

Up to the 17th century, many thinkers as well as ordinary people believed that the mind was clearly separated from worldly things.

But Galileo's methods and discoveries could not be ignored. Ryle thinks that Descartes, a man of science but also a Christian,

Still adhering to the grammar of mechanics, ... tried to avert disaster by describing minds in what was merely an obverse vocabulary. The workings of minds had to be described by the mere negatives of the specific descriptions given to bodies; they are not in space, they are not motions, they are not modifications of matter, they are not accessible to public observation. Minds are not bits of clockwork, they are just bits of not-clockwork. (1949; p.20)

Ryle goes further by ridiculing Cartesianism as the view that there is a ghost in the machine. He argues that viewing the mind as a substance or an object is a category mistake. He thinks that even though the word 'mind' is a noun, it does not name an object19. Descartes seems to be confusing the ways we talk about physical entities and the ways we talk about minds. To have a mind is not to have a distinct and special sort of entity. It is simply to have certain capacities and dispositions.

19 In French, there is a difference between nom concret, which refers to objects, and nom abstrait, which refers to concepts. However, the question of whether l'esprit is a nom concret or a nom abstrait remains open.



Some 17th century philosophers rejected Descartes' division of a human being into mental and physical substances. Baruch Spinoza thought that both material and spiritual phenomena are distinct modes of a single substance. He considered himself to be a Cartesian. Unfortunately, his doctrine has been wrongly echoed by the contemporary "mind stuff" theory, according to which the mind is nothing but a physical phenomenon. But there are phenomena that cannot be accounted for solely in physical terms. Consciousness is one of them.

Later on, J.O. de La Mettrie (1709-51) was more radical. In order to solve the problems raised by Cartesian dualism, he opted to do away with the very concept of mind/soul. He argued that the concept of mind or soul was no more than an unnecessary religious bias. He also held that the concept of mind/soul was incompatible with the objective scientific view that humans were no more than machines. This position has been echoed, in contemporary philosophy of mind, by the approach to the nature of mind known as eliminative materialism. Advocates of this view argue that ordinary concepts (such as beliefs, desires, goals) do not represent correct categories of cognition, and cannot be reduced to neurophysiological accounts20. Therefore this commonsense

20 I don't think that there is a category of cognition that could be reduced to a neurophysiological account. The reason is: cognition involves meaning, and meaning cannot be accounted for in physical terms.


psychological framework must be abandoned with the development of neurosciences. J.O. de La Mettrie tried to explain the mind in mechanical terms, and eliminative materialists propose to do the same in neurophysiological terms21. But many philosophers argue that the mind cannot be reduced to physical properties. One reason is: meaning cannot be accounted for in physical terms. Meaning requires consciousness. And it is still not clear how physical activities such as neuron firings could amount to a meaningful mental phenomenon.

In his Lettres Philosophiques, Voltaire22, after deriding Cartesian doctrine, said that Locke had succeeded in avoiding the traps of rationalist metaphysics used by Descartes and his followers:

So many reasoners having written the romance of the soul, a sage came along who modestly wrote its history. Locke has unfolded human reason to man, as an excellent anatomist explains the springs of the human body. (XIIIe lettre)

And D'Alembert added that Locke had reduced metaphysics to what it should be: the experimental physics of the soul (p.147).

21 Eliminative materialists predict that eventually neurobiology will cannibalize psychology (and perhaps other human sciences).

22 Voltaire was also known, and liked or disliked, for his anti-religious views. This could explain why he derided Descartes' (Christian) views.


However, Locke was as much a dualist as Descartes. He believed that the mind was a "closet wholly shut from light" (in Morris; 1932; p.39). Therefore he didn't concern himself with the nature of mind. However, he believed in the existence of an entity that could be called mind, though 'wholly shut from light'. There is no way we can empirically verify that we or other people have minds. Introspection wouldn't help us find our own mind either. Observation of others and/or introspection cannot help us determine whether or not we have soul or mind. We need to resort to inference and argumentation. David Hume puts it this way:

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without perception, and never can observe anything but the perception. ... If any one upon serious and unprejudic'd reflection, thinks he has a different notion of himself, I must confess I can no longer reason with him. ... He may perhaps, perceive something simple and continu'd, which he calls himself; tho' I am certain there is no such principle in me.

Hume's views imply that thinking is perceiving. Since perceiving is a conscious process, thinking must be a conscious process. If thinking is a conscious process, it can be described by discrete steps. This view joins those of Descartes and Hobbes I mentioned earlier. Hume might be right in saying that he "never can catch himself at any time without perception, and never can observe anything but the perception." But what else could he observe?


As Eisenberg thinks, "how can an eye see itself seeing?" Though 'the optically impaired' Hume is looking for his glasses while wearing them, he has nevertheless made a point here. He is not suggesting that introspection cannot reveal feelings or thoughts. He is expressing his doubt about the existence of an immaterial Cartesian mind which allegedly does the thinking or experiences the feelings. Here, the mind is seen by Hume as an abstract term referring to a series of ideas.

William Barrett thinks that the problem of the nature and/or the existence of the mind is mainly a modern invention. In his book titled Death of the Soul, he says that this

problem is not found among ancient and medieval thinkers. Whatever their other aberrations, these older thinkers did not doubt that we lived in a world that was shared by our own and other minds. But in this modern, scientific age of ours we feel compelled to raise such doubts out of a spirit of what we imagine to be theoretical exactness. (p.xii)

That is why I would contend that theories about the nature of the mind (or soul) are more the product of a particular socio-cultural environment than they are of personal idiosyncrasies.

The soul has often been considered as an entity that could not have physical properties. The reason is: if the soul did have physical properties, then it would be subject to the laws of physics,



and, more importantly, it would be perishable. Therefore it would be against, for example, Christian doctrines. So it would not be far-fetched to contend that the role of religious beliefs in the attempt to understand the nature of what we call the soul (or mind) was not a negligible one. Morris thinks that the appearance of the doctrine of mind as an immaterial substance "is a corollary of the religious development which gave a central metaphysical importance to the soul and its inner life" (1932; p.21). And this may explain why, in the Middle Ages, discussions about knowledge and the soul were somewhat restricted to circles of theologians speculating on the afterlife.

The mind is very often postulated in order to explain behaviour. But what is the mind? I don't know if we can find the whole truth. The nature of the mind remains unknown despite relentless efforts. When we talk about the existence of the mind, it is still the mind postulating its own existence. But when we try to find the nature of the mind, it is the cognizing entity that is trying to cognize itself. And a circularity would result from this kind of exercise. That which analyzes would be analyzing itself. If we postulate a homunculus (or some kind of executive) to break the vicious circle, we will have to postulate homunculi ad infinitum. Obviously, this would not make sense. Mind, if such a 'thing' exists, would find it extremely difficult to find out what "it" really is. The concept of mind seems to vary every time


one or a combination of some factors changes. Some of these factors could be: the social environment, trends in the sciences, and the technology available. For example,

the science of mechanics was no sooner founded than a widespread ideology of mechanism followed in its wake. Man is a machine, so the lament goes. The molecules in nature blindly run according to the inalterable mechanical laws of nature; and as our molecules go, so do we. The human mind is a passive and helpless pawn pushed around by the forces of nature. Freedom is an illusion. And this lament was to rise to a crescendo of pessimism during the nineteenth century. (Barrett; 1986; p.xv)

About the current theories of mind, similar comments could apply.

With the advent of computers, computational theories of mind started emerging. It can also be said that the advent of computing machines has led to theories such as machine state functionalism. This is a theory that contends that the human mind can be understood as a special instance of a computing machine, and that mental activity involves physical transformations from one computational state to another. But all the workings of a computing machine or any other mechanical device can be described in mathematical and/or physical terms, whereas "...a person's thinking, feeling and purposive doing cannot be described solely in the idioms of physics, chemistry and physiology, therefore they must be described in counterpart idioms" (Ryle; 1966; p.18).

This could be said to be debatable in the light of recent findings in neurophysiology. However, a neurophysiological (physical or objective) description of mental phenomena cannot account for


meaning.
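To make the machine-state-functionalist picture concrete, here is a minimal sketch of my own; it is not drawn from Ryle, Flanagan, or any other author cited in this thesis, and the state names, stimuli, and outputs are invented for illustration. It treats a 'mind' as nothing but a table of computational states and transitions, which is all the view requires:

    # A purely illustrative finite-state "mind" in the spirit of machine
    # state functionalism: a mental state is just a node in a transition
    # table, individuated by its input/output/next-state role.
    # All names below are invented for the example.

    TRANSITIONS = {
        # (current state, input)  ->  (output, next state)
        ("content", "pinprick"): ("wince", "in_pain"),
        ("in_pain", "pinprick"): ("cry_out", "in_pain"),
        ("in_pain", "aspirin"):  ("relax", "content"),
        ("content", "aspirin"):  ("ignore", "content"),
    }

    def step(state, stimulus):
        """Apply one 'physical transformation' from state to state."""
        output, next_state = TRANSITIONS[(state, stimulus)]
        return output, next_state

    if __name__ == "__main__":
        state = "content"
        for stimulus in ["pinprick", "pinprick", "aspirin"]:
            output, state = step(state, stimulus)
            print(f"stimulus={stimulus:9s} output={output:8s} new state={state}")

Everything about this toy 'mind' is exhausted by its transition table; nothing in it says what being in pain is like, which is precisely the gap between physical description and meaning discussed above.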

As Eisenberg puts it, "one cannot categorize mind in the same way as one categorizes inanimate objects such as metals, mass, or electricity. For mind is active and the basis of all knowledge. 'It' creates categories; ... To categorize 'it' would be to make 'it' dependent, hence to freeze 'it', to make 'it' passive. This would not capture the mind that knows, the mind that organizes experience" (1992; p.25). It would be difficult to draw the line between the phenomenon observed (the mind) and the observer (the mind). It amounts to using the mind for the purpose of investigating the mind. Any inquiry into the nature and the workings of the mind has an inherent subjective flavour. In The Limits of Reason, Eisenberg argues that this kind of inquiry is self-contradictory or incoherent.

Why do we want to "know" the nature of the mind? Knowing the nature of the mind is not indispensable. Whether we know it or not, life goes on. As Gilbert Ryle says, "teachers and examiners, magistrates and critics, historians and novelists, confessors and non-commissioned officers, employers, employees and partners, parents, lovers, friends and enemies all know well enough how to settle their daily question about the qualities of character and intellect of the individual with whom they have to do" (Ryle;



1949; p.7). All this without necessarily knowing the nature of the mind. However, various notions of mind/soul have helped humans regulate and explain some types of behaviour. Now, they have helped scientists devise problem-solving protocols. For example, the General Problem Solver (GPS), an artificial intelligence project developed by Newell, Shaw, and Simon, was capable of solving a variety of problems in chess, logic, and mathematics. The GPS uncovered a broad set of techniques that humans use in problem-solving. In experimental settings, these techniques could be obtained from human subjects through protocols or introspective reports. Thus, the GPS was viewed as providing a model of human thinking and/or mind. However, it doesn't inform us on the nature of the mind, except that it has to be a "physical symbol manipulator".

If an entity can move around, whether it is in response to some perceptual contact with its environment or not, we can easily postulate that it has beliefs and desires. Therefore, it has a mind. This is one way of accounting for behaviour. But we don't have to commit ourselves to the idea of an ontological mind. If we really want to speculate on the nature of mind, we can take an Aristotelian position, which defines the mind as a set of capacities or potentialities. When, for example, we talk about a knife, we refer not only to its characteristic "use" in the activity known as cutting but also to its capacity to cut. But,


when talking about the mind the problem is more complex. Certainly, we can refer to its characteristic use in knowing, learning, analyzing, judging, willing and so forth, and/or its capacity to do all those activities. A knife that has never been used is still a knife by virtue of its form and its capacity to cut. But I don't think that this would be the case for the mind. Compared to abstruse contemporary literature on the nature of mind, Lucretius' take is more refreshing and shows that the problem of the nature and/or the existence of the mind is not necessarily a modern invention. He said that:

First, I maintain that the mind, which we often call the intellect, the seat of the guidance and control of life, is part of man, no less than hand or foot or eyes are parts of a whole living creature. There are some who argue that the sentience of the mind is not lodged in any particular part, but is a vital condition of the body, what the Greeks call harmony, which makes us live as sentient beings without having any locally determined mind. Just as good health may be said to belong to the healthy body without being any specific part of it, so they do not station the sentience of the mind in any specific part. (Lucretius; 1983; p.99)

Lucretius' take on the nature of the mind suggests that the term 'mind' doesn't necessarily refer to an entity, be it material or immaterial. Ryle also thinks that speculations over the (nature of) mind involve a category mistake. He suggests that we fall not into the trap of thinking of the mind as an entity that has locations and events, just as we might "...assume that there is a place called 'the university' apart and separate from buildings,


roads, lawns, persons, and other physically specifiable entities" (in Gardner; 1985; p.67). There is more to the 'university' than buildings, roads, lawns, persons, and other physically specifiable entities. There are also programs and events. We cannot say that in the 'university' physically specifiable entities are parallel to programs and events and control them, or vice-versa. The 'university' cannot be reduced to a single category of either physically specifiable entities or events.

The view of the mind as an immaterial thinking entity parallel to the material (extended) body23 is a mixture of Greek philosophy and Christian credos. Richard Rorty argues that this "...concept of mind is the blur with which western intellectuals became obsessed when they finally gave up on the blur which was the theologian's concept of God. The ineffability of the mental serves the same cultural function as the ineffability of the divine..." (in Gardner; 1985; p.72). Sometimes this view, which has accommodated religious beliefs, comes into opposition with the Bible's holistic view of a human being. The Hebrew word Nephesh (the soul) could be translated as the self, the human being (Gen 2,7; Ps 103,1). More precisely, it refers to a person in both his/her spiritual and physical nature.

23 but still controls it.



Sergio Moravia thinks that "Man can no longer be interpreted as homo duplex (despite the efforts by neo- and crypto-dualists): the 'mind', of course, does not exist as an entity; and the 'body' is an extremely generic concept, itself derived from an out-dated brand of metaphysics (if anything, one should speak of the brain and the central nervous system)" (1995; p.3). I am not trying to suggest that the problem of the nature of the mind is a pseudo-problem. However, I think that it is a product of a specific social and cultural elaboration. In short, I would say that the term 'mind' is a symbol that refers to 'something' that we could simply call a human being, considered individually and existentially as a person.

As Barrett puts it, "in the three and a half centuries since modern science entered the world, we have added immeasurably to our knowledge of physical nature, in scope, depth, and subtlety. But our understanding of human consciousness in this time has become more fragmentary and bizarre, until at present we seem in danger of losing any intelligent grasp of the human mind altogether"24 (p.xvi). We tend to take a purely spectator view of the mind while losing sight of the fact that we are involved in the game. As Morris believed, a man/woman inquiring into the

24 On the nature of the mind, Richard Rorty has suggested that it is conceivable that the so-called mind might have none.


nature of the mind is like a man/woman going outside his/her house and looking through the window to see if he/she is at home. The concept of mind seems to refer to mental activities and mental states of a person. Mental activities are mainly the capacity to learn and understand, the capacity to remember, to imagine, to create, and to have insight. And mental states are mainly states of consciousness and also a sense of identity25. Locke suggested that the identity of a person is determined by our idea of a person. And our idea of a person is that of a thinking intelligent being, that has reason and reflection, and considers itself as itself, the same thinking thing in different times and places. Mind seems to characterize a person. How could we talk about minds without referring to people? In short, it takes a person to have a mind. But I would not go as far as Descartes, who claimed that only humans had souls, and animals were more like automata26. Because I don't know what the soul is.

25 I wouldn't attempt to define consciousness. Various definitions of consciousness tend to be tautological (for example, consciousness is awareness). Sometimes definitions tend to reduce consciousness to objects of consciousness (for example, feelings, sensations, or thoughts). Locke equated consciousness with physical sensations and the information they provide, whereas Leibniz and Kant thought of consciousness as having a more central and active role.

26 This position was derided by Jean de La Fontaine, a 17th century French poet, in his Lettre à Mme de La Sablière.


As Diderot once said:

But in whatever manner one conceives of that which thinks in us, it is certain that its functions depend on the organization and on the present state of our body while we live. ... At the very least we have no immediate idea of any dependence, union, or relation between these two things, body and thought. This union is therefore a fact that we cannot call into doubt, but its details are absolutely unknown to us. (in Encyclopédie; 1986; p.236)

Unless we reduce it to Descartes' Cogito, which is a conscious thinking thing, the mind seems to be elusive. The problem with Descartes' Cogito is that it doesn't accommodate unconscious mental activities, which supposedly account for creativity and intuition. The mind, the soul, or whatever we call "it" might exist, yet we have no way of establishing its existence. Furthermore, if the mind does exist, it would be affected while studying itself. But we need the concept of mind. The mind could be a fiction. It is nevertheless a heuristic device that gives a point of stability and/or reference for dealing with (human) behaviour. It can be an interpretive device that would help us, as subjects, cognize and talk about things in the world and develop problem-solving devices. But it doesn't seem to enable us to cognize and talk about phenomena of the non-material realm such as feelings, decisions, memories, etc.

Not knowing the nature of the mind and/or not being able to solve

the problem of mind-body dualism will not keep people from living



their lives. Teachers and examiners, magistrates and critics, historians and novelists, confessors and non-commissioned officers, employers, employees and partners, parents, lovers, friends and enemies all know well enough how to settle their daily question about the qualities of character and intellect of the individual with whom they have to do (Ryle; 1949; p.7). I doubt the mind-body problem will ever be solved. "We have been trying for a long time to solve the mind-body problem. It has resisted our best efforts. The mystery persists. ... It is time to admit candidly that we cannot resolve the mystery" (McGinn in Lyons; 1995; p.272). However, the distinction between mind and body might disappear with the development of empirical sciences. Because it is more and more apparent that the mental realm is materially supported. The material support (the body) is a necessary condition. However, it is not sufficient. This contention is based on the fact that our view of the nature (of the mind) seems to be "shaped" by available technology. And from the standpoint of modern science the idea of the mind as an immaterial 'entity' is untenable. There remains the possibility that the mind could be emerging from a physiological state. This will be the topic of the next chapter.


Could a brain have thoughts, illusions or pains? The senselessness of the supposition seems so obvious that I find it hard to take seriously. No experiment could establish this result for a brain. Why not? The fundamental reason is that a brain does not sufficiently resemble a human being.

Norman Malcolm (1985)

In chapter 1, I said that "for an ontological mind there could be two primary categories of existence: the physical and the nonphysical. The two categories of existence give way to two major ontological alternatives on the nature of the mind. One option is that the mind is an immaterial substance, therefore off limits to scientific inquiry, because it cannot be described in physical and/or objective terms. The other option would be that the mind has some physical reality" (p.23). Since the (Cartesian) attempt to describe the mind as an immaterial substance seems to raise more problems than it solves (for example, the mind-body interaction), I propose, in this chapter, to examine the possibility of describing the mind in physical terms. In other words, it will be a discussion on whether the


brain is the mind.

Descartes flirted with the idea of the mind being materially

supported in a somewhat different fashion. In his 26 January 1640

letter to Father Mersenne, Descartes contended that:

(The pineal) gland is the principal seat of the soul and the place where all thoughts originate. The reason from which I derive this belief is that I find no part in all the brain, save this alone, which is not double. Now since we see only one thing with the two eyes, nor hear but one voice with the two ears, nor have but one thought at the same time, it must of necessity be that the different things that enter by the two eyes or the two ears must go to unite in some part of the body there to be considered by the soul. Now it is impossible to find any other suitable place in the whole head but this gland. Further, it is situated the most suitably possible for this purpose, to wit, in the middle between the cavities.

Descartes believed that the mind was distinct from the body. But, more precisely, the mind was an extracorporeal entity expressed through the pineal gland in the brain. Descartes' assertion hasn't really stood up to empirical investigation, but the discussion he started on the relation between the mind and the body (brain) still elicits much debate and continues to shape and/or influence current theoretical approaches to the understanding of the mind. By contending that the mind was an extracorporeal entity expressed through the pineal gland in the brain, Descartes, though not a materialist, somehow set the stage for subsequent materialist descriptions of the mind.



Materialist descriptions of the mind didn't start with Descartes. Ancient Egyptians believed that the heart was the (seat of the) mind. But in the second century, the Roman physician Galen was the first to contend that the brain is the locus of the mind. He "attacked the tenets of the philosopher Aristotle, who believed that the heart and not the brain was the centre for human thought and feelings. To prove his point, Galen carried out a crude experiment showing that pressure applied to the brain can paralyze an animal while similar pressure on the heart had no effect" (Restak; 1984; p.20). Ever since, this view has been prevalent in Western science. Most probably it influenced Descartes, who contended that the mind-body interaction is realized through the pineal gland in the brain. As said earlier, Descartes, by establishing a link between the mental and the physical, somehow set the stage for subsequent materialist descriptions of the mind. Contemporary cognitive scientists (for example, the Churchlands) and neuroscientists believe that the brain is the mind; or that the mind is what the brain does. Since the brain and its activities could be described in physical terms, the mind can be described in neurophysiological (physical) terms, and nothing else is needed. The assumption is that mental phenomena are simply connections of neurons and/or patterns of nerve impulses in the brain. And the benefit of this materialist approach is: if the mechanisms of the mind are implemented solely by the brain, then the distinction between the mind and



the body is an illusion. Thus the mind can be equated with and/or reduced to the brain. Once the brain's operation is well understood, it can be replicated in the machine, and the implementation of 'true' artificial intelligence could be achieved. My argument is that since 'what the mind is' is unknown, and very little is known about the brain, the comparison (and/or the reduction) would not hold. Even if the workings of the brain were well understood, it would be impossible to compare it to anything, since its configuration continually changes in response to the outside world through the senses. And the (subjective) experience of the outside world cannot be described in physical terms.

Materialism is a doctrine that claims that every phenomenon is explainable in physical terms. Matter is the ultimate reality, and the mind and/or phenomena such as consciousness are an attribute or effect of matter. And such phenomena are caused by physicochemical changes in the nervous system. This doctrine is not new. Philosophical materialism could be traced back to the early Greek philosophers (Anaximenes, Empedocles, Heraclitus, Thales and others) who subscribed to a variant of materialism known as hylozoism (meaning that matter is intrinsically alive), for which matter and life are identical. Hylozoism is related to the doctrine of hylotheism, in which matter is held to be divine,


or the existence of God is possible only through matter. In the 18th and 19th centuries, hylozoism had many advocates: scientists and naturalistically minded philosophers. But this rebirth of materialism was particularly motivated by a spirit of hostility toward the Church and Christian theological dogmas. One of the exponents of antireligious materialism, Julien Offroy de la Mettrie (1709-1751), wrote a book titled L'Homme-machine. Most likely influenced by Newton, Galileo, and Descartes' mechanistic conception of the universe, he suggested that a human being is a machine, thus doing away with the notion of an immaterial soul/mind27.

Wagman thinks that "the philosophical intricacies of the mind-body problem were to some extent circumvented by a scientific paradigm in physiology and psychology, beginning with Albert von Haller (1708-77), in which coordinate relations were postulated between the nature of brain organization and the nature of mental processes. Experimental research in physiology and psychology sought to establish correlations between specific neural structures and specific psychological processes. The research constituted a commitment to an empirical psycho-physical parallelism" (1991; p.12). Psycho-physical parallelism is an

27 L'Homme-machine was ordered destroyed by the Dutch government, and J.O. de La Mettrie fled to Prussia.



approach to the nature of the mind which postulated two separate, independent entities (body and mind) having perfectly correlated processes. Materialism has collapsed the two independent entities into one. In fact, the physical entity has cannibalized the non-physical entity, thus doing away with the mind-body problem.

The contemporary philosophical materialism has been influenced by both (the findings in) neurophysiology and the theory of evolution. However, advocates of this doctrine (philosophical materialism) are no longer motivated by antitheistic sentiments, but they strive to show that the mind and/or phenomena such as consciousness are the result of natural processes, and not supernatural ones. In an increasingly secular world it is convenient and reassuring to believe that the mind can be accounted for by a materialist description, and that nothing else is needed. Richard Restak thinks that "the mind is nothing more than a term we employ to describe some of the functions of the brain.... The term mind is used in the same way as terms such as 'inflation', 'progress' -useful concepts about processes. One cannot locate inflation anywhere within a department of economics. One can't travel to the United Nations to interview 'peace'. These terms aren't things; they are convenient terms for processes too complex to be described adequately in few words" (Restak; 1984; p.343). This is a case of a category-mistake. Mind, whatever it is, is not a process. Even if it were, a process is a



continuous action or series of actions directed to some end. Is there some end to the mind as a process? I don't think so. If there is an end to the mind, we seem to be agnostic about it. I can also add that if there were, we would fall into some kind of determinism that would not accommodate imagination, intuition, or creativity. Besides, the terms 'inflation' and 'peace' don't refer to processes but to states of things that could be 'located' in time. The mind would be "a state of things" only if we subscribe to the materialist approach.

Daniel Dennett says clearly that "what we want, in the end, is a materialistic theory of mind as the brain" (Dennett; 1984; p.1453-54). This sounds like an attempt to put the mind in a box, or to turn whatever it is into a physical object in order to study it. And "the bet is that someday people will formulate type-type identity statements such as 'Beliefs are just xzqry firings at velocity v and r in sector 2304'" (Flanagan; 1991; p.218). But this kind of description would have to wait for a more mature science of the brain. Paul Churchland (1979) predicts an imminent genuine 'intellectual revolution' that would bring a new scientific 'theory of man'. This new theory would be a neurophysiological interpretation of the human being. But Churchland doesn't specify how this revolution would come about. Renowned neuroscientists, such as Wilder Penfield, John Eccles, Roger Sperry, or Karl Pribram, have given up hope that an


understanding of the brain would lead to an explanation of the mind, because the brain's operations are still largely incomprehensible. Penfield once conceded that he has come "to take seriously, even to believe that the consciousness of man, the mind, is something not to be reduced to brain mechanisms" (Restak; 1984; p.Mg).

Sometimes materialists take a stand that seems to be doctrinal or ideological. For example, David K. Lewis argues that "materialists must accept the identity theory as a matter of fact: every mental experience is identical with some physical state" (Lewis; 1966; p.63)28. This position gets close to Spinoza's position. However, the difference is that Spinoza doesn't affirm a fundamental materialism. The identity theory, one of many materialist approaches to the nature of the mind, claims that the states of the brain are identical with the states of the mind. Identity theorists are doing what philosophers of mind call 'an ontological reduction of Cartesian substance to physical substance', and they are trying to give a 'scientific' reinterpretation to the concept of mind. David Armstrong, one of

28 If I define myself as a materialist, then I must accept that every mental state is identical with some physical (neurophysiological) state. If I don't define myself as a materialist, then I may not accept that every mental state is identical with some physical (neurophysiological) state. But this has nothing to do with what the mind is.


the identity theorists, thinks that "...we can give a complete account29 of man in purely physico-chemical terms" (Armstrong, 1965, in Borst; 1983; p.67). This contention is purely speculative. Armstrong doesn't explain how a physico-chemical account of a mental state is possible. He simply assumes that the brain is the mind; or that 'mind' and 'brain' refer to the same thing. The reason is: if two logically independent terms (or concepts) exist, this doesn't necessarily imply that there are two independent ontological entities as well. The (ontological) existence of the mind cannot be derived from the simple fact that the mind is a logically independent concept. For example, water and H2O are two distinct terms referring to the same object. But water is no different from H2O. Identity theorists argue that it is illegitimate to jump from the concept of logical independence to that of ontological existence. Since mind and brain are two distinct terms, it doesn't follow that they are two different ontological entities. This smart move helps do away with the (concept of) mind, thus equating mental states to physical states. However, it can be used only against Cartesian dualism30. It also opens the possibility and legitimacy of a monistic-physicalistic theory which would reduce the mind to a physical

30 This reasoning is of no use against a theory that doesn't accommodate the idea of the mind as an immaterial substance.


object: the brain. But the same reasoning could be used the other way around. Mind and brain are two distinct terms; they could be referring to two different ontological entities. Water and salt

are two different terms/concepts, and they also refer to two different objects. Furthermore, I could argue that if there is a statement that is true of the brain and not true of the mind, or if there is a statement that is true of the mind and not true of the brain, then it would follow that the brain and the mind do not refer to the same thing31. If 'brain' and 'mind' do not refer to the same thing, then the brain is not the mind.

If all mental states can be reduced to neurophysiological processes (as the identity theory claims), then sensation, for example, could be easily reduced to, and identified with, a physical state. The problem is: it will be difficult to locate spatially a mental state or identify it with a particular neurophysiological process. For example, the process of experiencing pain starts in the vast network of free nerve

31 Furthermore, not any (kind of) assertion could be made indifferently of both the brain and the mind without sounding absurd. If X is said to have a dirty mind, it doesn't mean or entail that the brain is dirty. A moral predicate cannot be applied to a physical object. I would not quarrel with those who might argue that human beings are physical objects. However, I would suggest that they keep in mind the fact that a moral predicate can be applied only to persons or groups of persons supposedly responsible before the law and/or society.


endings interlaced throughout the surface of the skin. A chemical is released in the surrounding area, and an impulse (pain 'message') is carried along nerve fibers. Once in the spinal cord, the impulse is sent to the thalamus, in the brain. This is quite a journey. It shows that it takes more than the brain to experience pain32. And a sensation cannot be limited to a brain state. Since the process of experiencing involves many parts of the body, why reduce it to the brain? Instead of saying arbitrarily that mental states are brain states, we can (arbitrarily) say that mental states are thalamus states or spinal cord states. Besides, a neurophysiological account of, for example, a sensation would simply be a third person account. It would simply be an account of discrete steps of how, for example, a sensation comes into being. A third person (objective) and/or neurophysiological account of a particular mental state would not account for some of the first person features of that mental state. For example, subjectivity is one of the first person features. To illustrate this, let us consider a problem that emerges from three obvious facts of life:

"Fact 1 is the fact that when, for example, I bite my tongue I

32 Searle (1992) argues that mental states can be subject to causal reduction but not to ontological reduction. The problem with this approach is that causal relations between different parts of the brain and the central nervous system seem to be more than questionable. They are confused and confusing.


experience the subjective feeling of pain ... This experience exists for me alone; and were I to try to tell you what it is like, I could do so only in the vaguest and most metaphorical of ways. My felt pain has an associated time (right now), an associated place (my tongue), an intensity (mild), and an affective tone (unpleasant), but in most other respects it seems beyond the scope of physical description. Indeed my pain, I would say, is no part of the objective world, the world of physical material. In short it can hardly count as a physical event.

Fact 2 is the fact that at the same time as I bite my tongue there are related brain processes occurring in my brain. These processes comprise the activity of nerve cells. In principle (though not of course in practice) they could be observed by an independent scientist with access to the interior of my head; and were he to try to tell another scientist what my brain-based pain consists in, he would find the objective language of physics and chemistry entirely sufficient for his purpose. For him my brain-based pain would seem to belong nowhere else than in the world of physical material. In short it is nothing other than a physical event.

Fact 3 is the fact that, so far as we know, Fact 1 wholly depends on Fact 2. In other words the subjective feeling is brought about by the brain processes (whatever 'brought about by' means).

The problem is to explain how and why and to what end this dependence of non-physical mind on the physical brain has come



about" (Humphrey; 1992; p.3-4).

In short, Fact 2 is a necessary but not sufficient condition of Fact 1. A sensation, such as pain, is physiologically supported, but it also has some first person features. Despite their neurophysiological underpinning, mental states are nevertheless subjective. Their subjectivity is, sometimes, shaped by culture and the environment in which the subject lives. A sensation I experience as painful could be delectable for someone else, and both painful and delectable for yet someone else. What is music for X could be just plain noise for Y. According to Diana Deutsch, a professor of psychology at the University of California at San Diego, people don't all hear music in precisely the same way. She thinks that our perception of certain sound patterns depends on our native language and whether we are right- or left-handed. She claims that even our dialect matters. For example, people who grew up in California tend to hear certain sound patterns quite differently from those who grew up in England33. This lends support to my contention that social interactions (environment) are a factor in our conception and perception of the world. Because if, for example, the perception of a particular sound

33 In an experiment with subjects from various cultural backgrounds, Deutsch found that by pairing different musical tones they become 'ambiguous'. And those 'ambiguous' tones are, for example, perceived as descending by Californians, whereas Britons heard them as ascending. (Scientific American; November 96)



pattern can be reduced to a 'dance' of neurons, how would the difference in perception of the same sound pattern be accounted for? It could be suggested that parts of the brain involved in music perception are not the same, for example, for right-handers and left-handers, or for Californians and Britons. In this case, if hearing a particular sound pattern is just, to paraphrase Flanagan, "xzqry firings at velocity v and r in sector 2304", we would have to explain why the same sound pattern uses different parts of the brain in different subjects. We would also have to explain why the same sound pattern is perceived differently by different subjects. If every time we have the output R for the input S, we can conjecture that the processing mechanism is the same. But if we have, for the same input, different outputs in different subjects, we can assume that the processing mechanisms are not similar in all subjects. This assumption would exclude uniformity of processing mechanisms, and leaves some room for subjectivity. However, it allows an interpretation of the mind/brain as a 'black box'.
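A small, purely hypothetical sketch may make this input/output reasoning explicit. The two 'subjects' below are invented functions that receive the same stimulus S and return different responses R; nothing about their inner workings can be read off from the stimulus alone, which is the sense in which each remains a 'black box':

    # Illustrative only: same input S, different output R, opaque mechanism.
    # The tone pair and the subject functions are invented for the example.

    def subject_from_california(tone_pair):
        # Hypothetical listener: hears the ambiguous pair as descending.
        return "descending"

    def subject_from_england(tone_pair):
        # Hypothetical listener: hears the same pair as ascending.
        return "ascending"

    stimulus = ("C4", "F#4")   # the same input S for both subjects

    for listen in (subject_from_california, subject_from_england):
        # Same S, different R: what happens inside each "box" stays hidden.
        print(listen.__name__, "->", listen(stimulus))

Identical inputs with divergent outputs are compatible with any number of internal mechanisms, which is all the argument above needs.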

The description of mental phenomena in physical terms seems to be problematic. There are two reasons: it is impossible to determine which parts of the brain are involved in a particular mental phenomenon, and a description in physical terms cannot account for subjectivity.

Even if scientists could explain (in physical terms) exactly what

Page 70: CAN COMPUTERS DO Cornipariaon Xxltelligence · Encore un mot, et je te laisse. Aie toujours présent à Ifesprit que la nature n'est pas Dieu; qu'un homme n'est pas une machine; qu

a mental phenornenon is there still is a problern:

"Suppose that Mary, a neuroscientist in the 23rd century, is the world's leading expert on the brain processes for color vision. But Mary has lived her whole life in a black-and- white room has never seen any other colors. She knows everything there is to know about physical processes in the brain -its biology, structure and function. This understanding enables her to grasp everything there is to know about easy problems: how the brain discriminates stimuli, integrates information and produces verbal reports. From her knowledge of color vision, she knows the way color names correspond with wavelengths on the light spectrum. But there is still something crucial about color vision that Mary doesnlt know: what it is like to experience a color such as redV1 (Jackson in Chalmers; 1995, p. 82) .

Physical correlates of a color such as red constitute simply a description of a certain physical reality. "Mary doesn't know what it is like to experience a color such as red" means that (subjective) experience cannot arise simply from knowledge of the physical correlates and the brain processes to which they are related. Physical and functional description cannot account for qualia, the intrinsic qualitative characteristics of sensory experiences. These intrinsic qualitative characteristics of sensory experience very often vary from one subject to another. For example, in a case of spectrum inversion, one subject's visual experience of red is qualitatively 'similar' to another subject's visual experience of green. There seems to be no way a particular physical and functional description can account for inverted qualia. Where an Inuit identifies ten different types of snowflakes, my knowledge of vision and/or of the physical correlates of snowflakes would be of no help in inducing the same conscious experience. 'A chiliagon' means 'one thousand-sided plane figure'; this is a proposition which I have no difficulty understanding. However, the concept itself doesn't give me a picture of a one thousand-sided plane figure. The concept 'chiliagon' is (epistemo)logically and linguistically different from the percept or the phenomenal experience of a 'chiliagon'. In short, conscious experience cannot simply be deduced from a physical and functional description of the brain to which it is related. My contention is that our biological make-up as well as social interactions in a particular environment contribute to our conception and perception of 'things'[34]. Blindness and/or deafness can affect the conception and/or perception of things. So does the culture in which one is immersed.

Saying that the mind is the brain (or that mental states are brain states) doesn't seem to mean anything. Compared to our predecessors, we can say that today much is known about the brain and much can be said about the mind. However the concept of mind and that of 'brain state' both seem to be equally fuzzy. I don't see how one can define the workings or the 'state of the brain' since little is known about its operations. Furthermore it is difficult to understand the workings of the brain since it

34 This idea will be discussed in the last chapter.

doesn't seem to have been built with specific purposes or principles of design in mind. And understanding the anatomy of the brain is not likely to give us an understanding of how meaning arises (in the mind). Empirical investigations show that the brain is, most likely, the product of a very complex evolutionary process spread over millions of years. In short, before arguing whether the identity theory is right or wrong, we have to know what the mind is, and what a 'state of the brain' is. Afterwards, we can try to establish a relation of identity between the mind and the 'states of the brain'.

John von Neumann once remarked that the two most outstanding problems scientists are confronted with are weather prediction and brain operations. Heinz Pagels also thinks that the brain, more than the weather, is of 'unsimulatable complexity'. He says that:

Today we have a much better grasp on the complexity of weather -we understand the main equations and know that it is an unsimulatable system. The brain, however, remains an enigma. Scientists have attempted to find a reliably accurate set of mathematical equations that describe the essential features of the neuronal connections and their operation. But my guess is that even if such equations are found, the brain's complexity will turn out to be another example of unsimulatable complexity. If this is so, then in spite of the fact that at some future time the biophysical law for the brain may be known precisely, the simplest system that simulates a brain's operation is that brain itself. If these ideas are right, then there is a 'complexity barrier' that lies between our instantaneous knowledge of the state of the neuronal network and our knowledge of its future development. The brain, and hence the mind, is another example of unsimulatable complexity. (1989; p.227)

I would go further than Pagels: it is not a matter of complexity but a matter of impossibility. If the 'state of the brain' could be simulated or modelled at one point in time, it would be possible to predict future 'states of the brain'. However, this would mean that, from a particular neurophysiological configuration in or of the brain, we could predict what the subject will think at a particular moment in the future. Thus whatever we think and/or do would be predetermined[35]. I don't see how this could be true. Furthermore, this raises the problem of free will. For example, recent research has shown that eating behaviour depends mainly on a fine balance in the activity of cholinergic, noradrenergic, and serotonergic pathways. But it is not that chemical (im)balance that would account for my craving for

35 A UCLA neuroscientist, Ben Libet, has devised an experiment that shows that the motor cortex is activated one-half second before the person becomes aware of their decision to act. But he conceded that his experiment is not capable of long-term detailed predictions. I don't think that this indicates or proves predetermination, because there is a time delay between the time the subject decides (or becomes aware of their decision) to move, for example, a limb, and the time she/he lets the experimenter know of that decision. If the motor cortex is activated before the subject decides, then the subject is not deciding: he/she is just registering a decision made by the motor cortex. And this experiment seems to have another shortcoming: its predictions apply only to motor activities.

perogies at a particular moment in the future. The same chemical (im)balance cannot make me feel like having a Papua New Guinean dish which I have never heard of (nor tried). I have, in front of me, a keyboard with 110 different keys. No one and nothing can convince me that my decision to press a particular key (or not to press any) is predetermined (or can be predicted). I don't see how any future knowledge of the brain's operations would change this.

In order to study the brain, scientists have used two approaches. One approach is to study brain function after parts of the brain have been damaged. Functions that disappear or that are no longer normal after injury to a specific region of the brain can often be associated with the damaged areas. If, for example, the left temporal lobe is removed, comprehension of speech is impaired. If the right temporal lobe is removed, some objects cannot be recognized. Or, as Christine Temple says, neurological "patients may have a lobectomy, where one section of the brain is cut, or, in extreme cases, a hemispherectomy, in which almost half of the brain is removed. ... The surgical procedures are rare operations of last resort, but provide new information about localization of brain function" (1993; p.31).

The second approach is to study the brain's processing of stimuli and its responses to direct stimulation or to stimulation of various sense organs. "These can be based on blood flow, glucose uptake or the pattern of electrical activity generated by the brain" (Temple; 1993; p.33). In the first approach, for example, if the right temporal lobe of a tennis player is damaged, he or she would have difficulty recognizing the movement, color and shape of a tennis ball. But the movement, color and shape of that tennis ball are processed in different cortical visual centers. And the separation of these information streams starts in the retina, which is not part of the brain. This shows that perception cannot be reduced to a process in the brain, since the retina, although connected to the brain by nerves, is not part of the brain. And it raises the as yet unanswerable question of where the information is reassembled. Another problem with the first approach is that interpreting the collected data can be, at best, very difficult, since patients have different medical histories and their brains don't have exactly the same shape. A key (and suspicious) assumption in the first approach is that, if the 'quantity' of brain decreases (for example, through damage caused by accident or surgery), so will mental competence. In other words, mental competence is directly proportional to the 'quantity' of brain. Logically, this approach leads to a link between mental capacity or intelligence and the size of the brain. However the size of the brain is not an indicator of degree of intelligence. A mentally impaired person may have a

much larger brain than that of a genius. Size doesn't seem to be an important factor. If it were, elephants and whales would be more intelligent than any other living organisms. Some psychometrists argue that it is the brain-to-body mass ratio that determines the level of intelligence. But the brain of the African elephant-nose fish represents 3.1 percent of its body mass, while the human brain is around 2.3 percent of the human body mass. The human brain uses 20 percent of the oxygen the body consumes. Other vertebrates' oxygen consumption ranges between 2 and 8 percent. However the brain of the African elephant-nose fish consumes 60 percent of the oxygen its body uses. Unless one believes that intelligence is directly proportional to the amount of energy the brain consumes, this doesn't suggest that the African elephant-nose fish is more intelligent than a human being. Besides, the mental competence of a fish cannot be compared with that of a human being. Fish and people don't face exactly the same problems. Although the level of intelligence seems to be correlated with the number and type of functioning neurons and how they are structurally connected with one another, what is really meant by intelligence is not clear[36].

The second approach is also limited. It studies only one aspect

36 This question will be further discussed in the next chapter.

of mental activity: brain responses to stimuli. Moreover, this approach presupposes that brain responses to stimuli are all there is to mental activity. I don't see why mental activity would be reducible only to brain responses to stimuli. There is no reason why mental activity should not be reducible to the thalamus's or the spinal cord's responses to stimuli. Mental activity is very often interpreted as brain responses to stimuli because, since the Roman physician Galen, it has been assumed that thoughts are in the brain. This assumption is not true. It is undeniable that brain (activity) participates in mental activity. But so does the spinal cord: it synthesizes and transmits impulses. Even the heart plays a major role: it pumps the blood which carries the oxygen and glucose essential to brain activity[37].

That the brain (and/or its activity) is the mind could simply be an acceptable hypothesis, because it seems difficult to deny the fact that mental states are (materially) supported by brain (and other physiological) states. Research efforts have shown a correlation between mental activities and patterns of nerve impulses. It seems more likely that the mind emerges from the activities of all the brain regions, nervous systems, senses, and even blood pressure! It is the 100 billion (or more) neurons and

37 Ancient Egyptians believed that the heart was the locus of mental activity.

other cells linked in networks that give rise to consciousness, intelligence, emotion, memory and creativity. However, researchers at Erasmus University Medical School in the Netherlands have found a correlation between systolic blood pressure and cognitive skills[38]. I might also contend that in order to have mental activity the brain is essential, but it is not sufficient. The mind seems to represent the capacity to organize information (thinking) as well as organized information itself (memories). Classical physics as well as quantum physics show that transmission and/or transformation of information implies, at least, a transmission and/or transformation of energy. Thus transmission and transformation of information (energy) requires a material support. In short, the mind cannot exist without the brain (body).

What is mind?

Mind is often equated with consciousness, a subjective sense of self-awareness. A vigilant inner core that does the sensing and moving is a powerful metaphor, but there is no a priori reason to assign a particular locus to consciousness or even assume that such global awareness exists as a physiologically unified entity. Moreover, there is more to mind than consciousness or the cerebral cortex. Urges, moods, desires and subconscious forms of learning are mental phenomena in the broad view. We are not zombies. Affect depends on the function of neurons in the same manner as does conscious thought (Fischbach; 1992; p.48).

38 High blood pressure left untreated seems to cause memory loss.


It is difficult to ignore the correlation between mental activity and neuron (brain cell) activity. It has already been established that the brain functions by complex neuronal circuits. Communication between neurons is both electrical and chemical. The 'message' is electrically and/or chemically transmitted from the dendrites of a neuron, through its soma, and out its axon to the dendrites of another neuron. Despite Descartes' assertion that the mind is distinct and independent from the body, malfunction in the production, breakdown, and cellular activity of neurotransmitters in the limbic system may cause certain psychiatric states. Certain brain chemical imbalances are associated with mental disorders (i.e. schizophrenia, depression). Imbalance or depletion of such neurotransmitters as dopamine can affect mood and thinking. It can also create difficulties in the initiation and control of movements.

In the Sixth Meditation Descartes said: "I am truly distinct from my body, and... I can exist without it". This could be challenged by PET scanners which show that thought is a brain process. But PET scanners would have to be able to detect a non-physical mind and come up empty-handed for them to seriously challenge this Cartesian claim. Furthermore, Descartes could fight back by saying: this could be the work of an evil genius[39]. Or there could

simply be a correlation between patterns of brain activity and, for example, one's thinking. No link could be established between a PET scan and one's thinking. For this link to be established, patterns of brain activity must be capable of revealing the content of a thought. While a subject is listening to music, it is possible to observe the patterns of glucose use with a PET scanner. But these patterns would not tell us whether the music the subject is listening to is Bach's Toccata and Fugue or its transcription for orchestra by Stokowski. Besides, the patterns that PET scanners show can be incomplete. "...During a delayed-choice task, PET scans are too slow to distinguish between the neural activity pattern of a target being held in mind and the pattern that follows a few seconds later when the target is recognized" (Beardsley; 1997; p.80). Furthermore, "if we imagine each neuron as a light bulb, a motion picture of the brain in operation would show an array of billions of lights flashing on and off in a bewildering variety of patterns. This picture would look much the same as a Times Square message board, consisting of many rows of individual flashing lights that, taken together, form a recognizable pattern. The problem is that, at present, we haven't the foggiest idea of how to interpret these

39 Such a response would be anachronistic and hardly convincing since not that many people believe in evil geniuses.

patterns" (Casti; 1995; p.158). We may not be capable of interpreting these patterns correctly. I don't see how meaning could be extracted from flashing lights. But U.T. Place thinks that "(t)here is nothing that the introspecting subject says about his conscious experiences which is inconsistent with anything the physiologist might want to say about the brain processes which cause him to describe the environment and his consciousness of that environment in the way he does" (in Lyons; 1995; p.115). Place is suggesting that the difference between mental states and brain (physical) states resides in the mode of description. In other words, by observing a subject's brain processes a physiological psychologist can read and understand the mental content of that subject. The assumption here is that meaning can be accounted for in neurophysiological terms. I don't think it is possible to know the content of a thought by observing chemical exchanges in the brain. Thinking involves subjectivity, which cannot be described in neurophysiological terms. Furthermore, our understanding of brain patterns depends on the technology being used and on our brain's organization. A case of inverted spectrum could illustrate the fact that the same sense data (and presumably the same brain or neurophysiological state) could yield two different subjective experiences.

Paradoxically, Restak concludes that the organization of our brain places limitations on what we can and cannot know by reason


or perception. He said:

on the basis of our brain's organization, we are able to perceive certain aspects of 'reality' while we remain oblivious to others. Errors inevitably creep in that are as much products of our brain as they are of anything in the external world. For this reason I am not confident that we will ever be completely successful in 'making up our minds' on the question 'Is the mind the brain?'. (Restak; 1984; p.344)

The brain functions continually; it stops only at death. A snapshot or measurement taken at one specific moment cannot give a full account of brain activity. Besides, one particular task (for example, pattern recognition) can be carried out via multiple and varying neuronal channels. This variability would make it difficult, if not impossible, for a neuroscientist to tell which pattern is being recognized by a subject.

In an effort to present the other side of the 'mind as brain' argument, Restak talks about Wilder Penfield, a neurosurgeon, who "became less certain that the study of the brain, a field in which he had done pioneering work earlier in his career, would ever lead to an understanding of the mind" (1984; p.347). Penfield's views are shared by many neuroscientists. In The Self and Its Brain, John Eccles, a Nobel Prize winner, and Karl Popper take a dualist approach by arguing that the mind and the brain are two categorically distinct entities. The mind, as a postulated 'entity', is in the realm of concepts, and the brain is in the realm of material objects. They are of different logical types; they cannot be compared.

In order to avoid the dilemma of explaining the relationship between the mind and the brain, materialists tend to conflate the characteristics of the brain and those of the mind in some hybrid organ that could be called the mind/brain. The materialists' mind/brain has non-physical characteristics while being a physical object. And the inquiry into the nature and/or the workings of the mind/brain would be much easier since it has physical properties. Because of these physical properties, we could even substitute the word "mind" with the word "brain" or vice versa. But this is just conjuring away the problem. The mind, with its real or postulated nonphysical characteristics, cannot be reduced to 'something' that could be studied within the parameters of objectivity. Trying to study real or postulated nonphysical characteristics within the parameters of objectivity would be a category mistake in either case. For example, a mental state such as shame cannot be described and/or reconstructed in physical terms. And neither can a postulated mental state such as courage. It is undeniable that having a brain is a necessary condition of, or essential to, having a mind. But this doesn't mean that the mind is the brain. Besides, if the mind is nothing more than the brain, why should we bother talking about the mind (with

all its mysterious and elusive aspects)? After the discovery of oxygen, chemists abandoned the idea of phlogiston, which was supposedly produced during combustion. Like the chemists who abandoned the idea of phlogiston, materialists should also abandon the idea of mind.

Our current understanding of human neurophysiology tends to suggest that a well-functioning brain is the material seat and/or support of the mind (or mental activity). Having a well-functioning brain is a necessary condition of having a mind, but it is not a sufficient condition. It is undeniable that there is a correlation between some mental states and some brain (neurophysiological) states. And mental states are supported by neurophysiological states. However, I don't think that mental states, capacities and properties can be reduced to neurophysiological capacities, states or properties. As Ryle puts it: "Physicists may one day have found the answers to all physical questions, but not all questions are physical" (1949; p.161). Could Hitler's hatred or Einstein's genius be reduced to processes in the brain?


Science proceeds by laborious accumulation of details; but art reaches its goal at once through intuition.

Arthur Schopenhauer

CHAPTER THREE

Earlier I contended that our socio-cultural environment seems to shape our Weltanschauung, or the way(s) we see the world. As Martin Gardner observes, proponents of artificial intelligence, stimulated by science fiction read in their youth, are convinced that the human mind is simply 'a computer made of meat' (in Penrose; 1989; p.xiii). Behind this way of looking at things "lay the following view of the way the mind works: Rational (logical) thought is a kind of mental calculation that follows certain prescribed rules, in many ways not unlike arithmetic. Plato thought this, as did Leibnitz and Boole" (Devlin; 1997; p.1). Asserting that the human mind is simply 'a computer made of meat' is somewhat speculative, because what the mind is is still an open question. However, we can dig out the assumptions underlying this assertion. First, it is assumed that mental activity can be encoded in numbers and/or symbols. The second assumption

is that not only can human intelligence be equated with machine 'intelligence', but the cognitive processes in humans and machines are similar. In other words, there is no difference between the way humans think, come to know something and/or solve problems, and computers' performance. It is also assumed that intelligence has a physical reality, or that it is related to and/or produced by some physical entity. Therefore intelligence is describable in physical terms. Moreover, it is measurable. Thus "a fair number of researchers in artificial intelligence believe ... that by designing the right programs with the right inputs and outputs, they are literally creating minds" (Searle; 1990; p.26), which is essential to having intelligence.

Herbert Simon argues that when

we give people tasks; on the basis of performance in a task we consider that some thought has taken place in reaching a solution to a problem. Similarly, we can give computers the same task; then, it would seem to me, that it is only some kind of vulgar prejudice if we refuse the accolade of intelligence to the computer (1980, p.13).

In Simon's approach, a computer could be said to be intelligent on the basis of 'performance'. However, in humans, absence of performance doesn't necessarily imply absence of thought and/or intelligence. Obviously Simon asserts that artificial intelligence is, at least, comparable to human intelligence. Before making such a comparison we need to know 'what it means to be a human being' and 'what it means to be a computer'. Finding

out what a computer is would not be a problem. However 'what a human being is' involves subjectivity, or first-person experience, which is difficult to describe. But we can circumvent this problem by taking the discussion onto the artificial intelligentsia's one and only turf, which is problem-solving. Computers cannot do everything that humans do (for example, daydreaming) except problem-solving. So, if we want to answer the question of whether computers can do the things that humans do, we can only discuss the ways humans and machines solve problems. By saying that 'the accolade of intelligence' should be given to computers on the basis of their performance, Simon reduces intelligence to the ability to solve a problem. However, I intend to illustrate that human problem-solving capabilities are not necessarily similar to those of machines. Artificial intelligence scientists see problem solving as the paradigm of intelligence. This contention is based on two assumptions: (1) the human brain is an information-processing system, and (2) the brain solves problems by creating a symbolic representation of the problem (Langley, Simon, Bradshaw, and Zytkow, 1987). I will argue that even if computers can solve some particular problems, (1) they use only deductive reasoning to solve those problems, and (2) computers are not aware of the fact that they are solving problems. The discussion will rely on creativity, intuition, imagination and, also, on the concept of predication (of meaning) to help highlight the basic differences between human reasoning and the kind of reasoning

under simulation by artificial intelligence artifacts.

From the hypothesis that "a necessary and sufficient condition for a system to exhibit intelligence is that it be a symbol system, that it have symbol manipulating capabilities" (Simon; 1981; p.19), Simon argues that "...any physical symbol system can be organized further to exhibit general intelligence" (Newell & Simon; 1981; p.41). Simon's claim corroborates my contention that Artificial Intelligence presupposes an ontological (objective/physical) mind. But, as stated in chapter 2, my position is that the mind (and/or intelligence) cannot be reduced to the brain. However, I will assume that the brain is the mind just for a pragmatic and/or heuristic reason: if the mind is seen as a physical entity that can be replicated or, at least, simulated, then a discussion of artificial intelligence becomes possible.

From the assumption that 'mental activity can be encoded into numbers and/or symbols', Artificial Intelligence scientists hypothesized that at a certain level of abstraction there is a similarity between the ways the human mind/brain and the computer function[40]. Because, according to a central tradition in

40 This, despite the fact that the brain and the computer are physically (in structure and mechanism) different.

Western philosophy (rationalism), "thinking (intellection) essentially is rational manipulation of mental symbols (viz., ideas)" (Haugeland; 1985; p.4). And the assumption that 'mental activity can be encoded into numbers and/or symbols' can be traced back to Galileo, who held that "nature is written in mathematical characters" (i.e., sizes, shapes, and motions). Descartes, who is "perhaps the prototypical philosophical antecedent of cognitive science" (Gardner; 1985; p.50), assumed that understanding consisted of forming and manipulating representations or symbols (Dreyfus 1988). He also contended that thinking was essentially conscious. In his Fourth Set of Replies, he said "that there can be nothing in the mind, in so far as it is a thinking thing, of which it is not aware, this seems to (him) to be self-evident" (Cottingham; 1984; p.171). Consequently, thinking (and action) can be described in discrete steps. Descartes

proposed one of the first 'information-processing' devices. (His) diagram showed how visual sensations are conveyed, through the retinas, along nerve filaments, into the brain, with signals from the two eyes being reinverted and fused into a single image on the pineal gland. There, at an all-crucial juncture, the mind (or soul) could interact with the body, yielding a complete representation of external reality (Gardner; 1985; p.51).

Thomas Hobbes rejected Descartes's division of a human being into mental and physical substances. He contended that everything is material or physical. "Hence it may be that the thing that thinks is the subject to which mind, reason or intellect belong; and this subject may thus be something corporeal" (Cottingham; 1984; p.122). He also said that "when a man reasoneth, he does nothing else but conceive a sum total, from addition of parcels; or conceive a remainder, from subtraction of one sum from another". Hobbes' contention somehow laid the foundation of what was going to be called Artificial Intelligence. He hypothesized that (1) the thing that thinks may be corporeal, and (2) reasoning is addition and subtraction of parcels. Thus, thinking is described as real physical manipulation of real physical symbols. Haugeland thinks that by contending that "by ratiocination, I mean computation", Hobbes prophetically launched Artificial Intelligence and conveyed two basic ideas.

First, thinking is 'mental discourse'; that is, thinking consists of symbolic operations, just like talking out loud or calculating with pen and paper -except, of course, that it is conducted internally. Second, thinking is at its clearest and most rational when it follows methodical rules -like accountants following the exact rules for numerical calculation (1985; p.23).

Here Hobbes's view on rational thinking and that of Descartes converge: thinking is essentially conscious, and it can be described in discrete steps. This is a key assumption that contributed to the development of artificial intelligence. However, this position cannot accommodate other ways of solving problems such as intuition and/or creativity.

Like his rationalist predecessors, Leibnitz asserted that "all theory could benefit from construal in the form of mathematics. The benefits were clarity, precision, explicitness of detail, and logical consistency" (Wagman; 1991; p.9). He tried to use symbols and logic in all areas of knowledge and human communication, and developed a logical calculus to which much of thought and language could be reduced (and artificial intelligence would not be possible without this). He went further by actually describing how a thinking machine could work[41].

Using the Newtonian language of mechanics, David Hume, an empiricist, set out to "discover ... the secret springs and principles by which the human mind is actuated in its operation" (Haugeland; 1985; p.42). But some contemporary thinkers went further. For example, J.J. Smart claims that "conscious experiences are simply brain processes". And "if consciousness is a brain process, then presumably it could also be an electronic process"[42] (in Moravio; 1995; p.85). Smart fails to justify how consciousness could jump from being a brain process to becoming an electronic process. He assumes that, like everything that is physical, the mind/brain (the locus of consciousness) could be

41 Mindful of the Church, he avoided discussing the possibility of a thinking machine having a soul.

42 However, this claim did not precede the advent of computers. It seems to be a result of the computer culture.

described in symbolic/mathematical terms, and simulated or replicated electronically. But would an electronic mind/brain 'generate' consciousness? There is no clear answer. However, it is believed in Artificial Intelligence circles that, at least, "...any physical symbol system can be organized further to exhibit general intelligence" (Newell & Simon; 1981; p.41). Because, at a certain level of abstraction, the human brain and an appropriately programmed computer could be considered as two different instantiations of a single species of device that generates intelligent behaviour by manipulating symbols by means of formal rules. With the right program, or coded strings of commands that guide the events happening in the computer, it might be possible to produce a machine that would be behaviorally indistinguishable from a conscious person. But that machine would not have the first-person qualitative experiences, such as the "what it is like to be a bat or a human" experience, that define conscious beings. For example, humans cannot conceive the echolocatory experiences of bats.

Before trumpeting that intelligence can be replicated in

machines, we should know, at least, what intelligence is or what

makes us consider a person intelligent. Some artifacts are

considered intelligent when they mimic and, sometimes, surpass

human performance in one area of knowledge or another. For

example, the psychologist George Miller once claimed that he was

"very optimistic about the eventual outcome of the work on the machine solution of intellectual problems. Within our lifetime machines may surpass us in general intelligence" (Weizenbaum; 1976; p.205). The first question is: what is general intelligence? How could it be translated into rules and principles that are encodable so that machines could use them? Simon (1981) argues that intelligence could be attributed to any entity that displays problem-solving capabilities. He argues that the fact that a computer is capable of solving a problem means that it is intelligent. On the basis of a computer's performance, it is possible to infer that some thinking has been taking place. In other words, if there is external behaviour X, then thinking is taking place. However, if external behaviour X is a sufficient condition of thinking taking place, it is not necessarily a necessary condition[43]. I would argue, tautologically, that a cognitive system is, by definition, a system that is capable of cognition. And cognition can be defined as any instance of a mental operation and/or state that has an intrinsic intentionality. And intentionality is a property attributed to

43 People don't always frown or gesticulate when they are thinking. Cogitation doesn't always bring about gesticulation.

any mental state that is representative of or 'about' something[44]. Searle (1992) argues that intentionality is a unique 'phenomenon that humans and certain other animals have as a part of their biological nature'[45]. There is little risk in saying that human beings are usually cognitive systems. However, if we agree with Searle, a non-biological system cannot have intrinsic intentionality. But, according to Dennett, the status of cognitive system can be extended to non-biological systems. A (non-biological) system can be treated as an intentional system whenever it is treated as if it had cognitive features such as beliefs, goals and motives. Thus we can claim that computers are cognitive systems by ascribing to them beliefs, goals and motives. However, they would not be intrinsically intentional.

Modern computing machines can display some abilities usually attributed to human beings. Today, computers are capable of "seeing", "hearing", "sensing", "knowledge acquisition", "talking", "decision making", "reasoning", "predicting", etc. But there is no intrinsic intentionality in computer "thought". To use Dennett's language, intentionality is always derived or 'as if' in computer performance. However Searle (1984) thinks

44 This regardless of whether or not that something exists (for example, phobia, fear of ghosts, etc.).

45 However, this position is very often challenged by different physicalist theories.

that the mind has four features: intentionality, consciousness, subjectivity, and mental causation. And he thinks that any satisfactory theory of the mind must account for all four features. Consequently, an artifact cannot have intentionality. Furthermore, just the observable performance of these abilities is not sufficient to establish a relation of analogy between human faculties and computers' "abilities". Computers could be said to be rational systems because their working is based exclusively on deductive reasoning. But humans are capable of more than deductive reasoning. Humans are not only rational systems, but they are also intentional systems, which computers are not. As Fodor suggests: "...the rational systems are a species of the intentional ones rather than the other way around" (Fodor; 1990, p.8). Computers are "as-if" rational systems. Not all "as-if" rational systems could be said to be intelligent. Could a slide rule be said to be intelligent? I don't think so. So, when can we say that an "as-if" rational system is intelligent? Simon thinks that "simple capabilities ... for handling patterns, the ability to read patterns from outside, to write patterns, to store patterns, to build up complex patterns from simple patterns, and to make comparisons between patterns ... provide the necessary and sufficient condition that a system be intelligent" (Simon; 1981; p.13).
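Simon's list of capabilities can be made concrete with a deliberately trivial sketch. The following Python fragment is illustrative only; the class, its method names and the stored patterns are assumptions introduced here, not Simon's or Newell's own formalism. It can read, write, store, compose and compare patterns, yet every operation is blind manipulation of uninterpreted tokens, which is precisely what is at issue when such capabilities are offered as sufficient for intelligence.

    # A minimal "symbol system" in the sense quoted above, sketched for
    # illustration only: it reads, writes, stores, composes and compares
    # patterns, but every operation is blind manipulation of tokens.

    class SymbolSystem:
        def __init__(self):
            self.memory = {}                      # store patterns by name

        def write(self, name, pattern):
            self.memory[name] = tuple(pattern)    # write a pattern

        def read(self, name):
            return self.memory[name]              # read a stored pattern

        def compose(self, name, *parts):
            # build a complex pattern out of simpler stored ones
            self.memory[name] = sum((self.memory[p] for p in parts), ())

        def same(self, a, b):
            return self.memory[a] == self.memory[b]   # compare two patterns

    s = SymbolSystem()
    s.write("noun", ("ideas",))
    s.write("adj", ("green",))
    s.compose("phrase", "adj", "noun")
    print(s.read("phrase"))          # ('green', 'ideas')
    print(s.same("noun", "adj"))     # False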

Intelligence seems to be contextual and goal-oriented. The abilities needed to solve an equation are not necessarily the ones useful in hunting antelopes. Obviously these different intelligences cannot be compared. The determinant of intelligence seems to vary with context.

Richard Feynman and Stephen Hawking are considered very intelligent people. I don't think that, individually or as a team, they could beat the chess-playing computer Deep Blue at chess. But Garry Kasparov can. Does this mean that Kasparov is more intelligent than Feynman and Hawking? The answer will be yes if we reason as a computing machine and equate the capacity to win a chess game with high intelligence. Because the straightforward logic of the machine would be: Deep Blue can beat (or is more intelligent than) Feynman and Hawking, and Kasparov can beat (or is more intelligent than) Deep Blue; therefore Kasparov can beat (or is more intelligent than) Feynman and Hawking. However, there cannot be a comparison. Deep Blue is simply a chess-playing computer incapable of doing wordprocessing. Kasparov performs at his best while playing chess. Feynman and Hawking excel as physicists. Even if we restrict the comparison to chess playing, Deep Blue could beat Garry Kasparov, but this would be simply because of its ability to compute hundreds of thousands of moves in microseconds. For Deep Blue to do straightforward wordprocessing, its operating system would have to be changed before it could perform any task for which it was not designed. Kasparov doesn't have to change his brain in order to do wordprocessing.


Computer performance is based on symbol manipulation. But symbol manipulation alone would not yield meaning. Alfred Tarski studied the formal languages of mathematics in order to analyze how formulas can refer to mathematical objects and how these references can yield meaning. Keith Devlin thinks that Tarski's studies have one important consequence: "... it enables you to analyze and perhaps manipulate symbolic formulas, free of any constraints as to their meaning" (1997; p.89). But, in many situations, the meaning of a word and/or a sentence and/or a group of sentences arises from whatever it is that the word and/or sentence and/or group of sentences refers to. It is contextual. For example, the sentence "green ideas sleep furiously"[46] is grammatically correct but meaningless. Anybody in their "right mind" would acknowledge that it doesn't make sense. Grammar-check programs are of no help. Computer systems can check grammar rules but they cannot check the meaningfulness of a sentence (a minimal sketch of this limitation follows the quotation below). But

the brain is, first of all, an organ heavily dependent on meaning and context. Even at the level of primary sensation, a filtering process is constantly sorting out what seems to be important at the moment. For instance, out of the background of dozens of simultaneous cocktail conversations, we focus on one exchange simply on the basis of our interest in one of the speakers or the subject under discussion. This selection has nothing to do with linear processing. It concerns the meaning that one conversation has for us compared to others (Restak; 1984; p.358).
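The point about syntax without semantics can be made concrete with a small sketch. The toy checker below is a hypothetical illustration, not a description of any actual grammar-check program; the lexicon and the single ADJ NOUN VERB ADV rule are assumptions introduced for the example. It accepts Chomsky's sentence because the word order satisfies its rule, while nothing in the program represents what the words are about.

    # Illustrative toy "grammar check" only; the lexicon and the single
    # ADJ-NOUN-VERB-ADV pattern are assumptions made for this sketch.

    LEXICON = {
        "green": "ADJ",
        "ideas": "NOUN",
        "sleep": "VERB",
        "furiously": "ADV",
        "colorless": "ADJ",
    }

    # The only "rule" the checker knows: a sentence is ADJ NOUN VERB ADV.
    PATTERN = ["ADJ", "NOUN", "VERB", "ADV"]

    def grammatical(sentence):
        words = sentence.lower().split()
        tags = [LEXICON.get(w) for w in words]
        return tags == PATTERN

    # The syntactically well-formed but meaningless sentence passes the
    # check; the question of meaningfulness never arises inside the program.
    print(grammatical("green ideas sleep furiously"))   # True
    print(grammatical("ideas green furiously sleep"))   # False

All the checker manipulates are labels attached to strings; meaning, in Devlin's phrase, stays in the eye of the beholder.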

General Problem Solver tried to highlight the set of techniques used

46 This example was given by Noam Chomsky.


by humans in problem solving. These techniques were obtained from subjects through protocols, which are introspective reports issued by subjects in experimental settings, typically problem-solving situations. For example, a subject trying to solve a particular problem may be asked to "think out loud" while working on the problem; alternatively, after the solution is obtained, a retrospective report may be provided. These reports, or protocols, provide data for theorizing about cognitive processes and strategies -theorizing that may be validated (or falsified) through implementation in computer systems. This is an example of what cognitive scientists call problem reduction. It is an approach that decomposes a problem into a group of smaller subproblems to which algorithms can be applied. But humans sometimes solve problems by chunking, an organization of information into groups or chunks (which may themselves consist of smaller chunks). Normally, for example, one does not hear speech as consisting of individual words; instead, the words are chunked into larger units (such as phrases). In many instances, chunking can be developed through practice. For example, experienced chess players, unlike beginners, see chess pieces as organized into meaningful configurations, and can often recreate a board's pattern of pieces from memory. From a meaningful configuration, an experienced chess player can develop a strategy or can 'see' a breakthrough. A chess-playing computer simply evaluates an extremely large number of different positions/moves and chooses the one with the highest probability of leading to one of the winning patterns in its database.
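The contrast can be made concrete with a minimal sketch. The following Python fragment is purely illustrative; the toy game tree, the scores and the search depth are invented for the example and are not a description of Deep Blue or of any actual chess program. The machine's 'play' consists of enumerating candidate moves, looking ahead, scoring the resulting positions and keeping the highest-scoring line.

    # Purely illustrative sketch: exhaustive look-ahead over a toy game tree.
    # The tree, the scores and the depth are invented for the example; a real
    # chess program differs in scale, in detail, and in how it models the
    # opponent's replies, not in the brute-force spirit of the procedure.

    TOY_TREE = {
        # position -> list of (move, resulting position)
        "start": [("a", "p1"), ("b", "p2")],
        "p1":    [("c", "p3"), ("d", "p4")],
        "p2":    [("e", "p5"), ("f", "p6")],
    }

    # Static evaluation of leaf positions (higher is better for the machine).
    SCORES = {"p3": 1, "p4": 7, "p5": 4, "p6": 2}

    def evaluate(position, depth):
        """Score a position by looking ahead 'depth' further moves."""
        if depth == 0 or position not in TOY_TREE:
            return SCORES.get(position, 0)
        return max(evaluate(nxt, depth - 1) for _, nxt in TOY_TREE[position])

    def choose_move(position, depth=2):
        """Enumerate every candidate move and keep the highest-scoring one."""
        return max(TOY_TREE[position], key=lambda mv: evaluate(mv[1], depth - 1))[0]

    print(choose_move("start"))   # 'a', since it leads toward the score-7 leaf

Nothing in such a procedure corresponds to the chess master's chunked perception of a configuration; it is exhaustive scoring, nothing more.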

Computers have another advantage: they are not as sensitive as humans are to external conditions. Like computer components, neurons (brain cells) have specific physical and chemical properties. However, not only are they vulnerable to physico-chemical changes, but neurons are also sensitive to 'non-physical' changes such as moods. They are alive, but computer components are not. Two neurons are never exactly alike; they have variable sensitivity to neurochemicals. But computer components are all alike; they generally respond in an on-off manner. A computer generally can't perform at less than its best. At any specific moment, if given proper instructions, computers can perform to the maximum. However, the human mind can engage in a conscious mental process of evoking events, ideas or images of objects, relations, attributes, or processes never experienced or perceived before. And sometimes that is when the human mind performs at its best: when it is idling or daydreaming. But,

computers don't daydream, they don't idle at a bare percentage of their efficiency. They aren't whimsical. They do poorly at understanding puns or jokes. They also don't become inspired, don't give up in discouragement, don't suggest better uses for their time. There has never been a computer capable of radically reprogramming itself. This can be accomplished by changing the computer's program, but someone with a brain has to do this (Restak; 1984; p.360).

Thus the capacity for self-reorganization represents a major difference between brains and machines. Consider the effects of destroying some part of the brain. The result would not necessarily be simply physical damage with a total loss of some mental capacities, but rather an internal reorganization of the remaining brain tissue. Moreover, "the same thing winds up being done in a different way, although in the process behavioral sequences may appear that were previously 'suppressed': primitive reflexes, emotional responses, and so on. This reorganization is internally controlled and proceeds fairly automatically" (Restak; 1984; p.360).

After working with subjects suffering from aphasia and acalculia, Laurent Cohen and Stanislas Dehaene observe that, while doing arithmetic, humans approximate numbers before calculating. They argue that:

this system, unlike the two others (verbal and visual), manipulates not symbols ('seven' or '7') but approximate quantities. ... Our analogical calculation system immediately transforms numbers into physical magnitudes, associating with them a (mental) length that is processed on a number line. From there, an addition of two numbers will take the form of two segments on the line placed end to end; a comparison between two numbers will be represented by two segments placed side by side. The advantage of such a system, working in parallel with the two others, is that it is much faster. Its drawback is that it is less precise (Ikonicoff; 1995; p.61-62).
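A toy rendering of this analog system may help fix the idea. The sketch below is illustrative only; the Gaussian noise model, its magnitude and the sample numbers are assumptions introduced here, not Cohen and Dehaene's own model. Numbers are treated as noisy mental lengths, addition lays two segments end to end, and comparison puts them side by side: fast, parallel, and imprecise, as the passage describes.

    # Toy rendering of the 'analog' number system described above. The
    # Gaussian noise that grows with magnitude is an assumption made for
    # this sketch only.

    import random

    def magnitude(n):
        """Map a number to a noisy mental length."""
        return random.gauss(n, 0.15 * n)

    def approx_add(a, b):
        """Addition as two segments laid end to end."""
        return magnitude(a) + magnitude(b)

    def approx_compare(a, b):
        """Comparison as two segments laid side by side."""
        return magnitude(a) > magnitude(b)

    # Fast but imprecise: the estimate of 7 + 5 hovers around 12, and 9 is
    # usually, though not always, judged larger than 4.
    print(round(approx_add(7, 5), 1))
    print(approx_compare(9, 4))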

Obviously, using comparison, approximation, and mental representation to solve a problem involves information processing. But it shows that Homo cogitans cannot be reduced to an information processor. Thinking can involve processing information, but it cannot be reduced to information processing. Machine information processing is mere symbol manipulation. "The only thing that a digital computer can do is manipulate symbols in accordance with a set of precisely defined rules stipulated in advance. To the human user looking on, it might seem that those symbols mean something or refer to something in the world, but, like beauty, that meaning or reference is in the eye of the beholder and not in the computer" (Devlin; 1997; p.155). In human thinking, semantics is always involved. Furthermore, a theory that views the mind as an information processor will always come in a behaviouristic flavour. The input is the stimulus, and the output is, obviously, the response. As a consequence, it has some of the shortcomings found in behaviourism. For example, thinking to oneself is a case of pure cognitive activity in which there could be no observable behaviour.

The claim that 'the mind is an information processor' rests squarely on a category mistake. There is a misunderstanding of the natures of the things being talked about. Human beings think; computing machines compute. It is only in a metaphorical sense that

mental properties can be attributed to computing machines[47]. According to Gilbert Ryle, the test for category differences depends on whether replacement of one expression for another in the same sentence results in a type of unintelligibility that he calls 'absurdity'[48]. It would be absurd to say that I compute that you are right. Whenever we say 'I think', semantics is always involved, and so is the belief factor. Both the semantics and the belief factor are non-existent in computing machines.

The semantics and the belief factor imply consciousness and/or self-awareness. But computing machines do not have consciousness. Neither do they have self-awareness. For humans, consciousness and/or self-awareness comes with the ability to represent objects of knowledge which they can voluntarily retrieve. And the ability to represent objects (not as symbols) is essential to the process of imagination. As said earlier, imagination, 'a conscious mental process of evoking events, ideas or images of

47 Computing machines cannot have any mental property. There is only one case in which computing machines could be thought of as having mental properties. 'Dennett argues that a system may be treated as an intentional system (with cognitive features such as motives and beliefs) whenever treating it as if it had those cognitive features is explanatorily and predictively useful -whether or not that system is biological' (Dunlop and Fetzer; 1993, p.67).

48 This absurdity reflects the prejudices of our languages.

objects, relations, attributes, or processes never experienced or perceived before', is a non-linear approach to problem solving. But cognitive scientists argue that there is no such thing as consciousness (and/or qualitative experience), so that they can claim that humans equal machines. Moreover, since the difference between two human subjects, or between a human and a machine, cannot be determined, then there is no difference. Dennett sees no difference between "any machine and any human experiencer. ... There is no such sort of difference. ... There just seems to be" (Dennett; 1991; p.375).

If there is no difference between any machine and any human experiencer, then all human attributes can be ascribed to a machine. Thus machines could be said to be capable of thinking. Penrose disagrees:

The question of whether a mechanical device could ever be said to think -perhaps even to experience feelings, or to have a mind- is not a new one. But it has been given new impetus, even an urgency, by the advent of modern computer technology. .... What does it mean to experience or to feel? What is a mind? Do minds really exist? Assuming that they do, to what extent are minds functionally dependent upon the physical structures with which they are associated? Might minds be able to exist quite independently of such structures? Or are they simply the functionings of (appropriate kinds of) such structures? In any case, is it necessary that the relevant structures be biological in nature (brains), or might minds equally well be associated with pieces of electronic equipment? Are minds subject to the laws of physics? (1989, p.4)

If minds are subject to the laws of classical physics, then any mental phenomenon could be described in physical terms. Furthermore, mental states such as beliefs and desires could be anticipated. Fear and anxiety could be turned off at will. Courage could be induced. This would raise some questions: how would we explain the fact that these mental/physical states are very often triggered by external events that have no physical connection with our body? How could we abstract, encode and program, for example, courage?

Voltaire once commented that any army composed of rational men would always simply run away. For a rational being (entity), it doesn't take courage to assent to the proposition "1+1=2". However, it takes thymos (courage) to fight an enemy that seems stronger. Whether apocryphal or not, the story of David and Goliath illustrates an aspect of human problem-solving capability that is not to be found in computers: courage.

Sometimes, the survival instinct can be stronger than reason. Psychologist Reuven Bar-On links intelligence to feelings. He defines what he calls 'emotional intelligence' as "capabilities, competencies, skills that influence one's ability to succeed in coping with environmental demands and pressure and directly affect one's overall psychological well-being" (Mirsky; SA; 04/1997; p. 55).

Damasio suggests that feelings are a powerful influence on reason. "Reason does seem to depend on specific brain systems, some of which happen to process feelings. Thus there may be a connecting trail, in anatomical and functional terms, from reason to feelings to body" (1994; p. 245). While he acknowledges that allowing the emotions to interfere with our reasoning can lead to irrational behaviour, Damasio presents evidence to show that a complete absence of emotion can likewise lead to irrational behaviour. He argues that

... while biological drives and emotion may give rise to irrationality in some circumstances, they are indispensable in others. Biological drives and the automated somatic marker mechanism that relies on them are essential for some rational behaviours, especially in the personal and social domains, although they can be pernicious to rational decision-making in certain circumstances by creating an overriding bias against objective facts or even by interfering with support mechanisms of decision making such as working memory (1994; p. 192).

Despite the success of Artificial Intelligence in some areas, it will be impossible to create machines that have the biological drives and emotion that are essential for some rational behaviours. Besides, "human beings are not always or only intelligent. There is stupidity in the world, artificial and otherwise" (Rychlak; 1991; p. 14). It is true that there are artificial intelligence devices that can make logical decisions, even understand spoken language. But as Jonathan Schaeffer49 puts it:

49 University of Alberta scientist who programmed Chinook, the computer that defeated Marion Tinsley, the world checkers champion.


"what w e have done is create idiot savants that are only good at

one thing. We havenlt created intelligence, j u s t the illusion of

intelligence". How do we create that (illusion of intelligence?

Simon daims that a resolution of an algebra problem is ''simply a

sequence of recognitions". And, to 'create' intelligence,

cornputer systems called 'expert systemsr are based on pattern

recognition and/or discrimination problem solving approach.

Pattern recognition is important in human expertise. But,

everyday real-world thinking is usually done with comon-sense

(which is contextual) understanding in the background. That is

why, 1 would Say, a description of human action into encodable

and discrete elements is doomed to failure.

Decomposing human action into encodable and discrete elements would mean that if, for example, I am playing tennis, I have to be aware of every move I make besides processing the information on the ball, the wind, and my opponent's moves. Consequently, I would have difficulty coordinating my moves. Lewis Thomas observes that

Working a typewriter by touch, like riding a bicycle or strolling on a path, is best done by not giving it a thought. Once you do, your fingers fumble and hit the wrong keys. To do things involving practiced skills, you need to turn loose the systems of muscles and nerves responsible for each manoeuvre, place them on their own, stay out of it. There is no real loss of authority in this, since you get to decide whether to do the thing or not, and you can intervene and embellish the technique any time you like; if you want to ride a bicycle backward, or walk with an eccentric loping gait giving a little skip every fourth step, whistling at the same time, you can do that. But if you concentrate your attention on the details, keeping in touch with each muscle, thrusting yourself into a free fall with each step and catching yourself at the last moment by sticking out the other foot in time to break the fall, you will end up immobilized, vibrating with fatigue (1978; p. 64).

I don't see how, after doing these types of exercises, we can come up with encodable protocols that would help build a robot capable of doing the same things.

Newell and Simon showed that a computer could solve a class of problems with the general heuristic search principle known as means-ends analysis. It uses any available operation that reduces the distance between the description of the current situation and the description of the goal. This heuristic technique was abstracted and incorporated into the computer program GPS. It was based on protocols, or human reports made after having solved a particular problem. But not all heuristics can be translated into algorithms. Heuristics are rules of thumb that may lead to a solution of a particular problem, but they do not guarantee a solution. They are plausible ways of approaching a specific problem. An algorithm or effective procedure, however, is a completely reliable procedure that can be carried out, in order, in a finite number of discrete steps. Sometimes the human mind solves problems in a non-linear way not describable in discrete steps.
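As a minimal sketch of the means-ends idea (not Newell and Simon's actual GPS code; the states, operators and distance measure below are invented for illustration), consider a program that, at each step, measures the difference between the current state and the goal and applies whichever available operator most reduces that difference. Like any heuristic, it can get stuck and offers no guarantee of success.

    # Toy means-ends analysis: states are integers, the "difference" from the
    # goal is numeric distance, and the operators are invented placeholders.
    OPERATORS = {
        "add 1":      lambda s: s + 1,
        "subtract 1": lambda s: s - 1,
        "double":     lambda s: s * 2,
    }

    def means_ends(state, goal, max_steps=50):
        plan = []
        for _ in range(max_steps):
            if state == goal:                          # no difference left: the goal is reached
                return plan
            # choose the operator whose result lies closest to the goal
            name, op = min(OPERATORS.items(),
                           key=lambda item: abs(item[1](state) - goal))
            if abs(op(state) - goal) >= abs(state - goal):
                return None                            # heuristic is stuck: no solution guaranteed
            plan.append(name)
            state = op(state)
        return None

    print(means_ends(3, 17))
    # ['double', 'double', 'add 1', 'add 1', 'add 1', 'add 1', 'add 1']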


Roger Penrose (1989) thinks that "the human mind excels over artificial intelligence because its creativity and complexity rests on the indeterminacy of quantum mechanics phenomena at the most basic level of the brain, whereas artificial intelligence is confined to the boundaries of classical mechanics and is thus barred from the discovery of proofs of elegant mathematical truths such as Gödel's theorem" (Wagman; 1991; p. 16). However, I do not agree with Penrose, because his approach is plainly physicalist. He assumes that the mind is the brain. This assumption restricts creativity within the limits of a brain driven by quantum mechanics, despite the fact that Penrose believes that the creative mind is nonalgorithmic, nonfixed, nondeterministic, and probabilistic. Antonio Damasio says that

it is interesting to observe that some insightful mathematicians and physicists describe their thinking as dominated by images. Often the images are visual, and they can even be somatosensory. ... Benoit Mandelbrot, whose life work is fractal geometry, says he always thinks in images. He relates that the physicist Richard Feynman was not fond of looking at an equation without looking at the illustration that went with it (and note that both equation and illustration were images, in fact) (1995; p. 107).

Haugeland thinks that it is wrong to believe that computers cannot be creative. He argues that if "... there were a 'careful specification' of all relevant processes in our brains (laws of neuropsychology, or something like that), it would be equally easy to say: 'We -or rather our brain parts- always act only as specified.' But obviously, no such fact could show that we are


never creative or free -and the corresponding claim about computers is no more telling" (1985; p. 9). Haugeland's argument corroborates my earlier contention (chapter 2) that the mind has to be equated with the brain before it can be compared with the computer. That the brain is the mind, or the mind is the brain, is at best a premature conclusion. As argued in chapter 2, the existence of laws of neuropsychology that would encompass and describe all mental phenomena (including subjective experience) seems highly improbable.

Minsky also objects "to the idea that, just because we can't explain it now, then no one ever could imagine how creativity works. ... I don't believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. ... why can't 'ordinary, common sense' -when better balanced and more fiercely motivated- make anyone a genius. ... creative people must have unconscious administrative skills that knit the many things they know together. ... Thus, first-rank 'creativity' could be just the consequence of little childhood accidents" (Minsky; 1982; p. 1-2).

The problem is: how could these 'unconscious administrative skills' be translated into intelligible and encodable propositions that a machine could utilize in order to be creative? Furthermore, what is common sense? Is it encodable?

Weizenbaum (1976) observed that a portion of the information the human 'processes' is kinesthetic, that it is 'stored' in his


muscles and joints. Everyday know-how doesn't consist of procedural rules; it is knowing what to do in various situations. And the formulation of a theory of common sense has turned out to be harder than expected. Common-sense knowledge is often brought to bear on the experiences of daily life. And some such knowledge is culturally dependent: for example, speaking with a click like the Xhosas of South Africa. This kind of knowledge cannot be represented in a computer system because it cannot be reduced to a series of encodable rules.

Sometimes we solve problems by intuition. Damasio thinks of intuition as 'the mysterious mechanism by which we arrive at the solution of a problem without reasoning toward it' (1994; p. 188). In other words, intuition is a form of knowledge or cognition independent of experience or reason. The concept of intuition is well illustrated by the mathematical idea of an axiom, which is a self-evident proposition that requires no proof. But Simon thinks that "a simple recognition capacity ..., a little encyclopedia with an appropriate index can account for a great deal of human intelligent action, and can account also for a lot of what we call intuition" (1981; p. 16). He asserts that

When you ask an expert a question and he is able to answer in a moment or two, and you say, 'Well, how did you know that?' the usual reply would be 'Well, I guess it was just my intuition or my experience'. There is no reason to suppose that we have any awareness of the process that leads from recognition to the accessing of the information which that recognition makes available" (Simon; 1981; p. 16).
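Computationally, Simon's picture of intuition as recognition plus an indexed encyclopedia amounts to little more than a lookup table: a recognized cue immediately indexes stored knowledge, with no further reasoning. The cues and entries in the following sketch are invented; it is meant only to show how modest the proposed mechanism is.

    # Intuition as recognition, in Simon's sense: a recognized cue simply
    # indexes previously stored knowledge. Cues and entries are made up.
    encyclopedia = {
        "fork pattern": "attack two pieces at once",
        "pin pattern":  "the pinned piece cannot move without loss",
    }

    def intuit(perceived_cues):
        # whatever is recognized retrieves its stored answer; nothing new is derived
        return [encyclopedia[cue] for cue in perceived_cues if cue in encyclopedia]

    print(intuit(["fork pattern", "unfamiliar position"]))
    # ['attack two pieces at once']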

However, pattern recognition cannot account for creativity. Weizenbaum thinks that "the history of man's creativity is filled with stories of artists and scientists who, after working hard and long on some difficult problem, consciously decide to 'forget' it. ... After some time, often with great suddenness and totally unexpectedly, the solution to their problem announces itself to them in almost complete form" (Weizenbaum; 1976; p. 215).

A computer cannot 'forget' a problem and, later on, unexpectedly come up with a solution. A computing machine cannot, as a human can, 'forget' a problem, because it is not a cognitive system. Only an entity that is capable of knowing, perceiving, desiring, and so on can be considered a cognitive system. But artificial systems such as computers -even sophisticated ones called expert systems- are simply information processors and symbol manipulators; they are not cognitive systems. The ability to manipulate symbols doesn't imply perception, cognition, understanding, and the rest. Cash registers and remote controls are information processors without being cognitive systems. The only way a computer or any other piece of machinery could be said to be a cognitive system is by ascribing to it some kind of intentionality. Since this intentionality is ascribed, it cannot be intrinsic. It would be a "derived" intentionality, or what Dennett calls "as-if" intentionality.


Computers are rational systems using exclusively deductive reasoning. But humans are capable of deductive reasoning, inductive reasoning, intuition, and more. Moreover, humans are not only rational systems; they are also intentional systems, which computers are not.

An expert system proves mathematical theorems, picks investment stocks, gives a diagnosis, or produces weather forecasts because it has been devised to do so. But this doesn't mean that an expert system is a cognitive/intentional system in the way any human being is. Furthermore, there are some demonstrations that a computer system cannot carry out. For example, Gödel's theorem shows that in any sufficiently powerful logical system, statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent.
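For precision, the theorem appealed to here can be stated in its standard modern (Gödel-Rosser) form; the formulation below is supplied for reference only and is not quoted from the sources cited above: if $T$ is a consistent, effectively axiomatizable theory containing elementary arithmetic, then there is a sentence $G_T$ such that $T \nvdash G_T$ and $T \nvdash \lnot G_T$.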

When solving a problem in arithmetic or algebra, a human being and a computer arrive at the answer in roughly the same way. But "the distinction between mind and machine is clearer when it comes to playing chess. Considerable effort has been put into the development of computer programs that play chess... However, they achieve their success not by adopting any clever strategies, but by essentially brute force methods" (Devlin; 1997; p. 146). Chess-playing machines simply evaluate billions of positions before selecting the one that offers the greatest chance of success.
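The 'brute force' approach Devlin describes can be sketched as a minimax search: the program generates the tree of possible continuations to a fixed depth, scores the leaf positions numerically, and backs the scores up the tree. The move generator and evaluation function below are deliberately trivial placeholders, standing in for the enormously larger machinery of a real chess program.

    # Generic minimax over a game tree. 'moves' and 'evaluate' would encode the
    # game; here a toy game is used (a position is an integer, a move adds 1 or 2).
    def minimax(position, depth, maximizing, moves, evaluate):
        options = moves(position)
        if depth == 0 or not options:
            return evaluate(position), None            # leaf: score the position numerically
        best_score, best_move = None, None
        for move, next_pos in options:
            score, _ = minimax(next_pos, depth - 1, not maximizing, moves, evaluate)
            if best_score is None or (score > best_score) == maximizing:
                best_score, best_move = score, move
        return best_score, best_move

    toy_moves = lambda n: [(f"+{d}", n + d) for d in (1, 2)] if n < 10 else []
    toy_eval = lambda n: n                             # purely illustrative scoring

    print(minimax(0, 4, True, toy_moves, toy_eval))    # (6, '+2')

Nothing in such a search "sees" a position as meaningful; it only enumerates and rates alternatives, which is precisely the contrast with human play drawn below.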


Adriaan de Groot, a Dutch chess grand master, finds that even grand masters do not generate a 'tree' of more than fifty or seventy-five positions before choosing a move. He also thinks that the size of the 'tree' is not proportional to the player's strength. Further studies by de Groot show that a grand master will almost never look at more than 100 possibilities before selecting a move. And that

... mediocre players, when they are playing seriously, also look at a maximum of about 100 possibilities before they make a move. The difference is that the grand master looks at the important possibilities and the tyros look at the irrelevant possibilities, and that's really the only way in which you can distinguish the thinking that they are doing when they are selecting a move. The processes are exactly the same (Simon; 1981; p. 15).

Thus good chess playing doesn't simply consist of 'number crunching'. It requires the ability to 'see' meaningful configurations that could lead to a breakthrough. I believe that a good chess player could have a hunch in the first five seconds, and would spend the rest of the time testing whether it would make a good move. The machine looks at far more possibilities, which increases its chance of selecting the best move. In fact, the machine is no better than a mediocre player, since it looks even at irrelevant possibilities. A good chess player seems to have unusual abilities in visual imagery. Simon thinks that "a chess master has a long experience of wasted youth, of looking at chess boards, and in the course of looking at hundreds and thousands of


chess boards, the master has learned to recognize all sorts of familiar friends. The master does not see that board as twenty-five pieces; he sees it as four or five or six clusters, each of which is a familiar friend" (1981; p. 15). These clusters or patterns seem to have meaning. They come with a lot of information on what to do about such patterns.

For example, Deep Blue doesn't think; it evaluates billions of chess positions. The human mind, on the other hand, looks ahead only a few moves. This suggests an analogy:

"When it was discovered that Ben Johnson was aided by steroids, he was banned from world amateur track events and stripped of the Olympic gold medal he had been given. A similar fate would befall any chess player who was caught receiving advice from a computer during a game. What, then, to think of Deep Blue, which in the sprinting analogy would be 100 per cent steroids? The computer does not play chess, it only simulates playing chess. If a human did some of the things which the computer simulates (such as looking up the next move in a book), the human would again be banned" (Berry; 1997).

Moreover, a machine doesn't lose concentration. And since the "tree" of possibilities created by hypothetical moves grows very quickly, Deep Blue has one advantage: it can search through 200 million positions in a second. It "looks" at the chess board for positions that it recognizes and numerically


knowledge50, Deep Blue can choose the move or position with the highest rating. However, "...all computer chess programs to date suffer from a generic flaw: a vacuum when it comes to strategizing. When there is nothing really happening in the game, it flounders" (Powell & Stone; 1997; p. 56). That is, depending on the situation,

Deep Blue (or any other computer chess program) could be

"smarter" than a human or it could be very clumsy. A human player

would try to force the game on his/her own turf, He or she would

create situations in which the computer would not "understand"

his/her positions. Kasparov could not examine Deep Blue's

previous games. Whereas Deep Blue was "trained" with patterns

from Kasparov's games. Deep Blue could even consult chess manuals

during the games* In fact, it was not Deep Blue that won the

match. But it was triumph for human creativity.

Sometimes creativity can involve novel combinations or transformations of familiar ideas. In this case, creativity can be described or modelled in computational terms. A computer program called EMI (Experiments in Musical Intelligence), invented by David Cope of the University of California at Santa Cruz, is capable of scanning pieces by a famous composer, automatically distilling their essence (their most common patterns), and "creating" a piece that could easily be attributed to the composer by a casual

50 It can also consult chess manuals in its database.


listener (Scientific American; 01/98). However, human creativity cannot be reduced to pattern recognition and transformation. Very often music seems to spring from emotion and experience. EMI has no emotion, no memories. It simply sifts through past pieces by a particular composer for characteristic patterns of melodies, harmonies and rhythms and then recombines them into something that could be attributed to that composer. EMI could not have 'created' Mozart's music if Mozart had not existed. EMI did not create Mozart's music; it simply recombines patterns in novel ways.
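At a very small scale, this kind of recombination can be caricatured as building a table of which elements follow which in a composer's existing pieces and then sampling new sequences from that table. The sketch below uses a first-order Markov chain over invented 'note' tokens; it illustrates recombination of past patterns only, and is not Cope's actual system.

    import random
    from collections import defaultdict

    # Record which token followed which in existing "pieces" (invented tokens),
    # then generate a new sequence by sampling only from observed transitions.
    pieces = [["C", "E", "G", "E", "C"], ["C", "G", "E", "G", "C"]]

    transitions = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            transitions[a].append(b)

    def recombine(start, length, rng=random.Random(0)):
        out = [start]
        while len(out) < length and transitions.get(out[-1]):
            out.append(rng.choice(transitions[out[-1]]))   # every step already occurs in the corpus
        return out

    print(recombine("C", 8))
    # a "novel" ordering built entirely from transitions found in the existing pieces

Such a program can produce sequences no one has written before, but, as argued above, every element and every transition it uses must already exist in its corpus.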

But creativity could be described as an ability to recombine past patterns or ideas only if it can be proved that these ideas or patterns had arisen in the creator's culture or some other culture with which they have had contact. One way out would be adhering to the Platonic view of an immortal soul that remembers things from past lives. In either case, it has to be explained how, in the beginning, a particular idea came about. Artists and scientists often do not know how their original ideas come about. They usually mention intuition. The biologist and physicist Leo Szilard argued that "the creative scientist has much in common with the artist and poet. Logical thinking and an analytical ability are necessary attributes to a scientist, but they are far from sufficient for creative work. Those insights in science that have led to a breakthrough were not logically derived from


preexisting knowledge" (in Damasio; 1994; p. 189). But views differ on the nature of creativity. Jonas Salk thinks that creativity rests on a "merging of intuition and reason" (in Damasio; 1994; p. 189). Einstein, referring to his great insights into the laws of physics, said that 'to these elementary laws there leads no logical path, but only intuition, supported by being sympathetically in touch with experience' (in Devlin; 1997; p. 178). Experience is, as in chess playing, the ability to 'see' meaningful configurations.

However, a computing machine does not predicate meaning. It "knows" what is the case but doesn't have an inkling of what is not the case. Things have to match up perfectly or it will not proceed with the calculation. Rychlak thinks that

... through the study of predication we will acquire a deeper understanding of the human being. ... The role of error, of learning what was not taught, or presuming what was not intended, is clearly an aspect of human behaviour. An understanding of predication and opposition permits the social scientist to paint a richer picture of what it means to be a human being, one that connects more directly with socio-cultural outlooks such as we find in law, religion, and art. All such evaluative endeavors cry out for a depiction of the human being as one who predicates rather than simply mediates experience (1991; p. 14).

I believe that the human mind is distinct from computer programs that simply process information without understanding. As Penrose argues: "...if the human brain is a computer ... and is therefore dependent on algorithms, how is it that the human brain of a


mathematician constructs mathematical conjectures and mathematical proofs that involve non-computable numbers, and yet the algorithms of universal Turing machines hold only for computable numbers" (Wagman; 1991; p. 16). However, from the point of view of cognitive psychology, artificial intelligence could be considered simply a useful methodology that helps make theories of cognition explicit, detailed, and precise. The same methodology could help devise a theory of cognitive performance, or an algorithm that can be written in a programming language. After laboratory experimental trials and revisions, the program could be essential to the production of an artifact that can effectively perform a particular "cognitive" task. As Weizenbaum puts it: "...however much intelligence computers may attain, now or in the future, theirs must always be an intelligence alien to genuine human problems and concerns" (1976; p. 213).

'Intelligent' is not an appropriate adjective for a computer. Semantics and the belief factor are nonexistent in machines. Human behaviour implies consciousness. Can a machine be conscious? Consciousness is a necessary condition for a successful emulation of human behaviour. But behaviour alone is not sufficient to prove the presence of intelligence. Even if machines can do what humans can do, they cannot be what humans are. They don't know what it is like to be a human. Machines don't belong to the category of things alive (conscious).


Conceiving of a machine as conscious would be paradoxical. There are human abilities that cannot be simulated in computing machines. For example, humans "are capable of listening with the third ear, of sensing living truth that is truth beyond any standards of provability. It is that kind of understanding, and the kind of intelligence that is derived from it, which I claim is beyond the abilities of computers to simulate" (Weizenbaum; 1976; p. 222). Descartes foresaw artificial intelligence. But he also anticipated its limits. He mused that

For we can well imagine a machine so made that it utters words and even, in a few cases, words pertaining specifically to some actions that affect it physically. For instance, if you touch one in a certain place, it might ask what you want to say, while if you touch it in another, it might cry out that you are hurting it, and so on. However, no such machine could ever arrange its words in various different ways so as to respond to the sense of whatever is said in its presence -as even the dullest people can do (in Haugeland; 1985; p. 35).

What makes a human or, more precisely, a person able 'to respond to the sense of whatever is said in his/her presence'? In order to respond properly to whatever is said, one has to make sense of it. To make sense of whatever is said, one needs an appropriate general background knowledge that would help him/her understand what is said. The King of Siam thought he could not reason with Europeans because they believed that ice existed. The King of Siam's general background knowledge could not accommodate the notion of ice. And general background


knowledge can be acquired through experience and social

interaction.


"The frightening part about heredity and environment is that ... parents provide both."

(Author unknown)

How many psychotherapists does it take to change a lightbulb?

One. But only if, deep down, the lightbulb is willing to change.

(Anonymous )

What is it exactly about this sequence of sentences that makes it a 'joke'? How much intelligence does one need in order to recognize the quotation as a joke, to be amused by it, and to explain why it is amusing? Could a computer 'understand' or identify this sequence of sentences as a joke and be amused by it? A computer could 'get' this joke only if it were programmed to recognize 'the parameters' (if such a thing exists) of sequences of sentences that qualify as jokes. But I have difficulty believing that a computer would be amused by a joke. And a human being


would have to have a certain cultural background in order to

recognize the quotation as a joke, to be amused by it, and to

explain why it is amusing.

As argued in the last chapter, humans are more than information

processors. More importantly, they are social beings. In

different places, humans have different ways of doing things,

solving problems, and perceiving the world. Again, as said in

chapter 2, despite their neurophysiological underpinning, our

perceptions, our ways of doing things and/or solving problems, in

short, our mental states are nevertheless subjective.

Subjectivity is, somehow, shaped by culture and the environment

in which the subject lives.

In this chapter, with artificial intelligence as a backdrop, the discussion will be on the role of 'nature' (the human biological organism) and culture (sets of shared values) or social interaction in the creation/formation of the human mind, or cognitive development; for having a human body (a biological organism) and social interaction seem to be essential to the formation of the mind or the development of human intelligence.

I start with the assumption that nature combined with culture creates the human mind. Aristotle once said that man is a social animal. In Aristotle's definition, the two aspects of a human


being or a person are highlighted: the animal or biological, and the social, the moral and the political. These are not two separate or independent entities; they interact.

"The biological environment is the necessary world in and through

which a biological organization lives and with which it

interacts. If -- as is the case with humans -- social and cultural influences are part of the common environment, then

knowing in humans can never develop humanly without the social

and cultural environment" (Furth;1973;p.16). The story of the

Wild Child of Aveyron illustrates the importance of social

interaction. In 1799, a twelve or thirteen-year old boy who had

been wandering for an unknown time was 'captured' in the forests

of Aveyron in southern France. Like other children who have grown

up without human contact, he behaved in 'strange' ways and could

not speak. How could he speak or behave 'properly' without having

contact with other people? He did not subscribe to the shared

values of the society at large. The psychiarist Phillipe Pinet

concluded that Victor (the Wild Boy) was crazy. However Jean-Marc

Itard, a young doctor to whom Victor had been turned over,

concluded that the Victor's 'strange' behaviour and inability to

speak could be attributed to the lack of social interaction.

In human beings, supposedly, it is the mind that thinks, reasons, feels, judges, and so on. I guess none of these operations is possible


if there is no external world. Even Descartes needed the external world in order to launch the process that led him to his famous 'Cogito ergo sum'. To illustrate the importance of the external world, let us take an example of an activity that computers are not capable of and that, apparently, neither requires direct outside input nor involves the will. This activity, which could be said to be (purely) neurophysiological, is dreaming. Dreaming is a mode of consciousness. It is a state in which, I would say, the mind is at the mercy of the neurophysiology51. In principle, dreaming can be induced neurophysiologically. In other words, it is possible to get someone to dream by altering their neurophysiological configuration (i.e., by taking some hallucinatory drug). So, it is possible to infer that a mode of consciousness can be "represented" neurophysiologically. I am talking about neurophysiological states, but not about their "content". As I said earlier, dreaming could be induced. But I don't think that it is possible to neurophysiologically get someone to have a particular dream. Bringing the "content" or "object" of dreams into the picture raises the question of meaning. It seems highly improbable to come up with a neurophysiological account of

51 However, I don't know if, neurophysiologically speaking, there is a difference between dreaming and being fully awake. I acknowledge that someone could be awake and be dreaming. But it cannot be said that the subject dreaming is fully awake unless the word "dreaming" is used metaphorically.


meaning. If it were possible to account for meaning in physical (i.e., neurophysiological) terms, it would be possible to get someone to dream a particular dream. Thus, it would be possible, at least in principle, to 'create' some artificial intelligence device that could have a particular meaningful dream. Then machines would no longer be just symbol manipulators, because their operation would be capable of yielding meaning. And that would mean that they have consciousness. In short, they would no longer be machines. But we are not there yet!

Searle (1984) and many other thinkers have argued that one of the basic differences between humans and computers is that human mental activity yields meaning, whereas computers are simply symbol manipulators. But where does meaning come from? Stephen Toulmin thinks that in Lev Vygotsky's view the problem of meaning "cannot be convincingly dealt with by focusing either on our genetic inheritance and innate capacities alone or on the influence of external, environmental factors alone" (Toulmin; 1978; p. 3). Even if the roots of intelligence are biological52, those of meaning can only be socio-cultural. Both the biological and the socio-cultural are needed to create the human mind. Though it is highly improbable, a biological organism can, in principle, be artificially engineered. However, the socio-cultural aspect can

52 As both Piaget and Vygotsky think.


only be negotiated with the outside world. Computers (and other electromechanical devices that emulate the human mind) can manipulate symbols as humans do. To a certain extent (for example, in the case of problem solving), it is possible to say that computers and other electromechanical devices can "think". However, no meaning comes out of computers' and other electromechanical devices' "thinking". It is the human mind, with its cultural background, that attributes meaning to the results of the manipulation of symbols by computers and other "intelligent" devices.

In a way, "...to disregard the social or cultural context of our mental lives is to misrepresent the very nature of the mind itself, for the mind is an essentially social phenomenon" (Bakhurst; 1993; p. 3). Vygotsky's views suggest that the human mind is a social phenomenon. Piaget seems to share the same views. In Wertsch's view, both Piaget and Vygotsky put the socialized individual at the end of cognitive development.

However, my first impression is that Piaget has a tendency to

underestimate the role of social communication. In his work, the

role of social factors in cognitive development is very often

implicit. But, Furth argues that "Piaget doesn't study man in a

biological vacuum. Man is a living organization which in spite

of, or rather because of, his inherent structure and self-

regulation is in no way self-sufficient. The environment is not


an added luxury or some item dispensable to an essentially autonomous structure53" (Furth; 1973; p. 16).

Both Piaget and Vygotsky agree that intelligence has biological

roots. This cannot be disputed unless one believes in spiritual

beings. Piaget sees intelligence more as an extension or an

outgrowth of biological organization. This approach has an

epiphenomenalist flavour that undermines the interactionist thesis. Unlike Piaget, and despite the fact that he acknowledges that intelligence has biological roots, Vygotsky searches for the beginnings of cognitive development in social life. He thinks

that cognitive development is inseparable from socio-cultural

activities.

Jerome Bruner also argues that in order "to understand man you

must understand how his experience and his acts are shaped by his

intentional states... The form of these intentional states is

realized only through participation in the symbolic systems of

the culture. Indeed, the very shape of our lives -the rough and

perpetually changing draft of our autobiography that we carry in

our minds- is understandable to ourselves and to others only by

virtue of those cultural systems of interpretation" (Bruner, in Gardner; 1995; p. 37). Bruner's view suggests that our mind is not a

53 My italics.


finished product. It is a "rough and perpetually changing" draft. And changes are induced by social interaction. For example, the simple fact of being left- or right-handed (this could be cultural!)54 can affect the way a subject perceives music (see chapter 2).

As I said earlier, Piaget and Vygotsky seem to share similar views on the nature and formation of the human mind. They

have considered the development of individual cognitive processes within the larger context of overall human biological and social evolution. ... Piaget's theory of cognitive development has been described as 'a progressive structurization whereby actions and intellectual operations become organized into coherent systems'..., which applies to Vygotsky's outline as well (Martin and Stewin; 1974; p. 348-9).

However, there are some differences. Piaget -- who has always called himself a genetic epistemologist --55 thinks that intelligence is essentially a biological phenomenon. In his model

of cognitive development he puts emphasis on the relationship

between the individual and the environment. He "argued that the

54 In some cultures (African, Middle-Eastern), for example, children are strongly encouraged if not forced to use only the right hand while eating or giving something, because the left hand is reserved for personal hygiene.

55 The juxtaposition of the two words "genetic" and "epistemologist" makes me wonder if meaning could be found in the genes. That would be surprising. I think Piaget used the expression "genetic epistemologist" metaphorically.


development of intelligence is the highest form of adaptation of an individual to his or her environment. Adaptation involves an interaction between the individual's knowledge and the external environment, and two basic processes can be identified in this interaction: assimilation and accommodation" (Eysenck; 1984; p. 232).

Social interaction is important. It plays a major role in the development of the mind. Piaget thinks that it is social interaction that gives rise to the successive logical structures that regulate thinking processes. Since small children don't seem to use logical rules, I can speculate that logical rules are developed through action (in the world). But how do these logical rules get internalized? This is a difficult question to which there seems to be no satisfactory answer.

Kant contends that the mind is structured to apply causal relations between events. Although his list of "a priori concepts" is different from Kant's, Piaget seems to follow the same line. He thinks that concepts such as causality, time-space, number, morality and other Kantian categories develop slowly. However, Piaget does this "without questioning the particular causal connections a specific culture has produced nor, what is methodologically even more problematic, the socio-cultural-historical notion of causality itself" (Holzman and Newman; 1993; p. 45)56. Contrary to Piaget, Vygotsky thinks that concept formation is a social-cultural-historical activity which "contains the key to the whole history of the child's mental development" (Vygotsky; 1987; p. 167). In both cases, sociocultural factors play major roles in the process of understanding the

world, and also in the process of developing problem-solving capabilities. For example, a particular sociocultural ferment and some particular needs (in Western societies) led to the advent of computers. Thus computers could be considered an extension of deductive-reasoning approaches to problem solving.

One of the major socio-cultural factors is speech. Vygotsky

asserts that speech plays an essential role in the organization

of higher psychological functions. For a child, "speech and

action are part of one and the same complex psychological

function, directed toward the solution of the problem at hand"

(Vygotsky;1978;p.24). He thinks that "the most significant moment

in the course of intellectual development, which gives birth to

the purely human forms of practical and abstract intelligence,

56 Holzman and Newman think that Piaget's work on the origins and development of intelligence was inspired by Kant's a priori synthetic categories. They think that "what the child constructs is a perception and understanding of laws of motion, speed, temporality and causality that are taken by Piaget to be how the world is, independent of our construction of it" (1993; p. 202).


occurs when speech and practical activity, two previously completely independent lines of development, converge" (Vygotsky; 1978; p. 24). Vygotsky, more than Piaget, is trying to reconcile the natural or biological and the cultural. His view on the

creation/formation of the human mind is inherited from Marx and Engels. Vygotsky's political background plays a major role in the shaping of his theory. It is behind his emphasis on the primacy of labour and tool use. As Wertsch reports, Marx and Engels "argued that we become human by engaging in the process of labour" (Wertsch; 1985; p. 77). I would sympathize with the view that labour interacts with human nature. However, I don't think that engaging in the labour process is a necessary condition for becoming a human being or person. Vygotsky doesn't seem to espouse this view without questioning it. And he goes a step further by emphasizing the role of the use of tools in the process of labour. Although he agrees with Engels' notion that "the tool signifies specifically human activity" (ibid), Vygotsky acknowledges the fact that non-humans also use tools. For example, some monkeys use stones (as tools) to open nuts. However, the use of "psychological tools", or "signs", or speech could signify specifically human activity57. Vygotsky stresses that "it is decisively important that speech not only facilitates

57 By saying this, I assume that, to the best of our knowledge, there are no other beings besides humans that use signs. But this claim could be proven wrong.


the child's effective manipulation of objects but also controls the child's own behaviour. Thus, with the help of speech, children, unlike apes, acquire the capacity to be both the subjects and objects of their own behaviour" (Vygotsky; 1978; p. 134).

By arguing that speech as well as intelligence could only be

developmental, Piaget is rejecting rationalism and innatism. But,

his theory has an innatist and rationalist flavour. Piaget is

opposed to predeveloped structures. He thinks that structures

have to be acquired first and then, they can be developed. Piaget

"stresses that the source of structure is the subject himself

acting on the external content and he likens this to Kant's view

that the source of structure is the mind itself" (Atkinson; 1983; p. 11). The structure and external input are required so that the mind can develop. Saying that they are necessary conditions for the development of the mind would not be misleading, I would say.

However, if the source of the mind's structure is the mind itself, then there is a risk of circularity here. This could be avoided if Piaget thinks that the human mind has built-in mechanisms for developing logical rules. He thinks that built-in mechanisms or "the structures may be inborn, or be in the process of forming, or they may have been already formed through the progressive organization of actions" (Piaget; 1962; p. 2). If these structures are like grammar rules (to use Chomskyan jargon) or codes, then we might be led to believe that humans are born with some kind of


innate knowledge. Even so, those built-in mechanisms would have some kind of "knowledge". It is possible to say that these rules or codes aren't really innate, since the child is not aware of them before he or she reaches the stage of concrete operations. But one question still remains: how does this change occur, even if this change is developmental? Even by being immersed in a proper environment, how does the child become capable of following or using, let us say, the rules of inference? If there is no circularity, then Piaget would be flirting with innatism (which he rejects).

In Piaget's theory, the period of sensory-motor intelligence and the period of concrete operations (arithmetic, logic, etc.) are worlds apart. As said earlier, Piaget's model fails to explain how a child jumps from the period of sensory-motor intelligence to that of concrete operations. I guess that there must be some intermediate stage(s) during which the child develops tools that help her or him overcome the initial isolation. Piaget seems to rely on logical formalism in order to explain the rules of transformation from one stage of cognitive development to another. However, Gardner thinks that the logical formalism underlying those stages is invalid58.

58 Gardner goes even further. He claims that "the stages themselves are under attack, and (Piaget's) description of the biological processes of stage transformation


Vygotsky thinks that mental or cognitive structures are made of relations between mental functions. In his view, all mental functions have external or socio-cultural origins. For example, language is socially based even in its most primitive form. Children have to use language and communicate with others before they shift the focus to their own mental processes. And this will make possible the transition from external to internal speech. It is the internal speech that will eventually evolve into thought. By saying that all mental functions have external or socio-cultural origins, Vygotsky doesn't underestimate the biological factor. He wants to show that the socio-cultural factor59 and the biological factor are equally important.

Thus, in Vygotsky's model, the child develops from a creature at the mercy of his immediate perceptions to an individual capable of controlling and ordering his perceptions through the application of mature thought processes to sensory data. So, also, does Piaget describe the evolution of the cognitive processes from the sensory, through the concrete, to the abstract levels of functioning (Martin and Stewin, 1974, p. 353) .

Even though Piaget rejects the idea of predeveloped structures

and somewhat underestimates the role of social interaction, his

have eluded even sympathetic scholars (see Brainerd 1978)" (Gardner; 1985; p. 118). However, Gardner acknowledges that "even disproofs of (Piaget's) claims are a tribute to his general influence" (ibid).

59 The use of the singular is not intended to suggest that there is only one socio-cultural factor.


assimilation/accommodation theory implies and needs both factors.

Assimilation occurs when one incorporates new information into

existing knowledge and/or structure. It is possible to

extrapolate that there must be some kind of (predeveloped)

structure so that a neonate can, for the first time, assimilate

new information. Both assimilation and accommodation processes

require external input.

Very often in Piaget's works, there is mention of a stimulus being "assimilated" by the structure. However, it is not clear how the assimilation is going to take place. Besides, the nature of the structures is unclear. It is difficult to tell if they are biological or mental. Piaget "identifies structuring with knowing". As Furth says, "such a view simply proposes that an organism cannot respond to a stimulus unless the stimulus is at least in some rudimentary way meaningful or known to the organism". I think, in this case, the biological organism's response is considered as (a piece of) knowledge60. Despite its appeal, this comment raises a problem similar to the one encountered in the preceding paragraph: it seems to suggest that the organism has some kind of innate knowledge that is not

60 However, not all of a biological organism's responses can be considered knowledge. For example, sneezing is a biological organism's response to a stimulus, but I don't think that the body has knowledge of the stimulus.


necessarily conscious or reflective. In this case, meaning could be accounted for in biological terms. I don't think that would make any sense. Piaget's constructivist theory suggests that meaning is "negotiated" between the organism and its environment. This excludes the possibility of having innate ideas. It also excludes the possibility of replicating and/or simulating intelligence in a computer or any other artifact.

As Bakhurst sees it, "... meaning is the medium of the mental, and meaning is (in some sense) socially constructed; ... the human mind, and the forms of talk in which human beings explain and predict the operations of minds, should be understood on the model of tools, and like all artifacts, we cannot make sense of them independently of the social processes which make them what they are" (1995; p. 15). As I said earlier, the roots of intelligence might be biological; those of meaning can only be socio-cultural. Intelligence means the faculty of understanding. In other words, it is a "mechanism" that makes understanding possible. One understands something when he or she understands its meaning. So, without the socio-cultural factor or input, there cannot be meaning. And the faculty or mechanism of understanding would be of no use. The interaction of both factors is the key to cognitive development.


Whether the process of cognitive development runs from the egocentric to the socialised or from the socialised to the individualised, one thing seems to be clear: both the biological factor and the socio-cultural factors are necessary for an individual. The two factors cooperate in order to produce intelligence. Without the biological, there cannot be a human being or a person. And it is impossible to be a person if the socio-cultural factor is non-existent. Damasio observes that "the comprehensive understanding of the human mind requires an organismic perspective; ... [the mind] must also be related to a whole organism possessed of integrated body proper and brain and fully interactive with a physical and social environment" (1994; p. 252).

Acknowledging the major role of the socio-cultural factor is a step forward. As Bruner sees it: "cultural psychology aspires to render perspicuous the structure of social life as it pertains to the emergence and flourishing of mind. If we can learn how cultures make mind, perhaps we can make cultures which make better, or at least more fulfilled, minds" (Bruner; 1990; p.31). Despite all the effort by Doctor Itard, Victor (the Wild Child) could develop only a very limited linguistic and behavioural repertoire. This could show that total cultural deprivation early in someone's life can have irreversible effects that cannot be modified by 'reprogramming' the mind. A computer's 'behaviour' or performance can be radically changed by reprogramming. However, human beings are not computing machines. Every human being is unique. We can duplicate all the parts of Deep Blue or any other computer and have two identical machines, and these two machines would perform in exactly the same way. However, humans are not identical, and they don't think alike. Even monozygotic or identical twins are two distinct persons who don't think alike. A brain is not a computer. Nature does not deliver a 'plug and play' ready-to-use mind that just needs to be turned on. Doctor Carla Shatz of the University of California at Berkeley thinks that the brain lays out "circuits that are its best guess about what is required for vision, for language, for whatever. And now it is up to neural activity -no longer spontaneous, but driven by a flood of sensory experiences- to take this rough blueprint and progressively refine it" (in Time; 03/02/97; p.50).

If experiences refine the brain/mind, then a human being "is defined, in large part, by the problems it faces. Man faces problems no machine could possibly be made to face. Man is not a machine. ... although (he) most certainly processes information, he does not necessarily process it in the way computers do. Computers and men are not species of the same genus" (Weizenbaum; 1976; p.203).


The idea of artificial intelligence presupposes that machines could be made to do things that usually require human intelligence. Can machines do things that humans do? Artificial Intelligence scientists think that it is possible to simulate human intelligence in machines. They also extrapolate that machines can have minds.

As said in chapters 1 and 3, Descartes and Hobbes 'prophetically launched artificial intelligence'. They both conceived thinking as an essentially conscious process which can be described in discrete steps. Hobbes also contended that thinking was nothing but computation. But their views diverged on the nature of the mind. Descartes contended that the mind was an immaterial substance, whereas Hobbes claimed that the mind was something corporeal or material. How could Artificial Intelligence scientists make machines that have minds if the nature of the mind is still unknown? The question of the nature of the mind has remained unresolved for centuries, and the debate still rages on.

As discussed in chapter 3, the project of artificial intelligence is mainly based on Newell and Simon's physical symbol system hypothesis. Newell and Simon contend that the necessary condition for something to be intelligent or to have a mind is that it be a physical symbol system. The capacity to manipulate physical units (symbols) by reference to syntactic rules is what it takes to have a mind. In short, an intelligent entity must be physical and capable of manipulating physical symbols. This excludes the possibility of a nonphysical mind: how could a nonphysical mind manipulate physical symbols? Thus, from a pro-artificial intelligence point of view, the mind has to be capable of manipulating physical symbols, and to do so it must have physical properties. That is why, in chapter 2, I examined the idea of the brain as the mind. The conclusion was that the mind cannot be reduced to the brain. However, this should not hide the fact that a well-functioning brain is the material seat of mental activity, properties, etc. And this cannot help us answer the question of whether machines could do what humans do.
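To make the notion of "manipulating physical symbols by reference to syntactic rules" concrete, here is a minimal illustrative sketch in Python. The tokens and rewrite rules are invented for this example and are not taken from Newell and Simon's own programs; the point is only to show a program that matches token patterns and replaces them according to formal rules, without ever consulting what, if anything, the tokens mean.

    # Illustrative only: a "physical symbol system" reduced to bare pattern
    # matching and rewriting. The tokens and rules are invented examples.
    REWRITE_RULES = {
        ("ON", "A", "B"): [("CLEAR", "B"), ("HOLDING", "A")],
        ("HOLDING", "A"): [("ON", "A", "TABLE")],
    }

    def step(facts):
        """Apply every matching rule once, purely by matching token patterns."""
        result = []
        for fact in facts:
            result.extend(REWRITE_RULES.get(fact, [fact]))
        return result

    state = [("ON", "A", "B"), ("ON", "B", "TABLE")]
    print(step(state))
    # [('CLEAR', 'B'), ('HOLDING', 'A'), ('ON', 'B', 'TABLE')]

The sketch makes plain what the hypothesis asserts and what this thesis questions: replacing "ON" with any other token would change nothing about how the program runs, which is precisely the sense in which the manipulation is syntactic rather than meaningful.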

The only option left was to examine the criterion artificial intelligence scientists use to attribute intelligence to artifacts. Minsky, Simon, and others believe that machines become intelligent by carrying out a task that would require intelligence if performed by humans. Humans use intelligence while doing arithmetic. Pocket calculators also do arithmetic. If we agree with Minsky and Simon, the conclusion would be that pocket calculators are intelligent. This conclusion is nonsensical. A pocket calculator is simply a tool. It is not aware of the problem-solving process in which it is engaged. Besides, it does not know what it is doing. A pocket calculator or any other artifact simply manipulates symbols. Such artifacts are not cognitive systems. A cognitive system must be, at least, capable of knowing, desiring, believing, and perceiving. Knowing, desiring, believing, and perceiving require consciousness and meaning.

Artificial Intelligence scientists don't have an exhaustive list of everything that requires human intelligence. And they seem to reduce intelligence to deductive reasoning. There are things that humans do that machines cannot do (for example, enjoying poetry). Even if a human being and an artificial intelligence device, while solving a problem, arrive at the same result, this does not prove that humans and machines solve problems in the same way. Computers rely squarely on deductive reasoning. But, as stated in chapter 3, humans come to know, as Locke says, "by intuition, by reason, examining the agreement or disagreement of two ideas, by sensation, perceiving the existence of particular things" (Russell; 1961; p.591). Humans also solve problems by creativity.


However mysterious, creativity is nevertheless meaningful and/or teleological. "Creativity is never simply a question of building ideas 'from scratch'. There is always an ongoing, evaluative process of aligning meanings within proper fruitful contexts (predicates) and then extending these lines of patterned organization to some desired end. The creative person must recognize various points along the way, as when a line of thought is becoming inconsistent or missing the targeted goal" (Rychlak; 1991; p.169). Kasparov knew that he was playing chess against a machine, and he wanted to win the match. But Deep Blue was simply manipulating symbols. That is why I believe that scientists should not expect to uncover the mystery surrounding the nature of the mind by simulating what they call intelligence in machines. Simon (1981) acknowledges that there are large areas of human thought processes that have not yet been explored, and that we are still agnostic about where the boundaries are.

The mind cannot be understood in terms of rules or programs. Of the computational theory of mind, Eisenberg asserts that "what it comes down to ... is that they conceive 'mind' as either synonymous with 'brain' or at least as essentially related to brain in an empirically determinable way. Then they conceive of the brain as an information-processing machine. As a result, the mind is seen either as a machine in operation or as the patterns, procedures, software, or some other observable or scientifically determinable phenomenon" (1992; p.15). Instead of talking about a computer model we should talk about a computer metaphor. A metaphor seems to be a device that allows transactions between different contexts: with a metaphor, insights "from one context could be transferred into another context". But a metaphor cannot be turned into a scientific deduction. For example, Einstein's Relativity Theory shows that every description of a physical event must be relative to some specific space/time coordinates. But he did not intend to claim that "in life everything is relative".

I would side with Weizenbaum who, paradoxically, dismisses the very concept of mind. He affirms that "there is ... no such thing as mind; there are individual minds, each belonging, not to 'man', but to individual human beings" (1976; p.223). A human being or a person is more than just his or her biological organism or body. If a person were no more than his/her biological organism, then monozygotic twins would simply be two identical copies of one single person. Monozygotic twins could have identical biological material, but each one of them has his/her own mind. However, the mind cannot exist or operate without the body. Antonio Damasio thinks that the mind is not distinct from the body. He contends that the mind is (built) from the body and with the body. And failure to see this is what he calls Descartes' error.


Sergio Moravia thinks that "Man can no longer be interpreted as homo duplex (despite the efforts by neo- and crypto-dualists): the 'mind', of course, does not exist as an entity; and the 'body' is an extremely generic concept, itself derived from an out-dated brand of metaphysics (if anything, one should speak of the brain and the central nervous system)" (1995; p.3). I am not trying to suggest that the problem of the nature of the mind is a pseudo-problem. However, I think that it is a product of a specific social and cultural elaboration. I would consider the mind as being primarily a heuristic concept. Thus, it does not matter whether the mind is material or immaterial, or whether it exists or not. I would say that the term 'mind' is a symbol that refers to some 'entity' that we could simply call a human being, considered individually and existentially as a person. In chapter 4, I struggled to show that the comprehensive understanding of a human mind, a human being or a person "requires an organismic perspective; that not only must the mind move from a nonphysical cogitum to the realm of biological tissue, but it must also be related to a whole organism possessed of integrated body proper and brain and fully interactive with a physical and social environment" (Damasio; 1994; p.252).

Locke thinks that our idea of a person is that of 'a thinking intelligent being, that has reason and reflection, and considers itself as itself, the same thinking thing in different times and places'. Being a person could also be defined in many other ways: social, moral, legal, and spiritual. Humans have a capability for culture in the sense of conscious thinking and planning, transmission of skills and systems of social relationships, and creative modification of the environment. However, machines are not capable of conscious thinking. Consciousness is one of the basic factors on which the difference between human behaviour and computer performance hinges.

Intelligence is generally viewed as the capacity to understand, to learn, and to solve problems. It is the ability to deal with concrete situations and to profit intellectually from sensory experience. And intelligence cannot be reduced to deductive reasoning. Moreover, Weizenbaum asserts that "intelligence is a meaningless concept in and of itself. It requires a frame of reference, a specification of a domain of thought and action, in order to make it meaningful" (1976; p.204).

"Computers trace our conceptual steps after we have come to a premise and affirmed it demonstratively in the act of predication. They fail, however, to capture the oppositional meanings involved in cognition preliminary to such conceptualization" (Rychlak; 1991; p.161). And predication (of meaning) is the main concept that stresses the difference between human intelligence and artificial intelligence. A computer needs exclusively demonstrative premises in order to process information accurately and arrive at a useful conclusion. Here is a linguistic trap for computers by Christopher Longuet-Higgins:

Premise 1: Men are numerous.
Premise 2: Socrates is a man.
Conclusion: Socrates is numerous.

I don't think that the dullest of 'normal' beings would arrive at such a conclusion. The formal reasoning is flawless, but the result is illogical. A computer would need an extra premise to 'make sense' of this syllogism. Actually, the computer itself would not 'make sense' of anything; it would merely produce a conclusion acceptable to the user, since a computer cannot grasp the meaning. Even an expert system needs 'tuning': it has to meet the conditions of satisfaction set by the human expert, and these conditions change with the growth of the expert's own knowledge of the subject. Deep Blue scientist Murray Campbell conceded that the machine would never be as flexible as a human being.
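The following toy sketch in Python illustrates the trap. The predicate sets and the "collective" flag are invented stand-ins for the extra premise mentioned above; nothing here comes from Longuet-Higgins himself. A purely formal rule of the shape "Xs are P; s is an X; therefore s is P" cannot by itself distinguish predicates that hold of each member of a class ('mortal') from predicates that hold only of the class as a whole ('numerous').

    # Illustrative only: naive syllogistic substitution, plus the extra premise
    # ("numerous" is a collective predicate) that a human never needs spelled out.
    FACTS = {"man": {"mortal", "numerous"}}   # what is predicated of the class "man"
    COLLECTIVE = {"numerous"}                 # predicates true of the class, not of its members

    def conclude(individual, kind, extra_premise=False):
        """s is a <kind>; <kind>s are P; therefore s is P."""
        predicates = FACTS.get(kind, set())
        if extra_premise:
            predicates = predicates - COLLECTIVE  # the 'tuning' a human expert supplies
        return sorted(f"{individual} is {p}" for p in predicates)

    print(conclude("Socrates", "man"))
    # ['Socrates is mortal', 'Socrates is numerous']
    print(conclude("Socrates", "man", extra_premise=True))
    # ['Socrates is mortal']

Only when the extra premise is supplied does the output become acceptable to a human reader; the program itself still attaches no meaning to 'numerous'.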

As Weizenbaum sums up:

Our own daily lives abundantly demonstrate that intelligence manifests itself only relative to specific social and cultural contexts. The most unschooled mother who cannot compose a single grammatically correct paragraph in her native language -as, indeed, many academics cannot do in theirs- constantly makes highly refined intelligent judgments about her family. Eminent scholars confess that they don't have the kind of intelligence required to do high-school algebra. The acknowledged genius is sometimes stupid in managing his private life. Computers perform prodigious 'intellectual feats', such as beating champion checker players at their own game and solving huge systems of equations, but cannot change a baby's diaper. How are these intelligences to be compared to one another? They cannot be compared (1976; p.205).


Bibliography

Armstrong, David. The Nature of Mind; in The Brain/Mind Identity Theory, ed. V.C. Borst; St Martin's; 1983
Atkinson, Christine. Making Sense of Piaget; Routledge and Kegan Paul; 1983
Backhurst, David. On the Social Constitution of Mind: Bruner, Ilyenkov, and the Defense of Cultural Psychology; 1995
Berry, J. Deep Blue's not really a player; The Globe and Mail; 17/05/1997
Block, Ned. Entry on 'Consciousness'; in S. Guttenplan (ed.), A Companion to the Philosophy of Mind; 1994
Boden, Margaret. The Philosophy of Artificial Intelligence; Oxford University Press; 1990
Casti, J. Complexification; HarperPerennial, New York; 1995
Chalmers, David. The Puzzle of Consciousness; in Scientific American; December 1995
Cottingham, J. et al. The Philosophical Writings of Descartes; Cambridge University Press; 1993
Damasio, Antonio. Descartes' Error; Avon Books; 1994
Dennett, Daniel. Computer Models and the Mind: a view from the East Pole; Psychology and Artificial Intelligence; p.1453-1454; Dec. 1984
Dennett, Daniel. Consciousness Explained; Little, Brown and Company; 1991
Devlin, Keith. Goodbye Descartes; John Wiley & Sons, Inc.; 1997
Diderot, Denis. In L'encyclopédie ou dictionnaire raisonné des arts et des métiers; GF Flammarion; 1986


Dreyfus, Hubert. What Computers Can't Do; Harper & Row; 1972
Dunlop, C. & Fetzer, J. Glossary of Cognitive Science; Paragon House; 1993
Eisenberg, John. The Limits of Reason; OISE Press; 1992
Eysenck, Michael. A Handbook of Cognitive Psychology; Lawrence Erlbaum Associates; 1984
Flanagan, Owen. The Science of the Mind; MIT Press; 1991
Fischbach, G.D. Mind and Brain; in Scientific American; September 1992
Fodor, Jerry. A Theory of Content; MIT Press; 1990
Furth, Hans G. Piaget and Knowledge: Theoretical Foundations; 1973
Gardner, Howard. Green Ideas Sleeping Furiously; The New York Review; March 23, 1995; p.32-38
Gardner, Howard. The Mind's New Science; Basic Books; 1985
Harth, Erich. The Creative Loop; Penguin Books; 1993
Haugeland, John. Artificial Intelligence; MIT Press, Cambridge; 1985
Hobbes, Thomas. Leviathan; Collier Books; 1962
Humphrey, Nicholas. A History of the Mind; Chatto & Windus, London; 1992
Ikonicoff, R. Avez-vous la bosse des maths?; in Science & Vie, numéro 936; Septembre 1995
Jackendoff, Ray. Consciousness and the Computational Mind; MIT Press; 1987
Lewis, D.K. An Argument for the Identity Theory; in Journal of Philosophy; 1966
Lucretius. On the Nature of the Universe; Penguin Books; 1983
Lyons, William. Modern Philosophy of Mind; Everyman, London; 1995

Moravia, S. The Enigma of the Mind; Cambridge University Press; 1995
Nagel, Thomas. What is it like to be a bat?; Philosophical Review, Vol. 83; 1974
Newman, F. & Holzman, L. Lev Vygotsky, Revolutionary Scientist; Routledge; 1993
Pagels, Heinz. The Dreams of Reason; Bantam Books, New York; 1988
Penrose, Roger. The Emperor's New Mind; Oxford University Press; 1989
Piaget, Jean. The Construction of Reality in the Child; Basic Books, New York; 1954
Place, U.T. Is Consciousness a Brain Process? (1956); in The Philosophy of Mind, ed. V.C. Chappell; Prentice-Hall; 1962
Powell, B. & Stone, B. Man vs. Machine; Newsweek; May 5, 1997
Restak, R.M. The Brain; Bantam Books; 1984
Restak, R.M. The Modular Brain; A Touchstone Book; 1995
Russell, Bertrand. History of Western Philosophy; Routledge, London; 1993
Rychlak, Joseph F. Artificial Intelligence and Human Reason; Columbia University Press; 1991
Ryle, Gilbert. The Concept of Mind; The University of Chicago Press, Chicago; 1984
Searle, J. Intentionality: an essay in the philosophy of mind; Cambridge University Press; 1984
Searle, J. Is the Brain's Mind a Computer Program?; Scientific American; January 1990
Shatz, C. In Nash, M.; Time; 03/02/1997
Simon, Herbert. Is Thinking Uniquely Human?; University of Chicago Magazine; 1981

Stewin, L. & Martin, J. The Developmental Stages of L. Vygotsky and J. Piaget: A Comparison; The Alberta Journal of Educational Research; 1974
Temple, Christine. The Brain; Penguin Books; 1993
Thomas, Lewis. The Lives of a Cell; Penguin Books; 1978
Toulmin, Stephen. The Mozart of Psychology; The New York Review of Books; 28/09/1978
Voltaire. In L'encyclopédie ou dictionnaire raisonné des arts et des métiers; GF Flammarion; 1986
Von Neumann, John. The General and Logical Theory of Automata; in Pylyshyn, Zenon, Perspectives on the Computer Revolution; Prentice-Hall; 1970
Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press; 1978
Vygotsky, L.S. Thought and Language; MIT Press; 1986
Wagman, M. Cognitive Science and Concepts of Mind; Praeger, New York; 1991
Weber, B. Kasparov faces Deep Blue once more; The Globe and Mail; 02/05/1997
Weizenbaum, Joseph. Computer Power and Human Reason; W. H. Freeman and Company; 1976
Wertsch, James V. Vygotsky and the Social Formation of Mind; Harvard University Press; 1984
