
Reality construction through info-computation
Gordana Dodig-Crnkovic, Mälardalen University, Sweden. gordana.dodig-crnkovic@mdh.se

Abstract. The talk will address some intriguing and still widely debated questions, such as: What is reality for an agent? How does the reality of a bacterium differ from the reality of a human brain? Do we need representation in order to understand reality? Starting from the presentation of the computing nature as an info-computational framework, where information is defined as structure and computation as information processing, I will address questions of the evolution of increasingly complex living agents through interactions with the environment. In this context, the concept of computation will be discussed, and the sense in which computation is observer-relative. I will use results on information integration and representation to argue that reality for an agent is a result of self-organization of information through network-based computation.

"Life is a zillion bits of biology repeatedly rearranging themselves in a webwork of constantly modulated feedback loops."

Weizmann Institute of Science. "Scientists Show Bacteria Can 'Learn' And Plan Ahead." ScienceDaily, 18 June 2009. <www.sciencedaily.com/releases/2009/06/090617131400.htm>

“E. coli bacteria. New findings show that these microorganisms' genetic networks are hard-wired to 'foresee' what comes next in the sequence of events and begin responding to the new state of affairs before its onset.

Bacteria can anticipate a future event and prepare for it, according to new research at the Weizmann Institute of Science. In a paper that appeared June 17 in Nature, Prof. Yitzhak Pilpel, doctoral student Amir Mitchell and research associate Dr. Orna Dahan of the Institute's Molecular Genetics Department, together with Prof. Martin Kupiec and Gal Romano of Tel Aviv University, examined microorganisms living in environments that change in predictable ways.

Melissa B. Miller and Bonnie L. Bassler (2001) Quorum Sensing in Bacteria. Annual Review of Microbiology, Vol. 55: 165-199.

Quorum sensing is the regulation of gene expression in response to fluctuations in cell-population density. Quorum sensing bacteria produce and release chemical signal molecules called autoinducers that increase in concentration as a function of cell density. The detection of a minimal threshold stimulatory concentration of an autoinducer leads to an alteration in gene expression. Gram-positive and Gram-negative bacteria use quorum sensing communication circuits to regulate a diverse array of physiological activities. These processes include symbiosis, virulence, competence, conjugation, antibiotic production, motility, sporulation, and biofilm formation. In general, Gram-negative bacteria use acylated homoserine lactones as autoinducers, and Gram-positive bacteria use processed oligo-peptides to communicate. Recent advances in the field indicate that cell-cell communication via autoinducers occurs both within and between bacterial species. Furthermore, there is mounting data suggesting that bacterial autoinducers elicit specific responses from host organisms. Although the nature of the chemical signals, the signal relay mechanisms, and the target genes controlled by bacterial quorum sensing systems differ, in every case the ability to communicate with one another allows bacteria to coordinate the gene expression, and therefore the behavior, of the entire community. Presumably, this process bestows upon bacteria some of the qualities of higher organisms. The evolution of quorum sensing systems in bacteria could, therefore, have been one of the early steps in the development of multicellularity.
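The threshold mechanism described in the abstract above (autoinducer concentration rising with cell density until a stimulatory threshold flips gene expression) can be caricatured in a few lines of code. This is a toy sketch: the production rate and threshold are invented illustrative numbers, not measured parameters.

```python
# Toy model of quorum sensing. Autoinducer concentration is assumed
# proportional to cell density; once it crosses a stimulatory threshold,
# the population-level behaviour (gene expression) switches on.
# All numeric values are illustrative assumptions.

def autoinducer_level(cell_density, production_rate=0.1):
    """Steady-state autoinducer concentration, simplified as
    directly proportional to cell density."""
    return production_rate * cell_density

def genes_expressed(cell_density, threshold=5.0):
    """Quorum-sensing switch: target genes are expressed only when
    the autoinducer level reaches the threshold."""
    return autoinducer_level(cell_density) >= threshold

for density in (10, 40, 60, 200):
    state = "ON" if genes_expressed(density) else "off"
    print(f"density {density:>3}: expression {state}")
```

The point of the caricature is the one the abstract makes: no individual cell counts its neighbours; each simply secretes and senses, and the coordinated switch emerges at the population level.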

E. Peter Greenberg, Tiny Teamwork. Nature, Vol. 424, 10 July 2003. www.nature.com/nature


Turing

Turing's argument is simply that the brain should also be considered as a discrete state machine. In his classic statement, made in a 1952 radio broadcast [12]:

“We are not interested in the fact that the brain has the consistency of cold porridge. We don't want to say 'This machine's quite hard, so it isn't a brain, so it can't think.'” [12] Transcript in the Turing Archive, King's College, Cambridge.

“The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search!”

“The remaining form of search is what I should like to call the “cultural search”. As I have mentioned, the isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment of other men, whose techniques he absorbs during the first 20 years of his life. He may then perhaps do a little research of his own and make a very few discoveries which are passed on to other men. From this point of view the search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals.” http://www.turingarchive.org/viewer/?id=127&title=41 (“Intelligent Machinery”, unpublished paper, Turing Digital Archive AMT/C/11 p. 36)

HIGH LEVEL COMPUTATIONAL PROCESSES: SEARCH

(a) intellectual search, (b) genetical or evolutionary search, and (c) cultural search.

Search and sort: classify (group) and connect/relate

Search

Compare

Relate & group

Connect

Update

Sterrett, S. G. (2014) Turing on the Integration of Human and Machine Intelligence. [Preprint] http://philsci-archive.pitt.edu/id/eprint/10316

Christof Teuscher and Eduardo Sanchez, A Revival of Turing’s Forgotten Connectionist Ideas: Exploring Unorganized Machines. Logic Systems Laboratory, Swiss Federal Institute of Technology, CH-1015 Lausanne, Switzerland


PLURALISM & CONNECTIVITY OF THEORIES

PROBABLY APPROXIMATELY CORRECT

“The quest for machines (and codes) that never make mistakes was only a first step toward machines (and codes) that learn from them. Probably Approximately Correct is a detailed, much-needed guide to how nature brought us here, and where technology is taking us next.”

George Dyson, author of Turing’s Cathedral – cover of Probably Approximately Correct

PARALLEL: TURING SEARCHES & TYPES OF EVOLUTION

“This ambitious little book suggests quantitative, mathematical theory to explain all essential mechanisms governing the behavior of all living organisms: adaptation, learning, evolution, and cognition. This theory has the characteristics of a great one; it is simple, general, falsifiable, and moreover seems probably, approximately, correct!” Avi Wigderson, Nevanlinna Prize winner Prof. of Mathematics, Institute for Advanced Study, Princeton

“[Valiant] grounds his hypotheses in solid computational theory, drawing on Alan Turing’s pioneering work on ‘robust’ problem- solving and algorithm design, and in successive chapters he demonstrates how ecorithms can depict evolution as a search for optimized performance, as well as help computer scientists create machine intelligence.... [H]is book offers a broad look at how ecorithms may be applied successfully to a variety of challenging problems.”

TONONI, CONSCIOUSNESS

Tononi G. (2005) Consciousness, information integration, and the brain, Prog Brain Res. 150:109-26.

Abstract. “Clinical observations have established that certain parts of the brain are essential for consciousness whereas other parts are not. For example, different areas of the cerebral cortex contribute different modalities and submodalities of consciousness, whereas the cerebellum does not, despite having even more neurons. It is also well established that consciousness depends on the way the brain functions. For example, consciousness is much reduced during slow wave sleep and generalized seizures, even though the levels of neural activity are comparable or higher than in wakefulness. To understand why this is so, empirical observations on the neural correlates of consciousness need to be complemented by a principled theoretical approach. Otherwise, it is unlikely that we could ever establish to what extent consciousness is present in neurological conditions such as akinetic mutism, psychomotor seizures, or sleepwalking, and to what extent it is present in newborn babies and animals.”

“A principled approach is provided by the information integration theory of consciousness. This theory claims that consciousness corresponds to a system's capacity to integrate information, and proposes a way to measure such capacity. The information integration theory can account for several neurobiological observations concerning consciousness, including: (i) the association of consciousness with certain neural systems rather than with others; (ii) the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; (iii) the reduction of consciousness during dreamless sleep and generalized seizures; and (iv) the time requirements on neural interactions that support consciousness.”

WHY PHENOMENOLOGY IS NOT A RELIABLE BASIS FOR UNDERSTANDING OF MIND, AND WHY NATURALIZATION OF MIND IS NECESSARY FOR EVERY SCIENCE-BASED EXPLANATION

Giulio Tononi (2004) An information integration theory of consciousness, BMC Neuroscience, 5:42, doi:10.1186/1471-2202-5-42


Giulio Tononi (2012) The Integrated Information Theory of Consciousness: An Updated Account. Archives Italiennes de Biologie, Vol 150, No 2/3, 290-326

Abstract

This article presents an updated account of integrated information theory of consciousness (IIT) and some of its implications. IIT stems from thought experiments that lead to phenomenological axioms and ontological postulates. The information axiom asserts that every experience is one out of many, i.e. specific – it is what it is by differing in its particular way from a large repertoire of alternatives. The integration axiom asserts that each experience is one, i.e. unified – it cannot be reduced to independent components. The exclusion axiom asserts that every experience is definite – it is limited to particular things and not others and flows at a particular speed and resolution. IIT formalizes these intuitions with three postulates. The information postulate states that only “differences that make a difference” from the intrinsic perspective of a system matter: a mechanism generates cause-effect information if its present state has specific past causes and specific future effects within a system. The integration postulate states that only information that is irreducible matters: mechanisms generate integrated information only to the extent that the information they generate cannot be partitioned into that generated within independent components. The exclusion postulate states that only maxima of integrated information matter: a mechanism specifies only one maximally irreducible set of past causes and future effects – a concept. A complex is a set of elements specifying a maximally irreducible constellation of concepts, where the maximum is evaluated at the optimal spatio-temporal scale. Its concepts specify a maximally integrated conceptual information structure or quale, which is identical with an experience. Finally, changes in information integration upon exposure to the environment reflect a system’s ability to match the causal structure of the world. 
After introducing an updated definition of information integration and related quantities, the article presents some theoretical considerations about the relationship between information and causation and about the relational structure of concepts within a quale. It also explores the relationship between the temporal grain size of information integration and the dynamic of metastable states in the corticothalamic complex. Finally, it summarizes how IIT accounts for empirical findings about the neural substrate of consciousness, and how various aspects of phenomenology may in principle be addressed in terms of the geometry of information integration.

Consciousness Wars: Tononi-Koch versus Searle (blog). Posted on March 17, 2013. http://coronaradiata.net/2013/03/17/consciousness-wars-tononi-koch-versus-searle/

Giulio Tononi has proposed a theory of consciousness he calls “Integrated Information Theory” (IIT)*. Very roughly, the theory is based on Shannon’s concept of information, but extends this by adding a property he refers to as “integrated information”, which an entity possesses when it both carries information and is connected. This property is called “Phi” (pronounced “phee”, written φ) and can be computed. The higher the Phi, the more conscious the entity.
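To give the “can be computed” part some flavour, here is a toy calculation in the spirit of “the whole generates more information than its parts”. This is emphatically not Tononi’s actual Φ, whose definition is far more involved; the two-node dynamics, the uniform distribution over past states, and the chosen partition are all my illustrative assumptions.

```python
# Toy "whole vs. parts" information calculation (not Tononi's actual phi).
# Two binary nodes with assumed dynamics A' = A, B' = A XOR B.
# We compare the mutual information between past and present states of the
# whole system with the sum over the two single-node parts.
from itertools import product
from math import log2

def update(a, b):
    return a, a ^ b  # assumed toy dynamics

def mutual_information(pairs):
    """I(X;Y) in bits, for a uniform distribution over the given (x, y) pairs."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

past_states = list(product([0, 1], repeat=2))
whole  = [((a, b), update(a, b)) for a, b in past_states]       # joint system
part_a = [((a,), (update(a, b)[0],)) for a, b in past_states]   # node A alone
part_b = [((b,), (update(a, b)[1],)) for a, b in past_states]   # node B alone

i_whole = mutual_information(whole)
i_parts = mutual_information(part_a) + mutual_information(part_b)
print(f"whole: {i_whole:.2f} bits, parts: {i_parts:.2f} bits, "
      f"surplus: {i_whole - i_parts:.2f} bits")
```

Here the whole system determines its past perfectly (2 bits), while the two parts taken separately recover only 1 bit, leaving a 1-bit surplus the partition cannot account for. Real Φ searches over all partitions and uses a different information measure, but the “irreducible surplus” intuition is the same.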

Although theories of consciousness are not new, this one is special: it has high-profile converts, perhaps the most notable being Christof Koch. Koch, Caltech professor and chief scientific officer of the Allen Brain Institute, is best known for his book The Quest for Consciousness: A Neurobiological Approach. A new Koch book, Consciousness: Confessions of a Romantic Reductionist, is largely a description of, and paean to, IIT. It’s fair to view Tononi and Koch as collaborators.

John Searle is an eminent philosopher who thinks about the brain and is taken seriously by neuroscientists. Until recently he and Koch were on the same page. For example, Searle has endorsed Koch’s concept of studying the Neural Correlates of Consciousness (NCC). Searle frequently writes for the New York Review of Books, and has on occasion generated debate. Notable was Searle’s 1995 critical review of Daniel Dennett’s Consciousness Explained, which generated a prolonged exchange.

In the January 10, 2013 issue of the New York Review of Books Searle reviews Confessions and solidifies his disputative reputation**. The review is devastatingly critical. The essence of Searle’s criticism is that IIT employs a mindful observer to explain mind. There is a little man in the middle of the theory: information isn’t information until it is “read” by an entity with a mind. There may be a message in the information carrier, but it becomes information only when read.***

The story doesn’t end there. The March 7 issue of the New York Review of Books contains an exchange of letters between Koch-Tononi and Searle (not behind paywall).

My read: I thought the original Searle article was clear and powerful. I’ve read both Tononi and Koch and never quite gotten IIT. I found the Tononi/Koch letter a muddle, and Searle’s reply clear. Since I don’t really get IIT, I don’t want to take sides. Opinions are welcome in the comments section. It will be interesting to see how this plays out.

Update March 18. Panpsychism is a battleground in the Koch/Tononi letter and Searle’s response. According to wikipedia, which seems an adequate source here,

In philosophy, panpsychism is the view that all matter has a mental aspect, or, alternatively, all objects have a unified center of experience or point of view. Baruch Spinoza, Gottfried Leibniz, Gustav Theodor Fechner, Friedrich Paulsen, Ernst Haeckel, Charles Strong, and partially William James are considered panpsychists.

My take: both get hits. Searle doesn’t acknowledge the “local” panpsychism of IIT. IIT has a spatially restricted, spatially centered panpsychism, according to Tononi and Koch in their response letter. That’s why my consciousness doesn’t mix with yours. If a theory of consciousness uses panpsychism, especially a special form, isn’t it assuming the very hard part, asking for special help from novel laws of physics?

Update 2, March 19. A few hours ago I re-read chapter 8 of Koch’s Confessions, which contains the entirety of Koch’s description of IIT. I also reread Searle’s review of Confessions and the NYRB letter exchange. In chapter 8 I searched for a clear description of “connectedness” but couldn’t find it. I don’t know if connectedness is statistical or involves causality. I also looked for an indication that IIT’s panpsychism is localized — that it is centered around a local maximum — but couldn’t find it. My conclusion is that the Koch book is, at best, a remarkably incomplete description of IIT (and the Koch book is what Searle reviewed). IIT depends heavily on connectedness; to evaluate IIT we must know what connectedness means and how a system could detect its own localized connectedness without an external observer. Perhaps readers could direct us to answers.

Update 3, Jan 1, 2014. Christof Koch describes panpsychism in a Scientific American article, “Is Consciousness Universal?”. It refers, nicely, to Searle.

* Tononi has a book, Phi: A Voyage from the Brain to the Soul. It’s not a traditional scientific explanation of Phi, but an important resource. It seems intended to deliver the feeling, rather than the bedrock substance, of Phi.

**Can Information Theory Explain Consciousness? Most of the review is behind a paywall. Contact me ([email protected]). Zenio permits me to send out the text of the paper, one email at a time.

*** Colin McGinn makes this argument eloquently in a more recent article in the NYRB, “Homunculism”, which is a review of Kurzweil’s current book. This is not behind a paywall.

Can a Photodiode Be Conscious?

Christof Koch and Giulio Tononi, reply by John R. Searle. In response to: Can Information Theory Explain Consciousness? from the January 10, 2013 issue.

The heart of John Searle’s criticism in his review of Consciousness: Confessions of a Romantic Reductionist [NYR, January 10] is that while information depends on an external observer, consciousness is ontologically subjective and observer-independent. [NOT REALLY! IT IS OBSERVER-DEPENDENT IN A SELF REFLECTIVE LOOP. OBSERVER IS A CONSCIOUS AGENT HERSELF. WE ARE AWARE OF OUR BEING CONSCIOUS.] That is to say, experience exists as an absolute fact, not relative to an observer: as recognized by Descartes, je pense donc je suis [I think therefore I am] is an undeniable certainty. Instead, the information of Claude Shannon’s theory of communication is always observer-relative: signals are communicated over a channel more or less efficiently, but their meaning is in the eye of the beholder, not in the signals themselves. So, thinks Searle, a theory with the word “information” in it, like the integrated information theory (IIT) discussed in Confessions, cannot possibly begin to explain consciousness.

Page 6: Abstract - MDHgdc/work/ARTICLES/2014/2-AISB 2014...  · Web viewReality construction through info-computation. Gordana Dodig-Crnkovic, Mälardalen University Sweden. gordana.dodig-crnkovic@mdh.se.

Except for the minute detail that the starting point of IIT is exactly the same as Searle’s! Consciousness exists and is observer-independent, says IIT, and it is both integrated (each experience is unified) and informative (each experience is what it is by differing, in its particular way, from trillions of other experiences). IIT introduces a novel, non-Shannonian notion of information—integrated information—which can be measured as “differences that make a difference” to a system from its intrinsic perspective, not relative to an [EXTERNAL] observer. Such a novel notion of information is necessary for quantifying and characterizing consciousness as it is generated by brains and perhaps, one day, by machines.

Another of Searle’s criticisms has to do with panpsychism. If IIT accepts that even some simple mechanisms can have a bit of consciousness, then isn’t the entire universe suffused with soul? Searle justly states: “Consciousness cannot spread over the universe like a thin veneer of jam; there has to be a point where my consciousness ends and yours begins.” Indeed, if consciousness is everywhere, why should it not animate the iPhone, the Internet, or the United States of America?

Except that, once again, one of the central notions of IIT is exactly this: that only “local maxima” of integrated information exist (over elements, spatial and temporal scales): my consciousness, your consciousness, but nothing in between; each individual consciousness in the US, but no superordinate US consciousness. Like Searle, we object to certain kinds of panpsychism, with the difference that IIT offers a constructive, predictive, and mathematically precise alternative.

Finally, we agree with Searle that one looks in vain for mouthwatering admissions of guilt in Confessions. That is true from the point of view of a priest eager for sins to be revealed. Yet among scientists, there exists a powerful edict against bringing subjective, idiosyncratic memories, beliefs, and desires into professional accounts of one’s research. Confessions breaks with this taboo by mixing the impersonal and objective with the intensely personal and subjective. To a scientist, this is almost a sin. But philosophers too can get close to sin, in this case a sin of omission: not to ponder enough, before judgment is passed, what the book and ideas one reviews are actually saying.

Christof Koch
Chief Scientific Officer
Allen Institute for Brain Science
Seattle, Washington

Giulio Tononi
Professor of Psychiatry
University of Wisconsin
Madison, Wisconsin

John R. Searle replies:

One of my criticisms of Koch’s book Consciousness is that we cannot use information theory to explain consciousness because the information in question is only information relative to a consciousness. Either the information is carried by a conscious experience of some agent (my thought that Obama is president, for example) or in a nonconscious system the information is observer-relative—a conscious agent attributes information to some nonconscious system (as I attribute information to my computer, for example). [INFORMATION AS DATA STRUCTURE EXISTS IN A MACHINE AS WELL. Now the objection may be that no theory exists without a human to understand it, but that holds for every theory, not only for Tononi’s Phi.]

Koch and Tononi, in their reply, claim that they have agreed with this all along, indeed it is their “starting point,” and that I have misrepresented their theory. I do not think I have and will now quote passages that substantiate my criticisms. (In this reply I will assume they are in complete agreement with each other.)

1. The Conscious Photodiode. They say explicitly that the photodiode is conscious. The crucial sentence is this:

Strictly speaking, then, the IIT [Integrated Information Theory] implies that even a binary photodiode is not completely unconscious, but rather enjoys exactly 1 bit of consciousness. Moreover, the photodiode’s consciousness has a certain quality to it….*

This is a stunning claim: there is something that it consciously feels like to be a photodiode! On the face of it, it looks like a reductio ad absurdum of any theory that implies it. Why is the photodiode conscious? It is conscious because it contains information. But here comes my objection, which they claim to accept: the information in the photodiode is only relative to a conscious observer who knows what it does. The photodiode by itself knows nothing. If the “starting point” of their theory is a distinction between absolute and observer-relative information, then photodiodes are on the observer-relative side and so are not conscious.
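The arithmetic behind the quoted “1 bit” can at least be made explicit: a binary photodiode distinguishes exactly two alternatives, and the Shannon entropy of a uniform binary variable is log2(2) = 1 bit. A minimal sketch follows (the equiprobable light/dark assumption is mine); whether that bit amounts to experience is, of course, exactly what the dispute is about.

```python
# Shannon entropy of a binary photodiode's state: with two equiprobable
# alternatives (light / dark) it carries exactly log2(2) = 1 bit.
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

photodiode = [0.5, 0.5]      # light vs. dark, assumed equiprobable
print(entropy(photodiode))   # -> 1.0
```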


2. The Observer Relativity of Integrated Information. They think they get out of the observer relativity of information by considering only integrated information, and integrated information, they think, is somehow absolute information and not just relative to a consciousness. But the same problem that arose for the photodiode arises for their examples of integrated information. Koch gives several: personal computers, embedded processors, and smart phones are three. Here is an extreme claim by him:

Even simple matter has a modicum of Φ [integrated information]. Protons and neutrons consist of a triad of quarks that are never observed in isolation. They constitute an infinitesimal integrated system. (p. 132)

So on their view every proton and neutron is conscious. But the integrated information in all of these is just as observer-relative as was the information in the photodiode. There is no intrinsic absolute information in protons and neutrons, nor in my personal computer, nor in my smart phone. The information is all in the eye of the beholder.

3. Panpsychism. They claim not to be endorsing any version of panpsychism. But Koch is explicit in his endorsement and I will quote the passage over again:

By postulating that consciousness is a fundamental feature of the universe, rather than emerging out of simpler elements, integrated information theory is an elaborate version of panpsychism.(p. 132, italics in the original)

And he goes on:

The entire cosmos is suffused with sentience. We are surrounded and immersed in consciousness; it is in the air we breathe, the soil we tread on, the bacteria that colonize our intestines, and the brain that enables us to think. (p. 132)

Any system at all that has both differentiated and integrated states of information is claimed to be conscious (Koch, p. 131). But my objections remain unanswered. Except for systems that are already conscious, the information in both simple systems like the photodiode and integrated systems like the smart phone is observer-relative. And the theory has a version of panpsychism as a consequence. [I agree with Searle that panpsychism is embarrassing for a scientific theory. However, his argument based on absolute and relative information is strange. Subjective information is not absolute. In the very next moment, once it is integrated, it will be exposed to scrutiny through comparison with other human information and integration into the larger context of a community of practice. There is no absolute information (except perhaps for Louis XIV).]

But the deepest objection is that the theory is unmotivated. Suppose they could give a definition of integrated and differentiated information that was not observer-relative, that would enable us to tell, from the brute physics of a system, whether it had such information and what information exactly it had. Why should such systems thereby have qualitative, unified subjectivity? In addition to bearing information as so defined, why should there be something it feels like to be a photodiode, a photon, a neutron, a smart phone, embedded processor, personal computer, “the air we breathe, the soil we tread on,” or any of their other wonderful examples? As it stands the theory does not seem to be a serious scientific proposal.

*Giulio Tononi, “Consciousness as Integrated Information: A Provisional Manifesto,” The Biological Bulletin, Vol. 215, No. 3 (December 2008), pp. 216-242; quotation from p. 236.

MARCIN SCHROEDER

“In my opinion, Tononi is talking not about integration of information, but about connectivity of a network. I do not see in his approach any transformation of information. He understands information in a purely Shannonian way, as a message or messages conducted in the neural network. In what sense can we say that information is integrated? As a consequence, through his Phi he gets the result that protons have some form of consciousness. For me it is just evidence that something is wrong.

In my approach I am using my own definition of information with its dual manifestations: selective (more or less Shannonian) and structural. I am formalizing it using closure spaces. Here it is not very important. When we analyze the structural manifestation of information (the two always co-exist), we can assess the level of integration of information by checking to what degree the structure describing it can be decomposed into components. In typical cases (messages considered simply as sequences of characters, disregarding their syntactics) information is totally disintegrated. It can be decomposed into very simple components (the structure is a Boolean algebra). In the case of quantum mechanics the structure is totally coherent, and you cannot decompose it. But QM is not the only case of such integrated information. The main difference (compared with Tononi) is that I can describe a theoretical device integrating information (partially or completely), where the output is essentially different from the input.

Marcin J. Schroeder (2013) Dualism of Selective and Structural Manifestations of Information in Modelling of Information Dynamics. In: Dodig-Crnkovic, G. and Giovagnoli, R. (eds.) Computing Nature: Turing Centenary Perspective. SAPERE: Studies in Applied Philosophy, Epistemology and Rational Ethics, Vol. 7, pp. 125-137.

The New York Review of Books, January 10, 2013

Can Information Theory Explain Consciousness?

John R. Searle

Consciousness: Confessions of a Romantic Reductionist by Christof Koch MIT Press, 181 pp.

The problem of consciousness remains with us. What exactly is it and why is it still with us? The single most important question is: How exactly do neurobiological processes in the brain cause human and animal consciousness? Related problems are: How exactly is consciousness realized in the brain? That is, where is it and how does it exist in the brain? Also, how does it function causally in our behavior?


To answer these questions we have to ask: What is it? Without attempting an elaborate definition, we can say the central feature of consciousness is that for any conscious state there is something that it feels like to be in that state, some qualitative character to the state. For example, the qualitative character of drinking beer is different from that of listening to music or thinking about your income tax. This qualitative character is subjective in that it only exists as experienced by a human or animal subject. It has a subjective or first-person existence (or “ontology”), unlike mountains, molecules, and tectonic plates that have an objective or third-person existence. [For whom? For us or for themselves?] Furthermore, qualitative subjectivity always comes to us as part of a unified conscious field. At any moment you do not just experience the sound of the music and the taste of the beer, but you have both as part of a single, unified conscious field, a subjective awareness of the total conscious experience. So the feature we are trying to explain is qualitative, unified subjectivity.

Now it might seem that is a fairly well-defined scientific task: just figure out how the brain does it. In the end I think that is the right attitude to have. But our peculiar history makes it difficult to have exactly that attitude—to take consciousness as a biological phenomenon like digestion or photosynthesis, and figure out how exactly it works as a biological phenomenon. Two philosophical obstacles cast a shadow over the whole subject. The first is the tradition of God, the soul, and immortality. Consciousness is not a part of the ordinary biological world of digestion and photosynthesis: it is part of a spiritual world. It is sometimes thought to be a property of the soul and the soul is definitely not a part of the physical world. The other tradition, almost as misleading, is a certain conception of Science with a capital “S.” Science is said to be “reductionist” and “materialist,” and so construed there is no room for consciousness in Science. [Why should that be? Medicine is an example of science that takes subjective feelings such as pain seriously.] If it really exists, consciousness must really be something else. It must be reducible to something else, such as neuron firings, computer programs running in the brain, or dispositions to behavior.


There are also a number of purely technical difficulties to neurobiological research. The brain is an extremely complicated mechanism with about a hundred billion neurons in humans, and most investigative techniques are, as the researchers cheerfully say, “invasive.” That means you have to kill or hideously maim …

The New York Review of Books

What Is Consciousness?

Stevan Harnad, reply by John R. Searle

In response to:

Consciousness: What We Still Don't Know from the January 13, 2005 issue

John Searle [“Consciousness: What We Still Don’t Know,” NYR, January 13] points out, rightly, that so far brain and cognitive science is merely studying the correlates of consciousness (where and when in the brain does something happen when we feel something?), but thereafter the goal is to go on to explain how those correlates cause consciousness (and presumably also why). Koch (and Crick) seem to think it’s a matter of finding out when and where the bits are put together (“the taste of coffee in our mouth, the slight headache, and the sight of the landscape out the window”), whereas Searle thinks it’s a matter of finding the “unified field”—but either way, it’s correlates now, and the explanation of how only later.

I want to cast some doubt on that later causal explanation ever coming. The best example is the simplest one: Searle mentions just a simple feeling (anxiety)—something that is not “about” something else, just a feeling we have. So suppose we find its correlates, the pattern of brain activity that occurs whenever we feel anxious. And suppose we go on to confirm that that brain activity pattern is not only correlated with anxiety, but it causes it (in that (1) it comes before the feeling, (2) if it is present we feel anxiety, (3) if it is absent we do not, that (4) there is no other correlate or cause that we have missed, and (5) we can explain what causes that pattern of brain activity).

Now what about the “how”? How does a pattern of brain activity generate feeling? This is not a question about how that pattern of brain activity is generated, for that can be explained in the usual way, just as we explain how a pattern of activity in a car or a kidney is generated. It is a question about how feeling itself is generated. Otherwise the feeling just remains something that is mysteriously (but reliably) correlated with certain brain patterns.

We don’t know how brain activity could generate feeling. Even less do we know why. This is not spiritualist voodoo. It is just an emperor’s new clothes observation about the powerlessness of the usual kind of causal how/why explanation when it comes to feeling. For if you venture [attempt] something like “Well, the anxiety is a signal to the organism to be careful,” the natural reply is “Fine, but why did the signal have to be felt, rather than merely transmitted…?” That is the how/why problem of consciousness, and it is something that we not only still don’t know but (until further notice) it looks like something we never will know.

John R. Searle replies:

Stevan Harnad’s letter raises a challenge to the very possibility of any scientific account of consciousness of the sort that both Koch and I favor. He says, “We do not know how brain activity could generate feeling. Even less do we know why.” And he laments the “powerlessness of the usual kind of causal how/why explanation when it comes to feeling.”

It is important to understand that he is not lamenting our present neurobiological ignorance. He thinks even if we had a perfect science of the brain, we would be unable to answer the how/why question. I am not convinced. Suppose we knew in exact detail all of the neurobiological mechanisms and their mode of operation, so that we knew exactly which neuronal events were causally necessary, or sufficient, or both, for which subjective feelings. Suppose all of this knowledge was stated as a set of precise laws. Suppose such knowledge were in daily medical use to help overcome human pain, misery, and mental illness. We are a long way from this ideal and we may never reach it, but even this would not satisfy Harnad. It is hard to see what more he wants. If a complete science of the sort I am imagining would not satisfy him, he should tell us what would. If nothing could satisfy him, then it looks like the complaint may be unreasonable. It is not clear to me what he is asking for when he asks for more.

Ultimately I think that Harnad has a deep philosophical worry, and it is one that we should all share. It seems astounding [amazing] that objective neuronal processes should cause our subjective feelings. But in coping with this sense of mystery we should remind ourselves that it is just a plain fact that neuronal processes do cause feelings, and we need to try to understand how. We should share his sense of mystery, but not let it discourage us from doing the real work.

He is convinced that the mystery of consciousness is unique. But it is well to remind ourselves that this is not the first time we have confronted such mysteries. From the point of view of Newtonian mechanics, electromagnetism seems metaphysically mysterious. How could magnetism ever be explained by Newton’s laws? And from the point of view of nineteenth-century science, life seemed a mystery. How could mechanical processes explain life? As we attained a much richer scientific understanding, these mysteries were overcome. It is hard to recover today the passions with which mechanism and vitalism were once debated. I am urging that the right attitude to the problem of consciousness is to overcome the mystery by increasing our knowledge, in the same way that we overcame earlier mysteries.

Our most fundamental disagreement is harder to state. I believe his sense of “how/why” demands more than science and philosophy can offer. In the end when we investigate nature we find: This is what happens. This is how it works. If you want to know how/why a body falls, the standard answer is to appeal to gravity. But if you want to know how/why gravity works, I am told that the question is still not answered. But suppose it were, suppose we had a unified theory of everything that explained gravity, electromagnetism, and everything else. That would still leave us with the question, Why are the data accounted for by this theory and not some other? In the end, how/why questions stop with theories that state how nature works and the mechanisms according to which it works.

Stevan Harnad, Canada Research Chair, Centre de neuroscience de la cognition, Université du Québec à Montréal

“Consciousness, Dr. Tononi says, is nothing more than integrated information. Information theorists measure the amount of information in a computer file or a cellphone call in bits, and Dr. Tononi argues that we could, in theory, measure consciousness in bits as well. When we are wide awake, our consciousness contains more bits than when we are asleep.”

Sizing Up Consciousness, Carl Zimmer, NYT September 20, 2010

http://www.nytimes.com/2010/09/21/science/21consciousness.html?pagewanted=all&_r=0
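Zimmer's gloss on Tononi – that consciousness could in principle be measured in bits, with more bits while awake than asleep – rests on the same Shannon measure the article alludes to. A minimal sketch, with entirely invented state repertoires: a wide, even repertoire of brain states carries more bits than a narrow, collapsed one.

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy in bits: the 'reduction of uncertainty'
    measure referred to in the quoted passage."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical repertoires of brain states (the numbers are invented):
awake  = [1/8] * 8            # eight equally likely states
asleep = [0.9, 0.05, 0.05]    # activity collapsed onto one dominant state

print(entropy_bits(awake))   # 3.0 bits
print(entropy_bits(asleep))  # ~0.57 bits
```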

MY COMMENTS ON THE SEARLE–HARNAD DISCUSSION

Certain brain activity does not cause anxiety; it is anxiety itself!

It is similar to vision. Visual sensory signals entering the brain cause different processes in it; that activity is vision. Activity (information-integration processes) in the brain is what consciousness is. Integrated information is not a static structure; it is the dynamics of the networks of networks in the brain.

“Information” in the form of a message gets implemented in the (fractal) structures of the information-processing layers of our brains – from particular neuron excitations to the whole brain and back. That activity is “the difference that makes the difference”.

Is the same question valid for simple stimuli – like taste, which seems far less complex than vision?


Let's say the sweetness of sugar: what is it, how and why? (see below about taste)

The Sense of Taste (explanation)
http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/T/Taste.html

Taste is the ability to respond to dissolved molecules and ions called tastants.

Humans detect taste with taste receptor cells. These are clustered in taste buds and scattered in other areas of the body. Each taste bud has a pore that opens out to the surface of the tongue enabling molecules and ions taken into the mouth to reach the receptor cells inside. There are five primary taste sensations: salty, sour, sweet, bitter, and umami.

Properties of the taste system.

• A single taste bud contains 50–100 taste cells representing all 5 taste sensations (so the classic textbook pictures showing separate taste areas on the tongue are wrong).

• Each taste cell has receptors on its apical surface. These are transmembrane proteins which

admit the ions that give rise to the sensation of salty;

bind to the molecules that give rise to the sensations of sweet, bitter, and umami.

• A single taste cell seems to be restricted to expressing only a single type of receptor (except for bitter receptors).

• A stimulated taste receptor cell triggers action potentials in a nearby sensory neuron leading back to the brain.

• However, a single sensory neuron can be connected to several taste cells in each of several different taste buds.

• The sensation of taste — like all sensations — resides in the brain [evidence].

• And in mice, at least, the sensory neurons for four of the tastes (not sour) transmit their information to four discrete areas of the brain.

Salty

In mice, perhaps humans, the receptor for table salt (NaCl) is an ion channel that allows sodium ions (Na+) to enter directly into the cell depolarizing it and triggering action potentials in a nearby sensory neuron.

In lab animals, and perhaps in humans, the hormone aldosterone increases the number of these salt receptors. This makes good biological sense:

• The main function of aldosterone is to maintain normal sodium levels in the body.

• An increased sensitivity to sodium in its food would help an animal suffering from sodium deficiency (often a problem for ungulates, like cattle and deer).

Sour

Sour receptors detect the protons (H+) liberated by sour substances (acids). This closes transmembrane K+ channels, which leads to depolarization of the cell and the release of the neurotransmitter serotonin into its synapse with a sensory neuron.

Sweet

Sweet substances (like table sugar — sucrose C12H22O11) bind to G-protein-coupled receptors (GPCRs) at the cell surface.

Each receptor contains 2 subunits designated T1R2 and T1R3 and is coupled to G proteins.

The complex of G proteins has been named gustducin because of its similarity in structure and action to the transducin that plays such an essential role in rod vision.

Activation of gustducin triggers a cascade of intracellular reactions:

• production of the second messengers inositol trisphosphate (IP3) and diacylglycerol (DAG), which
• releases intracellular stores of Ca++, which
• allows the influx of Na+ ions, depolarizing the cell and causing the
• release of ATP, which
• triggers action potentials in a nearby sensory neuron.

The hormone leptin inhibits sweet cells by opening their K+ channels. This hyperpolarizes the cell, making the generation of action potentials more difficult. Could leptin, which is secreted by fat cells, be a signal to cut down on sweets?
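The cascade and the leptin effect above can be sketched as a toy membrane-potential model. All numbers (resting potential, threshold, voltage steps) are invented for illustration; only the ordering of steps follows the text.

```python
# Toy model of the sweet-taste transduction described above.
THRESHOLD_MV = -55.0

def sweet_response(sugar_bound, leptin=False, resting_mv=-70.0):
    """Return True if the cascade depolarizes the cell enough to
    release ATP and trigger action potentials in the sensory neuron."""
    v = resting_mv
    if leptin:
        v -= 10.0  # leptin opens K+ channels: hyperpolarization
    if sugar_bound:
        # GPCR -> gustducin -> IP3/DAG -> Ca++ release -> Na+ influx
        v += 20.0  # net depolarization from Na+ entry
    return v >= THRESHOLD_MV

print(sweet_response(True))               # True  (depolarized past threshold)
print(sweet_response(True, leptin=True))  # False (hyperpolarized cell)
print(sweet_response(False))              # False
```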

Bitter

The binding of substances with a bitter taste, e.g., quinine, phenylthiocarbamide [PTC], also takes place on G-protein-coupled receptors that are coupled to gustducin and the signaling cascade is the same as for sweet (and umami).

Humans have genes encoding 25 different bitter receptors ("T2Rs"), and each taste cell responsive to bitter expresses a number (4–11) of these genes. (This is in sharp contrast to the system in olfaction where a single odor-detecting cell expresses only a single type of odor receptor.)

Despite this — and still unexplained — a single taste cell seems to respond to certain bitter-tasting molecules in preference to others.

The sensation of taste — like all sensations — resides in the brain. Transgenic mice that

• express T2Rs in cells that normally express T1Rs (sweet) respond to bitter substances as though they were sweet;

• express a receptor for a tasteless substance in cells that normally express T2Rs (bitter) are repelled by the tasteless compound.

So it is the activation of hard-wired neurons that determines the sensation of taste, not the molecules nor the receptors themselves.

Umami

Umami is the response to salts of glutamic acid — like monosodium glutamate (MSG), a flavor enhancer used in many processed foods and in many Asian dishes. Processed meats and cheeses (proteins) also contain glutamate.

The binding of amino acids, including glutamic acid, takes place on G-protein-coupled receptors that are coupled to heterodimers of the protein subunits T1R1 and T1R3. The signaling cascade that follows is the same as it is for sweet and bitter.

Taste Receptors in Other Locations

Taste receptors have been found in several other places in the body. Examples:

• Bitter receptors (T2Rs) are found on the cilia and smooth muscle cells of the trachea and bronchi, where they probably serve to expel inhaled irritants;

• Sweet receptors (T1Rs) are found in cells of the duodenum. When sugars reach the duodenum, the cells respond by releasing incretins. These cause the beta cells of the pancreas to increase the release of insulin.

So the function of "taste" receptors appears to be the detection of chemicals in the environment — a broader function than simply taste.

INFORMATION CONNECTED TO ENERGY

Energy is related to autonomous agency because only for an agent can information make “a difference that makes a difference”: an agent can take different decisions based on the information obtained. Matter-energy aspects are connected to information inasmuch as they make autonomous agency possible. A system could not act on its own in the physical world without the energy to do so, and it could not act and interact without possessing materiality. Thus both are preconditions of agency.

Consciousness might be not only “integrated information,” as Tononi suggests, but the continuous process of integrating information. Consciousness is not static but dynamic; it is not an object or a pattern, it is a process. Time and timing are central to consciousness.


MIŁKOWSKI, EXPLAINING THE COMPUTATIONAL MIND

Marcin Miłkowski, Explaining the Computational Mind. Cambridge, Mass.: MIT Press, 2013.

Chapter 5

So there is some bottom level which is no longer computational.

1. The scope of computational explanations

“I am going to show in this chapter that, on my mechanistic account, only one level of the mechanism – the so-called isolated level – is explained in computational terms. The rest of the mechanism is not computational, and, indeed, according to the norms of this kind of explanation, it cannot be computational through and through.”

This will also be true of highly complex hierarchies, where a computational submechanism is embedded in a larger mechanism, and this larger one in another. A submechanism contributes to an explanation of the behavior of a larger mechanism, and the explanation might be cast in terms of a computation, but the nesting of computers eventually bottoms out in non-computational mechanisms. Obviously, pancomputationalists, who claim that all physical reality is computational, would immediately deny the latter claim. However, the bottoming-out principle of mechanistic explanation does not render pancomputationalism false a priori. It simply says that a phenomenon has to be explained as constituted by some other phenomenon than itself. For a pancomputationalist, this means that there must be a distinction between lower-level, or basic, computations and the higher level ones. Should pancomputationalism be unable to mark this distinction, it will be explanatorily vacuous.” “In other words, the constitutive and contextual levels need not be computational, and they are not explained computationally. What lies within the scope of computational explanation is only the isolated level, or the level of the whole computational mechanism as such.”

“In other words, the physical implementation of a computational system – and its interaction with the environment – lies outside the scope of computational explanation.” “The upshot for the computational theory of mind is that there is more to cognition than computation: an implementation of a computation is required for the explanation to be complete; and explaining the implementation includes understanding of how a computational system is embodied physically and embedded in its environment.”

Upshot = result, consequence

“Resource limitations are also impossible to explain computationally. Instead, they act as empirical constraints on theory; for example, Newell & Simon (1972) impose the condition that the capacity of short-term memory not exceed the limit of 7 plus or minus 2 meaningful chunks. To put the same point somewhat differently, in order to understand a cognitive computation and to have a theory of it, one needs to know the limits of the underlying information-processing system.”
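Miłkowski's point – that resource limits are imposed on a model from outside rather than derived computationally – can be made concrete with the Newell & Simon constraint he cites. A minimal sketch, with a FIFO eviction policy and a digit-span task as invented illustration:

```python
from collections import deque

class ShortTermMemory:
    """Capacity-limited store in the spirit of Newell & Simon's
    7 +/- 2 constraint: the limit is an empirical condition placed
    on the model, not something the computation itself explains."""
    def __init__(self, capacity=7):
        assert 5 <= capacity <= 9, "empirical constraint: 7 +/- 2 chunks"
        self.chunks = deque(maxlen=capacity)

    def store(self, chunk):
        self.chunks.append(chunk)  # oldest chunk is evicted when full

digits = list("314159265358979")
stm = ShortTermMemory(capacity=7)
for d in digits:
    stm.store(d)

print("".join(stm.chunks))  # only the last 7 digits survive: "5358979"
```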

“If we want to take complexity into account, the causal structure of the system will come to the fore, and the computational facets of the system will be just one of many. But the cogency of the computational theory of mind does not rest on a bet that only the formal properties of a computation are relevant to explaining the mind. Rather, it is based on the assumption that cognitive systems peruse information, and that assumption is not undermined by the realization that the way these systems operate goes beyond information-processing.”

ME: From the above it is evident that the model of computation that Miłkowski analyses in his book is a top-down, designed computation. Even though he rightly argues that neural networks are computational models and that even dynamical systems can be understood as computational, Miłkowski does not think of intrinsic computation as a causal mechanism.


The first question is the grounding problem. I will argue that this really is a pseudo-problem.

Grounding is always anchored in an agent who is the narrator of the explanation. In the next step, the narrator postulates the granularity of his/her picture. No picture has infinite granularity, and nothing in principle prevents us from imagining even lower levels of existence (such as more and more elementary particles).

When constructing computational models, Miłkowski's focus on only one layer is pragmatically justified, but it is not a matter of principle. Even though one can reconstruct many intrinsic computational layers in the human brain (again depending on granularity), for an observer/narrator typically one layer is in focus at a time. In such simplified models the layers above and below, even though computational, are sketchy and are used to represent constraints rather than mechanisms.

INTRINSIC COMPUTATION

“Physical computing, in the broadest sense, means building interactive physical systems by the use of software and hardware that can sense and respond to the analog world.” (That is, an interactive system connected with the real world via sensors and actuators.)

Crutchfield, James P., and Karoline Wiesner. “Intrinsic Quantum Computation.” Physics Letters A.

Work of Luca Cardelli.

PAC (probably approximately correct) learning, Leslie Valiant.
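Valiant's PAC framework gives a standard sample-complexity bound for a finite hypothesis class learned by a consistent learner: m ≥ (1/ε)(ln|H| + ln(1/δ)). A sketch with illustrative numbers (the hypothesis count, ε and δ are assumptions):

```python
from math import ceil, log

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Sample size sufficient for PAC-learning a finite hypothesis
    class with a consistent learner:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return ceil((log(hypothesis_count) + log(1.0 / delta)) / epsilon)

# e.g. 1024 hypotheses, 5% error tolerance, 95% confidence
print(pac_sample_bound(1024, epsilon=0.05, delta=0.05))  # 199 examples suffice
```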

WHY PANCOMPUTATIONALISM IS USEFUL AND PANPSYCHISM IS NOT

“Panpsychism is the doctrine that mind is a fundamental feature of the world which exists throughout the universe.”

Seager, William and Allen-Hermanson, Sean, "Panpsychism", The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2013/entries/panpsychism/>.

Pancomputationalism (natural computationalism, computing nature) is the doctrine that the whole of the universe, every physical system, computes. In the words of Gualtiero Piccinini:

“Which physical systems perform computations? According to pancomputationalism, they all do. Even rocks, hurricanes, and planetary systems — contrary to appearances — are computing systems. Pancomputationalism is quite popular among some philosophers and physicists.”

Piccinini, Gualtiero, "Computation in Physical Systems", The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2012/entries/computation-physicalsystems/>.

The info-computationalist variety of pancomputationalism does not make the claim that:

“almost anything with enough things to play the role of the physical states will satisfy this quite weak demand of what it is to be an implementation. For example, any collection of colored stones arranged as the update table will be taken to implement the table. SMA (simple mapping account of computation) only demands extensional agreement. It is a de-facto demand. This leads to a form of pancomputationalism where almost any physical system implements any computation.”
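Turner's point can be made concrete: under the simple mapping account, any labeling of physical states that merely agrees extensionally with an update table already counts as an implementation. A toy illustration of exactly the colored-stones case (the stones, colors and table are of course hypothetical):

```python
# Update table of a trivial computation: a one-bit NOT, iterated.
update_table = {0: 1, 1: 0}

# Physical 'system': stones laid out in a row, identified by color.
stone_sequence = ["red", "blue", "red", "blue"]

# An arbitrary extensional mapping from physical to computational states.
labeling = {"red": 0, "blue": 1}

computational_trace = [labeling[s] for s in stone_sequence]

# SMA's 'de facto' demand: every transition in the trace matches the table.
implements = all(update_table[a] == b
                 for a, b in zip(computational_trace, computational_trace[1:]))

print(computational_trace)  # [0, 1, 0, 1]
print(implements)           # True -- the stones 'implement' the computation
```

Since nothing constrains the labeling, almost any physical system can be mapped onto almost any computation this way, which is the triviality the info-computationalist variety rejects.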

Turner, Raymond, "The Philosophy of Computer Science", The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2013/entries/computer-science/>.


Computation and causality

John Collier

Collier, John. 2010. “Information, Causation and Computation.” In Information and Computation, ed. Gordana Dodig-Crnkovic and Mark Burgin. Singapore: World Scientific Publishing.

Collier, John. 1999. “Causation is the transfer of information.” In Causation, natural laws and explanation, ed. Howard Sankey, 279–331. Dordrecht: Kluwer.

Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3: 1-19.

Clark, Andy. 1997. Being there: Putting brain, body, and world together again. Cambridge, Mass.: MIT Press.

Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, Mass.: MIT Press.

Dennett, Daniel C. 1991b. “Real Patterns.” Journal of Philosophy 88 (1): 27-51.

Dennett, Daniel C. 1995. Darwin’s dangerous idea: Evolution and the meanings of life. New York: Simon & Schuster.

Dennett, Daniel C. 2005. Sweet Dreams. Philosophical Obstacles to a Science of Consciousness. Cambridge, Mass.: MIT Press.

Drescher, Gary L. 1991. Made-up minds: a constructivist approach to artificial intelligence. Cambridge, Mass.: MIT Press.

“Made-Up Minds addresses fundamental questions of learning and concept invention by means of an innovative computer program that is based on the cognitive-developmental theory of psychologist Jean Piaget. Drescher uses Piaget's theory as a source of inspiration for the design of an artificial cognitive system called the schema mechanism, and then uses the system to elaborate and test Piaget's theory. The approach is original enough that readers need not have extensive knowledge of artificial intelligence, and a chapter summarizing Piaget assists readers who lack a background in developmental psychology. The schema mechanism learns from its experiences, expressing discoveries in its existing representational vocabulary, and extending that vocabulary with new concepts. A novel empirical learning technique, marginal attribution, can find results of an action that are obscure because each occurs rarely in general, although reliably under certain conditions. Drescher shows that several early milestones in the Piagetian infant's invention of the concept of persistent object can be replicated by the schema mechanism.”
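The "marginal attribution" technique in the blurb – finding a result of an action that is rare in general but reliable under certain conditions – amounts to comparing conditional frequencies. A minimal sketch; the event log and the variables are invented, not Drescher's actual schema-mechanism code:

```python
# Each entry: (condition_held, action_taken, result_occurred)
event_log = [
    (True,  True,  True),
    (True,  True,  True),
    (True,  True,  True),
    (False, True,  False),
    (False, True,  False),
    (False, True,  False),
    (False, True,  True),   # noise: result without the condition
    (False, True,  False),
]

def rate(events):
    events = list(events)
    return sum(1 for e in events if e) / len(events)

# How often the result follows the action overall...
overall = rate(r for _, a, r in event_log if a)
# ...versus when the condition also holds.
conditional = rate(r for c, a, r in event_log if a and c)

print(round(overall, 2))      # 0.5 -- the result looks unreliable in general
print(round(conditional, 2))  # 1.0 -- but is reliable under the condition
```

The gap between the two rates is what flags the condition as relevant and prompts the mechanism to spin off a more specific schema.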

“During the implementation of a theory as a computational model, the developer has to make a number of decisions.” (Miłkowski, p. 189)

“However, AI systems are capable of all this and more, so we ought to be more careful: if there is no mathematical proof that something cannot be done, any verdicts are mere speculation.” p. 204.

(By “this” Miłkowski refers to computers beating humans in chess and Jeopardy!, being capable of theorem proving, speech recognition and generation, natural language translation, etc.)

Regarding mathematical proof, it is not that simple. Mathematics is an intelligent adaptive system that develops continuously. If we lack mathematical tools within present-state mathematics, we can construct them in the next step. The possibility of human-level AI will most likely be demonstrated constructively – by the development of human-level artifactual intelligent devices – and not so much via mathematical proof that such devices are possible. That conclusion is based on the observation that human learning is an open-ended inductive and abductive process.


DYNAMIC SYSTEMS

“It is sometimes claimed that dynamic explanations of cognition are distinctly different from computational ones (van Gelder 1995, Beer 2000). It is, of course, a truism that physical computers are dynamic systems: all physical entities can be described as dynamic systems that unfold in time. What proponents of dynamicism claim, however, is something stronger:

van Gelder, T., 1992, “What might cognition be if not computation?” Indiana University, Cognitive Science Research Report, 75.

“Computational neuroscientists not only employ computer models and simulations in studying brain functions. They also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the ‘brain-as-computer’ assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model. The term ‘analog-model’ does not mean continuous, non-discrete or non-digital. It means that the functional performance of the system simulates mathematical relations in some other system, between what is being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing (often cognitive) task.” (Shagrir)

Miłkowski, Marcin. 2007. “Is computationalism trivial?” in Computation, Information, Cognition – The Nexus and the Liminal, ed. Gordana Dodig Crnkovic and Susan Stuart, 236-246. Newcastle: Cambridge Scholars Press.

Shagrir, Oron. 2010d. “Brains as analog-model computers.” Studies In History and Philosophy of Science Part A 41 (3): 271-279.

Piccinini, Gualtiero, and Oron Shagrir. 2014. “Foundations of Computational Neuroscience.” Current Opinion in Neurobiology 25: 25–30.

“We discuss foundational issues such as what we mean by ‘computation’ and ‘information processing’ in nervous systems; whether computation and information processing are matters of objective fact or of conventional, observer-dependent description; and how computational descriptions and explanations are related to other levels of analysis and organization.”

“Thus, unlike computational scientists in most other disciplines, computational neuroscientists often assume that nervous systems (in addition to the scientists who study them) perform computations and process information.”

“A rock that is sitting still, for example, does not compute the identity function that describes some of its behavior (or lack thereof).”

This might not be as evident as it first looks. Even though a rock is sitting still as a whole relative to us, it is made of molecules and atoms that are never still. Those atoms and molecules are, moreover, immersed in a quantum-mechanical vacuum that is never still either, constantly popping up virtual particles only to absorb them again in the next instant. All those processes, vibrations, creations and annihilations of virtual particles, and movements in space of crystal lattices, atoms and molecules, are the result of information exchanges within the informational structures (data structures) that are what we call the matter-energy constituting the stone. On several levels of organization, from quantum mechanics up to the crystal lattice, the stone that looks completely still is actually full of movement. And that movement can be seen as a manifestation of computing nature.

Causation is transfer of information (Collier 1999), and computation is causation at work.

Page 17: Abstract - MDHgdc/work/ARTICLES/2014/2-AISB 2014...  · Web viewReality construction through info-computation. Gordana Dodig-Crnkovic, Mälardalen University Sweden. gordana.dodig-crnkovic@mdh.se.

Collier, John. 1999. “Causation is the transfer of information.” In Causation, natural laws and explanation, ed. Howard Sankey, 279–331. Dordrecht: Kluwer.

Thus, in the framework of computing nature, not only neurons and whole brains compute, but the rest of nature also computes, at a variety of levels of organization.

“As to information, there is also a precise and powerful mathematical theory that defines information as the reduction of uncertainty about the state of a system. The same theory can be used to quantify the amount of information that can be transmitted over a communication channel [10]. Again, the mathematical theory of information does not tell us whether and how the brain processes information, and in what sense. So establishing the foundations of computational neuroscience requires more work.” (Piccinini and Shagrir) [THIS CAN BE USED AS AN ARGUMENT AGAINST TONONI]
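The “reduction of uncertainty” definition quoted above can be made concrete in a few lines. This is a minimal sketch of my own (illustrative numbers only), not a claim about how brains measure information: the information an observation carries is the drop in Shannon entropy of the agent's distribution over system states.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Prior uncertainty about a 4-state system: all states equally likely.
prior = [0.25, 0.25, 0.25, 0.25]

# After an observation that rules out two of the states.
posterior = [0.5, 0.5, 0.0, 0.0]

# Information gained = reduction of uncertainty about the system's state.
info_gained = entropy(prior) - entropy(posterior)
print(info_gained)  # 1.0 bit: the observation halved the state space
```

The same quantity, applied to the joint distribution of a channel's input and output, gives the channel's transmitted information mentioned in the quote.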

Craver CF: Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. New York: Oxford University Press; 2007.

Piccinini and Shagrir: “Levels of organization and levels of analysis

Nervous systems as well as artificial computational systems have many levels of mechanistic organization [45,46]. They contain large systems like the brain and the cerebellum, which decompose into subsystems like the cortex and the brainstem, which decompose into areas and nuclei, which in turn decompose into maps, columns, networks, circuits, neurons, and subneuronal structures. Computational neuroscience studies neural systems at all of these mechanistic levels, and then it attempts to discover how the properties exhibited by the components of a system at one level, when they are suitably organized into a larger system, give rise to the properties exhibited by that larger system. If this process of linking explanations at different mechanistic levels is carried out, the hoped result is an integrated, multi-level explanation of neural activity.

But computational neuroscience also involves levels of analysis. First, there is the level of what a neural sub-system does and why. Does it see or does it hear? Does it control the arm or the head? And what function does it compute in order to perform this function? Answering these what and why questions leads to what Marr called a ‘computational theory’ of the system [47]. The theory specifies the function computed and why it is computed, without saying what representations and procedures are used in computing it. Specifying the representations and procedures is the job of the ‘algorithmic theory’. Finally, an ‘implementation theory’ specifies the mechanisms by which the representations and algorithms are implemented [47].”

47. Marr DC: Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: Freeman; 1982 (reissued MIT Press, 2010). Articulates the distinction between the computational, algorithmic, and implementation levels of analysis. The 2010 edition includes a foreword by Shimon Ullman and an afterword by Tomaso Poggio that discuss the relevance of Marr's approach today.

FORTHCOMING 2014:
Kaplan D (Ed): Integrating Mind and Brain Science: Mechanistic Perspectives and Beyond. Oxford University Press; 2014.
Peebles D, Cooper RP (Eds): TopiCS (Topics in Cognitive Science): The Role of Process-Level Theories in Contemporary Cognitive Science. 2013 [forthcoming].

Shea N: Naturalising representational content. Philos Compass 2013, 8:496-509. Argues for pluralism about representational content and that data from cognitive neuroscience should play a greater role in theories of content.

“It is a familiar point that many special sciences deal in properties that are not reducible to basic physics. What makes them naturalistic is that their properties have substantive explanatory connections to properties in other sciences; and that they supervene on the physical. Non-reductive physicalism is compatible with the existence of ceteris paribus bridge laws or robust (but not exceptionless) generalisations connecting properties from different levels or schemes of explanation. Some kinds of representational content may be like that too.”

They do not only “supervene”; they are generated (under certain constraints) by physical processes from physical components (the substrate).

Shea: Naturalizing content:

“The central insight derives from the invention of mechanical computers, which gave us the idea that mental representations are physical particulars that are realized in the brains (and maybe bodies) of thinkers and interact causally in virtue of non-semantic properties (e.g. ‘‘form’’), in ways that are faithful to their semantic properties.

Given pluralism about content, we should look at representational explanations of behaviour in a wide variety of different domains, in order to uncover a variety of kinds of representational content. Where an information processing theory is successful at predicting behaviour and supported by evidence about internal states, that raises a prima facie case that the representations relied on are real, and have the contents relied on in the theory. That is not to say that philosophers have to take scientists’ theories on trust.”

CAUSALITY AND INTRINSIC COMPUTATION

“Computational descriptions of physical systems need not be vacuous. We have seen that there is a well-motivated formalism, that of combinatorial state automata, and an associated account of implementation, such that the automata in question are implemented approximately when we would expect them to be: when the causal organization of a physical system mirrors the formal organization of an automaton. In this way, we establish a bridge between the formal automata of computation theory and the physical systems of everyday life. We also open the way to a computational foundation for the theory of mind.” (Chalmers)

Chalmers, D. J., 1996, “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 309–33.

Sprevak, M., 2012, “Three challenges to Chalmers on computational implementation,” Journal of Cognitive Science, 13(2): 107–143.

What is at stake in a theory of implementation?

GDC: The problem is that this is exactly backwards. It is not so instructive to study how the brain implements computation (how do we know that 1+1=2, top-down), but rather how the intrinsic information processing that is evidently going on in the brain can be interpreted as computation. What are the characteristics of the new kind of computation that the information processes in the brain constitute?

In that sense of bottom-up intrinsic computation, Chalmers' characterization holds: “A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation.” (Chalmers 2012, p. 326)

Chalmers, D. J. (2012) ‘A computational foundation for the study of cognition’. Journal of Cognitive Science 12: 323–357.

This position is called the Standard Position (SP) by Sprevak (2012).

It is applicable to intrinsic computation (bottom-up, natural), but not to designed conventional computation (top-down), as this “mirroring” would there be a very complex process of interpretation, coding, decoding, and interpretation again.
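Chalmers' mirroring condition can be stated as a small executable check. The following sketch is mine, with invented state names and transition tables; it only illustrates what “the causal structure mirrors the formal structure” means for a finite system: a mapping from physical to formal states that commutes with the dynamics.

```python
# Sketch of the Standard Position: a physical system implements an automaton
# under a mapping f iff f commutes with the dynamics, i.e.
# f(phys_step(s)) == fsa_step(f(s)) for every physical state s.
# All states and transition tables below are invented toys.

def implements(phys_states, phys_step, fsa_step, mapping):
    """Does the causal structure mirror the formal structure under mapping?"""
    return all(mapping[phys_step(s)] == fsa_step(mapping[s]) for s in phys_states)

# Physical system: four micro-states cycling p0 -> p1 -> p2 -> p3 -> p0.
phys_step = {"p0": "p1", "p1": "p2", "p2": "p3", "p3": "p0"}.__getitem__

# Formal automaton: two states alternating A -> B -> A.
fsa_step = {"A": "B", "B": "A"}.__getitem__

good = {"p0": "A", "p1": "B", "p2": "A", "p3": "B"}  # groups micro-states coherently
bad = {"p0": "A", "p1": "A", "p2": "B", "p3": "B"}   # breaks the mirroring

print(implements(["p0", "p1", "p2", "p3"], phys_step, fsa_step, good))  # True
print(implements(["p0", "p1", "p2", "p3"], phys_step, fsa_step, bad))   # False
```

The point of the check is that implementation is a relation between two transition structures, not a property of the physical system alone.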


In what follows I would like to answer three of Sprevak's concerns about computationalism:

(R1) Clarity: “Ultimately, the foundations of our sciences should be clear.”

(R2) Response to triviality arguments: “(O)ur conventional understanding of the notion of computational implementation is threatened by triviality arguments.”

Compare Searle's (1992) informal triviality argument (“that a brick wall contains some pattern of physical transitions with the same structure as Microsoft Word”) with Putnam's triviality argument (“The physical transitions in the rock mirror the formal transitions: A B A B. Therefore, according to SP, the rock implements FSA M.”).

(R3) Naturalistic foundations: “The ultimate aim of cognitive science is to offer, not just any explanation of mental phenomena, but a naturalistic explanation of the mind.”

Sprevak concludes that meeting all three of the above desiderata of computational implementation is hard, and that “Chalmers' account provides the best attempt to do so, but even his proposal falls short.” Chalmers' account, I will argue, should be seen from the perspective of intrinsic, natural computation.
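The Putnam-style triviality worry of (R2) can also be made painfully concrete. The sketch below (my own; all names are illustrative) shows that if every physical state of the rock at every instant counts as distinct, a mapping that makes the rock “implement” the alternating automaton M can always be read off after the fact; the implementation claim is then free and carries no explanatory weight.

```python
# Putnam-style triviality, sketched: given ANY run of distinct physical
# states, we can define a state mapping after the fact that makes the run
# "mirror" the formal transitions A -> B -> A -> B of an automaton M.

def cook_mapping(run, formal_run):
    """Assign each observed physical state the formal state it 'should' be."""
    return dict(zip(run, formal_run))

rock_run = ["r_t0", "r_t1", "r_t2", "r_t3"]  # the rock's micro-states over time
formal_run = ["A", "B", "A", "B"]            # the FSA's state sequence

mapping = cook_mapping(rock_run, formal_run)
mirrored = [mapping[s] for s in rock_run]
print(mirrored == formal_run)  # True: the "implementation" came for free
```

Chalmers' combinatorial state automata block this move by demanding the mapping respect the system's counterfactual causal structure, not just one observed run.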

Let me summarize the distinction between intrinsic (natural, spontaneous) computation and the designed computation used in our technological devices.

In info-computationalism, which is a variety of pancomputationalism, physical nature spontaneously performs different kinds of computations (information dynamics) at different levels of organization. This intrinsic, natural computation is specific to a given physical system. The intrinsic computation(s) of a physical system can be harnessed for designed computation, such as that found in computational machinery, but that is far from all the computation that can be found in nature.

Why is natural computationalism not vacuous? For the same reason that physics is not vacuous when it claims that the entire physical universe is material. We need not enter here into the topic of ordinary versus dark matter-energy; both are considered the same kind of phenomena, namely natural phenomena that must be studied with the methods of physics.

If we applied the same logic as the critics of natural computationalism, we would demand that physicists explain where matter comes from. Where do elementary particles come from? They are simply empirical facts, for which we have enough evidence to believe that they exist. We might not know all of their properties and relationships, and we might not know all of them, but we can be pretty sure that they exist.

When physical entities exist in nature, unobserved, they are part of the Ding an sich. How do we know that they exist? We find out through interactions. And what are interactions? They are information exchanges. Epistemologically, constraints or boundary conditions are also information for a system.

So the bottom layer of the computational universe is the bottom layer of its material substrate, and this is no different from the question of physical models and the status of matter-energy in the physical world. Both are taken to be empirically justified.

Especially when it comes to agents, entities capable of acting on their own behalf, our habitual way of understanding is in terms of energy and work.

All living beings possess cognition (understood as all processes necessary for an organism to survive, both as an individual and as a part of a social group, i.e. social cognition), in different degrees, from bacteria to humans. Cognition is based on agency; it would not exist without agency. The building block of life, the living cell, is a network of networks of processes, and those processes may be understood as computation. Of course, it is not just any computation whatsoever, but exactly that ongoing biological process.


Now one might ask: what would be the point of seeing metabolic processes or growth (morphogenesis) as computation? The answer is that we try to connect cell processes to the conceptual apparatus of concurrent computational models and information exchange that has been developed within the field of computation, not within biology; we talk about “executable cell biology” (Jasmin Fisher and Thomas A. Henzinger, Executable cell biology). Undoubtedly, biology and mathematics are inextricably connected to this bioinformatic and biocomputational view, but the info-computational approach gives something substantial that no other approach gives: the possibility of studying the real-time dynamics of a system. The processes of life, and thus of mind (which earlier was rendered as psyche), are absolutely critically time-dependent.

At this moment, concurrent computational models are the field that can help us understand real-time, interactive, concurrent, networked behavior in the complex systems of biology and their physical structures (morphology).
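As a minimal illustration of what an “executable” model of cell processes looks like, here is a toy synchronous Boolean network, one of the simplest formalisms used in executable cell biology. The three-gene circuit is invented for illustration, not taken from any real pathway; what matters is that the model runs, exhibiting real-time dynamics rather than merely describing them.

```python
# A toy executable-biology model: a synchronous Boolean network in which
# each gene's next state is a logical function of the current states.
# The circuit (three hypothetical genes A, B, C) is invented.

def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,       # C represses A
        "B": a,           # A activates B
        "C": a and b,     # A and B jointly activate C
    }

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)

for t, s in enumerate(trajectory):
    print(t, s)

# The circuit settles into a limit cycle: after five steps it returns to
# its initial state.
print(trajectory[5] == trajectory[0])  # True
```

Even this trivial model has a dynamical property (a period-five oscillation) that is read off by executing it, which is the pragmatic point made above.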

That is the pragmatic reason why it is well justified to use the conceptual and practical tools of info-computation in order to study living beings. Of course, in nature there are no labels saying: this process is computation. We can see a process as computation, conceptualize it in terms of computation, model it as computation, and call any process in the physical world computation. One type of physical process is one type of intrinsic (physical) computation. In doing so we expand our understanding of both physical processes and computation.

Studying biological systems at different levels of organization as layered computational architectures gives us powerful conceptual and technological tools for the study of real-world systems.

Given the above argument for the info-computational modeling of nature, and the argument that every living organism possesses some degree of cognition, one can ask: why should we not make the same move and ascribe mentality to the whole of the universe (a position called panpsychism)?

The simple answer is: in the case of panpsychism we have no good reason to. Unlike computational models of physical processes, we have no good “psychical models”. In fact, only naturalist accounts of consciousness provide models; others prefer to see consciousness as a “mystery”. From the naturalist, knowledge-generation point of view, trying to understand everything as psyche gets it backwards: we do not know what to do after the very first move, other than to say that it is “mysterious”. On the contrary, if we try to understand psyche, or better mind and consciousness, as physical info-computational processes in the nervous system of a cognizing agent, we immediately have an arsenal of modeling tools with which to address the problem and to successively and systematically learn more about it.

That is the main reason why panpsychism is not a good scientific theory. Instead of opening all doors for investigation, it declares that consciousness permeates the entire universe, and that's it. One can always generalize concepts if doing so leads to better understanding and enables further modeling. But there is no good reason to apply a homeopathic approach to the idea of consciousness, diluting it to concentrations close to zero, if that procedure gives us nothing in terms of understanding the mechanisms of mind, and particularly the most complex mind, our own human one. Moreover, as a theory, panpsychism belongs to the medieval tradition: that which is to be explained is postulated. I wonder why anyone would ever become unconscious in a conscious universe. What would be the difference between human consciousness and the “consciousness” of a bacterium, or even the “consciousness” of the vacuum?

Up to now I have explicated my info-computationalist position relative to natural computationalism, pancomputationalism, computing nature, and computationalism (with respect to the human mind, as presented by Miłkowski), as well as why I do not see panpsychism as a fruitful approach and coherent theoretical construal (construal: the act of construing or interpreting; an interpretation).


EXAMPLES OF ENGINEERING CONSTRUCTION OF THE FUNCTIONING BRAIN

EXAMPLE 1

Eliasmith, Chris (2013) How to Build a Brain. Oxford University Press.

Eliasmith, C., Stewart T. C., Choo X., Bekolay T., DeWolf T., Tang Y., Rasmussen, D. (2012). A large-scale model of the functioning brain. Science. Vol. 338 no. 6111 pp. 1202-1205. DOI: 10.1126/science.1225266.

How to build a brain

The book How to Build a Brain came out from Oxford University Press in May 2013. It exploits the Neural Engineering Framework (NEF) to develop the Semantic Pointer Architecture (SPA) for cognitive modeling, and uses the Nengo software to explain and demonstrate many of the central concepts of these frameworks. The remainder of this section introduces the SPA.


The Semantic Pointer Architecture (SPA)

Briefly, the semantic pointer hypothesis states:

“Higher-level cognitive functions in biological systems are made possible by semantic pointers. Semantic pointers are neural representations that carry partial semantic content and are composable into the representational structures necessary to support complex cognition.”

The term 'semantic pointer' was chosen because the representations in the architecture are like 'pointers' in computer science (insofar as they can be 'dereferenced' to access large amounts of information which they do not directly carry). However, they are 'semantic' (unlike pointers in computer science) because these representations capture relations in a semantic vector space in virtue of their distances to one another, as typically envisaged by connectionists.
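The binding of semantic pointers can be sketched with circular convolution, the vector operation of Plate's holographic reduced representations on which the SPA builds. The sketch below is mine; the dimensionality and the random vectors are illustrative toys, not Spaun's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # toy dimensionality for the vector space

def unit(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: binds two vectors into one of the same size."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(c, a):
    """Approximate inverse of bind: circular correlation with a."""
    return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(a).conj(), n=D)

digit_one = unit(rng.standard_normal(D))   # hypothetical pointer for "ONE"
position_1 = unit(rng.standard_normal(D))  # hypothetical pointer for "position 1"

trace = bind(digit_one, position_1)        # encodes "ONE in position 1"
recovered = unbind(trace, position_1)      # dereference: what is in position 1?

# The recovered vector resembles digit_one far more than chance (an unrelated
# random vector would have similarity near 0); a clean-up memory then snaps
# the noisy result to the nearest stored pointer.
print(float(np.dot(unit(recovered), digit_one)))
```

This is the sense in which a bound structure can be 'dereferenced' without the trace directly carrying all of the bound content, and why distances in the vector space carry the semantics.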

The four main topics in the book that are used to describe the architecture are semantics, syntax, control, and learning and memory. The discussion of semantics considers how semantic pointers are generated from information that impinges on the senses, reproducing details of the spiking tuning curves in the visual system. Syntax is captured by demonstrating how very large structures can be encoded by binding semantic pointers to one another. The section on control considers the role of the basal ganglia and other structures in routing information throughout the brain to control cognitive tasks. The section on learning and memory describes how the SPA includes adaptability (despite this not being a focus of the NEF), showing how detailed spike timing during learning can be incorporated into the basal ganglia model using a biologically plausible STDP-like rule.


An example

In chapter 7 of the book, the SPA Unified Network (Spaun) model is presented, demonstrating how a wide variety (about 8) of cognitive and non-cognitive tasks can be integrated in a single large-scale, spiking-neuron model. Spaun switches tasks and provides all responses without any manual change of parameters by a programmer. Essentially, it is a fixed model that integrates perception, cognition, and action across several different tasks. Spaun is the most complex example of an SPA to date (see figure 1).

Figure 1: A high-level depiction of the Spaun model, with all of the central features of a general Semantic Pointer Architecture. Visual and motor hierarchies provide semantics via connections to natural input (images) and output (a nonlinear dynamical arm model). Central information manipulation depends on syntactic structure for several tasks. Control is largely captured by the basal ganglia action selection elements. Memory and learning take place in both basal ganglia and cortex. The model itself consists of just under a million spiking neurons.

Let us consider one run of one cognitive task. This task is analogous to the Raven's Matrices task, which requires people to figure out a pattern in the input and apply that pattern to new input to produce novel output. For instance, given the input "[1] [11] [111] [2] [22] [222] [3] [33] ?", the expected answer is 333. The input to the model first indicates which task it is going to perform, by presenting an 'A' followed by the task number (e.g. 'A 7' for this task). Then the model is shown a series of digits and brackets, and it has to draw the correct answer with its arm. The processing for such a task goes something like this:

• The image impacts the visual system, and the neurons transform the raw image input (784 dimensions) into a lower-dimensional (50 dimensions) semantic pointer that preserves central visual features. This is done using a visual hierarchy model (i.e., V1-V2-V4-IT) that implements a learned statistical compression.

• The visual semantic pointer is mapped to a conceptual semantic pointer by an associative memory and stored in a working memory. Storing semantic pointers is performed by binding them to other semantic pointers that encode the order in which the information is presented (e.g. to distinguish 1 2 from 2 1).

• In this task, the semantic pointers generated by consecutive sets of inputs are compared with each other to infer what relationship exists between them (e.g. between 1 and 11, or between 22 and 222).

• The shared transformation across all the input is determined by averaging the previously inferred relationships across all sets of inputs (so the inferred relationship between 1 and 11 is averaged with that between 22 and 222, etc.)

• When the '?' is encountered, Spaun determines its answer by taking the average relationship and applying it to the last input (i.e., 33) to generate an internal representation of the answer.

• This representation is then used to drive the motor system to write out the correct answer (see figure 2b), by sending the relevant semantic pointers to the motor system.


• The motor system 'dereferences' the semantic pointer by going down the motor hierarchy to generate appropriate control signals for a high-degree-of-freedom physical arm model.

Figure 2: Example input and output from Spaun. a) Handwritten numbers used as input. b) Numbers drawn by Spaun using its arm.

All of the control-like steps (e.g. 'compared with', 'inferred', and routing information through the system) are implemented by a biologically plausible basal ganglia model. This is one example of the 8 different tasks that Spaun is able to perform.
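The induction step in this task (infer the relationship within each pair, average the inferred relationships, apply the average to the last input) can be caricatured in a few lines. The 2-d encoding below is a deliberate oversimplification of my own, standing in for a semantic pointer, just to exhibit the logic that Spaun implements in spiking neurons.

```python
import numpy as np

# Toy 2-d "semantic pointer": (which digit, how many repetitions).
def encode(digit, reps):
    return np.array([digit, reps], dtype=float)

# The input "[1] [11] [111] [2] [22] [222] [3] [33] ?" as encoded cells.
cells = [encode(1, 1), encode(1, 2), encode(1, 3),
         encode(2, 1), encode(2, 2), encode(2, 3),
         encode(3, 1), encode(3, 2)]

# Infer the relationship within each complete row, then average them.
row_deltas = [cells[2] - cells[1], cells[5] - cells[4]]
transform = np.mean(row_deltas, axis=0)   # [0, 1]: "add one repetition"

# Apply the averaged transformation to the last input ("33").
answer = cells[-1] + transform
digit, reps = int(answer[0]), int(answer[1])
print(str(digit) * reps)  # 333
```

In Spaun itself the items are high-dimensional semantic pointers, the "relationship" is computed by vector binding rather than subtraction, and the result drives the motor hierarchy instead of a print statement; the control flow, however, is the same.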

EXAMPLE 2

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System

Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay (2014) Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System, Information 2014, 5, 28-100.

