
Talk:Artificial consciousness/real

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Ataturk (talk | contribs) at 03:52, 15 March 2004. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This is the sub-page of the Artificial Consciousness talk page where the Strong AC position is argued: that AC can be real.

Here is the article, copied from its frozen state and edited further. An attempt has also been made to state the weak AI position coherently.



Artificial Consciousness

An artificial consciousness (AC) system is an artifact capable of achieving verifiable aspects of consciousness.

Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test. Other measures may be easier. For example, recent work in measuring the consciousness of the fly has determined that it manifests aspects of attention which equate, at the neurological level, to those of a human; if attention is deemed a necessary prerequisite for consciousness, then the fly is claimed to have a lot going for it.

Schools of Thought

Broadly, there seem to be two schools of thought when it comes to artificial consciousness, and they have analogues in the weak and strong AI factions.

Weak AC

One school of thought is that artificial consciousness will never be real consciousness, but merely an approximation to it, a mimicking of something which only human beings (or maybe other sentient beings) can truly experience or manifest.

Strong AC

The other school of thought is that artificial consciousness is (or will be, should it ever be realised) real consciousness which just happens not to have arisen naturally.

The argument in favour of strong AC is essentially this: if artificial consciousness is not real consciousness because it is exhibited by a machine, then is the human being not also a machine? If there is something about a human being which is not a machine, then we are talking about the soul or a magic spark, and the weak AI argument must then be made in religious or metaphysical terms. Alternatively, if the human being is a machine, then by the Church-Turing thesis whatever it does can in principle be carried out by some other computing machine, and the possibility of strong AC must be admitted.

Human-like Artificial Consciousness

As is to be expected, the weak and strong schools of thought differ on the question of whether artificial consciousness need be human-like or whether it could be of an entirely different nature. Proponents of strong AC are more likely to hold the view that artificial consciousness need be nothing like human consciousness. Proponents of the weak view, that artificial consciousness can never be really conscious, maintain that AC, not being real, will be human-like, because human consciousness is the only real model of consciousness we are ever likely to have, and that (weak) AC will be modelled on real consciousness and tested against it.

Testing artificial consciousness

It is asserted by those who hold that AC will be human-like that one necessary ability of human-like artificial consciousness is the ability to predict external events wherever an average human could, i.e. to anticipate events in order to be ready to respond to them when they occur.
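
To make the anticipation criterion concrete, here is a minimal sketch, in Python, of how such a test might be scored: the candidate system must predict each event from the history so far, and its hit rate is compared with an assumed average-human baseline. The event sequence, the predictor interface and the 0.75 baseline are illustrative assumptions, not an established protocol.

# Hypothetical scoring of an anticipation test (illustrative only).
from typing import Callable, Sequence

def anticipation_score(predict: Callable[[Sequence[str]], str],
                       events: Sequence[str]) -> float:
    """Fraction of events the system predicted from the preceding history."""
    hits = 0
    for i in range(1, len(events)):
        if predict(events[:i]) == events[i]:
            hits += 1
    return hits / (len(events) - 1)

def passes_human_like_test(predict: Callable[[Sequence[str]], str],
                           events: Sequence[str],
                           human_baseline: float = 0.75) -> bool:
    """Assumed weak-AC criterion: match or beat an average human's hit rate."""
    return anticipation_score(predict, events) >= human_baseline

def repeat_predictor(history: Sequence[str]) -> str:
    """Trivial example predictor that assumes events repeat with period three."""
    return history[-3] if len(history) >= 3 else history[-1]

events = ["red", "green", "blue"] * 4
print(anticipation_score(repeat_predictor, events))      # 0.818... (9 of 11 anticipated)
print(passes_human_like_test(repeat_predictor, events))  # True under the assumed 0.75 baseline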

In the opinion of those holding the weak AC hypothesis, AC must be capable of achieving the same abilities as the average human, because they describe consciousness by reference to human abilities. This reasoning requires that AC be capable of achieving all verifiable aspects of the consciousness of an average human, even if it need not exhibit all of them at any particular moment. Therefore AC always remains (weak) AC, and comes only as close to (real) consciousness as our objective knowledge of consciousness allows.

Matt Stan should be allowed to repair the start of this para which I have garbled. And of course to do any other changes!

Such an artificially conscious system must then fit that anticipation into an engine that factors it in with the other drivers of the artificially intelligent creature. Without telepathy, thought cannot be known to occur anywhere other than in your own head, and yet you can know that an entity you are observing is conscious. Therefore an artificially conscious creature needs none of the intelligence borne of thought in order to be convincing, i.e. it can appear pretty dumb but still be considered conscious. When considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine disqualifies it, from a human perspective, from being deemed conscious. Artificial consciousness proponents have therefore loosened this constraint and allow that a simulation of a depiction of a conscious machine, such as the robots in Star Wars, could count as an example of artificial consciousness.

To holders of the strong AC hypothesis, an area of contention is which subset of possible aspects of consciousness must be verifiably present before a device would be deemed really conscious. One view is that all aspects of consciousness (whatever they are) must be present before a device passes. An obvious problem with that point of view, which could nevertheless be correct, is that some functioning human beings might then fail the same comprehensive tests and not be judged conscious. Another view is that AC need only be capable of achieving these aspects, so a test may fail simply because the system has not yet been developed to the necessary level. The (strong) view is that some devices may be less conscious than others, yet still be (really) conscious.
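
One hypothetical way to state this contention is to treat the aspects of consciousness as a checklist and grade a device by how many it verifiably achieves, rather than requiring all of them at once. The sketch below, in Python, contrasts the two views; the aspect names and the example results are invented purely for illustration.

# Illustrative contrast between the "all aspects" view and the graded view.
ASPECTS = ["attention", "anticipation", "self-report", "learning"]  # invented list

def verified_aspects(results: dict[str, bool]) -> set[str]:
    """Aspects the device demonstrably achieved in testing."""
    return {a for a in ASPECTS if results.get(a, False)}

def passes_all_aspects_view(results: dict[str, bool]) -> bool:
    """Strict view: every aspect must be verifiably present."""
    return verified_aspects(results) == set(ASPECTS)

def consciousness_grade(results: dict[str, bool]) -> float:
    """Graded view: a device may be less conscious than another yet still count."""
    return len(verified_aspects(results)) / len(ASPECTS)

device = {"attention": True, "anticipation": True, "self-report": False}
print(passes_all_aspects_view(device))  # False under the strict, all-aspects view
print(consciousness_grade(device))      # 0.5 under the graded view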

Studying Artificial Consciousness

As a field of study, artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.

There are two broad approaches taken in the study of AC, and these are not incompatible. One is a top-down approach: brains, particularly the human brain (currently the only device which all can agree is conscious), are analysed. The other is a bottom-up approach, in which computer scientists attempt to synthesize (elements of) consciousness.

Professor Igor Aleksander of Imperial College, London, stated in his book Impossible Minds (IC Press 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand language. This is a controversial statement, given that artificial consciousness is thought by most observers to require strong AI. Some people deny the very possibility of strong AI; whether or not they are correct, certainly no artificial intelligence of this type has yet been created.

Examples of artificial consciousness from literature and movies