AI: In The Midst Of Delusion?

Preliminary:

Before the start of this academic term and my introduction to the CIS-203 course on AI, whenever I heard the phrase "a thinking computer," I took it as a metaphor for the amazing computational speed and information output that today's and future computers would demonstrate. I assumed wrongly. Over the course of the term, I found out that the majority of AI people actually believe that one day an intelligent machine will be created: a machine that could think, feel, and even be emotional; that could show cognizance of and awareness of its surroundings; and that could further pave the road to the creation of an artificial human by human beings and, beyond that, of an artificial human by artificial humans, that is, by computers. I argue that this is a deeply mistaken perception: it has never been achieved and never will be. The people of AI who are proponents of this vision of creating an "artificial human" base their argument on the notion of implementing a large and complete knowledge-base representation of human knowledge inside the computer's brain to simulate the natural human mind. This knowledge is represented syntactically inside the brain of the computer. Experts in the field call this syntactic representation a program and a collection of data. This is where my argument starts.

Claim:

The achievements of today's computer world could lead someone to believe that what comes next is indeed unpredictable. For example, witnessing without question the development of sensor-based robots that can detect objects in their path can be deceiving. When a computer detects an object ahead of it using its mechanical sensors, can one conclude that the computer is actually sensing, feeling, understanding, or contemplating its environment? Those who reject the proponents' claim that the computer is aware of its environment usually say only that the claim is unproven; I claim outright that the computer does not know, is not aware of, and does not contemplate its environment.
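To make concrete why a robot's "sensing" can be deceiving, here is a minimal sketch in Python (my own illustration; read_distance_cm and the 50 cm threshold are invented stand-ins, not any real robot's software). The robot's "awareness" of an obstacle reduces to reading one number and comparing it with another.

# Illustration only: read_distance_cm() stands in for whatever hardware call a
# real robot would make; here it simply returns a fixed value.

def read_distance_cm() -> float:
    """Hypothetical sensor read; a real robot would query its range finder."""
    return 42.0

def obstacle_ahead(stop_distance_cm: float = 50.0) -> bool:
    """The robot's 'sensing' of an obstacle is just a numeric comparison."""
    return read_distance_cm() < stop_distance_cm

if __name__ == "__main__":
    if obstacle_ahead():
        print("stop")  # the robot halts, yet nothing in it knows why

The robot stops when the comparison comes out true, but at no point does the program contain anything one could call awareness of an obstacle.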

Premises:

My argument builds on the ideas of one of the best-known antagonists of strong AI, John R. Searle, who has supported and extended his original proposal through ongoing debates over the last twenty years. In his article "Minds, Brains, and Programs", he proposes a scenario, a very persuasive argument, that refutes the claim that artificial intelligence can actually exist. The scenario consists of an English-speaking man who is locked in a room. While in this room, the man is given a set of papers with Chinese writing on them. The man cannot read or speak even the smallest amount of Chinese, and thus has no idea what the writing means. The man is then given a second set of Chinese writing along with a set of English rules that show how to compare the first set of papers with the second. At last, the man is given a third set of papers written in Chinese, as well as instructions written in English that tell him how to correlate the symbols of the third set of Chinese writings with the first two sets. The instructions also tell the man how to respond with Chinese symbols to the symbols found in the third set. Unbeknownst to the man, these writings actually take the form of a script, a story, and a set of questions, respectively, to which the man provides answers by following the English instructions. The answers the man provides are indistinguishable from those a native speaker of Chinese would give. Yet even though he gives answers just like a native speaker's, the man has no clue about the actual meaning of the statements he has constructed. The crucial part is this: given any rulebook, he would never understand the meanings of the characters he manipulates. The man, Searle says, plays the part of the computer, and the rulebook he follows plays the part of the computer program. This is what a computer is all about: it is fed knowledge and spits out a computed result based on the knowledge it is fed.
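To make the structure of the room concrete, here is a minimal sketch in Python (my own illustration, not Searle's; the "Chinese" symbols and rulebook entries are invented placeholders). The program pairs uninterpreted symbols with other uninterpreted symbols, which is all the man in the room, and by extension the computer, ever does.

# Illustration only: the symbols and the rulebook below are made up.
# The program matches marks to marks; at no step does it need, or have,
# any grasp of what the marks mean.

RULEBOOK = {
    ("故事甲", "问题一"): "回答一",   # story A + question 1 -> answer 1
    ("故事甲", "问题二"): "回答二",   # story A + question 2 -> answer 2
}

def chinese_room(story_symbol: str, question_symbol: str) -> str:
    """Follow the rulebook: look up the pair of symbols and hand back the
    paired answer symbol. No translation or understanding is involved."""
    return RULEBOOK.get((story_symbol, question_symbol), "无可奉告")

if __name__ == "__main__":
    # From outside the room the reply may look like a fluent answer about the
    # story; inside there is only table lookup on uninterpreted marks.
    print(chinese_room("故事甲", "问题一"))

From the outside the answers may pass for a native speaker's; on the inside there is nothing but rule-following.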

Personal opinion-1:

My argument goes further, claiming that the human mind is so mystical that it has unexplainable intrinsic components that work and synchronize with the human soul, leading humans to be creative, aware, and emotionally responsive to their environment. Critics of AI, such as John Searle, explain that what differentiates the human being from the so-called "thinking machine" of AI is the human mind's capacity for intrinsic intentionality and introspection. In other words, computers do not understand the information and data they manipulate, and they have no true intent behind their actions. There is no way computers can analyze their way to breakthrough ideas and theories such as Einstein's theory of relativity or early man's mastery of fire. Computers just spit out what they were fed, deriving their solutions to problems and questions from the information they originally consumed. My inference from this premise is that computers will never display groundbreaking creativity that arises from nothing. This implies that computers will never be able to think or to know what they do.

Personal opinion-2:

The other issue is the AI community's misuse of the phrase "thinking machine," which raises a moral concern. First of all, simply by using this phrase and equating the machine with a human being, AI people are misleading the world and could manufacture a disillusioned generation, one fed an ill-advised and very questionable notion that could lead it to approach the world unwisely. When this generation finds out that what it has been told all its life does not come true, it will start to question the credibility of science and education as a whole. This could lead to an unfortunate outcome: a generation that waits for computers to do its work. And if a "thinking machine" were indeed achievable, we would be creating as many jobless people as we could. What would we gain from that? Nothing.

Compromise:

However, by using syntactic representation, AI researchers may one day, in the far future, present a simulation of the complete workings of the brain's neurons. Again, only a syntactic knowledge-base representation in the computer's brain could achieve even this. The computer would still lack any awareness of the semantics of the syntactic representation given to it by the human programmer. As a result, computers remain what they are: they take input and produce the output that the person feeding them the input intends them to produce. This leads us to the ultimate truth that computers will never be able to think. They remain a mirror image of the present state of the human mind.
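To show what such a syntactic simulation amounts to, here is a minimal sketch in Python of a single simulated neuron in the weighted-sum-and-threshold style (my own illustration; the inputs, weights, and threshold are arbitrary numbers I chose). Whatever the programmer takes the numbers to mean, the machine itself only multiplies, adds, and compares.

# A minimal simulated neuron (illustration only, arbitrary numbers).
# The "firing" of the neuron is nothing more than arithmetic on symbols
# whose meaning lives with the programmer, not with the machine.

def simulated_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs reaches the threshold,
    otherwise stay silent (return 0)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

if __name__ == "__main__":
    # The programmer may interpret these numbers as sensor readings, pixels,
    # or anything else; the simulation runs identically either way.
    print(simulated_neuron([1, 0, 1], [0.5, 0.9, 0.3], threshold=0.7))  # prints 1

Simulating a whole brain's worth of such units would still be the same kind of arithmetic, only more of it; the semantics would remain with whoever writes and reads the program.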

Conclusion:

In conclusion, AI needs an image cleanup. What the field has done for the world so far is tremendous and profound, but its goal should not be to create a machine that can think for itself, that is to say, an artificial human. The focus should stay on the one sound aim: upgrading the computer world to assist man in his struggle to know his environment better. Efficient data processing and computation should be, and should remain, the goal of future machines.

Works Cited

Searle, John R. 1980. "Minds, Brains, and Programs." In The Mind's I, ed. Douglas R. Hofstadter, pp. 353-373. New York: Basic Books, Inc.

Minsky, Marvin. "Why People Think Computers Can't." MIT, Cambridge, Massachusetts.