Testing the Turing Test in Late 2017

Sidharth Viswanathan

Abstract: In 1950, Alan Turing famously posed the question "Can machines think?", where the meaning of the words 'machine' and 'think' is itself part of the problem. This paper aims to test the Turing Test in 2017, nearly seven decades after Turing proposed the question. There has been amazing progress in machine learning, based largely on processing hardware, with GPUs a major driver of this recent progress. Can a computer and a human both survive the Turing Test today? Later, this test can also be extended to validating day-to-day applications, such as checking the versatility of chatbots.

This research helps to trace the progress of hardware and software improvements from the earlier tests to 2017, that is, in verifying the hardware of the machine on which the test is run. Also, when the Turing Test is proven wrong in certain situations, it can be concluded that there is a flaw in the system design or a misconception in the ideas behind the system. Additionally, architectural defects in human-computer interaction are uncovered.


Overview of Today's Hardware:

There is a remarkable convergence of trends in applications, machine learning and hardware, which increases the opportunity for major hardware and software changes. Given the progress machine learning has made within seven decades, can the Turing Test now be passed? This paper also aims to find out whether a human can be defeated by a computer.

Overview of Machine Learning Hardware:

Neural network research has been controversial, going through a typical hype curve. The first neural network algorithm developed was a brain-inspired one, but unfortunately its capabilities were very limited: the perceptron was only capable of linear classification, though the multi-layer perceptron was later shown to be capable of non-linear classification. Towards the end of the 1980s and the beginning of the 1990s, such neural networks became considerably more efficient, even leading to hardware neural network accelerators such as the ETANN from Intel. However, at the time, hardware neural networks had three limitations: (1) the application scope of neural networks was fairly restricted; (2) the clock frequency of processors was increasing fast enough that an accelerator could be outperformed by a software neural network run on a processor after a few technology generations; (3) competitive machine-learning algorithms emerged, especially Support Vector Machines (SVMs). Moreover, Cybenko's theorem [2], which stipulates that a neural network with a single hidden layer can approximate any continuous function to arbitrary precision, suggested that deeper and larger neural networks would bring diminishing returns. The combination of these factors created the conditions for the temporary demise of neural networks.
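To make the linear versus non-linear distinction concrete, here is a minimal Python sketch (the hand-set weights and the random-search procedure are illustrative assumptions, not taken from the cited work): a single-layer perceptron can never reproduce XOR, while a two-layer perceptron with hand-chosen weights computes it exactly.

```python
import numpy as np

def step(z):
    """Threshold activation: 1 where z > 0, else 0."""
    return (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = np.array([0, 1, 1, 0])

# A single-layer perceptron draws one line through the plane.
# Random search over many candidate lines never reproduces XOR,
# because XOR is not linearly separable.
rng = np.random.default_rng(0)
hits = any(
    np.array_equal(step(X @ rng.uniform(-5, 5, 2) + rng.uniform(-5, 5)), xor)
    for _ in range(10_000)
)
print("single-layer perceptron reproduces XOR:", hits)   # False

# A two-layer perceptron with hand-set weights computes XOR exactly:
# hidden unit 0 fires on OR, hidden unit 1 fires on AND, and the
# output fires on (OR and not AND), which is XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # unit 0: OR threshold; unit 1: AND threshold
W2 = np.array([1.0, -2.0])    # output: OR minus twice AND
b2 = -0.5
print("two-layer output:", step(step(X @ W1 + b1) @ W2 + b2))  # [0 1 1 0]
```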

Still, researchers such as Yoshua Bengio [4], Geoffrey Hinton [6] and Yann LeCun [3] kept pushing the notion of neural networks in the community, and around 2006, neural network models with large and wide layers (at the time also combined with autoencoders), the so-called Deep Neural Networks (DNNs), were shown to achieve competitive results on some applications with respect to state-of-the-art machine-learning techniques.

As GPUs started to allow the training of larger neural networks on larger training sets, the performance of these deep neural networks kept increasing, and they have now consistently been shown to achieve state-of-the-art performance on a broad range of applications.

Hardware Using Machine Learning [1]:

There is a remarkable convergence of trends in applications, machine learning and hardware, which creates opportunities for hardware machine-learning acceleration. Machine learning has become ubiquitous in a broad range of cloud services and embedded devices. Simultaneously, as mentioned in the previous section, deep neural networks have become the state-of-the-art machine-learning algorithms. And roughly at the same time, technology constraints have started to progressively initiate a shift towards heterogeneous architectures and the notion of hardware accelerators.

Working of a Digital Computer Then and Now:

The idea behind digital computers may be explained [7] by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job. He also has an unlimited supply of paper on which he does his calculations. He may also do his multiplications and additions on a "desk machine," but this is not important.

If we use the above explanation as a definition we shall be in danger of circularity of argument. We avoid this by giving an outline of the means by which the desired effect is achieved. A digital computer can usually be regarded as consisting of three parts: (i) Store. (ii) Executive unit. (iii) Control. The store is a store of information, and corresponds to the human computer's paper, whether this is the paper on which he does his calculations or that on which his book of rules is printed.

In so far as the human computer does calculations in his head, a part of the store will correspond to his memory. The executive unit is the part which carries out the various individual operations involved in a calculation. What these individual operations are will vary from machine to machine. Usually fairly lengthy operations can be done, such as "Multiply 3540675445 by 7076345687," but in some machines only very simple ones such as "Write down 0" are possible. We have mentioned that the "book of rules" supplied to the computer is replaced in the machine by a part of the store. It is then called the "table of instructions." It is the duty of the control to see that these instructions are obeyed correctly and in the right order. The control is so constructed that this necessarily happens.

The information in the store is usually broken up into packets of moderately small size. In one machine, for instance, a packet might consist of ten decimal digits. Numbers are assigned to the parts of the store in which the various packets of information are stored, in some systematic manner. A typical instruction might say, "Add the number stored in position 6809 to that in position 4302 and put the result back into the latter storage position." Needless to say, it would not occur in the machine expressed in English. It would more likely be coded in a form such as 6809430217.

Here 17 says which of various possible operations is to be performed on the two numbers. In this case the operation is that described above, viz., "Add the number…" It will be noticed that the instruction takes up 10 digits and so forms one packet of information, very conveniently. The control will normally take the instructions to be obeyed in the order of the positions in which they are stored, but occasionally an instruction such as "Now obey the instruction stored in position 5606, and continue from there" may be encountered, or again "If position 4505 contains 0, obey next the instruction stored in 6707, otherwise continue straight on."

Instructions of these latter types are very important because they make it possible for a sequence of operations to be repeated over and over again until some condition is fulfilled, but in doing so to obey, not fresh instructions on each repetition, but the same ones over and over again. To take a domestic analogy: suppose Mother wants Tommy to call at the cobbler's every morning on his way to school to see if her shoes are done. She can ask him afresh every morning. Alternatively, she can stick up a notice once and for all in the hall, which he will see when he leaves for school and which tells him to call for the shoes, and also to destroy the notice when he comes back if he has the shoes with him. The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely. The book of rules which we have described our human computer as using is of course a convenient fiction. Actual human computers really remember what they have got to do. If one wants to make a machine mimic the behaviour of the human computer in some complex operation, one has to ask him how it is done, and then translate the answer into the form of an instruction table.
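To make this three-part scheme concrete, here is a minimal Python sketch of a store, an executive unit and a control. The ten-digit packet format and the add instruction 6809430217 come directly from the text above; the opcodes 20 and 21 for the two jump instructions, and the sample program, are assumptions invented for this sketch.

```python
# Minimal sketch of Turing's store / executive unit / control.
# Opcode 17 ("add a into b") matches the packet 6809430217 quoted above;
# opcodes 20 (jump) and 21 (conditional jump) are invented for this sketch.

def run(store, pc):
    while pc in store:
        packet = f"{store[pc]:010d}"                  # one ten-digit packet
        a, b, op = int(packet[:4]), int(packet[4:8]), int(packet[8:])
        if op == 17:      # "Add the number in position a to that in position b"
            store[b] = store[a] + store[b]
            pc += 1
        elif op == 20:    # "Now obey the instruction stored in position a"
            pc = a
        elif op == 21:    # "If position a contains 0, obey next the instruction in b"
            pc = b if store[a] == 0 else pc + 1
        else:             # unknown operation: the control halts
            break
    return store

# Sample program (invented): one add, then a conditional jump that is not taken.
store = {
    6809: 5, 4302: 7, 4505: 1,      # data packets
    100: 6809430217,                # add position 6809 into position 4302
    101: 4505010321,                # if position 4505 contains 0, jump to 103
}
print(run(store, pc=100)[4302])     # 5 + 7 = 12
```

The control simply walks the store in order of position, exactly as the text describes, until a jump instruction redirects it or no instruction remains.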

Constructing instruction tables is usually described as "programming." To "programme a machine to carry out the operation A" means to put the appropriate instruction table into the machine so that it will do A. An interesting variant on the idea of a digital computer is a "digital computer with a random element."

These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, "Throw the die and put the resulting number into store 1000." Sometimes such a machine is described as having free will (though I would not use this phrase myself). It is not normally possible to determine from observing a machine whether it has a random element, for a similar effect can be produced by such devices as making the choices depend on the digits of the decimal for π. Most actual digital computers have only a finite store. There is no theoretical difficulty in the idea of a computer with an unlimited store. Of course only a finite part can have been used at any one time.
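The point about the random element can also be sketched in code (the function names and the digit-folding rule below are assumptions invented for illustration): one instruction uses a genuine random source, the other derives its "throws" from fixed digits of π, and from the contents of store 1000 alone the two are hard to tell apart.

```python
import itertools
import random

pi_digits = itertools.cycle("14159265358979323846")  # digits of pi after "3."

store = {}

def throw_die(store):
    # A genuine random element: "Throw the die and put the
    # resulting number into store 1000."
    store[1000] = random.randint(1, 6)

def pseudo_die(store):
    # No random element at all: the "throw" is read off successive
    # digits of pi, folded into the range 1..6.
    store[1000] = int(next(pi_digits)) % 6 + 1

for roll in (throw_die, pseudo_die):
    outcomes = []
    for _ in range(12):
        roll(store)
        outcomes.append(store[1000])
    print(f"{roll.__name__}: {outcomes}")
```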

Likewise only a finite amount can have been constructed, but we can imagine more and more being added as required. Such computers have special theoretical interest and will be called infinite capacity computers. The idea of a digital computer is an old one. Charles Babbage, Lucasian Professor of Mathematics at Cambridge from 1828 to 1839, planned such a machine, called the Analytical Engine, but it was never completed. Although Babbage had all the essential ideas, his machine was not at that time such a very attractive prospect. The speed which would have been available would be definitely faster than a human computer, but something like 100 times slower than the Manchester machine, itself one of the slower of the modern machines. The storage was to be purely mechanical, using wheels and cards.

The fact that Babbage's Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections.

In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function.

Modern Machine:

We cannot say that the computer works differently today [8]; yet in the 21st century even the most efficient, cool and profound machines tend to become tomorrow's recycled metal. Writing, the ability to capture a symbolic representation of spoken language for long-term storage, freed information from the limits of individual memory.

Today this technology is ubiquitous in industrialised countries. Not only do books, magazines and newspapers convey written information, but so do street signs, billboards, shop signs and even graffiti; even candy wrappers are covered with writing. The constant background presence of these products of "literacy technology" does not require active attention, but the information they convey is ready for use at a glance. Silicon-based information technology today, in contrast, is far from having become part of the environment. More than 100 million personal computers have been sold, and nonetheless the computer remains largely in a world of its own.

The state of the art is perhaps analogous to the period when scribes had to know as much about making ink or baking clay as they did about writing. A computer today runs billions of instructions per second, stores billions of bits of data, and fetches and stores them in symphony. In the age of silicon, the number of transistors keeps doubling roughly every three years.

Testing the Turing Test; Human vs. Sophisticated Machine:

The key part of this paper is finding whether the hardware making use of these technologies can still survive the test, that is: "Can the human be indistinguishable from a computer that uses powerful machine learning and high-powered hardware?" Consider the kind of questions a user might ask [7]:

Q: Will X please tell me the length of his or her hair?
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

Notably, the arithmetic answer above is wrong (34957 + 70764 = 105721): the machine in this dialogue pauses like a human and slips like one.

These questions were tested a few decades ago with low-powered, 'not so efficient' systems and a human. Say the user asks the machine to perform some complex operation: the computer may respond with either a right or a wrong answer. Moreover, with machine learning the computer can recognise that a test is being run, and so it might deliberately give a wrong answer. Can a tester find out whether a computer or a human is answering these questions?

1. Calculation of the response time of the computer. (Figure: a tester asks "Can you play chess?" and receives the answer "No.") The speed of a modern computer is unmatchable by a human; a computer can compute 88% faster than a human does, so the human's performance will always be lower. The Turing Test therefore fails if the computer does not deliberately try to trick the human by applying smart machine-learning processes.
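A minimal sketch of such deliberate trickery (the function name, the pause range and the error rate are assumptions, not taken from any cited source): the machine masks its speed with a human-like pause and occasionally injects a near-miss arithmetic slip, exactly the behaviour of the 105621 answer in the dialogue above.

```python
import random
import time

def humanized_add(a, b, pause_range=(20.0, 35.0), error_rate=0.2):
    """Answer an addition question the way a human might:
    slowly, and sometimes slightly wrong."""
    time.sleep(random.uniform(*pause_range))   # mask machine speed
    answer = a + b
    if random.random() < error_rate:
        # Inject a near-miss slip in a single digit, like answering
        # 105621 when the true sum is 105721.
        answer += random.choice([-100, -10, 10, 100])
    return answer

print(humanized_add(34957, 70764))   # pauses ~30 s; usually 105721
```

Seen this way, a bare machine fails by being too good; only a machine engineered to imitate human limitations stays indistinguishable.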

If the machine today deliberately uses machine learning to trick the testers, then the Turing Test works.

2. Asking the subject to throw an object, or asking them for one of the codewords, will make the Turing Test fail in certain circumstances. A computer can do everything except make a physical motion without an external interface. This is the major drawback even of today's most sophisticated computers: no computer today can perform a physical motion on its own. So when a tester asks "Can you throw that?" and the computer answers "Yes!", it will certainly fail the Turing Test, unless both the computer and the human can reject the request by saying they cannot do it.

Inferences from Testing the Turing Test in 2017:

The Turing Test works on today's computers if and only if machine-learning algorithms are applied and the computer can learn about the human beside it and about the tester.

Machines today are far more powerful than in the 1950s, when the tests were first made; unlike then, computers have grown to outpace humans, and the Turing Test fails on this point, since a human cannot match the processing time of a modern, sophisticated multicore machine. The Turing Test passes only if machine-learning algorithms are applied alongside.

Calculation:

Finally, an algorithm for machine learning works, and a software architecture is correct, only if the Turing Test is satisfied. It can be concluded that the Turing Test can be used for verifying the correctness of the algorithm and for validating the software architecture.

Future Works:

This work might be extended to testing the Turing Test on chatbots on mobile phones, since a mobile phone is not as powerful as a modern multicore PC.

References:

1. O. Temam. Enabling Future Progress in Machine-Learning. Google.
2. G. Cybenko. (1989) Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4), 303-314.

3. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE.
4. Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle. (2007) Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems.
5. T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, O. Temam. (2014) DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
6. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. (1986) Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1.
7. A. M. Turing. (1950) Computing Machinery and Intelligence.
8. M. Weiser. (1991) The Computer for the 21st Century.
