Please respond to each classmate’s response separately, in at least 250 words per question. Answers must be in APA format and include at least one reference each. The original questions are posted below for reference only; you only need to respond to the classmate responses.
Original Question 1:
Argue that the Turing Test is a strong and valid test for human-like intelligence in machines. Propose a single modification that would provide the greatest improvement to the test.
Classmate 1 Response:
Since the creation of robotics and artificial intelligence, there have always been individuals who have warned about what these technologies might do to the human race as a whole. The Turing Test was devised by Alan Turing to determine whether a machine is capable of thinking in the manner that humans think: an interrogator converses with two hidden parties and must then decide which of the two is the computer and which is the human.
Because the test relies solely on the interrogator's judgment, errors can occur in the decision-making process. Artificial intelligence, at its core, is a series of algorithms that generate responses based on the input they receive. Some machines have been able to fool the interrogator through these algorithms, and some research has shown that interrogators cannot identify the computer with any reliable degree of accuracy (Walsh, 2016).
French (2012) suggests that it may be time to move past the Turing Test and consider other methods of evaluating artificial intelligence. Computer programs are capable of far more advanced behavior than when the Turing Test was first proposed, and new forms of assessment are needed to keep pace.
One modification I would make to the Turing Test, to help the interrogator better distinguish computer from human, would be a requirement that the interrogator change the subject mid-conversation and observe how each party responds. If a party can follow the change of subject naturally, that would demonstrate an ability to adapt quickly to new situations.
French, R. M. (2012). Moving beyond the Turing Test. Communications of the ACM, 55(12), 74-77.
Walsh, T. (2016). Turing’s red flag. Communications of the ACM, 59(7), 34-37.
Original Question 2:
Classmate Response 2:
In the case of testing artificial intelligence against the Wechsler Preschool and Primary Scale of Intelligence, Third Edition, I would say that the testing is completely accurate and valid for what was being discussed. In Ohlsson et al.'s (2014) study, many of the questions and the scoring for the artificial intelligence were screened at the lowest level possible, and the AI still did not score as high as an average four-year-old did on this exam.
The study took the exact questions that would be asked of a typical test-taker and input them into different computer programs to get results. I will admit that I do not think entering the questions into Google, DuckDuckGo, and Bing was the most appropriate way to get a valid look at artificial intelligence, which is how the background of the original study started, given that the internet can return information from anywhere and the answers are not necessarily the best ones, just the most-used websites matching those keywords. However, as the study continued, it did show promise in identifying artificial intelligence that stood a chance of actually passing the test. In some areas the computer answered quite well, but in other aspects it could not seem to understand what was being asked of it.
Ohlsson, S., Sloan, R. H., Turan, G., & Urasky, A. (2014). Measuring artificial intelligence systems' performance on a verbal IQ test for young children. Retrieved from https://arxiv.org/ftp/arxiv/papers/1509/1509.03390…