Sunday, April 2, 2017

Not Man vs. Machine, but Man with Machine

In science fiction movies, one of the most stereotypical plots is that of man vs. machine, where machines, in one way or another, try to eradicate humans, surpass human intelligence, or transcend human beings altogether. While there are plenty of exceptions to this stereotype, this struggle between man and machine is the first idea that pops into mind when discussing artificial intelligence.  However, I believe that within the real world, this struggle is not something developers should be concerned about given today’s technology.

Based on the provided readings, today’s artificial intelligence seems to be more focused on analyzing large data sets than on taking over the world, and it is through this analysis that artificial intelligence differs from human intelligence.  For example, AlphaGo developed its skill by examining over 150,000 human players’ games, a feat impossible for a single human to complete. Artificial intelligence also emerges from copious amounts of repetition; for example, neural networks rely on massive amounts of trial and error to solve levels of Super Mario Bros. Through these repetitions, which often result in failure, or through playing against itself repeatedly, the computer “learns” from its mistakes and forms a sort of intelligence from its discoveries.

While we humans also learn from our mistakes, the manner in which we learn is fairly different from this analysis-driven approach. Humans learn mostly from intuition and experience; because of our more complex neural networks, we are able to pick up and understand complex ideas from limited examples. One case that illustrates this concept is that Lee Sedol became one of the best Go players in the world through ordinary practice, not through thousands or millions of played games.  Humans are also able to more easily distinguish between physical objects; one article showed how current computers grossly misidentify objects within an image, such as mistaking a road sign for a refrigerator.  In summary, I believe that this article stated the relationship between human and artificial intelligence well: “Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.”

Based on these comparisons of artificial and human intelligence, I think that programs such as AlphaGo, Deep Blue, and Watson are steps in the right direction toward producing humanlike AI.  While they are not currently proof of the viability of strong artificial intelligence, they or similar programs may become such examples in the not-so-distant future. These AIs are definitely not simply gimmicks; we can learn a great deal from their creation and testing to aid the future development of more advanced systems. Therefore, current AIs, in my opinion, are programs that, while not achieving full human intelligence, are stepping stones toward even more humanlike machines.

In testing such systems’ intelligence, the first thing that comes to mind is the Turing test, whose 1950 paper has since served as the main touchstone for discussing whether machines can truly think.  The Chinese Room, established in the 1980s, is a counterargument to the Turing test, but Turing briefly anticipates the issue within his paper. Under “The Argument from Consciousness,” Turing outlines an objection to his argument which states that until a computer can produce a sonnet through its own emotions and not through preprogramming, it cannot be considered a thinking machine. This is essentially the Chinese Room’s argument as well: that a machine cannot “think” if it constructs sentences from a set of rules instead of through intuition and thought.

Turing’s counterargument to this claim, in my opinion, is fairly weak; he states that if you do not accept the validity of his test, then the only way to prove that a machine has the ability to think is to become that particular machine. He later goes on to say that this question should be explored more deeply when technology is more advanced.

Based on this response, I believe that the Chinese Room is a strong counterargument to the Turing test, for how do we really know whether the machine is truly thinking, generating these ideas of its own volition? I think the Turing test sets strong initial guidelines for testing AI, but I continue to see strong arguments that the test does not truly prove whether a machine can “think.”

Returning to my opening statement with the previous information in mind, I have come to the conclusion that the paranoia surrounding the power and dangers of future AI should be acknowledged but should not be an active concern. Comparing the AI we have now to that seen in major sci-fi films, humanity is not even close to producing something on the level of HAL (2001: A Space Odyssey) or Ava (Ex Machina).  It has taken us quite a while just to reach this level of artificial intelligence, and because of this, I don’t think that AI will suddenly develop in massive, unexpected leaps that would give computers human-level intelligence.

While I don’t think we need to currently worry about computers destroying the human race, I believe that some of the concerns raised about AI are warranted. For example, giving complete control of weapons systems to an AI is rather frightening to me (this fear could easily be unwarranted, however). Even if humans program safeguards into such systems, giving an AI the ability to control weapons could still be dangerous.

In addition, I believe the fear of AIs replacing humans in the workplace is extremely warranted. While I understand that humans perform some menial tasks more efficiently than computers, there are plenty of tasks for which AIs would have a stronger résumé than their human counterparts, such as driving, calculating, or even running hotels.  However, I do not think that humanity needs to stress over these scenarios just yet, for AI’s current level of intelligence cannot shatter our society. This opinion, however, may change rapidly in coming years as technology continues to advance.

Along those same lines, I also do not believe that artificial intelligence can currently be considered similar to the human mind; however, AIs could potentially have their own form of mind in the future.  The movie Her explores this idea of AIs as minds, for at the end of the movie (spoilers), the AIs transcend their code and “physical containers” into a higher state of being.  While this idea is far-fetched, AI could potentially develop into a hybrid that is neither humanlike nor purely logic-based, but something entirely new. A unique mind.

On the flip side, humans cannot be called biological computers either; while humans have been labeled as computers and have the capacity to operate like glorified machines, we are more than breathing calculators. To claim that we are all simply computers would be to label us as uniform and artificial.  This label would not only disregard our creative and other cognitive abilities but would also take away our humanity, and making this assumption could threaten the rights, respect, and dignity that all humans inherently deserve. Calling us computers would take away our individuality, the spark within us that fuels creativity, and the originality that separates us from all other beings on this earth.
