Sunday, April 2, 2017

Busting open Vault 7: Project 3 Individual Reflection

Link to Podcast: https://drive.google.com/a/nd.edu/file/d/0BwX2A7FfvP-ONHpRMWhsdWpTV2M/view?usp=sharing

Vault 7, as recently exposed by WikiLeaks, details programs the CIA can use to hack into almost any device imaginable and turn it into a glorified wiretap.  One such example, as I discussed in our podcast, was "Weeping Angel," which turns a Samsung Smart TV into a microphone while leading the user to believe that it has been turned off. It is hacks like these that continue to reaffirm my fears of all-encompassing government surveillance, and while I understand, as Matt mentioned in our podcast, that such a program cannot be installed remotely, the mere proof that such programs are being developed frightens me as someone who values their privacy.

Because of my fears, I believe it is right for WikiLeaks to bring information like Vault 7 to the public's attention; however, the website should exhibit more caution than it has when releasing such information. Ignoring the problems that WikiLeaks exposes is not bliss, as this ignorance perpetuates the problem. Therefore, informing the public of these morally grey actions is extremely important, but those involved with such information need to be protected from the potential negative consequences of its release.

Protecting and/or separating the "message" from the "messenger" is easier said than done, though.  If one is tactful and does not in any way draw attention to oneself regarding the information, then one may be able to stay separate from one's message. However, digital footprints can, now more than ever, form the connection between message and messenger, and this seems especially true if the message is posted on WikiLeaks. Given Julian Assange's inability to separate himself from his beloved website, I believe that WikiLeaks itself is too controversial a website for whistleblowers to post to anonymously.

Instead of revealing such information through WikiLeaks, I believe that leaking through media outlets such as The New York Times would allow whistleblowers to better hide their identities.  Trusting the government to reveal such information would, in most cases, be unwise; while the government would be less likely to damage national security when disclosing previously private information, it also seems far more likely never to inform the public at all. Therefore, in a whistleblowing situation, I believe the messenger cannot rely on the government to eventually make the information public, but that messenger must also be wary of leaking information to places such as WikiLeaks, where their information and identity may not be kept private.

In terms of the release of such information, I believe it is nearly impossible to decide objectively whether whistleblower material should be released to the public. Withholding information about behavior that harmed civilians or could harm societal functions is ethically wrong, but information that endangers US security or puts lives unnecessarily at risk should not be revealed to the public. In general, however, I believe that whistleblowing is a commendable act, as it uncovers unethical behavior, promotes change, and helps prevent such actions in the future.

Concerning whistleblowing, I also do not believe that the messenger needs to be fully transparent, or needs to demonstrate transparency by revealing certain information to the public. While I am unsure of all the connotations "transparency" holds in this context, forcing a person to be transparent by making them reveal information against their will is morally wrong. The whistleblower has the right to be as transparent as he or she wishes when disclosing sensitive materials. While transparency may seem ideal in most situations, everyone has their own secrets, and completely transparent methods of releasing information are not always in the best interests of the messengers or the receivers.


Not Man vs. Machine, but Man with Machine

In science fiction movies, one of the most stereotypical plots is that of man vs. machine, where machines, in one way or another, try to eradicate humans, surpass human intelligence, or transcend human beings altogether. While there are plenty of exceptions to this stereotype, this struggle between man and machine is the first idea that pops into mind when discussing artificial intelligence.  However, I believe that in the real world, this struggle is not something developers should be concerned about given today's technology.

Based on the provided readings, today's artificial intelligence seems more focused on analyzing large data sets than on taking over the world, and it is through this analysis that artificial intelligence differs from human intelligence.  For example, AlphaGo developed its skill by examining over 150,000 human players' games, a feat impossible for a single human to complete. Artificial intelligence also emerges from copious amounts of repetition; for example, neural networks rely on massive amounts of trial and error to solve levels of Super Mario Bros. Through these repetitions, which often end in failure, or through playing against itself repeatedly, the computer "learns" from its mistakes and forms a sort of intelligence from its discoveries.
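This trial-and-error style of learning can be sketched in miniature. The toy example below (a hypothetical three-action "bandit" problem I made up for illustration, not any system from the readings) shows an agent improving purely through repeated attempts, failures, and running-average updates:

```python
import random

# Toy trial-and-error learner: three possible actions, each with a hidden
# success rate. The agent "learns" which action is best only by trying
# actions over and over and averaging the outcomes, loosely analogous to
# how a neural network improves through massive repetition.
true_payoffs = [0.2, 0.5, 0.8]   # hidden success rate of each action (made up)
estimates = [0.0, 0.0, 0.0]      # the agent's learned estimates
counts = [0, 0, 0]               # how many times each action was tried

random.seed(0)
for trial in range(5000):
    # Mostly exploit the best-known action, but sometimes explore at random.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = estimates.index(max(estimates))
    # Attempt the action; it may fail, and failures teach the agent too.
    reward = 1 if random.random() < true_payoffs[action] else 0
    counts[action] += 1
    # Update the running average for this action with the new outcome.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = estimates.index(max(estimates))
print(best)  # after thousands of repetitions the agent should settle on action 2
```

Nothing here is intelligent in a human sense; the "knowledge" is just three averages built from thousands of attempts, which is the point of the contrast with human learning in the next paragraph.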

While we humans also learn from our mistakes, the manner in which we learn is fairly different from this analytical approach. Humans learn mostly from intuition and experience; because of our more complex neural networks, we are able to pick up and understand complex ideas through limited examples. One case in point is that Lee Sedol learned how to become one of the best Go players in the world through ordinary practice, not through thousands or millions of played games.  Humans are also able to distinguish between physical objects more easily; one article showed how current computers grossly misidentify objects within an image, such as mistaking a road sign for a refrigerator.  In summary, I believe that this article stated the relationship between human and artificial intelligence well: "Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking."

Based on these comparisons of artificial and human intelligence, I think that programs such as AlphaGo, Deep Blue, and Watson are steps in the right direction toward producing humanlike AI.  While they are not currently proof of the viability of strong artificial intelligence, they or similar programs may become such proof in the not-so-distant future. These AIs are definitely not mere gimmicks; we can learn a great deal from their creation and testing to aid the future development of more advanced systems. Therefore, current AIs, in my opinion, are programs that, while not achieving full human intelligence, are stepping stones to even more humanlike machines.

In testing such systems' intelligence, the first thing that comes to mind is the Turing test, which, since the 1950s, has served as the primary framework for discussing whether machines can truly think.  The Chinese Room, a counterargument to the Turing test established in the 1980s, raises an issue that Turing himself briefly addressed within his paper. Under "The Argument from Consciousness," Turing outlines an objection which states that until a computer can produce a sonnet through its own emotions, and not through preprogramming, it cannot be considered a thinking machine. This is essentially the Chinese Room's argument as well: that a machine cannot "think" if it constructs sentences from a set of rules instead of through intuition and thought.

Turing's counterargument to this claim is, in my opinion, fairly weak; he states that if you do not accept the validity of his test, then the only way to prove that a machine can think is to become that particular machine. He later goes on to say that this question should be explored more deeply once technology is more advanced.

Based on this response, I believe that the Chinese Room is a strong counterargument to the Turing test, for how do we really know whether the machine is truly thinking, generating ideas of its own volition? I think the Turing test sets strong initial guidelines for evaluating AI, but I continue to see strong arguments that the test does not truly prove whether a machine can "think."

Returning to my opening statement with the previous information in mind, I have come to the conclusion that the paranoia surrounding the power and dangers of future AI should be acknowledged but should not be an active concern. Comparing the AI we have now to that seen in major sci-fi films, humanity is not even close to producing something on the level of HAL (2001: A Space Odyssey) or Ava (Ex Machina).  It has taken us quite a while just to reach this level of artificial intelligence, and because of this, I don't think that AI will suddenly develop in massive, unexpected leaps that would produce human intelligence in computers.

While I don't think we currently need to worry about computers destroying the human race, I believe that some of the concerns raised about AI are warranted. For example, giving complete control of weapons systems to an AI is rather frightening to me (though this fear could easily be unwarranted). While humans could program safeguards into such systems, giving AI the ability to control weapons could still be dangerous.

In addition, the fear of AIs replacing humans in the workplace is, I believe, extremely warranted. While I understand that humans perform some menial tasks more efficiently than computers, there are plenty of tasks for which AIs would have a stronger résumé than their human counterparts, such as driving, calculating, or even running hotels.  However, I do not think that humanity needs to stress over these scenarios just yet, for AI is not currently intelligent enough to upend our society. This opinion, however, may change rapidly in the coming years as technology continues to advance.

Along those same lines, I also do not believe that artificial intelligence can currently be considered similar to the human mind; however, AIs could potentially have their own form of mind in the future.  The movie Her explores this idea of AIs as minds, for at the end of the movie (spoilers), the AIs transcend their code and "physical containers" into a higher state of being.  While this idea is far-fetched, AI could potentially develop into a hybrid that is neither humanlike nor purely logical, but something entirely new. A unique mind.

On the flip side, humans cannot be called biological computers either; while humans have been labeled as computers and have the capacity to operate like glorified machines, we are more than breathing calculators. To claim that we are all mere computers would be to label us as uniform and artificial.  Such a label would not only disregard our creative and other cognitive abilities but would also strip away our humanity, and making this assumption could threaten the rights, respect, and dignity that all humans inherently deserve. Calling us computers would take away our individuality, the spark within us that fuels creativity, and the originality that separates us from all other beings on this earth.