Saturday, May 13, 2017

Driving Me Crazy

People are just lazy, and because we want to be lazy, we are somehow motivated to innovate and advance today’s society just on the hope of someday being lazy. That, I believe, is the primary reason behind the development of self-driving cars.

All joking aside, the two motivations for researching and constructing self-driving cars appear to be safety and convenience. Car accidents are one of the leading causes of death in the United States, and taking control from humans and placing it in the hands of software would take impaired and unlicensed drivers off of the road. Even good human drivers can cause accidents through the briefest of distractions or fatigue; in comparison, programs cannot become distracted, and software, once instructed, will continually gather information to keep its passengers safe.

In addition to saving lives, the convenience of self-driving cars is a major motivator in their production. A multitude of people, including myself, dislike driving or become incredibly nervous while driving; a self-driving car would ease the anxiety that I have and ensure that my condition would not harm others. Self-driving cars would also allow people to be more productive as they travel, or let them get the sleep or rest that few people receive on a daily basis. In the same vein as convenience, shipping costs for both consumers and distributors would decrease, for companies would no longer have to pay truck drivers, who are the main transporters of goods in the country.

Even with these perks, people are vehemently against the concept of self-driving cars.  On a surface level, people enjoy the freedom and control they exert while driving, and the action brews a passion in some hearts that would be missed if cars were controlled by computers. Also, hackers are a serious threat to self-driving cars, for even cars today, with their limited computer-driven systems, can be hacked to stop abruptly or swerve into traffic, endangering the passengers inside. Truck drivers and those working within the market of transportation would lose their jobs, and because transportation is one of the leading industries in America, thousands of people would be without employment.

Personal opinions aside, the main question concerning these autonomous cars is this: would they truly make our roads safer? One article states that to prove that self-driving cars are safe, they would have to drive 275 million miles without an accident, because only about one person is killed per hundred million miles driven. To complete this test, it would take about 12 ½ years with a fleet of 100 vehicles driving 24 hours a day. This testing, however, would pay off in the long run, for the safety benefits that come with these cars, as discussed earlier, far outweigh the slippery-slope arguments of hacking and the eventual robot takeover of Earth. Tesla summed up this idea nicely, saying that “self-driving vehicles will play a crucial role in improving transportation safety and accelerating the world’s transition to a sustainable future.”
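A quick back-of-the-envelope check of the article's numbers, assuming an average fleet speed of about 25 mph (my assumption; the article only gives the totals):

```python
# Rough check of the 275-million-mile test figure.
# AVG_SPEED_MPH is an assumed value, not taken from the article.
MILES_NEEDED = 275_000_000
FLEET_SIZE = 100
AVG_SPEED_MPH = 25  # assumed average speed, driving 24 hours a day

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * 24 * 365
years_needed = MILES_NEEDED / miles_per_year
print(f"{years_needed:.1f} years")  # about 12.6 years
```

At that assumed speed, the fleet logs roughly 22 million miles a year, which lands right around the article's 12 ½-year estimate.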

In the same vein of safety, the question arises as to whom to keep safe in a dilemma; programmers, when designing the car’s software, would need to devise a solution to the infamous “Trolley Problem”, deciding whether to keep the passenger or those outside of the car safe. I don’t think that there is a clear-cut answer to this issue, but I think that minimizing the loss of life would be the only fair way to program the driving software. Taking into account age, personality, and “importance” is too convoluted and complex to incorporate into a program at this time; therefore, minimizing the loss of life would be the only objective way of dealing with complex crashes.
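The "minimize the loss of life" rule I'm describing could be sketched as simply as this; the maneuver names and casualty estimates are purely hypothetical, not from any real system:

```python
# Hypothetical sketch: pick whichever maneuver is predicted to cost
# the fewest lives, ignoring age, personality, or "importance".
def choose_maneuver(options):
    """options: dict mapping a maneuver name to its estimated lives lost."""
    return min(options, key=options.get)

# Illustrative numbers only.
crash_options = {"stay_in_lane": 3, "swerve_left": 1, "brake_hard": 2}
print(choose_maneuver(crash_options))  # swerve_left
```

The hard part, of course, is not this one-liner but producing trustworthy casualty estimates in the first place.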

In addition to life-and-death situations, another question that emerges from this topic is who would be liable in a fatal accident. In my opinion, if the user is told by the manufacturer that the car is 100 percent safe and an accident occurs, then the manufacturer should be liable for the crash. If, however, the user has some sort of control over the vehicle, i.e. the car is not fully autonomous, then the user should be held responsible for the crash. I don’t think that the programmers should necessarily be held responsible in either case; while they developed the software, the company that created the car in the first place is responsible for performing the proper safety checks to make the car road-safe.

Self-driving cars would not only impact manufacturers and users of cars; normalizing autonomous vehicles would also have social and economic ramifications. On the social front, people could become even more physically and socially connected, for self-driving cars would allow users to safely contact friends and family during their drives and could allow for safer travel to remote locations. Economically speaking, Lyft believes that people won’t need to purchase cars anymore, for the price of taking a taxi would be cheap enough that people would simply rely on their services. In addition, truck drivers and the transportation business would be completely uprooted, as I have explained before.

The government would also be impacted by the universalization of self-driving cars, but honestly, I don’t think that the government would have to do much in terms of regulating them. I believe that the regulation should be done by the manufacturers, and the government, in turn, should simply adopt the standards the manufacturers have put in place. However, a major governmental change would concern the issuing of driver’s licenses. In today’s society, licenses are commonly used as IDs, and without the need to license people to drive, the government would have to rethink its strategy concerning proofs of identification.

With all of these facts in mind, I would probably want a self-driving car.  While I don’t like driving, I understand it as a necessity and don’t mind driving now, so I’m fairly indifferent on the subject of autonomous vehicles. I do very much appreciate the safety benefits they provide, and overall, I think they are a smart investment.  If they would help save lives and would benefit society as a whole, then I’m totally on board.

He's a Pirate

(The last blog will be coming later today)

I think at some point in my life, I wanted to be a pirate, but nowadays, pirates are pretty lame, especially those who haunt the Internet, stealing from content creators and others.  The government acknowledged the lameness of pirates by enacting the DMCA; Robert S. Boynton defines this law by saying that it is “designed to protect copyrighted material on the Web. The act makes it possible for an Internet service provider to be liable for the material posted by its users.”

He goes on to explain that if an Internet service provider is being sued for the content of a subscriber’s Web site, that provider can simply remove the material in order to avoid any legal action. This is called a “safe harbor” provision, under which service providers can use a “notice and takedown” technique to quickly disable the distribution of illegal content. Through this provision, intermediary sites, where users separate from the site can post information, can execute a “notice and takedown” if the copyright holder flags the material as infringing. This idea, alongside other rules defined throughout the law, made it possible for websites to not live in constant fear of being sued into oblivion (David Kravets).

But what constitutes piracy? Unfortunately, there are no clear rules defining how much work must be stolen in order for it to be considered piracy. Some of the factors in determining this, David Kravets explains, are whether the material was used commercially, whether it hurt the original’s market, and whether the work is a parody. While these rules may seem fair on the surface, the DMCA also enforces the idea that consumers cannot lawfully copy DVDs that they have lawfully purchased; therefore, people claim that the law negates consumers’ rights and violates fair use.

I do not believe that it is ethical or moral for users to download and share copyrighted material. Content creators poured their time, energy, and unique skillsets into creating something beautiful to share with the world, and by sharing it without cost, people are robbing creators of the money they rightfully earned. Without paid entertainment, the entertainment business would cease to exist; people who make their livelihood off of movies, TV, music, and artwork are wrongfully losing money to these pirates.

The morality of sharing copyrighted material becomes a little obscured when a person owns a version of the material or just wishes to sample it. If a person rightfully owns a copy of the material, I believe that it is okay to copy that material as long as he or she does not distribute that copy to anyone else. This action would be difficult to police, however, and counting on someone not to distribute that copy may be risky.

As for sampling or testing the material before purchasing, I believe that this action is moral or ethical as long as the person is not testing or sampling the copyrighted material outside of its intended purpose.  If a person was interested in buying something and wanted a preview before investing, then I believe that it is moral for him or her to test the copyrighted material with an intent to buy.  However, if a person is solely utilizing the material for testing, then he or she must delete that material if he or she no longer has a use for it; keeping “unwanted” testing material around could quickly and easily lead to copyright infringement.

Even though I advocate against sharing copyrighted material, I have participated in this sharing in one way or another (please don’t turn me into the authorities). I would rather not go into the details, but I justified my actions by believing that the information I was gathering could not be obtained in any other manner; I could not, after several hours, find and purchase this information legally, as it was not available to purchase or view on the company’s website or through a legal third party. I still regret my actions and wish to support the creators financially and legally, but when only illegal sources make that information available, I would rather obtain it than not have access to it at all.

In my “innocent” world, I would like to believe that many people follow the same thought process that I do; they make this information free and available on the Internet for those who either cannot afford to obtain the copyrighted material or simply do not have a way of obtaining it legally. According to some of the articles, however, people are hoarders; they want to collect as much information as possible, without any intention of using it, for the sake of having a wealthy stash of information. The Internet easily enables this behavior, and through pirating, this information can easily be obtained without breaking the bank.  Others, too, want to make information free and available for everyone, eliminating any sort of boundaries that would prevent people from having access to any sort of information. Others still may not even realize that these acts of viewing and obtaining copyrighted material from the Internet are illegal.

With streaming services such as Netflix and Spotify, it has definitely become easier to watch and view copyrighted material legally. With Netflix costing only a few dollars a month and Spotify being free, both of these services are inexpensive, legal ways of accessing copyrighted material that have discouraged and will continue to discourage the act of pirating. However, I think that pirates will always be around. For example, Netflix and Spotify cannot buy the rights to every single TV show, movie, and song in existence; there will always be content that cannot be obtained any way besides through pirating. While these services will be sufficient for the general public, there will always be people interested in content that cannot be obtained through large, legal companies.

Based on this fact, I don’t think that piracy is a solvable problem. There will always be pirates out there, and the more the law cracks down on them, the craftier and more skilled they will become. I think that streaming services and the idea of making content easier to access legally will prevent the creation of some new pirates, but there will always be content out there that businesses do not make legally available online. I do, however, believe that piracy is a real problem; industries and content creators are continuously losing money to pirates and their illegal actions.

And I have contributed to this problem.

I guess I’m a pirate after all.

Friday, May 5, 2017

Warning: Exposure to Coding Could Change Your Life

I apologize once again for the length of this piece; this idea of computer science for all is something that, given my background and upbringing, I am fairly passionate about compared to some of the other topics we’ve covered in class.

On the subject of coding as the new literacy, one article mentioned that until about two centuries ago, most people could not read and write, in stark contrast to today’s society. Coding, according to this same article, is following a similar trend. However, because most people will not be exposed to raw code in their day-to-day lives, it is difficult to become code literate today; we are, as the article said, still operating in the scribal age of coding. In general, I pretty much agree with this course of history. I think that some people, at this point in time, should know code, but it is not essential because our technology does not currently require that knowledge. In the future, my opinion might change, but given technology’s current progress, I don’t think that code is necessarily the new literacy. I do believe, however, that it is an important skill to learn nonetheless.

Speaking from my own experiences, I believe that everyone should at least be exposed to Computer Science and coding. In my high school, Computer Science classes consisted of learning how to use the Microsoft Office Suite and playing typing games; my teachers did not know how to code whatsoever. If my parents hadn’t forced me to attend a coding camp during high school, I doubt that I would have decided to make coding my profession, and plenty of my classmates could have become successful programmers if they had been introduced to the concept before college. Because of our school’s ignorance, I was one of only four students who decided to study anything vaguely related to computers.

A study claims, however, that exposing high school students to coding would be a waste of time, for only a few people have an aptitude for understanding code’s logical nature. I disagree with this study and its logic and instead agree with the words of Mark Guzdial, who states that students with a fixed mindset, those who believe that their abilities and talents are inherited, will struggle with coding if they do not have a natural tendency to understand it. A student with a growth mindset will instead be more successful, learning to accept failures as opportunities to grow and work harder. Because a majority of the coding process is failing and correcting errors, I believe that maintaining a growth mindset is critical to success in Computer Science if a student is not naturally gifted in the subject.

Through a growth mindset, I believe that anyone can learn Computer Science, and I am living proof of this concept. I was definitely not “born” or graced with an aptitude for programming; while I kept up with the class for the first few weeks of Fundamentals of Computing I, I definitely struggled my way through almost all of my other Computer Science classes. Through hard work, though, I was able to overcome my shortcomings and learn how to code successfully. Based on my experience, I believe that while not everyone wants to put in the effort, hard work would allow anyone to understand code; it just may take some people longer than others to internalize basic concepts.

This hard work, too, can pay off. Companies in every field imaginable are looking for computer scientists, but because most students are not exposed to coding in high school, people develop an interest in coding too late in their careers or not at all. Therefore, high schools need to require students to take a fundamental computing class. Currently, our lives are run by technology. The average person carries several computers at a time, and it is important to have at least a high-school-level understanding of those devices. People in nearly any industry (nurses, retailers, businessmen) encounter machines that run on code on a daily basis, and people in those positions should fundamentally know how code works in order to run the computers that help them save and improve lives.

There are some complications, however, in bringing Computer Science to high schools across the country. The biggest argument against this movement is that there are not enough teachers trained to instruct high schoolers on the subject; most computer scientists end up in industry because of higher-paying opportunities. Also, poorer schools could not keep up with changing technology; replacing computers and other devices every few years is incredibly expensive for most school systems. In addition, parents and others believe that coding is not currently useful in most people’s day-to-day activities and that it is not worth investing in such a “specialized” field.

While I believe that these concerns about CS4All are valid, I still heartily believe that Computer Science courses should be added as a requirement to high schoolers’ curriculum. I don’t think that teaching grade schoolers how to code would be effective overall; based on experience, younger children are usually more interested in visual changes than in the code itself and the experience of coding. Younger children also do not know what they want in life and are bound to change their career goals despite exposure to code during school. It may be pertinent to make coding optional for grade school students or to encourage parents to follow up with their children, but overall, I think that coding should be introduced at the high school level.

By offering Computer Science classes to all high schoolers, children who never believed that they could be programmers could be inspired to code, and just being exposed to the logical thinking needed to code would be beneficial to students in any profession.  Also, I do not believe that a programming class should be an option in place of other language classes. Learning how to code requires a completely different mindset than what’s needed to understand a foreign language; therefore, I believe that it would be erroneous to group these subjects together. While I understand that it’s difficult to add more courses to a required curriculum, I believe that given the direction of society, exposure to coding in high school is critical in the formation of more Computer Scientists.

I think that the key to teaching Computer Science is computational thinking, because even if students never write a single line of code after class, they may end up using programs that rely on block-based coding or will need to know the basics of coding to be hired. Focusing on the visual aspects of coding as well as the final product is also critical in the early stages of coders’ development; learning that I could make anything with code was an important stepping stone in my career as a programmer. Too much emphasis on this aspect, however, may be detrimental and may cause students to believe that they have been “tricked”, so it’s also important to show the nitty-gritty of coding as well. Therefore, I believe that the optimal high school computer science program would focus on these three ideas – computational thinking, visualization, and hard coding – in order to give students a well-rounded view of what coding truly entails.

Anyone can learn how to program. It’s just like any other action in this world. Some people will have an innate talent for it; some people will not. Some people will want to code, and others couldn’t care less about it. Those with both talent and drive will perform successfully, but those who work hard and dream of being successful will also become coders. It just might take some people more time to learn than others. In coding, practice also makes perfect; the more you write and learn to think computationally, the better a coder you will become. I think that the aptitude study discussed earlier is bogus; all you need is drive and the ability to accept failure in order to be successful.

While I believe that anyone can learn how to program, I don’t think that everyone necessarily should learn the gory details of coding. As Basel Farag said, “the line between learning to code and getting paid to program as a profession is not an easy line to cross.” I agree with this idea and believe that everyone should cross the “learning to code” line, but not everyone is destined to become a paid coding professional. Being exposed to coding in high school is important to our development as a society and to people’s daily lives, but requiring anything beyond a semester in high school is unnecessary. The majority of people will still not use detailed coding knowledge in their jobs and daily lives, and there are a multitude of jobs that require intense specialization in a subject other than coding. Without these other jobs, society would cease to function; therefore, I believe that it is not completely necessary for everyone to learn how to code. Exposure to coding, however, is definitely necessary, for without an introduction to the world of code, people would not realize coding’s true potential.

I do plan on turning in the past two blogs by the end of Finals Week.  Life got the best of me these past few months, and even if you cannot give them a grade, I would still like to finish them because I really do care about my performance in this class.

Sunday, April 2, 2017

Busting open Vault 7: Project 3 Individual Reflection

Link to Podcast: https://drive.google.com/a/nd.edu/file/d/0BwX2A7FfvP-ONHpRMWhsdWpTV2M/view?usp=sharing

Vault 7, as exposed by WikiLeaks recently, details programs which can be utilized by the CIA to hack into almost any device imaginable and use it as a glorified wiretap. One such example, as I discussed in our podcast, was “The Weeping Angel,” which turns a Samsung Smart TV into a microphone while leading the user to believe that the TV has been turned off. It is hacks like these that continue to reaffirm my fears of all-encompassing government surveillance, and while I understand, as Matt mentioned in our podcast, that such a program cannot be installed remotely, the mere proof that such programs are being developed frightens me as someone who values their privacy.

Because of my fears, I believe that it is right for WikiLeaks to bring information like Vault 7 to the public’s attention; however, the website should exhibit more caution than it has when releasing such information. Ignoring the problems that WikiLeaks addresses is not bliss, as this ignorance perpetuates the problem. Therefore, telling the public about these morally grey actions is extremely important, but those involved with such information need to be protected from the potential negative consequences of its release.

Protecting and/or separating the “message” from the “messenger” is easier said than done, though. If one is tactful and does not in any way bring attention to oneself in regards to the information, then one may be able to separate oneself from one’s message. However, digital footprints can, now more than ever, form the connection between message and messenger, and this seems to be especially true if the message is posted on WikiLeaks. Given Julian Assange’s inability to separate himself from his beloved website, I believe that WikiLeaks itself is too controversial a website for whistleblowers to post to anonymously.

Instead of revealing such information through WikiLeaks, I believe that going through media outlets such as The New York Times would allow whistleblowers to better hide their identities. Also, trusting the government to reveal such information would, in most cases, be unwise; while it would be less likely to damage national security when disclosing previously private information, the government seems more likely never to inform the public at all. Therefore, in a whistleblowing situation, I believe that the messenger cannot rely on the government to eventually make the information public, but that messenger also must be wary of leaking information to places such as WikiLeaks, where their information and identity may not be kept private.

In terms of the release of such information, I believe it is nearly impossible to objectively decide whether whistleblower material should be released to the public. Withholding information concerning behavior that harmed civilians or could potentially harm societal functions is ethically wrong, but information that endangers US security or puts lives unnecessarily at risk should not be revealed to the public. In general, however, I believe that whistleblowing is a commendable act, as it uncovers unethical behavior and promotes change and the prevention of such actions.

Concerning whistleblowing, I also do not believe that the messenger needs to be fully transparent or needs to be made transparent by revealing certain information to the public. While I am unsure of all the connotations “transparency” holds in this context, forcing a person into being transparent by causing them to reveal information against his or her will is morally wrong. The whistleblower has the right to be as transparent as he or she wishes when disclosing sensitive materials. While transparency may seem ideal in most situations, everyone has their own secrets, and completely transparent methods of releasing information are not always in the best interests of the messengers or the receivers.


Not Man vs. Machine, but Man with Machine

In science fiction movies, one of the most stereotypical plots within the genre is that of man vs. machine, where machines, in one way or another, try to eradicate humans, surpass human intelligence, or transcend human beings altogether. While there are plenty of exceptions to this stereotype, this struggle between man and machine is the first idea that pops into people’s minds when discussing artificial intelligence. However, I believe that within the real world, this struggle is not something developers should be concerned about given today’s technology.

Based on the provided readings, today’s artificial intelligence seems to be more focused on analyzing large data sets than taking over the world, and it is through this analysis that artificial intelligence differs from human intelligence.  For example, AlphaGo developed from its examination of over 150,000 human players’ games, a feat impossible for a single human to complete. Artificial intelligence also emerges from copious amounts of repetition; for example, neural networks rely on massive amounts of trial and error to solve levels of Super Mario Bros. Through these repetitions that often result in failure or even through playing itself repeatedly, the computer “learns” from its mistakes and forms a sort of intelligence from its discoveries.
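The repetition-driven "learning" described above can be illustrated with a toy trial-and-error learner; this is a generic epsilon-greedy sketch of learning from repeated failures, not the actual Mario-playing network (the action names and reward values are invented for illustration):

```python
import random

# Toy trial-and-error learner: estimate each action's value from
# repeated attempts, exploring randomly 10% of the time.
random.seed(0)
true_rewards = {"jump": 0.8, "run": 0.5, "wait": 0.1}  # hidden from the learner
estimates = {action: 0.0 for action in true_rewards}
counts = {action: 0 for action in true_rewards}

for _ in range(2000):
    if random.random() < 0.1:                      # explore: try something random
        action = random.choice(list(estimates))
    else:                                          # exploit: use the best guess so far
        action = max(estimates, key=estimates.get)
    reward = true_rewards[action] + random.gauss(0, 0.1)  # noisy outcome
    counts[action] += 1
    # Incrementally average observed rewards ("learning from mistakes").
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the learner settles on "jump"
```

Nothing here "understands" the game; the learner's apparent intelligence emerges purely from thousands of repetitions, which is the contrast with human learning drawn in the next paragraph.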

While we humans also learn from our mistakes, the manner in which we learn is fairly different from this analysis approach. Humans learn mostly from intuition and experience; because of our more complex neural networks, we are able to pick up and understand complex ideas through limited examples. One case proving this concept is that Lee Sedol learned how to become one of the best Go players in the world through basic practice, not through thousands or millions of played games. Humans are also able to more easily distinguish between physical objects; one article showed how current computers grossly misidentify objects within an image, such as mistaking a road sign for a refrigerator. In summary, I believe that this article stated the relationship between human and artificial intelligence well: “Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.”

Based on these comparisons of artificial and human intelligence, I think that software such as AlphaGo, Deep Blue, and Watson are steps in the right direction toward producing humanlike AI. While they are not currently proof of the viability of strong artificial intelligence, they or similar programs may become examples in the not-so-distant future. These AIs are definitely not simply gimmicks; we can learn a great deal from their creation and their testing to aid in the future development of more advanced systems. Therefore, current AIs, in my opinion, are programs that, while not achieving full human intelligence, are stepping stones to achieving even more humanlike machines.

In testing such systems’ intelligence, the first thing that comes to mind is the Turing Test, which, since the 1950s, has operated as the main framework for discussing whether machines can truly think. The Chinese Room is a counterargument to the Turing Test that was established in the 1980s, but Turing does briefly address the issue within his paper. In “The Argument from Consciousness,” Turing outlines an objection to his argument which states that until a computer can produce a sonnet through its own emotions and not through preprogramming, it cannot be considered a thinking machine. This is essentially the Chinese Room’s argument as well: that a machine cannot “think” if it constructs sentences from a set of rules instead of through intuition and thought.

Turing’s counterargument to this claim, in my opinion, is fairly weak; he states that if you do not accept the validity of his test, then the only way to prove that the machine has the ability to think is to become that particular computer. He later goes on to say that this question should be explored more deeply when technology is more advanced.

Based on this response, I believe that the Chinese Room is a strong counterargument to the Turing Test, for how do we really know if the machine is truly thinking, generating these ideas of its own volition? I think that the Turing Test sets strong initial guidelines for testing AI, but recently, I continue to see strong proofs that this test does not truly show whether a machine can “think.”

Returning to my opening statement with the previous information in mind, I have come to the conclusion that the paranoia surrounding the power and dangers of future AI should be acknowledged but should not be an active concern. Comparing the AI we have now to that seen in major sci-fi films, humanity is not even close to producing something on HAL’s (2001: A Space Odyssey) or Ava’s (Ex Machina) level. It has taken us quite a while just to reach this level of artificial intelligence, and because of this, I don’t think that AI will suddenly develop in massive, unexpected leaps that would produce human intelligence in computers.

While I don’t think we need to currently worry about computers destroying the human race, I believe that some of the concerns raised about AI are warranted. For example, giving complete control of weapons systems to an AI is rather frightening to me (this fear could easily be unwarranted, however). While humans could program safeguards into such systems, giving AI the ability to control weapons could be dangerous.

In addition, the fear of AIs replacing humans within the workplace, I believe, is extremely warranted. While I understand that humans perform some menial tasks more efficiently than computers, there are plenty of tasks for which AIs would have a stronger résumé than their human counterparts, such as driving, calculating, or even running hotels.  However, I do not think that humanity needs to stress over these scenarios just yet, for AI’s current intelligence level cannot shatter our society. This opinion, however, may rapidly change in coming years as technology continues to advance.

Along those same lines, I also do not believe that artificial intelligence could be considered similar to the human mind; however, AIs could potentially have their own form of mind in the future.  The movie Her explores this idea of AIs being minds, for at the end of the movie (spoilers), the AIs present in the movie transcend their code and “physical containers” into a higher state of being.  While this idea is far-fetched, AI could potentially develop into a hybrid that is not necessarily humanlike or logically-based, but something entirely new. A unique mind.

On the flip side, humans cannot be called biological computers either; while humans have been labeled as computers and have the capacity to operate like glorified machines, we are more than breathing calculators. To claim that we are all simply computers would be to label us as uniform and artificial.  This labeling would not only disregard our creative and other cognitive abilities but would also take away our humanity, and making this assumption could pose threats to the rights, respect, and dignity that all humans inherently deserve. Calling us computers would take away our individuality, the spark within us that fuels the creativity and originality which separate us from all other beings on this earth.

Sunday, March 26, 2017

Is This the Real Life? Is This Just Fantasy?

I guess a more apt title would be “Why I don’t have a Facebook Account Part II,” but I think that this title fits the overall topic of this blog better.

While I am probably the only person who has a Twitter account but not a Facebook account, I do not see or look for news whenever I log onto social media. I also do not have friends or family willing to let me peek at their accounts, so I have never been directly exposed to this mysterious thing called “Fake News.”  Fake News has never directly affected me, but after reading these articles, I think I have some understanding now of this widespread phenomenon.

Fake News consists of news articles, usually published online, that dish out false information for the purpose of “clickbait,” or simply making money from people clicking and viewing the story. These articles are usually filled with false and sensationalized information about a specific political topic, and while they are occasionally written in hopes of attacking other political parties, they are mostly ploys to gain revenue from website advertisements. Fake News is highly sensationalized but is written in such a way that readers may believe it is true if they do not fact-check the source.

Overall, I think that the concept of fake news is fascinating; I wonder who came up with this idea of writing such articles to generate views and revenue.  I’m not really offended by the existence of Fake News; I think that these articles can be very entertaining, like The Onion, as long as readers know beforehand that such news is fake.  This is, however, where the problem of Fake News arises, for these articles are written in such a way that it is almost impossible to tell the difference between reality and fantasy. If certain fake news is widely spread and believed to be true, lies can influence readers’ opinions, and that’s a pretty scary prospect in my opinion.

As scary as that possibility may seem, I don’t think that Fake News should be totally censored. I believe that not only do people have a right, to a certain extent, to publish what they wish on the internet, but these articles also have some entertainment value. Free speech would allow for their publication as well.  In the case of displaying links to these articles in other websites’ news sections, private companies have the ability to choose whether they promote such links or not.  Removing these articles from the internet entirely, however, would be wrong.

Trusting social media to pick the proper news to display on their websites, fake or not, is a trickier situation. On this topic, I believe that Michael Nunez captures my opinion quite well; he states that “imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company’s claims that the trending module simply lists ‘topics that have recently become popular on Facebook.’”  On principle, I do not have a problem with private companies deciding which news content to display, but if a company decides to include a nonbiased news section on its page, then that section needs to be fact-checked and filtered for fake news. For example, I believe that the Facebook employees who chose articles for the website’s trending news section based on personal bias were acting immorally and against the idea of nonbiased, reliable news.  I also believe, however, that it would be acceptable for a website to prefer “biased” news if that is what the corporation or organization believes and it expressly states that fact alongside such articles.

While I believe that deciding what type of news content should be displayed is up to the individual company, I’m leery of the idea of Facebook and Twitter deciding the accuracy and truth of an article.  Because of the aforementioned case of personal bias, I believe that such social media companies should enlist the help of third-party resources to ensure an article’s accuracy and save the time and energy of creating a separate team to complete this task.  However, I do believe that it is imperative that companies mark the nature of the news sources they are promoting in order to properly inform their users.

Leaving this clarification out of websites is the most dangerous aspect of Fake News; Max Read states in his article that “many of those stories were lies, or ‘parodies,’ but their appearance and placement in a news feed were no different from those of any publisher with a commitment to, you know, not lying.” Luckily, Timothy B. Lee has the perfect solution to this terrifying prospect: “one way to help address these concerns is by being transparent. Facebook could provide users with a lot more information about why the news feed algorithm chose the particular stories it did.”
I believe that transparency in this situation is key. While companies have a right to promote whatever articles they wish, both they and their users need to be acutely aware of the article’s nature before reading.  Companies have an innate responsibility to provide accurate news depending on their goals and beliefs, and social media and aggregators of information have this same responsibility of truthfully reporting the type of news before users read it.

The type of news specifically reported to me was all the same until I left Nebraska; everyone around me had the same views on religion, politics, the weather, anything. It wasn’t until I came to Notre Dame and started having intense conversations about these topics that I realized that people held such different viewpoints on every debatable subject. Therefore, being in a bubble and breaking out of it has allowed me to experience and understand various viewpoints on different subjects.  I think this bubble bursting could occur through the internet as well, if users decided to receive information from multiple sources and be open to the existence of varying opinions.

Even though the threat of a “post-fact” world looms if people do not exhibit this openness toward others’ opinions, I think that people have always naturally gravitated toward sensationalized news that triggers emotions, and there have always been truth seekers willing to look past the emotions to find accuracy within articles. Facebook is currently pushing this search for truth as well, dispelling “post-fact” news by eliminating fake news from its trending section.

Fake news will always exist in today’s society, and that’s not necessarily a bad thing. However, as people become more aware of its dangers, as I have, I think that the truth will prevail despite the threat of a “post-fact” world.


Tuesday, March 21, 2017

Come On, Let's Play Monopoly

(What the title is referencing)

On the topic of corporate personhood, I believe that Kent Greenfield defined the term very well; he states in his article that “‘corporate personhood’ simply expresses the idea that the corporation has a legal identity separate from its shareholders.”  However, I believe that corporate personhood is a step beyond this idea; it is, instead, granting businesses rights similar to a person’s, including the ability to spend campaign money or practice a particular set of religious beliefs.

At least, that’s what I believe it to be, based off the readings.  This whole concept is extremely confusing, especially for someone who is trying to comprehend this while motion sick in an airplane.  Therefore, I would like to blame any misconceptions of this topic on the fact that this blog is being completed in an airplane. (Thanks a lot for scheduling a blog over break.  I really appreciate it.)

(That was sarcasm, if you couldn’t tell.)

The ramifications of the concept manifest themselves, in one way, in the debate over where the line should be drawn regarding freedom of speech; because corporations do not singularly have a conscience, how can they decide as a whole which religion to support? The views of the corporation may not always represent the views of everyone involved within it; therefore, adopting a religion for an entire company can cause issues and divides between workers and administrators.

On the other hand, treating the corporate person as a separate entity allows the people who caused an accident to avoid blame and incrimination; the corporation will instead be blamed. But again, this easily becomes a double-edged sword. By claiming that the corporation is separate and by placing all the blame on one invisible person, corporations as a whole can more easily execute immoral acts. At the same time, if one person causes an issue, then everyone in the company is punished in a way, because there is not an actual person who can shoulder the blame.

After examining the Microsoft antitrust case, I don’t think that their practices could be considered unethical or immoral.  All that they did was install one of their products onto another one of their products; it’s their right, as owners of the OS, to install whatever software they produce onto a device of their own.  If Netscape had developed their own OS, then it would have totally been fine to pre-install Netscape on their own devices; making your own browser doesn’t immediately make a corporation a monopoly.

From my perspective, this whole situation is like calling McDonald’s or Burger King a monopoly for selling their own sides.  Both companies really specialize in burger-making (operating systems) but pair their products with french fries or milkshakes (browsers) produced only by them.  If that logic held, then salesmen of those products should be outraged and claim that those places are monopolies; while I am not sure how many operating systems are out there, it would be ridiculous to limit a company to making a single product because it would be a monopoly otherwise.

If Microsoft did make it harder to install different browsers by putting up deliberate roadblocks in their software to slow their machines down, then I would consider the corporation a monopoly.  Also, if Microsoft had been forcing others to use a different version of Java, this behavior may have made them out to be a monopoly as well, but if they had simply developed their own version of Java for their own use, I don’t think that this development would warrant the label of monopoly.  All of these ifs, however, are just ifs; if (again) they did not actually practice these deterrent actions, then how can you call Microsoft a monopoly?  Microsoft simply released their own browser with another of their products, and in my opinion, there’s nothing monopolizing about that.

When discussing the line between a normal corporation and a monopoly, I think the main restriction should be based on whether a corporation is actively working to hurt or prevent other companies from succeeding (the key word here being actively). As I stated previously, if a corporation deliberately creates scripts to slow down download times, prevents a user from purchasing and using another company’s software, or directly prevents a company from flourishing in any way, then that would be considered ruthless behavior. However, competition does not equal monopoly. If two companies are making similar products and one is successful while the other is not, the struggling company cannot automatically claim its rival is a monopoly; clearly, if there is competition and a market for both companies, then one cannot be a monopoly.  While the business world can be ruthless, it is not necessarily oppressive.  It’s survival of the fittest, but actively tampering with a rival’s success is not fair.

While I’m not totally sure of my stance on corporate personhood, I think, based on my opinion on ruthless practices, that corporations do have a moral obligation in general to treat workers and other companies with respect. They also have to respect the practices of other corporations and treat other corporations in the way that they would want to be treated.  I unfortunately don’t have much to say about this topic because I think that the whole situation is fairly straightforward. If you treat a company like a person, then it is automatically given, for the most part, the responsibilities that an individual would have.

And this opinion holds true in regards to Microsoft, if you safely assume that they did not actively slow down the installation of Netscape. Microsoft treated their rivals with respect while still working to improve their business. Even after being convicted of being a monopoly, Microsoft accepted defeat and completed their given responsibilities, even issuing a version of Windows without Internet Explorer to Europe.  I believe that these actions are sound evidence that Microsoft is a morally sound corporation, even if people did not accept their apology and subsequent actions.

On a larger scale, I think that this quote by Ilya Shapiro of the Cato Institute says it all concerning corporations’ moral integrity. "Nobody is saying that corporations are living, breathing entities, or that they have souls or anything like that," he says. "This is about protecting the rights of the individuals that associate in this way." For a corporation to behave morally and ethically, it should protect the rights of the individuals within the company and treat rival corporations with the same respect that is needed for its employees. Through this system of respect and care, corporations can behave morally and ethically.

(Sorry for the length…there were a lot of questions, and I had a lot to say for some of them.  But I’m not going to try to cut it at this point.)

Sunday, February 26, 2017

Why I Don't Have a Facebook Account

As much as I would have liked to include an anecdote about why I don’t have a Facebook account here, I’m very long-winded and am just barely under the word count. So, let’s just jump right in.
After reading and analyzing the given articles, I believe that technologically-oriented companies should not weaken encryption or implement backdoors for the express purpose of government surveillance alone.  I do believe that companies should allow the government to examine a singular, specific device during criminal investigations; for example, I believe that the government was right in trying to procure the San Bernardino shooters’ phone, and it should be allowed to view the records of, as the articles mentioned countless times, a person involved in child pornography.  If a search warrant is properly secured, then the government should be given access to that singular device’s contents, for searching through a phone is roughly equivalent to examining other personal items, such as a diary, which could contain evidence useful for a conviction.

Smaller cases aside, government surveillance on a massive, generic scale, such as maintaining a database of driver’s licenses or secretly asking Google, Yahoo, and Facebook for browser history and other information, is an invasion of privacy. Accessing this data without any suspicion of criminal activity is unwarranted and creates unnecessary additional work for the computer scientists involved. The likelihood of detecting criminal behavior within all of the gathered data would be relatively small, and the attempt could easily lead to misidentifications or biased assumptions that could, in turn, cause false accusations or arrests. Therefore, I believe that while government intrusion into someone’s “private information” is acceptable in the context of a completed criminal act, mass surveillance to maybe catch a criminal is unnecessary and frightening.

There is a strong counterargument to this position, however: once someone has broken the encryption on a phone in one specific instance, that party can continue to use this capability to its fullest potential and as a source of leverage.  This is exactly what the FBI did when they jailbroke an iPhone; because of their success, the FBI expanded their jurisdiction by requesting the ability to issue overseas warrants to remotely hack devices and to collect more personal information from companies, including account numbers and login information.  In my opinion, this is definitely a violation of privacy, and if such successes continue, government officials could continue to demand more and more information, causing this problem to snowball out of control.

To prevent this mass surveillance, an extremely strict set of checks and balances could be enforced between both providers and the FBI concerning encryption and information distribution. Going behind Apple’s back to retrieve information from the San Bernardino shooters’ phones was wrong; the FBI abused the methods to retrieve information by continuing to expand and push further surveillance. Overall, both parties involved must be cautious and conscientious of the information they are handling and the powers they use to access such information.

Considering just the tech companies’ roles in this relationship, I believe that they must protect privacy while also, when possible, aiding the government in their investigations. When buying a phone or other device, the consumer trusts their personal device and its makers to a certain degree, agreeing to use it on the grounds that texts are private and their information is kept safe. Therefore, in regular, day-to-day circumstances, companies are obligated to provide this form of privacy. However, when crimes occur and search warrants are produced, tech companies should comply with the law if the device contains legitimate evidence or information crucial to solve a crime.

This approach of allowing limited investigations, I believe, does at least an acceptable job of balancing the free flow of information against the threat of terrorism.  There is probably a better method of approaching this precarious situation, for my method does not actively “prevent” terrorist or criminal activities from happening in the first place. However, if the government starts to accuse people of crimes that, at one time, they may have considered but decided against committing, then that would be inherently wrong. At the same time, though, preventing injuries from ever occurring would be amazing.  To once again push back on that point, texts and metadata are not always indicative of final behavior. Based on this paragraph alone, I believe there is no absolute way to balance freedom with absolute security, but through limited investigations, the government and tech companies can take a step in the right direction to promote these two concepts equally.

As I have stated before, one of the flaws in my “plan” is that I do not provide any assurance or way that crimes can be preemptively prevented. This is because if government surveillance becomes too all-encompassing and invasive, the terrorists and those who wish to commit crimes would find a way to do it even without the internet or tech companies’ devices. If such mass censoring were to occur, fear and distrust would grow in citizens’ hearts, and those with malicious intent would be even more encouraged to operate in even more subtle, undetected ways. By telling society that if they have nothing to hide, they have nothing to fear, the government would also continue to cultivate the fear of an official falsely interpreting posts and actions, deciding that a person is hiding something, and then taking appropriate measures to eliminate the threat. In the end, the government would make society more dangerous as terrorists are pushed out of the more public channels out of fear and into places where evidence on them would be even harder to collect.

Thursday, February 23, 2017

Hidden Figures and My Role Model: The Engineering Camp

After watching Hidden Figures, it dawned on me that minority and women engineers are not really celebrated within their field or within history, despite the emerging push for such groups to join the industry in recent years.  I never see ads that promote historical engineering minorities, and we never discussed them in high school or college. This movie, however, brought such historical figures to the forefront, but even though these three women were able to break the mold and become successful engineers, I had never heard of such amazing figures before this film. That's probably why the film was given such a title; these women were truly Hidden Figures until this movie told their remarkable stories.

As Jacob Kassman and I discussed in our podcast, I think one of the reasons why these figures are not celebrated is because, to a multitude of people still alive and still within the industry, this isn't exactly "history."  As a millennial, I feel like such blatant discrimination and segregation took place millennia ago, but in reality, these events occurred within the past 56 years. There are still plenty of people alive today who lived and worked during this time, and the backbone of our society, our workplaces, now rests on their shoulders.

As much as we would like to believe that it has disappeared from the hearts of these people, racism and discrimination still work subtly through people's actions, occasionally without them even realizing it.  This is covered beautifully within the movie when a supervisor tells Dorothy Vaughan that she has nothing against "y'all" or "black people."  Mrs. Vaughan responds, "I know, I know you probably believe that."  This line really struck a chord with me, as it powerfully showed that despite what we may say, racism and discrimination can still be buried in hearts, surfacing either blatantly or through microaggressions and slip-ups. The movie further proved this point through Mr. Johnson, who, upon first meeting Miss Goble, was shocked that they would let a woman work on such complicated and sensitive projects.

After seeing the movie and these two examples, I believe that some of this inherent sexism and racism is very much alive today in these same manners, and overcoming these deeply-ingrained stereotypes and assumptions is probably one of the most difficult challenges that minorities and women face.  They may encounter all of the same issues that these three women did, including working with a crowd of people who look and act in a different manner and who were raised with different expectations, interests, and family situations. They may also struggle with having to prove, with more credentials than their counterparts, that they are worthy of the job and not there just because they are female or a minority, or even (in women's case) struggle to find a women's restroom within a reasonable distance of their desk.  Because these challenges exist, it is very reassuring to finally know of some role models who exist within the engineering field and who have risen above such issues.  I only wish I had known of them earlier.

As I have just explained, I see why role models are important.  They are people similar to you who have overcome all adversity to become successful in your field of study, and through them, you can become more inspired to achieve your life-long goals.  You could also find new goals or learn of new, interesting topics to pursue, and based on the knowledge that "if they can do it, then you can," you will truly believe that you can accomplish whatever your heart desires.

When I was younger, and even now, I was always embarrassed to admit that I didn't have a role model.  Whenever that question came up as an ice breaker or in a "get-to-know-you" situation, I never knew what to say; I just looked down at my feet and shuffled them around a bit before mumbling some generic answer, like Marie Curie or J. R. R. Tolkien, just to free myself from the awkwardness and embarrassment.  I've never had a figure, either living or dead, celebrity or close relative, that I've distinctly looked up to as a role model. I always believed that I was forging my own path, and because of the unique situation I was raised in (which, honestly, wasn't very unique at all), there was no one in the past who was quite like me and could guide me to the path of success (wow, I was one arrogant child.  And maybe I am still that arrogant.). No, I figured that I could forge my own path through life without the influence of others; I didn't need to look up to someone to show me the way.  I already knew what I was doing and where I was going, and if you had asked high-school me what she was planning on studying in college, she would have said anything but math or engineering.

I was never interested in Engineering in high school; I hated math and just wanted to be an English major.  Because my brother (a year older than me) liked Engineering and because engineering fields paid well and actually have job opportunities, my mother forced me to come to an Introduction to Engineering camp hosted by Notre Dame the summer before my Senior year of high school.

Throughout the prior spring semester, I dreaded coming to Our Lady's University.  I believed that I was not cut out for Engineering and would find it boring and extremely difficult, but I was desperate to participate in anything that would increase my chances of being accepted into Notre Dame. So I came, and saying I was nervous would be an understatement.  Over the course of two weeks, however, I came to realize that 1) I wasn’t as terrible at Engineering as I thought I would be and 2) I loved programming.  As I learned how to program the Notre Dame Fight Song note by note in LabVIEW, I had a revelation, an epiphany.  For me, software and programming didn’t have to be crunching numbers day after day and working on math-related projects.  No, with programming, I could create, I could make and write stories in code, illustrate and bring to life the worlds I had whirling around in my head for countless years.  Only through IEP would I have ever realized that I could like something so based in logic and computing, and because of that program, I’m sitting here, writing this blog post today.

Sunday, February 19, 2017

Arrogance and Ignorance Strike Again

Every time I write a blog for this class that is not about myself, I end up talking about how I see Computer Scientists: arrogant and ignorant but somehow loveable people who love technology.  While these adjectives may be stereotypical, they are stereotypes for a reason, and once again, they directly apply to the case at hand.

When discussing the Therac-25 incident, I think that Nancy Leveson and Clark S. Turner completely pinpoint the root causes of this disaster; in their article, they say that “accidents are seldom simple - they usually involve a complex web of interacting events with multiple contributing technical, human, and organizational factors.”  Breaking these events down, there were several technical problems that directly caused these accidents to occur. The major technical issue involved timing, for when the variable controlling whether x-rays or electrons were selected was altered, the machine required 8 seconds to readjust its settings.  If the value was changed again during this 8-second period, the machine would not recognize the second change and would display a Malfunction 54 message before overdosing a patient. Another error was discovered during machine set-up, for when a one-byte variable rolled over from 255 (its max) to 0, a portion of the machine would not be checked, causing a fault to go undetected. A few other issues, including a ghosting data table and the lack of hardware interlocks, were found within the machine and its code; these issues, while technical in nature, were left unchecked and unaddressed by the machine’s human creators.
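The rollover flaw in particular is simple enough to sketch. The snippet below is a hypothetical reconstruction, not AECL's actual code (which was PDP-11 assembly); the name class3 and the loop structure are my own illustrative assumptions. As Leveson and Turner describe it, a one-byte flag was incremented on each pass through the setup routine, and a safety check ran only while the flag was nonzero, so on every 256th pass the flag wrapped to 0 and the check was silently skipped:

```python
# Hypothetical sketch of the Therac-25 one-byte rollover bug.
# The real code was PDP-11 assembly; names here are illustrative.

def setup_passes(n):
    """Simulate n passes through the setup loop with a one-byte flag."""
    class3 = 0
    skipped_checks = 0
    for _ in range(n):
        class3 = (class3 + 1) % 256   # one-byte variable wraps back to 0
        if class3 != 0:
            pass                      # safety check would run here
        else:
            skipped_checks += 1       # every 256th pass: check skipped
    return skipped_checks

print(setup_passes(512))  # -> 2: two passes where the safety check never ran
```

Setting the flag to a fixed nonzero value instead of incrementing it (or backing the software up with a hardware interlock) would have eliminated this particular failure mode entirely.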

Within this network of causes, humans contributed their negligence and their arrogance to the failure that was the Therac-25. Throughout the investigation process and the supposed improvement of their code, AECL danced around the topic of fixing their fatal errors; the statements within the article led me to conclude that AECL did not actually test their code, either initially or after implementing their improvements. This statement (a true testament to their arrogance) led me to that conclusion: “the AECL representative, who was the quality assurance manager, responded that tests had been done on the CAP changes, but that the tests were not documented, and independent evaluation of the software ‘might not be possible.’”  Also, because the Therac-25’s code was built on top of old code (and written in assembly), the coders working on this project were bound to run into more issues.  Their negligence, however, resulted in final code that was even more convoluted and bug-ridden than the original.

Considering the organizational side of this issue, the approach taken to fix the problem was extremely ineffective. After arguing that they could not reproduce the problem on their own machine, AECL finally admitted to their mistakes, but when presenting the documents outlining the fixes they intended to implement, AECL was incredibly vague. For example, the document “contained a few additions to the Revision 2 modifications, notably - changes to the software to eliminate the behavior leading to the latest Yakima accident.” As the FDA said, simply stating that AECL intended to fix a problem does not provide enough detail within an official statement to ensure that the problem will be resolved.  Because of their ambiguity and their arrogance in admitting their mistakes, another accident occurred, and AECL’s organization as a whole continued to flounder and suffer as they attempted to cover up their fatal errors.

To avoid such technical, human, and organizational errors while working on safety-critical systems, coders must avoid the “common mistake in engineering, in this case and many others, of putting too much confidence in software.” Not placing any confidence in a program could also be detrimental, however, as over-testing and striving for unattainable perfection could wear down programmers and result in sub-par programs.  To maintain the balance between over-confidence and worried perfectionism, a general testing guideline could be outlined for safety-critical systems to ease the strain of testing and writing code. Programs that deal with people’s lives could also be inspected and thoroughly tested - with proper documentation - by people outside of the company to ensure quality control, for I know from personal experience the value of fresh, unbiased sets of eyes looking over programs for flaws.

When approaching these types of situations, it’s also a challenge to ensure that coders realize that their for loops and if statements could save or kill others, depending on how their code is utilized. Therefore, coders working on safety-critical systems should be of morally sound character, for apathy could accidentally cause fatal, devastating, or unwanted consequences. Caution and dogged diligence are also necessary when facing such life-threatening challenges, as programmers armed with a passion for rigorous testing and for life would be more willing to spend the time and effort to create bug-free code for such systems.
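
Those life-or-death for loops and if statements can be made concrete. Below is a minimal, hypothetical sketch (every name, mode, and limit here is invented for illustration; this is not the actual Therac-25 code) of the kind of software interlock whose absence proved fatal: a single guard function that refuses to fire the beam unless the machine’s physical state matches the operator’s prescription.

```python
# Hypothetical safety interlock, loosely inspired by the Therac-25's
# missing checks. All names and values are invented for illustration.

XRAY_MODE = "xray"
ELECTRON_MODE = "electron"

def safe_to_fire(prescribed_mode, turntable_position, dose_rate):
    """Return True only if every safety condition holds."""
    # X-ray mode requires the beam-flattening target in the beam path;
    # firing the high-current beam without it is what injured patients.
    if prescribed_mode == XRAY_MODE and turntable_position != "target_in_place":
        return False
    # Electron mode requires the scanning magnets instead.
    if prescribed_mode == ELECTRON_MODE and turntable_position != "scan_magnets":
        return False
    # Reject nonsensical or out-of-range dose rates (illustrative limit).
    if dose_rate <= 0 or dose_rate > 100:
        return False
    return True
```

A handful of lines like these, checked on every activation rather than trusted to hardware that no longer existed, is exactly the sort of code whose presence or absence decides whether a patient walks away.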

With all of this information in mind, I believe that when accidents like the Therac-25 occur, coders should be held accountable for the mistakes and bugs present in their code. When software engineers accept a job that deals with human lives, they are accepting that their code could potentially kill or save other people, and alongside this knowledge, they must also be held responsible for what arises from their creations.  While some people do not view a bridge or structural failure in the same light as a software malfunction, the two have essentially the same consequence – the potential cost of human life.  Also, the users of the program are not at fault for typing too fast; no, the programmers should be held accountable for this accidental oversight.  Arrogance, negligence, and ignorance are not excuses for fatal errors; even if an accident is rooted in these character traits, coders must still take responsibility for the creations they willingly set loose into a public setting.

Sunday, February 12, 2017

A Tangential Ramble about Go, Listening, and Conducts

Computer Scientists can have quite an ego.  They think they’re perfect – superheroes, even, armed with “powers” that the regular populace could never hope to acquire.

I think we’re just ignorant.  Too “book smart” for our own good.

The majority of my friends (myself included) who are Computer Scientists tend not to be as “street smart” as they are “book smart” (emphasis on majority here; I’m sure that there are plenty of other coders who do not fit this description).  My friends struggle with making conversation that does not pertain to seg faults or their favorite sci-fi TV shows, with detecting emotions in other people, and with realizing that their arguments can be seriously hurtful to the others involved in their debates.  Decision making is not in their repertoire, and navigating and planning events would completely overwhelm them.

Because of my observations of an admittedly small subset of Computer Scientists, I believe that Codes of Conduct are extremely necessary for any technology-oriented company.  Some people just don’t think about the ramifications their actions may have on other people or the life-changing consequences they might induce, so setting up a Code of Conduct would at least be a step in the right direction. Even if some people choose not to read or care about such a document, the existence of a Code of Conduct forces a set of rules on a group of people, and with this set of rules in place, people could not claim ignorance when penalized for their derogatory or morally wrong actions.  They also could not get away with stating, as Linus Torvalds did, that “I simply don't believe in being polite or politically correct.” As a Computer Science major, I have also learned to follow a strict set of guidelines when creating code for assignments; likewise, people are more likely to conduct themselves properly if guidelines are formally laid out and actually exist.

Based on this reasoning, I believe that the Go community’s Code of Conduct is the most inclusive, detailed, and effective of the Codes of Conduct that were shared. I appreciated its inclusion of example situations in which the Code of Conduct was violated, and unlike the Django and Ubuntu Codes (which contained nearly the same wording and structure for their points), the Go Code of Conduct contained rules and information specific to its own community, which I believe is an extremely important element in any Code of Conduct.  In contrast to the Linux Code, the Go Code of Conduct is also very specific and concise, avoiding, for the most part, flowery and frivolous language that means little to both the readers and the enforcers of these guidelines.  Also, the Linux Code dictates that sufferers contact an ambiguous advisory board or work out the issue among the parties themselves. I believe that this is completely unreasonable, as just “settling it amongst the parties involved” never results in a fair, resolved situation.  Therefore, the Go community’s Code of Conduct is more successful in that respect, as it involves moderators (who are also held accountable for their actions) who serve as non-partisan judges of the conflict at hand.

While I believe that a Code of Conduct should exist for such communities, I do not believe that it should be treated as a strict law without any leeway for less extreme situations. It’s a hard balance between being too strict and too lax with something as sensitive as social issues, and it’s even hard for me to “figure out” a stance on Codes of Conduct after reading these articles.  Everyone’s line between politically correct statements and slander is different, and every community and company is founded on different visions and desires.  Therefore, there are probably some communities in which a Code of Conduct would not need to be as strict or all-encompassing. In the case of the Ruby community, for example, maybe they need a Code of Conduct, but that set of guidelines could be tailored to fit the ideas of the community as a whole instead of the voices of SJWs.

This leads me to the Nodevember case.  While the articles did not provide much detail on the case as a whole, the statements from Crockford’s speeches included in Adam Morgan’s blog did not seem offensive to me whatsoever.  To others, however, or within other speeches not discussed in the articles, he could have said something more offensive that upset a multitude of people. Nodevember made the choice in the best interest of their community, but not giving an official statement, reason, or evidence as to why they decided to rescind their offer to Crockford does not seem fair to the other party involved. The evidence provided within Kas Perch’s article does not seem to be enough to justify removing Crockford from their list of speakers.  The whole case is shrouded in too much ambiguity for me to make a fair judgment as to which side was justified, but personally, I would be interested in hearing Crockford speak, especially after these accusations.

If there’s one thing in life that I do decently well, it’s listening to other people. (I feel like this blog is getting to be a bit tangential.)  I’ll listen to opposing views; I’d listen to Crockford because the “opposition” is just as interesting as the defense. Opposing views are what make life interesting and unique; without them, life would be pretty boring, since we would all just agree with each other all of the time.  Therefore, I would go listen to him. I wouldn’t get offended; I would just become more aware of the countless viewpoints people can have on a situation.

Saturday, January 28, 2017

Imposter

Before I begin, I would like to ask that you do not mention my name in association with this blogpost.  While I am fine with quotes from this post being read aloud in class, I request that you do not identify me as their author.
-----------------------------------------------------------------------------------------------
I am an imposter.

I am a fake, not a real programmer but a Computer Science major who masquerades as a “junior coder.” I am a person that Sam Phippen would refuse to hire, for I am just one of the “every final year CS student he currently knows” instead of a coding bootcamp graduate. I am wearing the mask of a true CS major, for I have squandered this absolutely amazing education by sliding through my Computer Science classes by the skin of my teeth. Even though I have passed all of my classes, I have learned so little about programming that, if asked, I could not program a linked list in any coding language. Sure, I could easily recite the basic theory behind a linked list, but creating one, especially in a short time frame?  That would be impossible.
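
For readers who have never faced the exercise, the linked list in question really is as small as interviewers claim; here is a minimal sketch in Python (one possible shape among many, not the “correct” answer any particular interviewer expects) of the data structure this post is talking about.

```python
# A minimal singly linked list, the classic interview exercise.

class Node:
    """One cell in the list: a value plus a reference to the next cell."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next_node = next_node

class LinkedList:
    def __init__(self):
        self.head = None  # empty list: no first node yet

    def push_front(self, value):
        """Prepend a value in O(1) by making it the new head."""
        self.head = Node(value, self.head)

    def to_list(self):
        """Walk the chain of nodes and collect the values in order."""
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next_node
        return out
```

Pushing 3, then 2, then 1 onto the front yields `[1, 2, 3]` when walked; the whole thing is perhaps twenty lines, which is exactly why failing to produce it under pressure feels so disproportionate to what one actually knows.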

I really am just an imposter.

Because of this attitude, I have had nothing but terrifying and embarrassing experiences with the technical interview process. Too nervous, and too busy juggling schoolwork, activities, and applications to study, I have gone into interviews so rattled that I could not even code a simple function. All I felt afterward was embarrassment and shame, for I could discuss the time complexity of a function but struggled to write a simple class in C++.  Since that failure, I have tried skimming through Cracking the Coding Interview and going to workshops about technical interviews, but this preparation did not bestow any confidence or optimism about obtaining another internship or a job. The prospect of continuing this search does not excite me whatsoever; I am terrified and disheartened because I know that immediate rejection is right around the corner.

Imposter.

Jeff Atwood reaffirms my fears about the hiring process, saying that “it is better to reject a good applicant every single time than accidentally accept one single mediocre applicant.”  I am one of those mediocre applicants, destined to be rejected simply because of my nervousness and my inability to work quickly and accurately. Due to these fears and the trouble I have in technical interviews, I wish that tech companies would approach their hiring process in a more uniform manner.

While I would like to rid the interview process of its technical components entirely, I do understand their importance, as highlighted within the reading for this week and the quote in the previous paragraph. One aspect of the hiring process that I dislike, however, is the inconsistency of question difficulty across companies’ tech interviews. For example, one company may hire you, as Dan Kegel says, “if you can successfully write a loop that goes from 1 to 10 in every language on your resume, can do simple arithmetic without a calculator, and can use recursion to solve a real problem.” In other interviews for a similar position, however, you may be asked to write an algorithm on a whiteboard in 15 minutes or recall ideas and concepts from every single one of your past Computer Science classes. It is also a little concerning that other engineering disciplines are not quizzed in the same manner that Computer Scientists are, for I know Mechanical Engineers, for example, who have gone through countless interviews without once being questioned about their engineering knowledge.  While I realize that every field of practice, position, and company requires different skills and that we should be able to adapt to these differences, the disparity between question complexities is tiring for interviewees.
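
To make Kegel’s low bar concrete, here is roughly what those two tasks look like in Python (my own illustrative versions; the “real problem” chosen for the recursion, summing sizes in a nested directory tree modeled as dicts, is an invented example, not Kegel’s):

```python
# Task 1: a loop that goes from 1 to 10.
def one_to_ten():
    result = []
    for i in range(1, 11):  # range's upper bound is exclusive
        result.append(i)
    return result

# Task 2: recursion on a "real" problem. Here a directory tree is modeled
# as nested dicts, where an int is a file size and a dict is a subdirectory.
def total_size(tree):
    if isinstance(tree, int):
        return tree  # base case: a file contributes its own size
    return sum(total_size(child) for child in tree.values())
```

Calling `total_size({"a": 3, "sub": {"b": 4, "c": 5}})` returns 12. The contrast between this bar and a fifteen-minute whiteboard algorithm is the disparity the paragraph above complains about.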

Based on what was said in the articles, I believe that companies should adopt the job-shadowing format for their hiring process. Interviewees would spend a day pair-programming with a senior engineer at the company, and by the end of the day, the senior programmer would assess the interviewee and decide whether or not to hire the junior coder.  In my opinion, this format would probably be very successful, as the other forms of interview do not accurately capture how a young graduate would operate as a member of the workforce. Quincy Larson agrees with me on this point, saying that “virtually every developer I’ve talked with agrees that one’s ability to write algorithms from memory on a whiteboard has almost nothing to do with real day-to-day developer work.”

Whiteboard questions force programmers into unnecessary perfection.  We are messy creators who spend about 70 percent of our time fixing and tweaking the little mistakes we make during the construction of our art.  Therefore, I believe, as Sam Phippen says, that whiteboard questioning and other forms of rapid but technical testing are “really not fair. Immediately you’re randomly filtering for people who are good at a kind of thinking which most programmers don’t encounter every day. Worse than that, you’re going to knock a lot of people’s confidence.”

This is true in my case; these interviews, as I have explained before, completely destroyed my confidence in obtaining a software position at a company.  In all honesty, I probably could be successful in such interviews if they were not so daunting, complicated, and nerve-wracking for a simple Computer Science graduate, but because of these factors, there is only the smallest chance that I will obtain a job in Computer Science upon graduation.

Therefore, I truly am a Computer Science imposter.

Sunday, January 22, 2017

Hacker. Writer. Painter. Computer Scientist.

If you had asked me, before this class, whether I was a hacker, I probably would have laughed at you and inwardly scoffed at such a question. Many in the general populace believe that all computer scientists are hackers: secretive geniuses working at odd hours to steal information from businesses and governments.  At least, that was my general opinion of hackers before I began studying Computer Science, and a stereotype that I may have harbored for a while even as I started my career at Notre Dame. No, I would tell you, I am definitely not one of those criminals, as The Mentor insists all hackers are.

I create code for “morally sound” causes, not for breaking into secure information, and I will continue to hold myself to that standard when I leave Notre Dame. I’m not interested in tweaking or exploiting software for the sake of curiosity, discovery, or fame; I want to design my own beautiful code that, through its execution, tells a story. I am not a hacker, not a destroyer, but an artist who writes and paints in code, simply using this tool as a means of achieving my end of sharing my stories with the world.

I am not a hacker.

According to the readings for this week, however, I just might be one.

In the essay Hackers and Painters, the author writes that hackers “are trying to write interesting software, and for whom computers are just a medium of expression, as concrete is for architects or paint for painters.” This quote in particular struck me, as I have always considered myself, contrary to most, a coder who simply uses this medium to tell stories through art and gameplay. Therefore, this article partly convinced me that I was a hacker, an unconventional creator, but I still was not sold on this definition of a hacker. The definition of hacking presented in The Word “Hacker” seemed to fit my original thoughts concerning the act: “it's called a hack when you do something in an ugly way. But when you do something so clever that you somehow beat the system, that's also called a hack.”

Based on these two modes of thought, I now believe that hackers are explorers, fueled by curiosity and potentially an aversion to authority, who break the limits of code and share their discoveries with the rest of the world. The way they approach their explorations into code differs from hacker to hacker; some may cheat the system with ugly code that works more on lucky coincidence than purposeful skill, while others may produce aesthetically beautiful code that, to them, is a form of art.

Based on this definition, there are a few fundamental ideas that encompass the hacker archetype. One such idea is complete control over their medium; as stated in The Word “Hacker,” “‘hacker’ connotes mastery in the most literal sense: someone who can make a computer do what he wants—whether the computer wants to or not.”  I believe that hackers must exhibit a rare form of brazen confidence, for this belief encourages them to explore the depths of a computer’s abilities as well as other coders’ work.  Hackers also must be unruly, willing and even enthusiastic to dabble in areas where they should not go and excited to break into forbidden and secure systems. In tandem with their unruliness, hackers must also “carry an ethic of disdain towards systems that normally allow little agency on the part of ordinary individuals” (Scott, The Hacker Hacked).

Possessing a certain skill, creativity, and grace is also essential to a hacker’s arsenal, as the job demands beautiful and subtle tweaks to code or scripts that cannot be easily detected by other users. To conclude this hacker archetype, these code artists must show passion and care for both their victims and those they hope to aid.  With this understanding of users, hackers can slip more easily into the cracks of code because they empathize with its creators and users.  As Hackers and Painters states: “empathy is probably the single most important difference between a good hacker and a great one.”

Overall, the idea of a creative, painter-like hacker continues to strike me as odd; I always thought of hacking as ugly and destructive, not a work of art.  Also, despite hacking often being a harmful, illegal act, hackers were mostly portrayed in a positive light throughout the given articles. These code manipulators were almost praised for their efforts despite the criminal nature of their works, which, for the most part, surprised me. Other than that, however, the definitions of hacker presented through these articles seemed fairly standard for someone coming from a computer science background; if they had been written by someone outside the major, my reaction to their ideas would probably have been a mix of confusion and amusement.

I believe that Brett Scott, in The Hacker Hacked, presents one of these stereotypical, positive definitions of hackers; he describes “the emergent tech industry’s definition of ‘hacking’ as quirky-but-edgy innovation by optimistic entrepreneurs with a love of getting things done.” This description, in a way, fits me perfectly, as I love to work on “quirky” projects that allow me to shape code to fit my personal ideas. Alongside this definition, I also fall into many of the descriptions given in A Portrait of J. Random Hacker: wearing no makeup and work-out clothes even though I hate sports, reading in my spare time, and eating mounds of Chinese and Mexican food.  Therefore, based on my choice of degree, my creative outlook concerning code, and my personal preferences, you may truly say that I am a hacker.  Based on my personal definition of a hacker, however, I lack the disdain for authority, the rebelliousness and unruliness, and the sense of pure curiosity that all hackers must exhibit.

So, if you were to ask me again whether I am a hacker, I might look at you thoughtfully for a few seconds before saying one word.  No.

Thursday, January 19, 2017

A Cautionary Tale of Talents

Before I begin, I must say that it was a bit surreal and nostalgic to unearth this old blog from my first semester here at Notre Dame.  I initially decided to use this blog out of pure convenience (and beginning of the semester laziness), but looking back on my old posts, I am glad that this blog will serve one final purpose as I conclude my time at Our Lady’s University.

All nostalgia aside, my name is Lauren Kuta, and I am currently a senior Computer Science and English major. Upon graduation, my dream careers include becoming a novelist or a digital animator, producing elements and works for either movies or video games.  After taking this class, I hope to gain a better understanding of the ethical and moral responsibilities that Computer Scientists have when constructing programs, and with this knowledge, I hope to be able to make the right decisions when faced with a moral dilemma in the workplace.

Based on the interactions that I have witnessed online, I believe that one of the most pressing issues that Computer Scientists face concerns copyright and fair use laws surrounding both code and other resources.  Copyright laws can be confusing and ambiguous, and ignorant coders can easily commit plagiarism if they do not carefully analyze these regulations.  Looking at the topics listed on the schedule for this course, though, I am interested in discussing outsourcing and visas in the context of Computer Science, for I had not initially considered the impact that these elements would have on my chosen field.

The Parable of the Talents, despite its age, addresses the impact that humans constantly have upon fields of study and society’s development.  As human beings, it is our duty to cultivate and share our talents with the world instead of burying them out of fear or supposed, false necessity. If people use what they have learned to benefit society, their gifts will not only aid those directly involved but will also reward the giver.

Personally, this story relates to me because, in being exposed to Computer Science at a formative age, I was given the opportunity to learn and appreciate a skill that others will never begin to understand. I was lucky; my parents could afford to send me to a special summer camp, and without it, I would never have dreamed that I could understand Computer Science.  Because of this opportunity and my relative success in the pursuit of my talents, I now have an innate duty to use Computer Science for the benefit of society, just as it is my duty to pursue my other talents with the goal of bettering the human race.

However, to quote Uncle Ben from Spider-Man, “with great power comes great responsibility.”  As a Computer Scientist, I cannot haphazardly toss my work out into the world; before exposing society to it, I first must ensure that my work can be called “a talent,” a helpful product that would not cause harm to anyone in any foreseeable circumstance. Bill Sourour discusses this issue of understanding in his article “The Code I’m Still Ashamed Of”; by blindly programming a “harmless” prescription quiz aimed at young girls, Sourour indirectly contributed to the death of a teenager because of the ethical flaws in his code. Society as a whole is also not educated about the harm that code can inflict, as Benedida mentions in his blogpost “The Responsibility We Have as Software Engineers.”  He states that “most people have a pretty good idea of the trust they’re placing in their doctor, while they have almost no idea that every time they install an app, enter some personal data, or share a private thought in a private electronic conversation, they’re trusting a set of software engineers who have very little in the form of ethical guidelines.” Because of society’s inherent ignorance toward software, my duty as a Computer Scientist also encompasses the moral implications that code carries; if I ignored such ethics, my talents would quickly become lost and wasted.

Upon worrying about the potential ethical consequences of publishing code, however, people may become too afraid, too concerned about hurting others, to openly give of their talents.  They are then like the man who buried his talent in the soil out of fear, leaving his gift to rot and receiving nothing upon his master’s return. Therefore, Computer Scientists must overcome their fears of imperfection and use their talents tactfully and thoughtfully, understanding the weight of even the smallest line of code.  With proper caution, Computer Scientists, including myself, can use our talents and passions to create code that aids the technological advancement of our society.