Saturday, May 13, 2017

Driving Me Crazy

People are just lazy, and because we want to be lazy, we are somehow motivated to innovate and advance today’s society just on the hope of someday being lazy. That, I believe, is the primary reason behind the development of self-driving cars.

All joking aside, the two motivations for researching and constructing self-driving cars appear to be safety and convenience. Car accidents are one of the leading causes of death in the United States, and taking control from humans and placing it in the hands of software would take incapacitated or unlicensed drivers off the road. Even good human drivers can cause accidents because of the briefest distraction or a bout of drowsiness; software, by comparison, cannot become distracted and will continually gather information to keep its passengers safe.

In addition to saving lives, the convenience of self-driving cars is a major motivator in their production.  A multitude of people, including myself, dislike driving or become incredibly nervous while driving; a self-driving car would ease my anxiety and ensure that my condition would not harm others. Self-driving cars would also allow people to be more productive as they travel, or to get the sleep and rest that few people receive on a daily basis.  In the same vein as convenience, shipping costs for both consumers and distributors would decrease, for companies would no longer have to pay truck drivers, who are the main transporters of goods in the country.

Even with these perks, people are vehemently against the concept of self-driving cars.  On a surface level, people enjoy the freedom and control they exert while driving, and the action brews a passion in some hearts that would be missed if cars were controlled by computers. Also, hackers are a serious threat to self-driving cars, for even cars today, with their limited computer-driven systems, can be hacked to stop abruptly or swerve into traffic, endangering the passengers inside. Truck drivers and those working within the market of transportation would lose their jobs, and because transportation is one of the leading industries in America, thousands of people would be without employment.

Personal opinions aside, the main question concerning these autonomous cars is this: would they truly make our roads safer? One article states that to prove that self-driving cars are safe, they would have to drive 275 million miles without a fatality, because only about one person is killed per hundred million miles driven. Completing this test would take about 12½ years with a fleet of 100 vehicles driving 24 hours a day.  This testing, however, would pay off in the long run, for the safety benefits that come with these cars, as discussed earlier, far outweigh the slippery-slope arguments about hacking and the eventual robot takeover of Earth. Tesla summed up this idea nicely, saying that “self-driving vehicles will play a crucial role in improving transportation safety and accelerating the world’s transition to a sustainable future.”
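
As a quick sanity check on that figure, the math works out if you assume the fleet averages roughly 25 mph — a number not given in the article, just my back-of-the-envelope assumption:

```python
# Back-of-the-envelope check of the "12.5 years" testing estimate.
# Assumption (mine, not the article's): the fleet averages about 25 mph.
fleet_size = 100              # vehicles
hours_per_day = 24            # driving around the clock
days_per_year = 365
avg_speed_mph = 25            # assumed average speed
target_miles = 275_000_000    # miles needed to statistically prove safety

miles_per_year = fleet_size * hours_per_day * days_per_year * avg_speed_mph
print(target_miles / miles_per_year)  # ~12.6 years
```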

In the same vein of safety, the question arises as to whom to keep safe in a dilemma; programmers, when designing the car’s software, would need to devise a solution to the infamous “Trolley Problem,” deciding whether to keep the passenger or those outside of the car safe.  I don’t think that there is a clear-cut answer to this issue, but I think that minimizing the loss of life would be the only fair way to program the driving software.  Taking into account age, personality, and “importance” is too convoluted and complex to incorporate into a program at this time; therefore, minimizing the loss of life would be the only objective way of dealing with complex crashes.
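
To make the “minimize the loss of life” rule concrete, here is a minimal sketch of how such a policy might be expressed in code. Everything here is hypothetical and wildly simplified — the maneuver names, the casualty estimates, even the idea that such estimates would be available — but it illustrates that the rule reduces to picking the option with the lowest expected casualties, with no weighting by age or “importance”:

```python
# Toy illustration of a "minimize expected loss of life" rule.
# All names and numbers are hypothetical, purely for discussion.

def choose_maneuver(candidates):
    """Return the maneuver with the lowest estimated casualty count.

    `candidates` maps each possible maneuver to its estimated casualties,
    counting passengers and pedestrians equally.
    """
    return min(candidates, key=candidates.get)

options = {
    "brake_in_lane": 2,   # estimated casualties if the car brakes straight ahead
    "swerve_left": 1,
    "swerve_right": 3,
}
print(choose_maneuver(options))  # -> "swerve_left"
```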

In addition to life-and-death situations, another question that emerges from this topic is who would be liable in a fatal accident.  In my opinion, if the user is told by the manufacturer that the car is 100 percent safe and an accident occurs, then the manufacturer should be liable for the crash.  If, however, the user has some sort of control over the vehicle, i.e., the car is not fully autonomous, then the user should be held responsible for the crash.  I don’t think that the programmers should necessarily be held responsible in either case; while they were the ones who developed the software, the company that created the car in the first place is responsible for performing the proper safety checks to make the car road-safe.

Self-driving cars would not only impact manufacturers and users of cars; normalizing autonomous vehicles would also have social and economic ramifications. On the social front, people could become even more physically and socially connected, for self-driving cars would allow users to safely contact friends and family during their drives and could allow for safer travel to remote locations.  Economically speaking, Lyft believes that people won’t have a need to purchase cars anymore, for the price of hailing a ride would be cheap enough that people would simply rely on such services.  In addition, truck drivers and the transportation business would be completely uprooted, as I have explained before.

The government would also be impacted by the universalization of self-driving cars, but honestly, I don’t think that the government would have to do much in terms of regulating them.  I believe that the regulation should be done by the manufacturers, and the government, in turn, should simply adopt the standards the manufacturers have put in place. A major change for the government, however, would be the issuing of driver’s licenses. In today’s society, licenses are commonly used as IDs, and without the need to license people to drive, the government would have to rethink its strategy concerning proofs of identification.

With all of these facts in mind, I would probably want a self-driving car.  While I don’t like driving, I understand it as a necessity and don’t mind driving now, so I’m fairly indifferent on the subject of autonomous vehicles. I do very much appreciate the safety benefits they provide, and overall, I think they are a smart investment.  If they would help save lives and would benefit society as a whole, then I’m totally on board.

He's a Pirate

(The last blog will be coming later today)

I think at some point in my life, I wanted to be a pirate, but nowadays, pirates are pretty lame, especially those who haunt the Internet, stealing from content creators and others.  The government acknowledged the lameness of pirates by enacting the DMCA; Robert S. Boynton describes this law as one “designed to protect copyrighted material on the Web. The act makes it possible for an Internet service provider to be liable for the material posted by its users.”

He goes on to explain that if an Internet service provider is being sued over the content of a subscriber’s Web site, that provider can simply remove the material in order to avoid any legal action. This is called a “safe harbor” provision, under which service providers can use a “notice and takedown” procedure to quickly disable the distribution of the illegal content.  Through this provision, intermediary sites, where users separate from the site can post information, can execute a takedown if the copyright holder flags the content as infringing. This idea, alongside other rules defined throughout the law, made it possible for websites to not live in constant fear of being sued into oblivion (David Kravets).

But what constitutes piracy?  Unfortunately, there are no clear rules defining how much of a work must be copied for it to be considered piracy.  Some of the factors in determining this, David Kravets explains, are whether the material was used commercially, whether it hurt the original’s market, and whether the new work is a parody. While this rule may seem fair on the surface, the DMCA also enforces the idea that consumers cannot lawfully copy DVDs that they have lawfully purchased; therefore, people claim that the law negates consumers’ rights and violates fair use.

I do not believe that it is ethical or moral for users to download and share copyrighted material.  Content creators pour their time, energy, and unique skill sets into creating something beautiful to share with the world, and by sharing it without cost, people are robbing creators of the money they rightfully earned.  Without paid entertainment, the entertainment business would cease to exist; people who make their livelihood off of movies, TV, music, and artwork are unrightfully losing money to these pirates.

The morality of sharing copyrighted material becomes a little murkier when a person already owns a version of the material or just wishes to sample it. If a person rightfully owns a copy of the material, I believe that it is okay to copy that material as long as he or she does not distribute the copy to anyone else.  This would be difficult to police, however, and counting on someone not to distribute that copy may be asking a lot.

As for sampling or testing the material before purchasing, I believe that this action is moral or ethical as long as the person is not testing or sampling the copyrighted material outside of its intended purpose.  If a person was interested in buying something and wanted a preview before investing, then I believe that it is moral for him or her to test the copyrighted material with an intent to buy.  However, if a person is solely utilizing the material for testing, then he or she must delete that material if he or she no longer has a use for it; keeping “unwanted” testing material around could quickly and easily lead to copyright infringement.

Even though I advocate against sharing copyrighted material, I have participated in this sharing in one way or another (please don’t turn me in to the authorities). I would rather not go into the details, but I justified my actions by believing that the information I was gathering could not be obtained in any other manner; I could not, after several hours, find and purchase this information legally, as it was not available to purchase or view on the company’s website or through a legal third party.  I still regret my actions and would rather support the creators financially and legally, but when only illegal sources make that information available, I would rather obtain it than not have access to it at all.

In my “innocent” world, I would like to believe that many people follow the same thought process that I do; they make this information free and available on the Internet for those who either cannot afford to obtain the copyrighted material or simply do not have a way of obtaining it legally. According to some of the articles, however, people are hoarders; they want to collect as much information as possible, without any intention of using it, for the sake of having a wealthy stash of information. The Internet easily enables this behavior, and through pirating, this information can easily be obtained without breaking the bank.  Others, too, want to make information free and available for everyone, eliminating any sort of boundaries that would prevent people from having access to any sort of information. Others still may not even realize that these acts of viewing and obtaining copyrighted material from the Internet are illegal.

With streaming services such as Netflix and Spotify, it has definitely become easier to watch and listen to copyrighted material legally.  With Netflix costing only a few dollars a month and Spotify offering a free tier, both of these services are inexpensive, legal ways of accessing copyrighted material that have discouraged and will continue to discourage pirating.  However, I think that pirates will always be around. Netflix and Spotify cannot buy the rights to every single TV show, movie, and song in existence; there will always be content that cannot be obtained any way besides pirating. While these services will be sufficient for the general public, there will always be people interested in content that the large, legal companies do not carry.

Based on this fact, I don’t think that piracy is a solvable problem. There will always be pirates out there, and the more the law cracks down on them, the craftier and more skilled they will become.  I think that streaming services and the push to make content easier to access legally will prevent the creation of some new pirates, but there will always be content that businesses do not make legally available online. I do, however, believe that it is a real problem; industries and content creators are continuously losing money to pirates and their illegal actions.

And I have contributed to this problem.

I guess I’m a pirate after all.

Friday, May 5, 2017

Warning: Exposure to Coding Could Change Your Life

I apologize once again for the length of this piece; this idea of computer science for all is something that, given my background and upbringing, I am fairly passionate about compared to some of the other topics we’ve covered in class.

On the subject of coding as the new literacy, one article mentioned that until about two centuries ago, most people could not read or write, in stark contrast to today’s society. Coding, according to this same article, is following a similar trend. However, because most people will not be exposed to raw code in their day-to-day lives, it is difficult to become code literate today; we are, as the article said, still operating in the scribal age of coding. In general, I pretty much agree with this reading of history.  I think that some people, at this point in time, should know how to code, but it is not essential because our technology does not currently require that knowledge. In the future, my opinion might change, but given technology’s current progress, I don’t think that code is necessarily the new literacy. I do believe, however, that it is an important skill to learn nonetheless.

Speaking from my own experiences, I believe that everyone should at least be exposed to Computer Science and coding. In my high school, Computer Science classes consisted of learning how to use the Microsoft Office Suite and playing typing games; my teachers did not know how to code whatsoever. If my parents hadn’t forced me to attend a coding camp during high school, I doubt that I would have decided to make coding my profession, and plenty of my classmates could have become successful programmers if they had been introduced to the concept before college. Because of our school’s ignorance, I was one of only four students who decided to study anything vaguely related to computers.

A study claims, however, that exposing high school students to coding would be a waste of time, for only a few people have the aptitude to understand code’s logical nature. I disagree with this study and its logic and instead agree with the words of Mark Guzdial, who states that students with a fixed mindset, those who believe that their abilities and talents are inherited, will struggle with coding if they do not have a natural tendency to understand it. A student with a growth mindset will instead be more successful, learning to accept failures as opportunities to grow and work harder.  Because a majority of the coding process is failing and correcting errors, I believe that maintaining a growth mindset is critical to success in Computer Science for a student who is not naturally gifted in the subject.

Through a growth mindset, I believe that anyone can learn Computer Science, and I am living proof of this concept. I was definitely not “born” or graced with an aptitude for programming; while I kept up with the class for the first few weeks of Fundamentals of Computing I, I have definitely struggled my way through almost all of my other Computer Science classes. Through hard work, though, I was able to overcome my shortcomings and learn how to code successfully. Based on my experience, I believe that while not everyone wants to put in the effort, hard work would allow anyone to understand code; it just may take some people longer than others to internalize basic concepts.

This hard work, too, can pay off. Companies in every field imaginable are looking for computer scientists, but because most students are not being exposed to coding in high school, people develop an interest in coding too late in their careers or not at all.  Therefore, high schools need to require students to take a fundamental computing class. Currently, our lives are run by technology. The average person carries several computers on their person at any given time (a phone, a laptop, perhaps a smartwatch), and it is important to have at least a high school level of understanding of those devices. People in nearly any industry (nurses, retailers, businesspeople) encounter machines that run on code on a daily basis, and people in those positions should fundamentally know how code works in order to run the computers that help them save and improve lives.

There are some complications, however, in bringing Computer Science to high schools across the country.  The biggest argument against this movement is that there are not enough teachers trained to instruct high schoolers on the subject; most computer scientists end up in industry because of higher-paying opportunities. Also, poorer schools could not keep up with changing technology; replacing computers and other devices every few years is incredibly expensive for most school systems. In addition, parents and others believe that coding is not currently useful in most people’s day-to-day activities and that it is not worth investing in such a “specialized” field.

While I believe that these concerns about CS4All are valid, I still heartily believe that Computer Science courses should be added as a requirement to the high school curriculum. I don’t think that teaching grade schoolers how to code would be effective overall; based on experience, younger children are usually more interested in the visual results than in the code itself or the experience of coding. Younger children also do not know what they want in life and are bound to change their career goals despite exposure to code during school. It may be pertinent to make coding optional for grade school students or to encourage parents to follow up with their children, but overall, I think that coding should be introduced at the high school level.

By offering Computer Science classes to all high schoolers, children who never believed that they could be programmers could be inspired to code, and just being exposed to the logical thinking needed to code would be beneficial to students in any profession.  Also, I do not believe that a programming class should be an option in place of other language classes. Learning how to code requires a completely different mindset than what’s needed to understand a foreign language; therefore, I believe that it would be erroneous to group these subjects together. While I understand that it’s difficult to add more courses to a required curriculum, I believe that given the direction of society, exposure to coding in high school is critical in the formation of more Computer Scientists.

I think that the key to teaching Computer Science is computational thinking, because even if students never write a single line of code after class, they may end up using programs that rely on block-based coding or will need to know the basics of coding to be hired. Focusing on the visual aspects of coding, as well as the final product, is also critical in the early stages of coders’ development; learning that I could make anything with code was an important stepping stone in my career as a programmer. Too much emphasis on this aspect, however, may be detrimental and may cause students to believe that they have been “tricked,” so it’s also important to show the nitty-gritty of coding.  Therefore, I believe that the optimal high school computer science program would focus on these three ideas – computational thinking, visualization, and hard coding – in order to give students a well-rounded view of what coding truly entails.
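
As one hypothetical example of the “visualization” piece, a first-week exercise might be a handful of lines of Python’s built-in turtle graphics, where every tweak to the code produces a change students can immediately see on screen. This is just my sketch of such an exercise, not a prescribed curriculum:

```python
# A possible first exercise: draw a square, then change the numbers and rerun.
# Uses Python's built-in turtle module; running this opens a drawing window.
import turtle

pen = turtle.Turtle()
for _ in range(4):        # four sides of the square
    pen.forward(100)      # side length in pixels -- try changing this
    pen.left(90)          # turn 90 degrees at each corner

turtle.done()             # keep the window open until it is closed
```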

Anyone can learn how to program. It’s just like any other skill in this world.  Some people will have an innate talent for it; some people will not.  Some people will want to code, and others couldn’t care less about it. Those with both talent and drive will perform successfully, but those who work hard and dream of being successful will also become coders.  It just might take some people more time to learn than others. In coding, practice also makes perfect; the more you write and learn to think computationally, the better a coder you will become. I think the aptitude study discussed earlier is bogus; all you need is drive and the ability to accept failure in order to be successful.

While I believe that anyone can learn how to program, I don’t think that everyone necessarily should learn the gory details of coding. As Basel Farag said, “the line between learning to code and getting paid to program as a profession is not an easy line to cross.” I agree with this idea and believe that everyone should cross the “learning to code” line, but not everyone is destined to become a paid coding professional.  Being exposed to coding in high school is important to our development as a society and to people’s daily lives, but requiring anything beyond a semester in high school is unnecessary.  The majority of people will still not use detailed coding knowledge in their jobs and daily lives, and there are a multitude of jobs that require intense specialization in subjects other than coding.  Without these other jobs, society would cease to function; therefore, I believe that it is not completely necessary for people to learn how to code.  Exposure to coding, however, is definitely necessary, for without an introduction to the world of code, people would not realize coding’s true potential.

I do plan on turning in the past two blogs by the end of Finals Week.  Life got the best of me these past few months, and even if you cannot give them a grade, I would still like to finish them because I really do care about my performance in this class.

Sunday, April 2, 2017

Busting open Vault 7: Project 3 Individual Reflection

Link to Podcast: https://drive.google.com/a/nd.edu/file/d/0BwX2A7FfvP-ONHpRMWhsdWpTV2M/view?usp=sharing

Vault 7, as recently exposed by WikiLeaks, details programs that can be utilized by the CIA to hack into almost any device imaginable and use it as a glorified wiretap.  One such example, as I discussed in our podcast, was “Weeping Angel,” which would turn a Samsung Smart TV into a microphone while making the user believe that the TV has been turned off. It is hacks like these that continue to reaffirm my fears of all-encompassing government surveillance, and while I understand, as Matt mentioned in our podcast, that such a program cannot be installed remotely, the mere proof that such programs are being developed frightens me as someone who values privacy.

Because of my fears, I believe that it is right for WikiLeaks to bring information like Vault 7 to the public’s attention; however, the website should exhibit more caution than it has when releasing such information. Ignoring the problems that WikiLeaks addresses is not bliss; this ignorance only perpetuates the problem. Therefore, telling the public about these morally grey actions is extremely important, but those involved with such information need to be protected from the potential negative consequences of its release.

Protecting and/or separating the “message” from the “messenger” is easier said than done, though.  If one is tactful and does not in any way bring attention to oneself in regard to the information, then one may be able to separate oneself from one’s message. However, digital footprints can, now more than ever, form the connection between message and messenger, and this seems especially true if the message is posted on WikiLeaks. Given Julian Assange’s inability to separate himself from his beloved website, I believe that WikiLeaks itself is too controversial a website for whistleblowers to post to anonymously.

Instead of revealing such information through WikiLeaks, I believe that going through media outlets such as The New York Times would allow whistleblowers to better hide their identities.  Trusting the government to reveal such information would, in most cases, be unwise; while the government would be less likely to damage national security when disclosing previously private information, it also seems more likely never to inform the public at all. Therefore, in a whistleblowing situation, I believe that the messenger cannot rely on the government to eventually make the information public, but the messenger also must be wary of leaking information to places such as WikiLeaks, where their information and identity may not be kept private.

In terms of the release of such information, I believe it is nearly impossible to objectively decide whether whistleblower material should be released to the public. Withholding information about behavior that harmed civilians or could potentially harm societal functions is ethically wrong, but information that endangers US security or puts lives unnecessarily at risk should not be revealed to the public. In general, however, I believe that whistleblowing is a commendable act, as it uncovers unethical behavior and promotes change and the prevention of such actions.

Concerning whistleblowing, I also do not believe that the messenger needs to be fully transparent or needs to be made transparent by being forced to reveal certain information to the public. While I am unsure of all the connotations “transparency” holds in this context, forcing a person to be transparent by making them reveal information against their will is morally wrong. The whistleblower has the right to be as transparent as he or she wishes when disclosing sensitive materials. While transparency may seem ideal in most situations, everyone has their own secrets, and completely transparent methods of releasing information are not always in the best interests of the messengers or the receivers.


Not Man vs. Machine, but Man with Machine

In science fiction movies, one of the most stereotypical plots within the genre is that of man vs. machine, where machines, in one way or another, try to eradicate humans, surpass human intelligence, or transcend human beings altogether. While there are plenty of exceptions to this stereotype, this struggle between man and machine is the first idea that pops into people’s minds when discussing artificial intelligence.  However, I believe that within the real world, this struggle is not something developers should be concerned about given today’s technology.

Based on the provided readings, today’s artificial intelligence seems to be more focused on analyzing large data sets than on taking over the world, and it is through this analysis that artificial intelligence differs from human intelligence.  For example, AlphaGo developed from its examination of over 150,000 human players’ games, a feat impossible for a single human to complete. Artificial intelligence also emerges from copious amounts of repetition; for example, neural networks rely on massive amounts of trial and error to solve levels of Super Mario Bros. Through these repetitions, which often result in failure, or even through playing against itself repeatedly, the computer “learns” from its mistakes and forms a sort of intelligence from its discoveries.
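
To make that “repetition and failure” idea concrete, here is a minimal sketch of trial-and-error learning on a toy problem. It is a stand-in illustration of the general principle, not the actual Mario or AlphaGo training code: the program repeatedly tries actions, often picks badly, and gradually refines its estimates from the feedback it gets.

```python
# Minimal trial-and-error "learning": discover which of 10 actions gives the
# highest (hidden) reward purely by trying them over and over.
import random

hidden_reward = [random.random() for _ in range(10)]  # unknown to the learner
estimates = [0.0] * 10
counts = [0] * 10

for trial in range(5000):
    if random.random() < 0.1:                 # sometimes explore (and often fail)
        action = random.randrange(10)
    else:                                     # otherwise exploit the best guess so far
        action = max(range(10), key=lambda a: estimates[a])
    reward = hidden_reward[action] + random.gauss(0, 0.1)   # noisy feedback
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("best action found:", max(range(10), key=lambda a: estimates[a]))
print("true best action: ", max(range(10), key=lambda a: hidden_reward[a]))
```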

While we humans also learn from our mistakes, the manner in which we learn is fairly different from this analysis approach. Humans learn mostly from intuition and experience; because of our more complex neural networks, we are able to pick up and understand complex ideas through limited examples. One case that proves this concept is that Lee Sedol became one of the best Go players in the world through practice, not through thousands or millions of played games.  Humans are also able to more easily distinguish between physical objects; one article showed how current computers grossly misidentify objects within an image, such as mistaking a road sign for a refrigerator.  In summary, I believe that this article stated the relationship between human and artificial intelligence well: “Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.”

Based on these comparisons of artificial and human intelligence, I think that software such as AlphaGo, DeepBlue, and Watson are steps in the right direction to producing humanlike AI.  While they are not currently proof of the viability of strong artificial intelligence, they or similar programs may become examples in the not-so-distant future. These AIs are definitely not simply gimmicks; we can learn a great deal from their creation and their testing to aid in the future development of more advanced systems. Therefore, current AIs, in my opinion, are programs that, while not achieving full human intelligence, are stepping stones to achieving even more humanlike machines.

In testing such systems’ intelligence, the first thing that comes to mind is the Turing Test, which, since the 1950s, has served as the main framework for discussing whether machines can truly think.  The Chinese Room is a counterargument to the Turing Test that was established in the 1980s, but Turing does briefly address the issue within his paper. Within “The Argument from Consciousness,” Turing outlines an objection to his argument which states that until a computer can produce a sonnet through its own emotions and not through preprogramming, it cannot be considered a thinking machine. This is essentially the Chinese Room’s argument as well: that a machine cannot “think” if it constructs sentences from a set of rules instead of through intuition and thought.

Turing’s counterargument to this claim, in my opinion, is fairly weak; he states that if you do not accept the validity of his test, then the only way to prove that the machine has the ability to think is to become that particular computer. He later goes on to say that this question should be explored more deeply when technology is more advanced.

Based on this response, I believe that the Chinese Room is a strong counterargument to the Turing Test, for how do we really know if the machine is truly thinking, generating these ideas of its own volition? I think that the Turing Test sets strong initial guidelines for testing AI, but recently, I have continually seen strong arguments that this test does not truly prove whether a machine can “think.”

Returning to my opening statement with the previous information in mind, I have come to the conclusion that the paranoia surrounding the power and dangers of future AI should be acknowledged but should not be an active concern. Comparing the AI we have now to those seen in major sci-fi films, humanity is not even close to producing something on HAL’s (2001: A Space Odyssey) or Ava’s (Ex Machina) level.  It has taken us quite a while just to reach this level of artificial intelligence, and because of this, I don’t think that AI will suddenly develop in massive, unexpected leaps that would produce human-level intelligence in computers.

While I don’t think we need to currently worry about computers destroying the human race, I believe that some of the concerns raised about AI are warranted. For example, giving complete control of weapons systems to an AI is rather frightening to me (this fear could easily be unwarranted, however). While humans could program safeguards into such systems, giving AI the ability to control weapons could be dangerous.

In addition, the fear of AIs replacing humans within the workplace, I believe, is extremely warranted. While I understand that humans perform some menial tasks more efficiently than computers, there are plenty of tasks for which AIs would have a stronger résumé than their human counterparts, such as driving, calculating, or even running hotels.  However, I do not think that humanity needs to stress over these scenarios just yet, for AI’s current level of intelligence cannot shatter our society. This opinion, however, may rapidly change in coming years as technology continues to advance.

Along those same lines, I also do not believe that artificial intelligence could be considered similar to the human mind; however, AIs could potentially have their own form of mind in the future.  The movie Her explores this idea of AIs being minds, for at the end of the movie (spoilers), the AIs present in the movie transcend their code and “physical containers” into a higher state of being.  While this idea is far-fetched, AI could potentially develop into a hybrid that is not necessarily humanlike or logically-based, but something entirely new. A unique mind.

On the flip side, humans cannot be called biological computers either; while humans have been labeled as computers and have the capacity to operate like glorified machines, we are more than breathing calculators. To claim that we are all simply computers would be to label us as uniform and artificial.  This label would not only disregard our creative and other cognitive abilities but would also take away our humanity, and making this assumption could pose threats to the rights, respect, and dignity that all humans inherently deserve. Calling us computers would take away our individuality, the spark within us that fuels creativity, and the originality that separates us from all other beings on this earth.

Sunday, March 26, 2017

Is This the Real Life? Is This Just Fantasy?

I guess a more apt title would be “Why I don’t have a Facebook Account Part II,” but I think that this title fits the overall topic of this blog better.

While I am probably the only person who has a Twitter account but not a Facebook account, I do not see or look for news whenever I log onto social media. I also do not have friends or family willing to let me peek at their accounts, so I have never been directly exposed to this mysterious thing called “fake news.”  Fake news has never directly affected me, but after reading these articles, I think I have some understanding of this widespread phenomenon.

Fake news consists of news articles, usually published online, that dish out false information as “clickbait,” existing simply to make money from people clicking on and viewing the story. These articles are usually filled with false and sensationalized information about a specific political topic, and while they are occasionally written in hopes of attacking other political parties, they are mostly ploys to gain revenue from website advertisements. Fake news is highly sensationalized but is written in such a way that readers may believe it is true if they do not fact-check the source.

Overall, I think that the concept of fake news is fascinating; I wonder who came up with this idea of writing such articles to generate views and revenue.  I’m not really offended by the existence of fake news; I think such articles can be very entertaining, like The Onion, as long as readers know beforehand that the news is fake.  This is, however, where the problem of fake news arises, for these articles are written in such a way that it is almost impossible to tell the difference between reality and fantasy. If certain fake news is widely spread and believed to be true, lies can influence readers’ opinions, and that’s a pretty scary prospect in my opinion.

As scary as that possibility may seem, I don’t think that fake news should be totally censored. Not only do people have a right, to a certain extent, to publish what they wish on the internet, but these articles also seem to have some entertainment value. Free speech, too, would allow for their publication.  In the case of displaying links to these articles in other websites’ news sections, private companies have the ability to choose whether to promote such links or not.  Removing these articles from the internet entirely, however, would be wrong.

Trusting social media to pick the proper news to display on their websites, fake or not, is a trickier situation. On this topic, I believe that Michael Nunez captures my opinion quite well; he states that “imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company’s claims that the trending module simply lists ‘topics that have recently become popular on Facebook.’”  In principle, I do not have a problem with private companies deciding which news content to display, but if a company decides to include an unbiased news section on its page, then that section needs to be fact-checked and filtered for fake news. For example, I believe that the Facebook employees who chose articles for the website’s trending news section based on bias were acting immorally and against the idea of unbiased, reliable news.  I also believe, however, that it would be acceptable for a website to prefer “biased” news if that is what the corporation or organization believes and it expressly states that fact alongside such articles.

While I believe that deciding what type of news content to display is up to the individual company, I’m leery of the idea of Facebook and Twitter deciding the accuracy and truth of an article.  Because of the aforementioned case of personal bias, I believe that such social media companies should enlist the help of third-party resources to ensure an article’s accuracy and save the time and energy of creating a separate team to complete this task.  However, I do believe that it is imperative that companies mark the nature of the news sources they promote in order to properly inform their users.

Leaving this clarification out of websites is the most dangerous aspect of fake news; Max Read states in his article that “many of those stories were lies, or ‘parodies,’ but their appearance and placement in a news feed were no different from those of any publisher with a commitment to, you know, not lying.” Luckily, Timothy B. Lee has the perfect solution to this terrifying prospect: “one way to help address these concerns is by being transparent. Facebook could provide users with a lot more information about why the news feed algorithm chose the particular stories it did.”

I believe that transparency in this situation is key. While companies have a right to promote whatever articles they wish, both they and their users need to be acutely aware of an article’s nature before reading it.  Companies have an innate responsibility to provide accurate news in line with their goals and beliefs, and social media sites and aggregators of information have the same responsibility to truthfully report the type of news before users read it.

The type of news specifically reported to me was all the same until I left Nebraska; everyone around me had the same views on religion, politics, the weather, anything. It wasn’t until I came to Notre Dame and started having intense conversations about these topics that I realized that people held such different viewpoints on every debatable subject. Therefore, being in a bubble and breaking out of it has allowed me to experience and understand various viewpoints on different subjects.  I think this bubble bursting could occur through the internet as well, if users decided to receive information from multiple sources and be open to the existence of varying opinions.

Even though the threat of a “post-fact” world looms if people do not stay open to others’ opinions, I think that people have always gravitated toward sensationalized news that triggers their emotions, and there have always been truth-seekers willing to look past the emotion to find the accuracy within articles. Facebook is currently pushing this search for truth as well, dispelling “post-fact” news by eliminating fake news from its trending section.

Fake news will always exist in today’s society, and that’s not a bad thing. However, as people become more aware of its dangers, as I have, I think that the truth will prevail despite the threat of a “post-fact” world.


Tuesday, March 21, 2017

Come On, Let's Play Monopoly

(What the title is referencing)

On the topic of corporate personhood, I believe that Kent Greenfield defined this topic very well. He writes that “corporate personhood” simply expresses the idea that the corporation has a legal identity separate from its shareholders.  However, I believe that corporate personhood is a step beyond this idea; it is, instead, granting businesses rights similar to a person’s, including the ability to spend campaign money or practice a particular set of religious beliefs.

At least, that’s what I believe it to be, based on the readings.  This whole concept is extremely confusing, especially for someone who is trying to comprehend it while motion sick on an airplane.  Therefore, I would like to blame any misconceptions about this topic on the fact that this blog is being completed on an airplane. (Thanks a lot for scheduling a blog over break.  I really appreciate it.)

(That was sarcasm, if you couldn’t tell.)

The ramifications of this concept manifest themselves, in one way, in the debate over where the line should be drawn regarding freedom of speech; because a corporation does not have a single conscience, how can it decide, as a whole, which religion to support? The views of the corporation may not always represent the views of everyone involved within it; therefore, adopting a religion for an entire company can cause issues and divides between workers and administrators.

On the other hand, treating the corporate person as a separate entity allows the people who actually caused an accident to avoid blame and incrimination; the corporation will be blamed instead. But again, this easily becomes a double-edged sword. By claiming that the corporation is separate and by placing all of the blame on one invisible person, corporations as a whole can more easily execute immoral acts. However, if one person causes an issue, then everyone in the company is, in a way, punished, because there is no actual person who can shoulder the blame.

After examining the Microsoft antitrust case, I don’t think that their practices could be considered unethical or immoral.  All that they did was bundle one of their products with another one of their products; it’s their right, as owners of the OS, to include whatever software they produce with a product of their own.  If Netscape had developed their own OS, then it would have been totally fine for them to pre-install Netscape on it; making your own browser doesn’t immediately make a corporation a monopoly.

From my perspective, this whole situation is like calling McDonald’s or Burger King a monopoly for selling their own sides.  Both companies really specialize in burger-making (operating systems) but pair their burgers with french fries or milkshakes (browsers) produced only by them.  If bundling made a monopoly, then sellers of those side products should be outraged and claim that those restaurants are monopolies; while I am not sure how many operating systems are out there, it would be ridiculous to limit a company to making a single product just to avoid being labeled a monopoly.

If Microsoft did make it harder to install different browsers by putting up deliberate roadblocks in their software to slow their machines down, then I would consider the corporation a monopoly.  Also, if Microsoft had been forcing others to use a different version of Java, this behavior may have made them out to be a monopoly as well, but if they had simply developed their own version of Java for their own use, I don’t think that this development would warrant the label of monopoly.  All of these ifs, however, are just ifs; if (again) they did not actually practice these deterrent actions, then how can you call Microsoft a monopoly?  Microsoft simply released their own browser alongside another of their products, and in my opinion, there’s nothing monopolizing about that.

When discussing the line between a normal corporation and a monopoly, I think the main restriction should be based on whether a corporation is actively working to hurt or prevent other companies from succeeding (the key word here being actively). As I stated previously, if a corporation deliberately creates scripts to slow down download times, prevents a user from purchasing and using another company’s software, or directly prevents another company from flourishing in any way, then that would be considered ruthless behavior. However, competition does not equal monopoly. If two companies are making similar products and one is successful while the other is not, the struggling company cannot automatically claim its rival is a monopoly; clearly, if there is competition and a market for both companies, then the successful one cannot be a monopoly.  While the business world can be ruthless, it is not necessarily oppressive.  It’s survival of the fittest, but actively tampering with a rival’s success is not fair.

While I’m not totally sure of my stance on corporate personhood, I think, based on my opinion about ruthless practices, that corporations do have a general moral obligation to treat their workers and other companies with respect. They also have to respect the practices of other corporations and treat other corporations the way they would want to be treated.  I unfortunately don’t have much to say about this topic because I think the whole situation is fairly straightforward: if you treat a company like a person, then it is automatically given, for the most part, the responsibilities that an individual would have.

And this opinion would hold true in regards to Microsoft, if you safely assume that they did not actively slow down the installation of Netscape. Microsoft treated their rivals with respect while still working to improve their business. Even after being found to be a monopoly, Microsoft accepted defeat and fulfilled their given responsibilities, even issuing a version of Windows without Internet Explorer to Europe.  I believe that these actions are sound evidence that Microsoft is a morally sound corporation, even if people did not accept their apology and subsequent actions.

On a larger scale, I think that this quote from the Cato Institute’s Ilya Shapiro says it all concerning corporations’ moral integrity. "Nobody is saying that corporations are living, breathing entities, or that they have souls or anything like that," he says. "This is about protecting the rights of the individuals that associate in this way." For a corporation to behave morally and ethically, it should protect the rights of the individuals within the company and treat rival corporations with the same respect that it owes its employees. Through this system of respect and care, corporations can behave morally and ethically.

(Sorry for the length…there were a lot of questions, and I had a lot to say for some of them.  But I’m not going to try to cut it at this point.)