Sunday, February 26, 2017

Why I Don't Have a Facebook Account

As much as I would have liked to include an anecdote about why I don’t have a Facebook account here, I’m very long-winded and am just barely under the word count. So, let’s just jump right in.
After reading and analyzing the given articles, I believe that technology-oriented companies should not weaken encryption or implement backdoors for the sole purpose of government surveillance.  I do believe that companies should allow the government to examine a single, specific device during a criminal investigation; for example, I believe that the government was right in trying to access the San Bernardino shooters' phone, and it should be allowed to view the records of, as the articles mention countless times, a person involved in child pornography.  If a search warrant is properly secured, then the government should be given access to that single device's contents, for searching through a phone is roughly equivalent to examining other personal items, such as a diary, which could contain evidence useful for a conviction.

Smaller cases aside, government surveillance on a massive, generic scale, such as maintaining a database of driver's licenses or secretly asking Google, Yahoo, and Facebook for browser history and other information, is an invasion of privacy. Accessing this data without any suspicion of criminal activity is unwarranted and would be unnecessary additional work for the computer scientists involved. Within all of that gathered data, the likelihood of detecting criminal behavior would be relatively small, and the effort could easily lead to misidentification or biased assumptions that could, in turn, cause false accusations or arrests. Therefore, I believe that while government intrusion into someone's "private information" would be acceptable in the context of a completed criminal act, mass surveillance on the chance of maybe catching a criminal is unnecessary and frightening.

There is a strong counter-argument to this position, however: if someone broke the encryption on a phone for one specific case, then that person could continue to use the same technique to its fullest potential and as a source of leverage.  This is exactly what the FBI did when they jailbroke an iPhone; because of that success, the FBI expanded their jurisdiction by requesting the ability to obtain overseas warrants to hack devices remotely and to collect more personal information from companies, including account numbers and login information.  In my opinion, this is definitely a violation of privacy, and if such successes continue, government officials could continue to demand more and more information, causing this problem to snowball out of control.

To prevent this mass surveillance, an extremely strict set of checks and balances could be enforced between providers and the FBI concerning encryption and information distribution. Going behind Apple's back to retrieve information from the San Bernardino shooters' phone was wrong; the FBI abused its method of retrieving information by continuing to expand and push for further surveillance. Overall, both parties involved must be cautious and conscientious about the information they are handling and the powers they use to access such information.

Considering just the tech companies' role in this relationship, I believe that they must protect privacy while also, when possible, aiding the government in its investigations. When buying a phone or other device, the consumer trusts the device and its makers to a certain degree, agreeing to use it on the grounds that their texts are private and their information is kept safe. Therefore, in regular, day-to-day circumstances, companies are obligated to provide this form of privacy. However, when crimes occur and search warrants are produced, tech companies should comply with the law if the device contains legitimate evidence or information crucial to solving a crime.

This approach of allowing limited investigations, I believe, does at least an acceptable job of balancing free-flowing information against terrorism.  There is probably a better way of approaching this precarious situation, for my method does not actively "prevent" terrorist or criminal activities from happening in the first place. However, if the government starts to accuse people of crimes that, at one time, they may have considered but decided against committing, then that would be inherently wrong. At the same time, though, preventing injuries from ever occurring would be amazing.  To once again push back on that point, texts and metadata are not always indicative of final behavior. Based on this paragraph alone, I believe there is no absolute way to balance freedom with absolute security, but through limited investigations, the government and tech companies can take a step in the right direction and promote these two concepts equally.

As I have stated before, one of the flaws in my "plan" is that I do not provide any assurance that crimes can be prevented preemptively. This is because if government surveillance becomes too all-encompassing and invasive, terrorists and those who wish to commit crimes will find a way to do so even without the internet or tech companies' devices. If such mass surveillance were to occur, fear and distrust would grow in citizens' hearts, and those with malicious intent would be even more encouraged to operate in subtler, undetected ways. By telling society that if they have nothing to hide, they have nothing to fear, the government would also cultivate the fear of an official falsely interpreting posts and actions, deciding that a person is hiding something, and then taking appropriate measures to eliminate the threat. In the end, the government would make society more dangerous, as terrorists are pushed, out of fear, from the more public channels into places where evidence against them would be even harder to collect.

Thursday, February 23, 2017

Hidden Figures and My Role Model: The Engineering Camp

After watching Hidden Figures, it dawned on me that minority and women engineers are not really celebrated within their field or within history, despite the recent push for such groups to join the industry.  I never see ads that celebrate historic minority engineers, and we never discussed them in high school or college. This movie, however, brought such historical figures to the forefront, but even though these three women were able to break the mold and become successful engineers, I had never heard of these amazing figures before this film. That's probably why the film was given such a title; these women were truly Hidden Figures until this movie told their remarkable stories.

As Jacob Kassman and I discussed in our podcast, I think one of the reasons why these figures are not celebrated is that, to a multitude of people still alive and still within the industry, this isn't exactly "history."  As a millennial, I feel like such blatant discrimination and segregation took place millennia ago, but in reality, these events occurred within the past 56 years. There are still plenty of people alive today who lived and worked during this time, and the backbone of our society, our workplaces, now rests on their shoulders.

As much as we would like to believe that it has disappeared from the hearts of these people, racism and discrimination still work subtly through people's actions, sometimes without them even realizing it.  This is covered beautifully within the movie when a supervisor tells Dorothy Vaughan that she has nothing against "y'all" or "black people."  Mrs. Vaughan responds, "I know, I know you probably believe that."  This line really struck a chord with me, as it powerfully showed that despite what we may say, racism and discrimination can still be buried in our hearts, surfacing either blatantly or through microaggressions and slip-ups. The movie further proved this point through Mr. Johnson, who, upon first meeting Miss Goble, was shocked that they would let a woman like her work on such complicated and sensitive projects.

After seeing the movie and these two examples, I believe that some of this inherent sexism and racism is very much alive today in these same forms, and overcoming these deeply ingrained stereotypes and assumptions is probably one of the most difficult challenges that minorities and women face.  They may encounter all of the same issues these three women did, including working among a crowd of people who look and act in a different manner and who were raised with different expectations, interests, and family situations. They may also struggle with having to prove, with more credentials than their counterparts, that they are worthy of the job and are not there just because they are female or a minority, or even (in women's case) with finding a women's restroom within a reasonable distance of their desk.  Because these challenges exist, it is very reassuring to finally know of some role models who exist within the engineering field and who have risen above such issues.  I only wish I had known of them earlier.

As I have just explained, I see why role models are important.  They are people similar to you who have overcome adversity to become successful in your field of study, and through them, you can become more inspired to achieve your life-long goals.  You could also find new goals or learn of new, interesting topics to pursue, and based on the knowledge that "if they can do it, then you can," you will truly believe that you can accomplish whatever your heart desires.

When I was younger, and even now, I was always embarrassed to admit that I didn't have a role model.  Whenever that question came up as an ice breaker or in a "get-to-know-you" situation, I never knew what to say; I just looked down at my feet and shuffled them around a bit before mumbling some generic answer, like Marie Curie or J. R. R. Tolkien, just to free myself from the awkwardness and embarrassment.  I've never had some figure, either living or dead, either a celebrity or a close relative, that I've distinctly looked up to as a role model. I always believed that I was forging my own path, and because of the unique situation I was raised in (which, honestly, wasn't very unique at all), there was no one in the past who was quite like me and could guide me to the path of success (wow, I was one arrogant child.  And maybe I am still that arrogant.). No, I figured that I could forge my own path through life without the influence of others; I didn't need to look up to someone to show me the way.  I already knew what I was doing and where I was going, and if you had asked high-school me what she was planning on studying in college, she would have said anything but math or engineering.

I was never interested in Engineering in high school; I hated math and just wanted to be an English major.  Because my brother (a year older than me) liked Engineering, and because engineering fields pay well and actually have job opportunities, my mother forced me to attend an Introduction to Engineering camp hosted by Notre Dame the summer before my senior year of high school.

Throughout the prior spring semester, I dreaded coming to Our Lady's University.  I believed that I was not cut out for Engineering and would find it boring and extremely difficult, but I was desperate to participate in anything that would increase my chances of being accepted into Notre Dame. So I came, and saying I was nervous would be an understatement.  Over the course of two weeks, however, I came to realize that 1) I wasn't as terrible at Engineering as I thought I would be and 2) I loved programming.  As I learned how to program the Notre Dame Fight Song note by note in LabVIEW, I had a revelation, an epiphany.  For me, software and programming didn't have to be crunching numbers day after day and working on math-related projects.  No, with programming, I could create; I could make and write stories in code, illustrate and bring to life the worlds I had whirling around in my head for countless years.  Only through IEP would I have ever realized that I could like something so based in logic and computing, and because of that program, I'm sitting here, writing this blog post today.

Sunday, February 19, 2017

Arrogance and Ignorance Strike Again

Every time I write a blog for this class that is not about myself, I end up talking about how I see Computer Scientists: arrogant and ignorant but somehow loveable people who love technology.  While these adjectives may be stereotypical, they are stereotypes for a reason, and once again, they directly apply to the case at hand.

When discussing the Therac-25 incident, I think that Nancy Leveson and Clark S. Turner completely pinpoint the root causes of this disaster; in their article, they say that "accidents are seldom simple - they usually involve a complex web of interacting events with multiple contributing technical, human, and organizational factors."  Breaking these events down, there were several technical problems that directly caused these accidents to occur. The major technical issue involved timing: when the variable controlling whether x-rays or electrons were used was altered, the machine required 8 seconds to readjust its settings.  If the value was changed again during this 8-second period, the machine would not recognize the second change and would deliver a massive overdose to the patient, displaying only a cryptic Malfunction 54 message. Another error was discovered during machine set-up: when a one-byte variable rolled over from 255 (its maximum) to 0, a portion of the machine would not be checked, allowing a fault to go undetected. A few other issues, including a ghosting data table and the lack of hardware interlocks, were found within the machine and its code; these issues, then, while technical in nature, were left unchecked and unaddressed by the machine's human creators.
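
To make that rollover concrete, here is a minimal sketch in C of how an eight-bit counter that wraps from 255 back to 0 can silently skip a safety check. The variable and function names are my own invention for illustration; the real Therac-25 routine was written in assembly, but the failure mode (an increment that can wrap to zero defeating a check that treats zero as "nothing to verify") is the same one described above.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the machine's set-up flag: an 8-bit counter
 * bumped on every pass through the set-up loop. A nonzero value is meant
 * to signal "equipment still needs to be verified". */
static uint8_t setup_check_flag = 0;

/* Pretend verification routine; imagine the hardware is misconfigured. */
static int equipment_position_ok(void) {
    return 0;
}

static void setup_pass(void) {
    setup_check_flag++;          /* wraps silently: 255 + 1 == 0 on a uint8_t */

    if (setup_check_flag != 0) { /* the check only runs when the flag is nonzero */
        if (!equipment_position_ok()) {
            printf("pass %3u: fault detected, treatment paused\n",
                   (unsigned)setup_check_flag);
            return;
        }
    }
    /* Every 256th pass, the increment lands exactly on 0, the position
     * check above is skipped, and the unsafe set-up goes unnoticed. */
    printf("pass %3u: check skipped, proceeding with unsafe set-up\n",
           (unsigned)setup_check_flag);
}

int main(void) {
    /* Run enough passes for the 8-bit counter to wrap around once. */
    for (int i = 0; i < 257; i++) {
        setup_pass();
    }
    return 0;
}
```

Running this, every pass reports a fault except the one where the counter wraps to zero, which sails straight through; a hardware interlock or a wider counter (or simply setting the flag to a fixed nonzero value instead of incrementing it) would have closed that gap.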

Within this network of causes, humans contributed their negligence and their arrogance to the failure that was the Therac-25. Throughout the investigation process and the supposed improvement of their code, the AECL danced around the topic of fixing their fatal errors, for the statements within the article led me to conclude that the AECL did not actually test their code, either initially or after implementing their improvements. This statement (a true testament to their arrogance) is what convinced me of that unwillingness: "the AECL representative, who was the quality assurance manager, responded that tests had been done on the CAP changes, but that the tests were not documented, and independent evaluation of the software 'might not be possible.'"  Also, because the Therac-25's code was built on top of old code (and written in assembly), the coders working on this project were bound to run into more issues.  Their negligence, however, resulted in final code that was even more convoluted and buggy than the original.

Considering the organizational side of this issue, the approach taken to fix the problem was extremely ineffective. After arguing that they could not reproduce the problem on their own machine, the AECL finally admitted to their mistakes, but when presenting the documents outlining the fixes they intended to implement, the AECL was incredibly vague. For example, the document "contained a few additions to the Revision 2 modifications, notably - changes to the software to eliminate the behavior leading to the latest Yakima accident." As the FDA noted, simply stating that the AECL intended to fix a problem does not provide enough detail within an official statement to ensure that the problem will be resolved.  Because of their ambiguity and their reluctance to admit their mistakes, another accident occurred, and the AECL's organization as a whole continued to flounder and suffer as they attempted to cover up their fatal errors.

To avoid such technical, human, and organizational errors while working on safety-critical systems, coders must avoid the "common mistake in engineering, in this case and many others, of putting too much confidence in software." Placing no confidence in a program could also be detrimental, however, as over-testing and striving for unattainable perfection could wear down programmers and result in sub-par programs.  To maintain the balance between over-confidence and worried perfectionism, a general testing guideline could be outlined for safety-critical systems to ease the strain of testing and writing code. Programs that deal with people's lives could also be inspected and thoroughly tested - with proper documentation - by people outside of the company to ensure quality control, for I know from personal experience the value of fresh, unbiased sets of eyes looking over programs for flaws.

When approaching these types of situations, it's also a challenge to ensure that coders realize that their for loops and if statements could save or kill others, depending on how their code is utilized. Therefore, coders working on a safety-critical system should be of morally sound character, for apathy could accidentally cause fatal, devastating, and/or unwanted consequences. Caution and dogged diligence are also necessary when facing such life-threatening challenges, as programmers armed with a passion for rigorous testing and for life would be more willing to spend the time and effort to create bug-free code for such systems.

With all of this information in mind, I believe that when accidents like the Therac-25 occur, coders should be held accountable for the mistakes and bugs present in their code. When software engineers accept a job that deals with human lives, they are accepting that their code could potentially kill or save other people, and alongside this knowledge, they must also be held responsible for what arises from their creations.  While some people do not view a bridge or structural failure in the same light as a software malfunction, the two have essentially the same potential consequence - the cost of human life.  Also, the users of the program are not at fault for typing too fast; no, the programmers should be held accountable for this accidental oversight.  Arrogance, negligence, and ignorance are not excuses for fatal errors; even if an accident is rooted in these character traits, coders must still be responsible for the creations they willingly set loose into a public setting.

Sunday, February 12, 2017

A Tangential Ramble about Go, Listening, and Conducts

Computer Scientists can have quite an ego.  They think they’re perfect – superheroes, even, armed with “powers” that the regular populace could never hope to acquire.

I think we’re just ignorant.  Too “book smart” for our own good.

The majority of my friends who are Computer Scientists (myself included) tend to be not as "street smart" as they are "book smart" (emphasis on majority here; I'm sure that there are plenty of other coders who do not fit this description).  My friends struggle with making conversation that does not pertain to seg faults or their favorite sci-fi TV shows, with detecting emotions in other people, or with realizing that their arguments can be seriously hurtful to the others involved in their debates.  Decision making is not in their repertoire, and navigating and planning events would just completely overwhelm them.

Because of my observations of an extremely small subset of Computer Scientists, I believe that Codes of Conduct are extremely necessary for any technology-oriented company.  Some people just don't think about the ramifications their actions may have on other people or the life-changing consequences they might induce, so setting up a Code of Conduct would at least be a step in the right direction. Even if some people choose not to read or care about such a document, the existence of a Code of Conduct imposes a set of rules on a group of people, and with this set of rules in place, people could not claim ignorance when penalized for their derogatory or morally wrong actions.  They also could not get away with stating, as Linus Torvalds did, "I simply don't believe in being polite or politically correct." As a Computer Science major, I have also learned to follow a strict set of guidelines when writing code for assignments; in the same way, people are more likely to conduct themselves properly if guidelines are formally laid out and actually exist.

Based on this reasoning, I believe that the Go community's Code of Conduct is the most inclusive, detailed, and effective of the Codes of Conduct that were shared. I appreciated its inclusion of example situations in which the Code of Conduct was violated, and unlike the Django and Ubuntu Codes (which contained nearly the same wording and structure for their points), the Go Code of Conduct contained rules and information specific to its own community, which I believe is an extremely important element in any Code of Conduct.  In contrast to the Linux Code, the Go Code of Conduct is also very specific and concise, avoiding, for the most part, flowery and frivolous language that does not actually mean anything to either the readers or the enforcers of these guidelines.  Also, the Linux code dictates that sufferers contact an ambiguous advisory board or work out the issue with the perpetrators themselves. I believe that this is completely unreasonable, as just "settling it amongst the parties involved" never results in a fair, resolved situation.  Therefore, the Go community's Code of Conduct is more successful in that respect, as it involves moderators (who are also held accountable for their actions) who act as non-partisan judges of the conflict at hand.

While I believe that a Code of Conduct should exist for such communities, I do not believe that it should be treated as a strict law without any leeway for less extreme situations. It's a hard balance between being too strict and too lax with something as sensitive as social issues, and it's even hard for me to "figure out" a stance on Codes of Conduct after reading these articles.  Everyone's line between politically correct statements and slander is different, and every community and company is founded on different visions and desires.  Therefore, there are probably some communities in which a Code of Conduct would not need to be as strict or all-encompassing. In the case of the Ruby community, for example, maybe they do need a Code of Conduct, but that set of guidelines could be tailored to fit the ideas of the community as a whole instead of the voices of SJWs.

This leads me into the Nodevember case.  While the articles did not provide much detail on the case as a whole, the statements from Crockford's speeches included in Adam Morgan's blog did not seem offensive to me whatsoever.  To others, however, or within other speeches not discussed in the articles, he could have said something more offensive that upset a multitude of people. Nodevember made the choice in the best interest of their community, but not giving an official statement, reason, or piece of evidence as to why they decided to rescind their offer to Crockford does not seem fair to the other party involved. The evidence provided in Kas Perch's article does not seem to be enough to justify removing Crockford from their list of speakers.  The whole case is surrounded by too much ambiguity for me to make a fair judgment as to which side was justified in this situation, but personally, I would be interested in hearing Crockford speak, especially after these accusations.

If there's one thing in life that I do decently well, it's that I listen to other people. (I feel like this blog is getting to be a bit tangential.)  I'll listen to opposing views; I'd listen to Crockford because the "opposition" is just as interesting as the defense. Opposing views are what make life interesting and unique; without them, life would be pretty boring, since we would all just agree with each other all of the time.  Therefore, I would go listen to him. I wouldn't get offended; I would just become more aware of the countless viewpoints people can have on a situation.