Wednesday, December 14, 2016

Project 5 (Final)

Link to file:

http://mc7013.no-ip.biz:88/classes/cse40175/blog/project_05/project_05.m4a

Monday, December 5, 2016

Blog 13

I think that, although Notre Dame is not universally acclaimed as a paragon of Computer Science teaching, we have a very solid program here. It covers all the fundamentals of the field while also adhering to Notre Dame's vision of providing every undergraduate with a balanced education. I think this aspect of our university's mission makes STEM, and particularly CS, education difficult. CS is such a rapidly developing field that, for students to be competitive for jobs and internships, they need to acquire the required knowledge as quickly as possible. But Notre Dame, unlike other universities, dedicates an entire year to a core curriculum rather than letting students go right into their CS education. That said, our curriculum, as it is laid out, definitely matches up with the ABET criteria and the CS 2013 guidelines. Notre Dame goes through ABET certification, and we have maintained that certification. I generally agree with the ABET and ACM guidelines for CS programs. However, although it seems counterintuitive, I think they should focus more on open-source contribution. That experience is very helpful, in a practical sense, for computer science graduates.

I am in favor of bootcamp programs. They provide many people the opportunity to develop real skills that help them get jobs. Not everyone has the resources or time to commit to a whole four-year Computer Science degree. These bootcamp programs let individuals develop their skills and learn about what interests them. In no way do bootcamp programs completely replace college degrees in terms of breadth of education and level of knowledge. But they do provide real skills to people.

I do think that you need college experience to be a good computer scientist. But being a good computer scientist is different from being a good programmer. College gives insight into many of the foundational issues in Computer Science that, although they aren't going to be used every day in a practical environment, shape how you approach solving problems. And, after four years at Notre Dame, I feel like I have been trained to be an excellent computer scientist.

Thursday, December 1, 2016

Project 4

Here is the link to the artifact for my fourth project:
http://mc7013.no-ip.biz:88/classes/cse40175/blog/project_04/Project_4.m4a

Monday, November 28, 2016

Reading 12: Intellectual Property

Patents are guarantees from the government of one's right to one's own intellectual property. There are a variety of reasons for granting patents, primarily ethical and economic. It makes sense that one should have a right to one's own creations; no one should be able to claim your work as their own. Economically speaking, patents create a great incentive for invention. A patent generally guarantees an inventor 20 years of exclusive right to sell their product. So, if one comes up with a new or ground-breaking invention, a patent ensures that they will not have any competition for that product, leaving them the entire market share.

I think that patents have a place in society. The economic incentive they provide to companies creates an avenue for many inventions that make society better. I don't think they are necessary for society, but they are beneficial. But the current structure of our patent system hinders progress. Companies end up effectively extending their patents again and again (for example, by patenting minor variations on the original invention), preventing beneficial products from entering the market and keeping prices extraordinarily high.

I don't think that patents on specific software should be granted. Patents should be more specific and concrete. When one patents a machine, or another concrete invention, it is obvious when a competitor is attempting to illegally copy that invention - without prying into the specifics of the machine. With software, however, this is not obvious. If one files an infringement claim over software, the claim necessitates that the software be examined. But this examination means that the company must reveal all the specifics of that program - losing their intellectual property in the process. There is one case, however, in which I think a software patent should be awarded. You can already patent processes for solving tasks. So, I think that if someone develops a new algorithm that can be implemented in code, they deserve a patent for that process.

The current structure of the DMCA, and the way in which some companies deal with copyright requests, illustrates the broken nature of the copyright system. There are many examples today of YouTube channels being completely shut down due to these copyright notices. What happens is that, as soon as a video is posted, a copyright claim is filed against it. YouTube's policy regarding these claims is to immediately remove the offending video and force the user to prove that their video does not violate the copyright. This policy completely stifles new content generation. These videos are generally posted under "fair use" - as they are reviews of new games and videos and contain selected samples of that content. However, I think this is more of a problem with the content sites' response to these claims than with the copyright system itself.

Monday, November 14, 2016

Reading 11: Self-driving Cars

Each time we get on the road, we put ourselves in danger. Driving in rush hour, you are trusting hundreds of other drivers, impatient after a long day and just wanting to get home, to pilot their 3,000-pound missiles safely and responsibly. One mistake can threaten not only them, but everyone around them. This concern, safety, is one of the primary factors driving the self-driving car movement. People are unpredictable and, in the smartphone age, easily distracted. These characteristics are hardly ideal when your momentary distraction can endanger many other people. But if cars could operate independently, this fear would go away. They would have programmed behavior and, if they are regulated properly, be aware of the behavior of all other cars around them.

In addition to the safety motivation, there is one other primary driver for self-driving cars: time-efficiency. Imagine a world with 100% self-driving cars. Each of these cars is aware of all the cars around it, their speeds, and their future behaviors. Stop-lights and stop-signs would disappear. Cars could self-adjust their speeds going through intersections so that they avoid all other cars going in all directions through the intersection. Traffic would become a thing of the past. With no distracted drivers causing accidents, all cars on the road could travel at very high speeds with minimal, though safe, amounts of space between them. How much time do you spend driving or stuck in traffic each year? With self-driving cars, all of this time is freed up to accomplish other things. You could essentially sleep through your early morning commute and just arrive at work.
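The intersection idea above can be sketched as a simple reservation scheme: each car asks a central intersection manager for a crossing window, and the manager grants the earliest window that doesn't conflict with one already granted. This is only an illustrative toy with made-up names and timings, not a description of any real vehicle-to-infrastructure protocol.

```python
# Toy reservation-based intersection manager. Each car requests a crossing
# window at its arrival time; the manager grants the earliest start time
# that doesn't overlap any previously granted (start, end) window.
# All names and numbers here are invented for illustration.

def grant_slot(reservations, arrival_time, crossing_time=2.0):
    """Return the earliest start >= arrival_time whose window of length
    crossing_time doesn't overlap any granted reservation."""
    start = arrival_time
    for s, e in sorted(reservations):
        if start + crossing_time <= s:
            break            # our window fits entirely before this one
        if start < e:
            start = e        # conflict: shift past the granted window
    reservations.append((start, start + crossing_time))
    return start

booked = []
print(grant_slot(booked, 0.0))   # first car crosses immediately: 0.0
print(grant_slot(booked, 1.0))   # second car waits for the slot to clear: 2.0
```

With every car scheduled this way, no two conflicting crossings ever overlap, which is the property that would let stop-lights disappear.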

I strongly believe in the utilitarian approach to programming these life-or-death scenarios into a car's driving logic. It is always the case that saving more lives is better. When an accident happens, it should be the fault of the company that developed the driving logic. Presumably, with better sensing ability, the situation could have been avoided.

Self-driving cars, and automation in general, pose a large risk to our social and economic structure. Truck driving is the most common profession in upwards of 20 states. A large number of these drivers are not highly educated and do not have many other skills to fall back on. But if we implement self-driving cars, suddenly these tens of thousands of low-skill workers are out of a job. As a society, we need to anticipate the rise of automation and prepare for it. As we increase automation, we need to provide free training to the affected groups to give them other skills that will be beneficial for society in the future. But it is a given that not all of these people will be able to find jobs. As we slowly automate away these low-skill jobs, I think the viability of a universal living income increases. Our industries will benefit so much from not needing to employ these workers that our economy, as a result, will thrive. We can't leave these workers behind, though. With the money we gain as a country, providing a living wage would be possible. I also think the government has a role in regulating these cars, simply because it would be easiest for the government to standardize the driving-logic guidelines for them. The most efficient system would be one in which all self-driving cars are aware of all other cars on the road and can predict their behavior. This can only happen through standardization at the level of government regulation.

I am a hard-core utilitarian. I would definitely buy a self-driving car, even one with logic that may, in certain situations, kill me. (Hopefully we will have better safety equipment by the time these cars hit the road.)

Monday, November 7, 2016

Reading 10: Artificial Intelligence

Artificial Intelligence is a very broad field of Computer Science. But, basically, it is the pursuit of endowing computers with some of the abilities generally associated with human intelligence. Of course, this definition is not all-encompassing. There are three categories of AI: strong, weak, and in-between. These categories arise from the ability of the implemented AI to reveal information to us humans about our own intelligence. This intelligence is fundamentally different, insofar as it has been implemented so far, from human intelligence. AI, as we currently think about it, is implemented for a specific purpose. Recently, we have developed AI to play human games (Go and Jeopardy). But human intelligence is different, more flexible. We can apply our own logic to changing situations and adapt much more easily than the AI we currently develop.

I do not think that applications such as AlphaGo, Deep Blue, and Watson completely demonstrate the viability of AI as a whole. These are applications developed to learn one specific task really well. AlphaGo was designed to learn how to play Go better than any human can. If anything, they prove that, currently, we are getting pretty close to having viable "weak" artificial intelligence (AlphaGo and Deep Blue) and "in-between" artificial intelligence (Watson). But I think that to truly demonstrate the viability of AI, we need to get closer to developing "strong" artificial intelligence. Until then, these AI examples will seem gimmicky. Still, each of these examples is a step in the right direction.

I think that the Chinese Room argument provides a good counterargument to the viability of the Turing test. It is true that, when we provide AI to a machine, we are not really teaching the machine to think, at least not in the same way that we think. We are providing a concrete set of rules for the machine, in the form of code and executable instructions. These rules then give the machine the ability to "think." Through the execution of this code, the computer is able, in many cases, to simulate intelligence and thought.
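The rule-following idea behind the Chinese Room can be made concrete in a few lines of code: a program that answers questions by pure symbol lookup, producing fluent-looking replies with zero comprehension. The rule table below is entirely made up for illustration.

```python
# Toy Chinese Room: map input symbols to output symbols by rule lookup.
# The program "answers" Chinese questions without understanding a single
# character - it only matches shapes against its rulebook.

RULES = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会思考吗？": "当然会。",   # "can you think?" -> "of course."
}

def chinese_room(symbols):
    # The "person in the room" consults the rulebook; unknown input gets
    # a stock reply ("please say that again").
    return RULES.get(symbols, "请再说一遍。")

print(chinese_room("你会思考吗？"))  # a fluent answer, but no comprehension
```

A judge chatting only through this interface might be fooled for a moment, yet nothing in the system understands Chinese, which is exactly Searle's point.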

I do not think the concern about AI in our lives is completely warranted. But this is because we have not yet succeeded in developing "strong" AI. There is comparatively little potential harm from weak and in-between AI. AI assisting our everyday activities, I think, is only helpful. We develop these AIs for specific purposes. There is probably little chance of our self-driving cars coordinating a revolt and causing the extinction of our species. That said, we need to be careful with AI's implementation. It may, for example, not be a good idea to put an AI system in control of our missile defense systems.

Monday, October 31, 2016

Reading 09: Net Neutrality

The rise of the World Wide Web has brought the world much closer together. Now, people across continents can communicate instantly. And in this communication lies the opportunity to understand other cultures and, hopefully, bring the world to a new level of prosperity. However, online censorship fundamentally threatens this. It is well recognized that humans have a set of fundamental rights. Among these is the right to freedom of expression and speech. But some countries, generally those under authoritarian rule, do not recognize this. Not only do they suppress the real speech of their citizens, in the media and in person, they restrict online communication.

I think, in most cases, it is not ethical for a government to suppress online speech. Countries are able to grow and prosper by facilitating open discussion and finding optimal solutions through compromise. By restricting what can be said and seen on the internet, countries are purposely keeping their citizens ignorant. Still, I do believe that there are certain cases in which it is ethical to filter online speech.

I do not believe that people should have the ability to directly incite violence. This rule applies to all forms of speech.

Cases:
Is it ethical for companies to remove dissenting opinions at the request of governments?

No, this case is not ethical. People should have the ability to criticize their governments, as long as they are not endangering anyone. In fact, this criticism is one of the cornerstones of democracy. 


Is it ethical for companies to remove information broadcasted by terrorism organizations, or about terrorism organizations?

Yes, in this case, I think it is ethical to remove the content. This directly falls under the speech stipulation I posed above: you don't have the right to incite violence and endanger others through your own speech.

Is it ethical for companies to remove discriminatory, provocative, hateful content generated by its users?

In this case, it entirely depends on the context. If the questionable content is completely unwarranted, and is out of control, I think there is an argument to be made for removing it. That said, I do not believe that people have the right to not be offended on the internet.

Is it ethical for companies to remove information that does not promote or share their interests or political beliefs?

I do not think this is ethical unless the policy is explicitly stated by the company. There is a place for one-sided biased information on the internet. As it is the internet, people who don't believe or agree with the information don't need to visit that site.

Thursday, October 27, 2016

Project 03: Encryption

Link to Ad:
http://mc7013.no-ip.biz:88/classes/cse40175/blog/project_03/Encryption_ad.m4a

I see online communication and activities as simply extensions of the day-to-day activities we participate in. Messaging on social media is just a modern extension of talking to a group. Banking online is just a modern extension of going to your bank. And, as extensions, these online activities should be afforded the same security and freedoms as their in-person counterparts. Logically, then, encryption should be a fundamental right. We would consider it a huge breach of privacy if the government could, at any time, listen in on any conversation you had in a group.
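To make concrete what encryption actually guarantees, here is a toy one-time pad in Python. This is a sketch for illustration only; real systems rely on vetted libraries and algorithms, never hand-rolled ciphers.

```python
# Toy one-time pad: XOR each message byte with a random key byte.
# Without the key, the ciphertext reveals nothing about the message;
# with the key, recovery is exact. (Illustration only - real software
# should use vetted cryptographic libraries.)
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)          # pad must cover the message
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

msg = b"meet at the bank at noon"
key = secrets.token_bytes(len(msg))          # key must be random, used once
ct = encrypt(msg, key)
assert decrypt(ct, key) == msg               # round trip succeeds
```

The government reading your chats without the key is exactly as hopeless as eavesdropping on `ct` alone, which is why encryption is the online analogue of a private conversation.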

Personally, encryption is not that big of an issue to me. I already accept that, after many years of browsing the internet, most of my personal data is out there somewhere. Additionally, I don't really care if the government wants to read the many stupid conversations I've had through social media (good luck, though, because some of my group chats have over 100,000 messages in them). But I think I should probably take this issue more seriously. This is a huge breach of privacy and should be stopped before the government goes too far.

I think the struggle between national security and personal privacy will be unending. It will carry on, much like a sine wave, with the balance being the 0-axis. It will always fluctuate between sides. As the government attempts to take too much of our privacy, the public will fight back and gain ground.

Monday, October 24, 2016

Reading 08 - Electronic Voting

For months, coincidentally just as it appeared he was losing, Donald "The Cheeto" Trump has been questioning the validity of the upcoming election. That said, voter fraud is a real concern in this country. According to a Department of Justice study, as many as 40 cases of voter fraud appeared out of the 197 million votes cast in federal elections between 2002 and 2004. This number becomes quite significant, almost 20% of all votes cast, if you multiply it by one million.

For real, though, the primary concern over e-voting is twofold: the lack of a paper trail and the potential for external agents to modify the results of the election. While the majority of these voting machines do leave a trail that can be audited, there are a few that are completely paperless. Additionally, as these are machines that run on software, there is always the potential for hacking. If one knows the details of the code a machine is running, it could be possible to find a way to access and modify the voting records.

But, despite these concerns, I have full confidence that the results of the upcoming election, whatever they are, will reflect the true will of the American public. My father works for the Justice Department as the Director of the Election Crimes division. He has literally written the book on election crimes in this country. Right now, he has told me, the Justice Department is far more worried about the very real cases of voter suppression occurring across the country due to Donald Trump's scare tactics and his recruitment of hostile poll watchers. The incidence of election fraud in this country is incredibly low. So low that, based on National Weather Service data, you are more likely to be struck by lightning this year than to have your vote not be counted.

I'm not saying that our voting system is perfect, however. There are many ways in which our system could be improved. The first is widespread investment in updating our voting systems. It is completely unacceptable that many districts across the country are using voting machines that are over 20 years old. At this point, though, the outdated software can almost be seen as a feature: so few people still know how these systems work that it is becoming increasingly difficult to hack them.

I think that developing a voting system is fundamentally different from developing a normal application. Although there are data security concerns for both types of applications that developers must take into account, the severity of the former being hacked is much greater. If our national elections are compromised, and the public's trust in our voting system is lost, our democracy is severely weakened.

I don't think we should ever have 100% trust in electronic systems. No matter how secure we make our online applications, there will always be people attempting to break into them, because of the great potential for gain if these systems are unlocked. This is the example I give to my friends whenever this topic is raised: we trust bridges. Bridges are built to be secure and to last, even in severe conditions. But there generally aren't people constantly trying to knock a bridge down. If that were the case, I don't think anyone would trust our bridges. It's much the same for our online applications. We may put in layers and layers of security. But, given enough time and effort, there will always be those who are able to gain some amount of information.

Monday, October 10, 2016

Ethical Advertising

We, as consumers of the internet, often take for granted the breadth of information and services available for free on the internet. We pay some (hopefully) fixed rate per month and, suddenly, have access to all the world's knowledge. This service has allowed us to become more informed, and more connected, than ever before in our history. But, as it turns out, this information is not strictly free. The now thoroughly crispy meme "if you're not paying for it, you're not the customer; you're the product" definitely has some hints of truth in it. Very few people make things that others can access for free. After all, there are a variety of expenses one incurs to host e-content and, as demand increases, the costs also explode. So these providers capitalize on the one resource available to them: our data.

This process of data collection, I think, is completely within the right of the e-provider. If you are using their service, they should have access to, within the bounds of their service, the data you generate. The problem arises when these content generators do not inform the content users of the extent of the data collection. This is why it is extremely easy for a company's data collection to become unethical. Like I said, a provider should have access to the data you generate on their service. But when, for example, Facebook starts to track your location while the app is open so that it can give you ads about things in your area, Facebook has clearly overstepped the tacit agreement between user and provider. I believe it is the company's responsibility to let the user know when it attempts to gather data about them outside the normal bounds of operation of the service. When I use Facebook, I assume that they are collecting everything I type, looking at what I 'Like', and seeing where I check in. This data is fair game for them, because I gave it to them through use of their service. But if they want to track my location in real time, that is them spying on me, rather than me voluntarily giving them the information.

Monday, October 3, 2016

Back Doors and Encryption

It is self-evident that any company that wishes to do business in the United States of America must abide by the laws set by the government. No one challenges this notion. When companies fight against the government on this issue (weakening encryption / adding back doors), however, they are not necessarily doing anything wrong. There is a profound tension between the government's responsibility to protect the people and a company's responsibility to protect the interests of its consumers. In some cases, such as Apple's, there is no solution. Apple, by installing a back door in its operating system, automatically puts all of its millions of customers at risk. But the government, by not having access to the data that would become available through this back door, may also put at risk the lives of some citizens. So, in the end, someone will be unhappy. As this issue continues to develop, we must ask ourselves: which risk is greater? Do we prioritize our data and privacy, or do we prioritize our security and the security of others? Each person will have a different stance.

When a service is made, the creators must realize and prepare for the potential abuses of that service. It is a fundamental responsibility of the creator to prevent misuse of the invention. The creator should receive all the credit for the success of their creation, but also must take responsibility for any failures and abuses.

Personally, I do support, in a limited sense, government back doors. There are so many hoops the government must already jump through just to get access to our data. So one can assume that, if they are granted this access, there is a pretty good chance the data is important to the continued security of our country. But, as doors tend to do, these must also be open-and-shut cases. The government must take every step to make sure that the back doors it requests do not become permanent mandates on a product. It must not put an application's continued existence at risk by undermining the entire point of the application (as adding a back door to an encrypted chat app would). Many of the arguments against back doors quickly fall down a slippery slope. It is unreasonable to link adding a non-permanent back door to an iPhone to the government's ability to extract, at any moment, all of your personal data. If there is a real risk - if this data could save multiple lives, or expose an imminent danger to many people - then I think there is a persuasive case to be made for giving up a small amount of our privacy for the sake of increased national security. After all, we are all allowed to live such privileged lives, free of the constant fear that pervades many war-torn areas of the world, due to the strong reach and security provided by the government.

Thursday, September 29, 2016

Project 2

Link to artifact:
https://lucasunruh.wordpress.com/2016/09/30/women-in-stem-profile/

Women and minorities face large obstacles entering every STEM field. Computer Science, though, as the newest of the STEM fields, receives the most attention from the media for its inability to draw these groups into its ranks. Despite the tech field's self-proclaimed "meritocracy," women and minorities are not given an equal chance at high-paying tech jobs. This isn't necessarily, in every case, the fault of the companies that are hiring, however. In many instances, societal norms play a large role in pushing people away from tech. People think that they are not welcome because the tech industry is portrayed in a very negative light (at least towards women and minorities) in society. This, I think, becomes a self-fueling cycle: the media portrays the tech industry as unwelcoming to women and minorities, fewer apply to these jobs or study the field in college, so fewer are accepted into the industry. This, however, is not the only obstacle that women and minorities face getting into STEM. The fields are white-male dominated (some STEM fields more than others). Because the demographics have been this way for so long, some members of the field feel entitled to their positions and comfortable with the status quo. They feel threatened by changes and push back against anything that would upset their comfortable bubbles.

Role models play an important part in inspiring people to reach for long-term goals. Although I, myself, have never had a STEM role model, I know many women who look up to the many women luminaries in the field today and use their examples to guide them going forward. After reading about Anita Borg, I have a new-found respect for her work in the field of computer science. She was inspiring not so much because she was a brilliant programmer, but because of her non-stop work promoting the field to other women. I want to become someone like her: someone who, upon seeing a problem, immediately throws themselves at it in an attempt to fix it - no matter the blow-back, and no matter whether society recognizes the issue.

Monday, September 26, 2016

Challenger Disaster

The Challenger disaster was caused by nothing other than gross negligence by the upper management at NASA. The information we now have about the defective O-rings was just as well known by management on the day of the launch. Yet, despite knowledge of this risk, NASA management let the launch go through. It is true that they faced pressure from the government, from the public, and from many other sources to make sure the launch happened. But, when your mistakes may endanger human life, no amount of caution is enough. Nothing, in that situation, could be classified as an "acceptable flight risk." The probability of failure may have been small, but the cost of failure, even with the smallest probability, was entirely too high.

One of the things that angers me most about this whole incident is the punishment that Roger Boisjoly faced afterwards. Boisjoly was one of the primary whistleblowers in the case. He brought the O-ring issue to light and even, the night before launch, attempted to persuade management to abort the flight. But they didn't listen to his advice. Boisjoly was entirely ethical in sharing this issue with the public, because his information revealed a system that did not prioritize human life. He revealed the gross negligence present in NASA that not only led to the deaths of seven astronauts, but also to the traumatization of millions of children and a major setback for the space program for years to come.

Technically speaking, depending on the language of his employment contract, the company was "justified" in retaliating against him. This is, of course, strictly in the legal sense of the word. If he agreed not to reveal any unauthorized information to the public, and then did, he is responsible for any backlash he faces. However, ethically speaking, there is no way in which the company was justified in punishing Boisjoly. Boisjoly, through his actions, potentially saved many more future lives and, while he may have made the company look bad, he also brought the opportunity to make real, lasting change within the company's structure.

Whistleblowing is very important in society. It uncovers many of the troubling secrets that industry tries to bury. But, for many people, it is just not worth it. The potential for blowback is too great a risk to assume for the benefits of revealing whatever information they have. That is exactly why we should value whistleblowers. They have the courage to put their own futures at risk to potentially save the lives of many others. That being said, I do not think all whistleblowers are worthy of praise. I think that, primarily, whistleblowers should only reveal information that can actually help people. These secrets should be such that, by not revealing them, people are in danger. In cases like that of Chelsea Manning, who leaked sensitive information to WikiLeaks, the benefit is less clear. Of course, the leaks uncovered troubling military and diplomatic conduct. But they also put many undercover operatives at risk by putting their personal information out into the public domain.

Monday, September 19, 2016

Diversity in the Tech Industry

The lack of diversity in the tech industry is one of the biggest issues we must face if we, as a society, want to advance into the technological age. The status quo is actively preventing technological advancement in our country: when at least 50% of the population feels uncomfortable joining the industry, and many other segments of the population also feel isolated working in a white-washed environment, there is no opportunity for the infusion of new ideas and talent into the workplace.

Women and minorities face numerous obstacles when entering the tech industry. Even if they are hired, which, according to some of the articles we read, happens at nowhere near the same rate as for white males, women and minorities are not comfortable in the current prevailing tech environment, which propagates a sort of "brogramming" culture. Surrounded by this, they are pushed to the fringe of the office, going to work each day feeling like outsiders. Although some are able to deal with these difficulties, many leave, as shown by the disproportionately high attrition rates for women and minorities at large tech companies. It is good that many companies are making a concerted effort to hire more women and minorities, but that is not enough. I believe that, for true change to take place, there must be a societal effort to change these social norms.

As we read, many of the current stereotypical norms associated with computer programmers were developed from behavioral studies done by many tech companies in the early years of the industry. The problem was that these studies really only focused on one demographic: white males. So, as can be expected, the portrait of the computer programmer was really just the portrait of the privileged white guy. Reinforced over the next few decades, this portrait is now the norm. Across the country, women and minorities hesitate even to attempt to take up computer science, and other engineering disciplines for that matter, as their focus. We need to change this stereotype. This can only come through early-childhood education about technology. Teach kids that anyone can code, and that it isn't just something you're born with. If we actually can get kids interested early, then the future of the industry is guaranteed. Imagine how much better off we would be now if, over the last 30 years, the talent pool had been expanded by 50%.

Monday, September 12, 2016

On Burnout

From both the readings and my experience, it seems that burnout has a few related causes. For the most part, however, I think it comes from mental exhaustion. It seems like every few days I hear, either through someone I know or through the news, a mention of burnout, especially in the tech industry. And, each time I hear about it, it comes along with a mention of the severe hours put in by those experiencing the burnout. These people are simply exhausting their mental faculties. I know that, personally, when I start working, I go into a state of extreme focus. I tend to mentally prioritize the task at hand, pushing every external stimulus away. These periods of focus can go on for hours if I do not consciously set timers to stop myself. I cannot imagine forcing myself into this state for 80-100 hours a week. We humans did not evolve to spend all of our time thinking about one single objective, or one single problem. We evolved to be able to read and react to our surroundings, to have short, intense bursts of problem-solving followed by immediate follow-through.

I experienced something like burnout near the end of this past summer at my internship. I remember the moment I noticed that I was burnt out. It was during the European soccer championships. I went to my desk like any normal day and turned on the games (we all had televisions in our cubicles for development purposes). The next thing I realized, it was 2 pm and almost the end of the work day. I had spent almost the entire day staring at the TV screen, not really processing anything. But, thankfully, I was able to recover. I started adopting a cyclical work schedule. The main idea is to work in short bursts, followed by short breaks. My cycle was as follows: work for 25 minutes, break for 5 minutes, and after every four work periods, take one longer 30-minute break. This allowed me to actually get up and move around the office, converse with other interns, and not need to stare at my computer all day. It also helped to divide the day nicely (I had 1.5 cycles in the morning and 1.5 after lunch). Additionally, I began taking longer lunches and eating with the other interns. This helped to distract my mind from my work. I also found that, as I got to know the interns better, I wanted to go to work more. I hope that, as I enter the workforce, I can continue to apply these basic strategies to avoid burnout and have a reasonable work-life balance.
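The schedule above is essentially the Pomodoro Technique. As a quick sketch (the function names are my own, not from any library), the interval logic can be written in a few lines of Python:

```python
# Interval lengths from the schedule described above:
# 25-minute work periods, 5-minute breaks, and a 30-minute
# break after every fourth work period.

WORK_MIN = 25
SHORT_BREAK_MIN = 5
LONG_BREAK_MIN = 30
PERIODS_PER_CYCLE = 4

def break_after(period):
    """Return the break length (minutes) after the given 1-indexed work period."""
    return LONG_BREAK_MIN if period % PERIODS_PER_CYCLE == 0 else SHORT_BREAK_MIN

def day_schedule(periods):
    """Build a list of (activity, minutes) pairs for a day's work periods."""
    schedule = []
    for p in range(1, periods + 1):
        schedule.append(("work", WORK_MIN))
        schedule.append(("break", break_after(p)))
    return schedule

# One full cycle: four work periods with their breaks.
print(day_schedule(4))
```

A real timer would sleep through each interval and ring a bell, but even this sketch makes the rhythm of the day concrete: one full cycle is four 25-minute work periods, three short breaks, and one long break.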

Thursday, September 8, 2016

Project 1


Manifesto and Portrait Link
Response:

I strongly identify with this manifesto, especially the line, "I want to have stronger coffee." Because, really, the quality of the coffee in the dining halls is mediocre at best. This manifesto presents the often hidden, optimistic beliefs of every Notre Dame CSE major. We are all aware that we should be applying our skills to benefit society. But there is a difficult decision we all face, between taking a job that will pay well (and help us to pay off our student debt) and taking a job that will let us contribute and give back to society the most. Despite this, I think that even those of us who do not have the chance to immediately give back will have a positive impact on society. This manifesto is not a war cry; it is a reflection of the lingering desire of every student to actually apply their skills for good.

I identify, for the most part, with the portrait. I am a skinny, brown-haired, white male from a major metropolitan area (Washington D.C.). I definitely fit squarely into the niche prep contingent of the CSE undergrad population.

I think that, although I am not generally aware of them, stereotypes are very important to how I view the world. I recently took a Harvard Implicit Bias test, and it revealed that I do have a slight latent bias towards and against certain groups. This does not mean that I am racist; it means that stereotypes have biased my unconscious towards reacting differently to different people. Because of this, I am now much more aware of my interactions with others and of how stereotypes play a role in my first impressions.

Monday, September 5, 2016

Interview Process

My interview process so far has been fairly uneventful. I've worked at the same company twice now, although in different roles. I do not anticipate re-applying to work there full time. The process to apply was fairly easy. The first summer, I didn't even have a technical interview for the role. This was, however, because the internship had me working in the learning resource center; it was not intended as a software-engineering role. The summer was partially saved by my volunteering to lead the high-school intern team through their Android application development project. For this past summer, the process was slightly more involved. I had an initial phone interview with a person in HR, followed by a "technical" interview with both of my future managers. I say that the interview was "technical" only because it involved my managers asking me general knowledge questions such as, "what is a race condition?" In a way, I feel like the company had already decided to hire me before the interview even started; they did not even attempt to test my coding ability. This frustrated me. I expected to be required to do some actual coding, but it seemed as if my managers did not really take the time to form challenging questions.
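For what it's worth, the canonical answer to that interview question fits in a few lines. Two threads updating a shared counter perform a read-modify-write that is not atomic; without synchronization, their steps can interleave and updates get lost. A minimal sketch (my own illustration, not anything from the actual interview), shown with the lock that fixes it:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Increment the shared counter n times.

    counter += 1 is really read, add, write. Without the lock, two
    threads can both read the same value, and one update is lost --
    that lost-update interleaving is the race condition.
    """
    global counter
    for _ in range(n):
        with lock:  # remove this lock and the final count may fall short
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock held around each increment
```

With the lock, the result is always 200,000; without it, the outcome depends on thread scheduling, which is exactly what makes race conditions so hard to reproduce and debug.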

Currently, I am preparing for an interview with a large tech company. Their process is much more involved and has, overall, been very pleasant. What really has made the experience for me was that it was all initiated through a recruiter on LinkedIn. At each step, the company has asked me which dates work best for me and accommodated my programming preferences.

Although I have had an (overall) good interview experience, I think that many companies do not do a good job with their overall process. Rather than asking interviewees about their strengths, they force antiquated or ineffective interview methods onto them. Because of this, they alienate many qualified candidates for the jobs they so desperately need to fill.

Monday, August 29, 2016

Post 3: The Ethos of the Tech Industry

The tech industry has long vaunted itself as a haven for the long-oppressed, fringe members of society. To many, it is assumed to be the perfect place to go for those with ability: as long as you can code just as well as everyone else, you are welcome. But is this really the case? Recently, there have been a variety of news articles challenging this assumption; they suggest that the tech industry does not provide a level playing field and that this hurts certain groups in particular: women and the economically disadvantaged.

Despite these points, I would argue that the prevailing ethos of the computing industry is that of a meritocracy. The idea of the industry, at least, is to find the best, most capable people, regardless of gender or social standing, and put them into the roles that best make use of their skill sets. There is no place in the industry for those who cannot fulfill these roles. But this ethos does not translate well into practice, not through the fault of the tech companies themselves, but due to our society at large.

Recently, I listened to a roughly thirty-minute NPR interview with Laszlo Bock, Vice President of Google's People Operations (Human Resources). In the interview, Bock discussed at length Google's attempts to discover those whose ability is not traditionally found in universities. Google strives to wring out every capable person from the folds of society. Because of this, it has adopted a variety of non-traditional hiring strategies to help correct the uneven initial playing field many face in entering the tech industry. Is it Google's fault that some are exposed to a better technical education than others? No, but it is Google's responsibility to take this into account during the hiring process. At the end of the day, there is no reason for tech companies to abandon their meritocratic structure. But it is, I think, both in their best interest and their responsibility to work towards generating equal opportunity for everyone.

A few of the articles I have read discussed the lack of women at major tech companies and the lack of involvement women have in the industry. This isn't an industry problem, however; it is a societal problem. Too many women are discouraged from taking on tech roles even before they have the chance to enter the industry. Looking at our higher-education systems in the US, the STEM fields are disproportionately made up of males. There is a lingering ethos in America that technical fields are more "suited" to men and that women should stick with less applied fields (writing, art, etc.). I think that, if we want to fix this, there must be a large grassroots effort across the country to educate children, and their parents, that the engineering disciplines are not reserved for men. Because, in the end, so much is left on the table when roughly 50% of the population is discouraged from taking up a discipline.

Wednesday, August 24, 2016

Post 2: On the Parable of Talents

Prompt: What is your interpretation of the Parable of the Talents? How does it apply to your life and your computing skills and talents?

The key to my understanding of the parable of the talents is in the master's choice of distributing his wealth to his servants "according to [their] ability" (Matthew 25:15). The master gave his most gifted servant five talents, his middle servant two talents, and his least able servant only a single talent. As the parable continues, the first two servants, who received five and two talents, respectively, make a return on the investment and are praised by the master. However, the final servant gives back only the investment to his master and is subsequently expelled from his master's service. Why was this servant expelled? It technically did the master no harm for the servant to pay back only what was invested in him. Why was the master, in this case, so angry? The answer is that the master recognized the distinct abilities of his servants. He did not hoard his money, keeping it with him as he traveled. He dispersed it, although unequally, among his servants because he believed in them. He gave them no instructions. The first two servants performed as expected and were praised. But the final servant put forth no effort, because he was afraid of the master, and of failing.

What this is saying is that although not everyone is equal, everyone has ability. The worst thing one can do is squander that ability, being content to stagnate rather than seek a return on one's investments. This behavior is primarily caused by one of two things: fear or sloth. One who does not seek constant improvement is either afraid of discovering the limit of their abilities or simply too lazy to make an attempt. This is a much greater sin than trying and failing. I am sure that if the third servant had come to the master and admitted that, after trying to seek a return on that one-talent investment, he was not able to make one (or had even lost money), the master would not have been angry, or at least would not have expelled the servant. But the servant was paralyzed by his own fear.

This parable is especially relevant to me as a college student. I recognize that I am not the strongest programmer at Notre Dame. I am not the five-talent servant. But that does not stop me from seeking a return on the investment I have made in this University, and on the investment that this University has made in me. I do not want fear of failure to stop me from seeking new experiences. I think the fear of failing is the biggest issue facing Notre Dame students. Because, let's face it, before coming to Notre Dame, many of us had never failed at anything. While that is impressive, the actual experience of failing is invaluable for learning about yourself and your abilities.

Introduction

Hello everyone,

My name is Mitch Troy and I am a senior at the University of Notre Dame. I am currently pursuing a Bachelor's Degree in Computer Science with a minor in Engineering Corporate Practice. I am not sure exactly what I want to do with my computer science degree once I graduate, but I am weighing the possibility of getting my MBA. Outside of programming, I am very active in Notre Dame's music community, playing the saxophone in the marching band and various other ensembles.

In this class, I hope to learn more about the various moral problems we will face in the workforce and how to approach them. I know that there is often no completely correct solution to these problems (e.g., is it morally wrong to write software that benefits our military capabilities?), but it is always beneficial to at least be aware of these issues.

The issue that I am currently most interested in is data security. With so much data being collected from us every day both with and without our knowledge, I'd like to think more deeply about the tough choices those who parse and analyze that data face.