The rise of the World Wide Web has brought the world much closer together. Now, people across continents can communicate instantly. And, in this communication, there is the opportunity to understand other cultures and, hopefully, bring the world to a new level of prosperity. However, online censorship fundamentally threatens this. It is well recognized that humans have a set of fundamental rights. Among these is the right to freedom of expression and speech. But some countries, generally those under authoritarian rule, do not recognize this. Not only do they suppress the real-world speech of their citizens, through the media and in person, they also restrict online communication.
I think, in most cases, it is not ethical for a government to suppress online speech. Countries grow and prosper by facilitating open discussion and finding optimal solutions through compromise. By restricting what can be said and seen on the internet, countries are purposely keeping their citizens ignorant. That said, I do believe there are certain cases in which it is ethical to filter online speech.
I do not believe that people should have the ability to directly incite violence. This rule applies to all forms of speech.
Cases:
Is it ethical for companies to remove dissenting opinions on behalf of governments?
No, this case is not ethical. People should have the ability to criticize their governments, as long as they are not endangering anyone. In fact, this criticism is one of the cornerstones of democracy.
Is it ethical for companies to remove information broadcast by terrorist organizations, or about terrorist organizations?
Yes, in this case, I think it is ethical not to allow content from terrorist organizations. This falls directly under the speech stipulation I posed above: you don't have the right to incite violence and endanger others through your own speech.
Is it ethical for companies to remove discriminatory, provocative, or hateful content generated by their users?
In this case, it entirely depends on the context. If the questionable content is completely unwarranted and out of control, I think there is an argument to be made for removing it. That said, I do not believe that people have the right to not be offended on the internet.
Is it ethical for companies to remove information that does not promote or share their interests or political beliefs?
I do not think this is ethical unless the policy is explicitly stated by the company. There is a place for one-sided, biased information on the internet. And because it is the internet, people who don't believe or agree with the information don't need to visit that site.
Monday, October 31, 2016
Thursday, October 27, 2016
Project 03: Encryption
Link to Ad:
http://mc7013.no-ip.biz:88/classes/cse40175/blog/project_03/Encryption_ad.m4a
I see online communication and activities as simply extensions of the day-to-day activities we participate in. Messaging on social media is just a modern extension of talking to a group. Banking online is just a modern extension of going to your bank. And, as extensions, these online activities should be afforded the same security and freedoms as their in-person counterparts. Logically, from this, encryption should be a fundamental right. We would consider it a huge breach of privacy if the government could, at any time, listen in on any conversation you had with a group.
Personally, encryption is not that big of an issue to me. I already accept that, after many years of browsing the internet, most of my personal data is out there somewhere. Additionally, I don't really care if the government wants to read the many stupid conversations I've had through social media (good luck, though, because some of my group chats have over 100,000 messages in them). But I should probably take this issue more seriously. That kind of government access is a huge breach of privacy and should be stopped before the government goes too far.
I think the struggle between national security and personal privacy will be unending. It will carry on much like a sine wave, with the point of balance as the zero axis: it will always fluctuate from one side to the other. As the government attempts to take too much of our privacy, the public will fight back and regain ground.
Monday, October 24, 2016
Reading 08 - Electronic Voting
For months, coincidentally just as it appeared he was losing, Donald "The Cheeto" Trump has been questioning the validity of the upcoming election. That said, voter fraud is a real concern in this country. According to a Department of Justice study, as many as 40 cases of voter fraud appeared out of the 197 million votes cast in federal elections between 2002 and 2004. This number becomes quite significant, almost 20% of all votes cast, if you multiply it by one million.
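To spell out the sarcasm, here is a quick back-of-the-envelope calculation (a rough sketch in Python, using only the figures cited above) of what that rate actually works out to:

```python
# Back-of-the-envelope calculation using the DOJ figures cited above.
fraud_cases = 40             # documented cases, federal elections 2002-2004
total_votes = 197_000_000    # total votes cast in that period

rate = fraud_cases / total_votes
print(f"{rate:.8%} of votes cast")   # prints: 0.00002030% of votes cast
```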
For real, though, the primary concern over E-voting is twofold: the lack of a paper-trail and the potential for external agents to modify the results of the election. While the majority of these voting machines do leave a trail that can be analyzed, there are a few that are completely paperless. Additionally, as these are machines that run on software, there is always the potential for hacking. If one knows the details of the code the machine is running, it could be possible to find a way to access and make changes to the voting records.
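To make the paper-trail point concrete, here is a toy sketch (hypothetical data, not how any real voting machine stores records) of why an independent paper record is what lets an audit catch a silently altered electronic tally:

```python
# Toy illustration: why a paper trail matters for detecting tampering.
from collections import Counter

electronic_records = ["A", "B", "A", "A", "B", "A"]  # what the machine stored
paper_ballots      = ["A", "B", "A", "A", "B", "A"]  # what voters actually marked

# Someone with access to the machine's software could silently flip records.
electronic_records[0] = "B"

# With a paper trail, a hand audit exposes the discrepancy.
if Counter(electronic_records) != Counter(paper_ballots):
    print("Audit failed: electronic tally does not match the paper count.")

# On a completely paperless machine there is nothing independent to compare
# against, so the altered tally would simply be accepted.
```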
But, despite these concerns, I have full confidence that the results of the upcoming election, whatever they are, will reflect the true will of the American public. My father works for the Justice Department as the Director of the Election Crimes division. He has literally written the book on election crimes in this country. Right now, he has told me, the Justice Department is far more worried about the very real cases of voter suppression occurring across the country due to Donald Trump's scare tactics and his recruitment of hostile poll watchers. The incidence of election fraud in this country is incredibly low. So low that, based on National Weather Service data, you are more likely to be struck by lightning this year than for your vote to not be counted.
I'm not saying that our voting system is perfect, however. There are many ways in which it could be improved. The first is widespread investment in updating our voting machines. It is completely unacceptable that many districts across the country are using voting machines that are over 20 years old. At this point, though, the outdated software can almost be seen as a feature: so few people still know how these systems work that it is becoming increasingly difficult to hack them.
I think that developing a voting system is fundamentally different from developing a normal application. Although there are data-security concerns that developers must take into account for both types of applications, the severity of a voting system being hacked is much greater. If our national elections are compromised, and the public's trust in our voting system is lost, our democracy is severely weakened.
I don't think we should ever have 100% trust in electronic systems. No matter how secure we make our online applications, there will always be people attempting to break into them, because of the great potential for gain if these systems are unlocked. This is the example I give to my friends whenever the topic comes up: we trust bridges. Bridges are built to be secure and to last, even in severe conditions. But there generally aren't people constantly trying to knock a bridge down; if there were, I don't think anyone would trust our bridges. It's much the same for our online applications. We may put in layers and layers of security, but, given enough time and effort, there will always be those who are able to gain some amount of information.
Monday, October 10, 2016
Ethical Advertising
We, as consumers of the internet, often take for granted the breadth of information and services available for free online. We pay some (hopefully) fixed rate per month and suddenly have access to all the world's knowledge. This service has allowed us to become more informed, and more connected, than at any point in our history. But, as it turns out, this information is not strictly free. The, now thoroughly crispy, meme "if you're not paying for it, you're not the customer; you're the product" definitely has some hints of truth in it. Very little of what we access for free is actually made for free. After all, there are a variety of expenses involved in hosting e-content and, as demand increases, those costs explode. So these providers capitalize on the one resource available to them: our data.
This process of data collection, I think, is completely within the rights of the e-provider. If you are using their service, they should have access to the data you generate within the bounds of that service. The problem arises when these content providers do not inform their users of the extent of the data collection. This is why it is extremely easy for a company's data collection to become unethical. Like I said, a provider should have access to the data you generate on their service. But when, for example, Facebook starts to track your location while the app is open so that it can give you ads about things in your area, Facebook has clearly overstepped the tacit agreement between user and provider. I believe it is the company's responsibility to let users know when it attempts to gather data about them outside the normal bounds of operation of the service. When I use Facebook, I assume that they are collecting everything I type, looking at what I 'Like', and seeing where I check in. This data is fair game for them, because I gave it to them through use of their service. But if they want to track my location in real time, that is them spying on me rather than me voluntarily giving them the information.
Monday, October 3, 2016
Back Doors and Encryption
It is self-evident that any company that wishes to do business in the United States of America must abide by the laws set by the government. No one challenges this notion. When companies fight the government on this issue (weakening encryption or adding back doors), however, they are not necessarily doing anything wrong. There is a profound tension between the government's responsibility to protect the people and a company's responsibility to protect the interests of its customers. In some cases, such as Apple's, there is no solution. Apple, by installing a back door in its operating system, automatically puts all of its millions of customers at risk. But the government, by not having access to the data that such a back door would make available, may also put the lives of some citizens at risk. So, in the end, someone will be unhappy. As this issue continues to develop, we must ask ourselves: which risk is greater? Do we prioritize our data and privacy, or do we prioritize our security and the security of others? Each person will take a different stance.
When a service is made, the creators must realize and prepare for the potential abuses of that service. It is a fundamental responsibility of the creator to prevent misuse of the invention. The creator should receive all the credit for the success of their creation, but also must take responsibility for any failures and abuses.
Personally, I do support, in a limited sense, government back doors. There are already so many hoops the government must jump through just to get access to our data. So one can assume that if it is granted this access, there is a pretty good chance the data is important to the continued security of our country. But, as doors tend to do, these must also be open-and-shut cases. The government must take every step to ensure that the back doors it requests do not become permanent mandates on the product. It must not put an application's continued existence at risk by undermining the entire point of that application (as adding a back door to an encrypted chat app would). Many of the arguments against back doors quickly slide down a slippery slope. It is unreasonable to try to link a temporary back door on an iPhone to the government being able to extract, at any moment, all of your personal data. If there is a real risk, if the data could save multiple lives or expose an imminent danger to many people, then I think there is a persuasive case to be made for giving up a small amount of our privacy for the sake of increased national security. After all, we are all allowed to live such privileged lives, free of the constant fear that pervades many war-torn areas of the world, because of the strong reach and security provided by the government.
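To show why a back door to an encrypted chat app undermines the entire point of the app, here is a minimal key-escrow sketch (hypothetical code using Python's cryptography package, not any real product's design):

```python
# Hypothetical key-escrow "back door" sketch using the 'cryptography' package.
# Not any real product's design.
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()  # held only by the intended recipient
escrow_key    = Fernet.generate_key()  # held by a third party (the back door)

message = b"meet at noon"

# A backdoored app encrypts every message under BOTH keys.
for_recipient = Fernet(recipient_key).encrypt(message)
for_escrow    = Fernet(escrow_key).encrypt(message)

# The recipient can read it, as intended...
assert Fernet(recipient_key).decrypt(for_recipient) == message

# ...but so can anyone who ever obtains the escrow key.
assert Fernet(escrow_key).decrypt(for_escrow) == message
```

The point of the sketch is that the escrow copy exists for every user and every message, which is why the risk is not limited to the single device a warrant names.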