Patents are government-issued guarantees of one's right to one's own intellectual property. There are a variety of reasons for granting patents, primarily ethical and economic. Ethically, it makes sense that one should have a right to one's own creations; no one should be able to claim your work as their own. Economically speaking, patents create a great incentive for invention. A patent generally guarantees an inventor 20 years of exclusive right to sell their product. So, if one invents something new or ground-breaking, a patent ensures they will face no competition for that product and can capture the entire market.
I think that patents have a place in society. The economic incentive they provide to companies creates an avenue for many inventions that make society better. I don't think they are necessary for society, but they are beneficial. However, the current structure of our patent system hinders progress. Companies find ways to extend their patents again and again, keeping beneficial products off the market and prices extraordinarily high.
I don't think that patents on specific software should be granted. Patents should be more specific and concrete. When one patents a machine, or another concrete invention, it is obvious when a competitor is attempting to illegally copy that invention, without prying into the specifics of the machine. With software, however, this is not obvious. If one files an infringement claim over software, the claim necessitates that the software be examined. But this examination means that the company must reveal all the specifics of the program, losing its intellectual property in the process. There is one case, however, in which I think a software patent should be awarded. You can already patent processes for solving tasks. So, I think if one develops a new algorithm that can be implemented in code, they deserve a patent for that process.
The current structure of the DMCA, and the way in which some companies deal with copyright requests, illustrates the broken nature of the copyright system. There are many examples today of YouTube channels being completely shut down due to these copyright notices. What happens is that, as soon as a video is posted, a copyright claim is filed against it. YouTube's policy regarding these claims is to immediately remove the offending video and force the user to prove that their video does not violate the copyright. This policy completely stifles new content generation. These videos are generally posted under the "fair use" provision of copyright law, as they are reviews of new games and videos and contain selected samples of that content. However, I think this is more of a problem with the content sites' response to these claims than with the copyright system itself.
Monday, November 28, 2016
Monday, November 14, 2016
Reading 11: Self-driving Cars
Each time we get on the road, we put ourselves in danger. Driving in rush hour, you are trusting hundreds of other drivers, impatient after a long day and just wanting to get home, to pilot their 3,000-pound missiles safely and responsibly. One mistake can threaten not only them but everyone around them. This concern, safety, is one of the primary factors driving the self-driving car movement. People are unpredictable and, in the smartphone age, easily distracted. These characteristics are hardly ideal when your momentary distraction can endanger many other people. But if cars could operate independently, this fear would go away. They would have programmed behavior and, if regulated properly, would be aware of the behavior of all other cars around them.
In addition to the safety motivation, there is one other primary driver for self-driving cars: time-efficiency. Imagine a world with 100% self-driving cars. Each of these cars is aware of all the cars around it, their speeds, and their future behaviors. Stoplights and stop signs would disappear. Cars could self-adjust their speeds going through intersections so that they avoid all other cars going in all directions through the intersection. Traffic would become a thing of the past. With no distracted drivers causing accidents, all cars on the road could travel at very high speeds with minimal, though safe, amounts of space between them. How much time do you spend driving or stuck in traffic each year? With self-driving cars, all of this time is freed up to accomplish other things. You could essentially sleep through your early morning commute and just arrive at work.
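To make the intersection idea concrete, here is a toy Python sketch of reservation-based intersection management, one scheme studied in autonomous-driving research: each car asks a central manager for a crossing slot, and the manager grants the earliest time at which no conflicting path is reserved. The manager class, the crossing time, and the conflict rule are all invented for illustration.

```python
CROSSING_TIME = 2.0  # seconds a car occupies the intersection (assumed)

class IntersectionManager:
    def __init__(self):
        self.reservations = []  # list of (start, end, path)

    def conflicts(self, path_a, path_b):
        # Simplified rule: perpendicular paths conflict, parallel ones don't.
        return path_a != path_b

    def reserve(self, arrival, path):
        # Greedily push the start time past any conflicting reservation,
        # repeating until no reservation overlaps the requested slot.
        start = arrival
        changed = True
        while changed:
            changed = False
            for s, e, p in self.reservations:
                if self.conflicts(path, p) and start < e and s < start + CROSSING_TIME:
                    start = e
                    changed = True
        self.reservations.append((start, start + CROSSING_TIME, path))
        return start

manager = IntersectionManager()
t1 = manager.reserve(arrival=0.0, path="north-south")
t2 = manager.reserve(arrival=0.5, path="east-west")    # must wait for car 1
t3 = manager.reserve(arrival=1.0, path="north-south")  # must wait for car 2
print(t1, t2, t3)  # 0.0 2.0 4.0
```

With no stoplights, cars never stop for an empty intersection; they only slow when a conflicting reservation actually exists.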
I strongly believe in the utilitarian approach to programming these life-or-death scenarios into a car's driving logic. It is always the case that saving more lives is better. When an accident happens, the fault should lie with the company that developed the driving logic. Presumably, with better sensing ability, the situation could have been avoided.
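The utilitarian rule I'm describing is simple enough to write down: given the candidate maneuvers and an estimate of expected casualties for each, always pick the one that harms the fewest people. The scenario and its numbers below are entirely hypothetical.

```python
def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected casualties.
    Returns the maneuver with the minimum expected harm."""
    return min(options, key=options.get)

# Hypothetical crash scenario with estimated casualties per maneuver.
scenario = {
    "brake_straight": 2,   # hits the obstacle ahead
    "swerve_left": 1,      # endangers one pedestrian
    "swerve_right": 3,     # endangers a group on the sidewalk
}
print(choose_maneuver(scenario))  # swerve_left
```

Of course, the hard part is not this comparison but producing trustworthy casualty estimates in real time, which is exactly why the sensing ability matters.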
Self-driving cars, and automation in general, pose a large risk to our social and economic structure. Truck driving is reportedly the most common profession in upwards of 20 states. A large number of these drivers are uneducated and do not have many other skills to fall back on. But if we implement self-driving cars, suddenly these tens of thousands of low-skill workers are out of a job. As a society, we need to anticipate the rise of automation and prepare for it. As we increase automation, we need to provide free training to the affected groups to give them other skills that will be beneficial for society in the future. But it is a given that not all these people will be able to find jobs. As we slowly automate away these low-skill jobs, I think the viability of a universal living income increases. Our industries will benefit so much from not needing to employ these workers, and our economy, as a result, will thrive. We can't leave these workers behind, though. With the money we gain as a country, providing a living wage would be possible. I also think the government has a hand in regulating these cars, simply because it would be easiest for the government to standardize the driving-logic guidelines for these cars. The most efficient system would be one in which all self-driving cars are aware of all other cars on the road and can predict their behavior. This can only happen through standardization at the level of government regulation.
I am a hard-core utilitarian. I would definitely buy a self-driving car, even with logic that may, in certain situations, sacrifice me. (Hopefully we have better safety equipment by the time these cars hit the road.)
Monday, November 7, 2016
Reading 10: Artificial Intelligence
Artificial Intelligence is a very broad field of Computer Science. But, basically, it is the pursuit of endowing computers with some of the abilities generally associated with human intelligence. Of course, this definition is not all-encompassing. There are three categories of AI: strong, weak, and in-between. These categories arise from the ability of the implemented AI to reveal information to us, the humans, about our own intelligence. This intelligence is fundamentally different, insofar as it has been implemented so far, from human intelligence. AI, as we currently think about it, is implemented for a specific purpose. Recently, we have developed AI to play human games (Go and Jeopardy). But human intelligence is different, more flexible. We can apply our own logic to changing situations and adapt much more easily than the AI we currently develop.
I do not think that applications such as AlphaGo, Deep Blue, and Watson completely demonstrate the viability of AI as a whole. These are applications developed to learn one specific task really well. AlphaGo was designed to learn how to play Go better than any human can. If anything, they prove that, currently, we are getting pretty close to having viable "weak" artificial intelligence (AlphaGo and Deep Blue) and "in-between" artificial intelligence (Watson). But I think that, to truly demonstrate the viability of AI, we need to get closer to developing "strong" artificial intelligence. Until then, these AI examples will seem gimmicky. Still, each of them is a step in the right direction.
I think that the Chinese Room argument provides a good counterargument to the viability of the Turing test. I think it is true that, when we provide AI to a machine, we are not really teaching the machine to think, at least not in the same way that we think. We are providing a concrete set of rules for the machine, in the form of code and executable instructions. These rules then give the machine the ability to "think." Through the execution of this code, the computer is able, in many cases, to simulate intelligence and thought.
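The Chinese Room can be sketched in a few lines of Python: a fixed rule book maps input symbols to output symbols, producing passable replies with zero comprehension. The rules below are invented for illustration.

```python
# The "room": a rule book mapping Chinese input symbols to Chinese replies.
# The program matches symbols mechanically; it understands none of them.
RULES = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗": "会一点",     # "do you speak Chinese?" -> "a little"
}

def chinese_room(symbols):
    # Pure symbol manipulation: look up the input, emit the listed output.
    # Default reply: "please say that again".
    return RULES.get(symbols, "请再说一遍")

print(chinese_room("你好"))
```

From the outside, the replies look like understanding; inside, there is only a table lookup, which is exactly Searle's point.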
I do not think the concern about AI in our lives is completely warranted. But this is because we have not yet succeeded in developing "strong" AI. There is comparatively little potential harm from weak and in-between AI. AI assisting our everyday activities, I think, is only helpful. We develop these AIs for specific purposes. I think there is probably little chance of our self-driving cars coordinating a revolt and causing the extinction of our species. That said, we need to be careful with its implementation. It may, for example, not be a good idea to give an AI system control of our missile defense systems.
Monday, October 31, 2016
Reading 09: Net Neutrality
The rise of the World Wide Web has brought the world much closer together. Now, people across continents can communicate instantly. And in this communication lies the opportunity to understand other cultures and, hopefully, bring the world to a new level of prosperity. However, online censorship fundamentally threatens this. It is well recognized that humans have a set of fundamental rights. Among these is the right to freedom of expression and speech. But some countries, generally those under authoritarian rule, do not recognize this. Not only do they suppress the real speech of their citizens, through the media and in person, they also restrict online communication.
I think, in most cases, it is not ethical for a government to suppress online speech. Countries are able to grow and prosper by facilitating open discussion and finding optimal solutions through compromise. By restricting what can be said and seen on the internet, countries are purposely keeping their citizens ignorant. But I do believe there are certain cases in which it is ethical to filter online speech.
I do not believe that people should have the ability to directly incite violence. This rule applies to all forms of speech.
Cases:
Is it ethical for companies to remove dissenting opinions on behalf of governments?
No, this case is not ethical. People should have the ability to criticize their governments, as long as they are not endangering anyone. In fact, this criticism is one of the cornerstones of democracy.
Is it ethical for companies to remove information broadcast by, or about, terrorist organizations?
Yes, in this case, I think it is ethical to not allow content from terrorist organizations. This directly falls under the speech stipulation I posed above: you don't have the right to incite violence and endanger others through your own speech.
Is it ethical for companies to remove discriminatory, provocative, or hateful content generated by their users?
In this case, it entirely depends on the context. If the questionable content is completely unwarranted and out of control, I think there is an argument to be made for removing it. That said, I do not believe that people have the right to not be offended on the internet.
Is it ethical for companies to remove information that does not promote or share their interests or political beliefs?
I do not think this is ethical unless the policy is explicitly stated by the company. There is a place for one-sided, biased information on the internet. As it is the internet, people who don't agree with the information don't need to visit that site.
Thursday, October 27, 2016
Project 03: Encryption
Link to Ad:
http://mc7013.no-ip.biz:88/classes/cse40175/blog/project_03/Encryption_ad.m4a

I see online communication and activities as simply extensions of the day-to-day activities we participate in. Messaging on social media is just a modern extension of talking to a group. Banking online is just a modern extension of going to your bank. And, as extensions, these online activities should be afforded the same security and freedoms as their in-person counterparts. Logically, from this, encryption should be a fundamental right. We would consider it a huge breach of privacy if the government could, at any time, listen in on any conversation you had in a group.
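As a small illustration of what encryption actually guarantees, here is a toy one-time pad in Python: a message XORed with a random key of the same length is unreadable to anyone without that key. (This is a classroom sketch of the principle, not production cryptography.)

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with the matching key byte.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

msg = b"meet at the bank at noon"
key = secrets.token_bytes(len(msg))  # random key, as long as the message

ciphertext = encrypt(msg, key)
print(ciphertext.hex())              # gibberish without the key
print(decrypt(ciphertext, key))      # original message with the key
```

The key is the whole conversation's privacy: whoever holds it can read everything, which is exactly what is at stake when governments ask for backdoors.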
Personally, encryption is not that big of an issue to me. I already accept that, after many years of browsing the internet, most of my personal data is out there somewhere. Additionally, I don't really care if the government wants to read the many stupid conversations I've had over social media (good luck, though, because some of my group chats have over 100,000 messages in them). But I should probably take this issue more seriously. It is a huge breach of privacy and should be stopped before the government goes too far.
I think the struggle between national security and personal privacy will be unending. It will carry on much like a sine wave, with the proper balance as the axis it oscillates around. It will always fluctuate between sides. As the government attempts to take too much of our privacy, the public will fight back and regain ground.
Monday, October 24, 2016
Reading 08 - Electronic Voting
For months, coincidentally just as it appeared he was losing, Donald "The Cheeto" Trump has been questioning the validity of the upcoming election. That said, voter fraud is a real concern in this country. According to a Department of Justice study, as many as 40 cases of voter fraud emerged out of the 197 million votes cast in federal elections between 2002 and 2004. This number becomes quite significant, almost 20% of all votes cast, if you multiply it by one million.
For real, though, the primary concern over E-voting is twofold: the lack of a paper-trail and the potential for external agents to modify the results of the election. While the majority of these voting machines do leave a trail that can be analyzed, there are a few that are completely paperless. Additionally, as these are machines that run on software, there is always the potential for hacking. If one knows the details of the code the machine is running, it could be possible to find a way to access and make changes to the voting records.
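One way to address the paper-trail concern, at least in spirit, is a tamper-evident audit log. Here is a hedged Python sketch of the idea, not any real machine's design: each recorded ballot is hashed together with the previous record's digest, so modifying any stored vote breaks the chain from that point on.

```python
import hashlib

def append_record(chain, ballot):
    # Chain each ballot to the previous record's digest.
    prev = chain[-1]["digest"] if chain else "0" * 64
    digest = hashlib.sha256((prev + ballot).encode()).hexdigest()
    chain.append({"ballot": ballot, "digest": digest})

def verify(chain):
    # Recompute every digest; any edited ballot breaks the chain.
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256((prev + rec["ballot"]).encode()).hexdigest()
        if rec["digest"] != expected:
            return False
        prev = rec["digest"]
    return True

log = []
for ballot in ["candidate-A", "candidate-B", "candidate-A"]:
    append_record(log, ballot)
print(verify(log))                  # True: untouched log checks out

log[1]["ballot"] = "candidate-A"    # tamper with a stored vote
print(verify(log))                  # False: the chain is broken
```

A scheme like this doesn't prevent hacking, but it makes silent modification of the records detectable in an audit, which is most of what a paper trail buys you.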
But, despite these concerns, I have full confidence that the results, whatever they are, of the upcoming election will reflect the true will of the American public. My father works for the Justice Department as the Director of the Election Crimes division. He has literally written the book on election crimes in this country. Right now, he has told me, the Justice Department is entirely more worried about the very real cases of voter suppression occurring across the country due to Donald Trump's scare tactics and his recruitment of hostile poll watchers. The incidence of election fraud in this country is incredibly low. So low, that based on National Weather Service Data, you are more likely to be struck by lightning this year than for your vote to not be counted.
I'm not saying that our voting system is perfect, however. There are many ways in which it could be improved. The first is widespread investment in updating our voting machines. It is completely unacceptable that many districts across the country are using voting machines that are over 20 years old. At this point, though, the outdated software can almost be seen as a feature: so few people still know how these systems work that it is becoming increasingly difficult to hack them.
I think that developing a voting system is fundamentally different from developing a normal application. Although there are data security concerns that developers must take into account for both, the severity of the former being hacked is much greater. If our national elections are compromised, and the public's trust in our voting system is lost, our democracy is severely weakened.
I don't think we should ever have 100% trust in electronic systems. No matter how secure we make our online applications, there will always be people attempting to break into them, because of the great potential for gain if these systems are unlocked. This is the example I give my friends whenever this topic comes up: we trust bridges. Bridges are built to be secure and to last, even in severe conditions. But there generally aren't people constantly trying to knock a bridge down. If there were, I don't think anyone would trust our bridges. It's pretty much the same for our online applications. We may put in layers and layers of security, but, given enough time and effort, there will always be those who are able to gain some amount of information.
Monday, October 10, 2016
Ethical Advertising
We, as consumers of the internet, often take for granted the breadth of information and services available for free online. We pay some (hopefully) fixed rate per month and suddenly have access to all the world's knowledge. This service has allowed us to become more informed, and more connected, than ever before in our history. But, as it turns out, this information is not strictly free. The, now thoroughly crispy, meme "if you're not paying for it, you're not the customer; you're the product" definitely has some truth to it. Very little of what we access for free is actually made for free. After all, there are a variety of expenses involved in hosting e-content and, as demand increases, those costs explode. So, these providers capitalize on the one resource available to them: our data.
This process of data collection, I think, is completely within the rights of the e-provider. If you are using their service, they should have access to the data you generate within the bounds of that service. The problem arises when these content generators do not inform users of the extent of the data collection. This is why it is extremely easy for a company's data collection to become unethical. Like I said, a provider should have access to the data you generate on their service. But when, for example, Facebook starts to track your location while the app is open so that it can serve you ads about things in your area, Facebook has clearly overstepped the tacit agreement between user and provider. I believe it is the company's responsibility to let users know when it attempts to gather data about them outside the normal bounds of the service's operation. When I use Facebook, I assume that they are collecting everything I type, looking at what I 'Like', and seeing where I check in. This data is fair game for them, because I gave it to them through my use of their service. But if they want to track my location in real time, that is them spying on me, rather than me voluntarily giving them the information.