Patents are government guarantees of one's right to their own intellectual property. There are a variety of reasons for granting patents, primarily ethical and economic. Ethically, it makes sense that one should have a right to their own creations; no one should be able to claim your work as their own. Economically, patents create a strong incentive for invention. A patent generally guarantees an inventor 20 years of exclusive right to sell their product. So, if one creates a new, ground-breaking invention, a patent ensures that they will face no competition for that product, holding the entire market share.
I think that patents have a place in society. The economic incentive they provide to companies creates an avenue for many inventions that make society better. I don't think they are necessary for society, but they are beneficial. However, the current structure of our patent system hinders progress. Companies end up repeatedly renewing their patents, preventing beneficial products from entering the market and keeping prices extraordinarily high.
I don't think that patents on specific software should be granted. Patents should be more specific and concrete. When one patents a machine, or another concrete invention, it is obvious when a competitor is attempting to illegally copy that invention, without prying into the machine's specifics. With software, however, this is not obvious. If one files an infringement claim over software, the claim necessitates that the software be examined. But this examination means that the company must reveal all the specifics of that program, losing their intellectual property in the process. There is one case, however, in which I think a software patent should be awarded. You can already patent processes for solving tasks. So, if one develops a new algorithm that can be implemented in code, I think they deserve a patent for that process.
The current structure of the DMCA, and the way in which some companies deal with copyright requests, illustrates the broken nature of the copyright system. There are many examples today of YouTube channels being completely shut down by these copyright notices. As soon as a video is posted, a copyright claim is filed against it. YouTube's policy regarding these claims is to immediately remove the offending video and force the user to prove that their video does not violate the copyright. This policy completely stifles new content creation. These videos are generally posted under the fair-use provisions of copyright law, as they are reviews of new games and videos and contain only selected samples of that content. However, I think this is more a problem with the content sites' response to these claims than with the copyright system itself.
Monday, November 28, 2016
Monday, November 14, 2016
Reading 11: Self-driving Cars
Each time we get on the road, we put ourselves in danger. Driving in rush hour, you trust hundreds of other drivers, impatient after a long day and just wanting to get home, to pilot their 3,000-pound missiles safely and responsibly. One mistake can threaten not only them, but everyone around them. This concern, safety, is one of the primary factors driving the self-driving car movement. People are unpredictable and, in the smartphone age, easily distracted. These characteristics are hardly ideal when a momentary lapse can endanger many other people. But if cars could operate independently, this fear would go away. They would have programmed behavior and, if regulated properly, would be aware of the behavior of all other cars around them.
In addition to the safety motivation, there is one other primary driver for self-driving cars: time efficiency. Imagine a world with 100% self-driving cars. Each car is aware of all the cars around it, their speeds, and their future behaviors. Stop lights and stop signs would disappear. Cars could adjust their speeds through intersections so that they avoid all other cars crossing in every direction. Traffic would become a thing of the past. With no distracted drivers causing accidents, all cars on the road could travel at very high speeds with minimal, though safe, spacing between them. How much time do you spend driving or stuck in traffic each year? With self-driving cars, all of this time is freed up for other things. You could essentially sleep through your early morning commute and simply arrive at work.
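The intersection coordination imagined above can be sketched as a toy slot-reservation scheme: each car requests a crossing slot, and the intersection grants the earliest free one at or after the car's arrival. The function name and data here are invented for illustration, not any real vehicle protocol.

```python
def assign_slots(arrival_times, slot_length=1.0):
    """Greedily assign non-overlapping crossing slots to cars,
    ordered by arrival time. Returns a list of (car, slot_start)."""
    schedule = []
    next_free = 0.0  # earliest time the intersection is unoccupied
    for car, arrival in sorted(arrival_times.items(), key=lambda kv: kv[1]):
        start = max(arrival, next_free)  # wait only if a slot is taken
        schedule.append((car, start))
        next_free = start + slot_length
    return schedule

cars = {"A": 0.0, "B": 0.4, "C": 2.5}
print(assign_slots(cars))  # A crosses at 0.0, B waits until 1.0, C at 2.5
```

In this sketch, car B arrives while A still occupies the intersection, so it is slowed to cross at 1.0 instead of stopping at a light; C arrives after everything has cleared and never slows down at all.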
I strongly believe in the utilitarian approach to programming these life-or-death scenarios into a car's driving logic. It is always the case that saving more lives is better. When an accident happens, the fault should lie with the company that developed the driving logic; presumably, with better sensing ability, the situation could have been avoided.
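The utilitarian rule described above reduces to a one-line decision: among the available maneuvers, pick the one with the fewest expected casualties. The scenario numbers below are hypothetical, purely to illustrate the rule.

```python
def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected casualties.
    Returns the maneuver that minimizes expected casualties."""
    return min(options, key=options.get)

scenario = {"brake_straight": 3, "swerve_left": 1, "swerve_right": 2}
print(choose_maneuver(scenario))  # -> swerve_left
```

The hard part, of course, is not this comparison but producing trustworthy casualty estimates from the car's sensors, which is exactly why the sensing company bears the responsibility.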
Self-driving cars, and automation in general, pose a large risk to our social and economic structure. Truck driving is the most common profession in upwards of 20 states. Many of these drivers are not highly educated and do not have many other skills to fall back on. If we implement self-driving cars, suddenly these tens of thousands of low-skill workers are out of a job. As a society, we need to anticipate the rise of automation and prepare for it. As we increase automation, we need to provide free training to the affected groups to give them other skills that will benefit society in the future. But it is a given that not all of these people will be able to find jobs. As we slowly automate away these low-skill jobs, I think the viability of a universal living income increases. Our industries will benefit greatly from not needing to employ these workers, and our economy, as a result, will thrive. We can't leave these workers behind, though. With the money we gain as a country, providing a living wage would be possible. I also think the government has a hand in regulating these cars, simply because it would be easiest for the government to standardize the driving-logic guidelines for them. The most efficient system would be one in which all self-driving cars are aware of all other cars on the road and can predict their behavior. This can only happen through standardization at the level of government regulation.
I am a hard-core utilitarian. I would definitely buy a self-driving car, even with logic that may, in certain situations, kill me. (Hopefully we will have better safety equipment by the time these cars hit the road.)
Monday, November 7, 2016
Reading 10: Artificial Intelligence
Artificial Intelligence is a very broad field of computer science. Basically, it is the pursuit of endowing computers with some of the abilities generally associated with human intelligence. Of course, this definition is not all-encompassing. There are three categories of AI: strong, weak, and in-between. These categories arise from how much the implemented AI reveals to us, the humans, about our own intelligence. This intelligence is fundamentally different, insofar as it has been implemented so far, from human intelligence. AI, as we currently build it, is implemented for a specific purpose. Recently, we have developed AI to play human games (Go and Jeopardy!). But human intelligence is different, more flexible. We can apply our own logic to changing situations and adapt much more easily than the AI we currently develop.
I do not think that applications such as AlphaGo, Deep Blue, and Watson completely demonstrate the viability of AI as a whole. These are applications developed to learn one specific task really well; AlphaGo was designed to play Go better than any human can. If anything, they prove that we are getting pretty close to having viable "weak" artificial intelligence (AlphaGo and Deep Blue) and "in-between" artificial intelligence (Watson). But to truly demonstrate the viability of AI, I think we need to get closer to developing "strong" artificial intelligence. Until then, these AI examples will seem gimmicky. Still, each of them is a step in the right direction.
I think the Chinese Room argument provides a good counterargument to the viability of the Turing test. It is true that, when we provide AI to a machine, we are not really teaching the machine to think, at least not in the same way that we think. We are providing a concrete set of rules to the machine, in the form of code and executable instructions. These rules then give the machine the ability to "think." Through the execution of this code, the computer is able, in many cases, to simulate intelligence and thought.
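The Chinese Room intuition can be made concrete with a toy rulebook: the program maps input symbols to output symbols and produces a plausible reply without any grasp of what the symbols mean. The rulebook below is invented for illustration.

```python
# A mechanical "rulebook": Chinese input -> Chinese output.
# The program matches symbols; it understands nothing.
RULEBOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你好吗": "我很好，谢谢。",  # "how are you?" -> "I'm fine, thanks."
}

def respond(symbols):
    """Apply the rules mechanically; fall back to a stock reply."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again"

print(respond("你好吗"))  # -> 我很好，谢谢。
```

From the outside, the replies look fluent; inside, it is pure symbol lookup, which is exactly Searle's point about rule-following without understanding.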
I do not think the concern about AI in our lives is completely warranted, but only because we have not yet succeeded in developing "strong" AI. There is comparatively little potential for harm from weak and in-between AI. AI assisting our everyday activities, I think, is only helpful. We develop these AIs for specific purposes, and there is probably little chance of our self-driving cars coordinating a revolt and causing the extinction of our species. That said, we need to be careful with its implementation. It may, for example, not be a good idea to put an AI system in control of our missile defense systems.