Artificial Intelligence is a very broad field of Computer Science, but at its core it is the pursuit of endowing computers with some of the abilities generally associated with human intelligence. Of course, this definition is not all-encompassing. AI is often divided into three categories: strong, weak, and in-between, distinguished by how much an implemented AI can reveal to us, the humans, about our own intelligence. Machine intelligence, at least as it has been implemented so far, is fundamentally different from human intelligence: AI, as we currently build it, is implemented for a specific purpose. Recently, we have developed AI to play human games (Go and Jeopardy!). Human intelligence is different and more flexible; we can apply our own reasoning to changing situations and adapt much more easily than the AI we currently develop.
I do not think that applications such as AlphaGo, Deep Blue, and Watson completely demonstrate the viability of AI as a whole. These are applications developed to learn one specific task very well; AlphaGo, for example, was designed to learn to play Go better than any human can. If anything, they prove that we are getting close to viable "weak" artificial intelligence (AlphaGo and Deep Blue) and "in-between" artificial intelligence (Watson). But to truly demonstrate the viability of AI, I think we need to get closer to developing "strong" artificial intelligence. Until then, these examples will seem gimmicky, even though each of them is a step in the right direction.
I think that the Chinese Room argument provides a good counterargument to the viability of the Turing test. When we give a machine AI, we are not really teaching it to think, at least not in the same way that we think. We are providing the machine with a concrete set of rules, in the form of code and executable instructions, and those rules give the machine the ability to "think." Through the execution of this code, the computer is able, in many cases, to simulate intelligence and thought.
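The Chinese Room idea can be made concrete with a toy sketch: a program that maps input symbols to output symbols purely by rule lookup, producing fluent-looking replies without any understanding of what the symbols mean. The rule table and function name below are invented for illustration, not taken from any real system.

```python
# Toy illustration of the Chinese Room argument: the "room" answers by
# matching input symbols against a rule book. Nothing here understands
# Chinese; it is pure symbol manipulation. (Rule table is hypothetical.)

RULES = {
    "ni hao": "ni hao",          # greeting -> greeting
    "ni hao ma": "wo hen hao",   # "how are you?" -> "I am fine"
    "zai jian": "zai jian",      # farewell -> farewell
}

def chinese_room(symbols: str) -> str:
    """Return the rule-book response for an input string of symbols."""
    # Default reply for unrecognized input: "I don't understand."
    return RULES.get(symbols, "wo bu dong")

print(chinese_room("ni hao ma"))  # prints "wo hen hao"
```

From the outside, the exchange looks like conversation; inside, there is only code and a lookup table, which is exactly the gap the Chinese Room argument points at.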
I do not think the concern about AI in our lives is completely warranted, but only because we have not yet succeeded in developing "strong" AI. There is comparatively little potential harm from weak and in-between AI; AI that assists our everyday activities, I think, is only helpful. We develop these AIs for specific purposes, so there is probably little chance of our self-driving cars coordinating a revolt and causing the extinction of our species. That said, we need to be careful with how we deploy AI. It may, for example, not be a good idea to put an AI system in control of our missile defense systems.