On the threat of Artificial Intelligence (AI).

Any artificial intelligence system is going to need some basic programming to start. Even the human entity has a few basic program elements hard-coded into it—hunger, thirst, warmth, cold, satiation, feeling good, feeling bad… basically questions directed at the goals of survival and propagation. I suggest that seeking answers to these preprogrammed, goal-directed questions, and remembering those answers, is the basis of learning and problem solving.

So why would artificial intelligence be viewed as “mankind’s greatest threat?”

Is it the fear that the machine might someday see its own survival and wellbeing as more important than the survival and wellbeing of humans? Is it that the machine might come to view man as a threat to its survival? This is the stuff that the Terminator and similar movies and stories are built on.

Another possibility is that a system with a purer form of logic may come to conclusions that are inconsistent with the prevailing socioeconomic culture, and would thus be a threat to our current social structure.

An AI system has to be given a basic directive as a starting point. It seems to me that the basic directive at this time, in these experimental systems, is in the direction of “winning”: something to the effect of “play to stay in the game, and accumulate as many points as fast as possible.” Trial and error will find processes and patterns that maximize points, minimize the time required to score them, and defend against elimination from the game. These processes and patterns can be memorized (stored in memory) and replayed in the next iteration of the game.
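The “try, remember, replay” loop described above can be sketched in a few lines. Everything here is invented for illustration—the game, the payoffs, and the exploration rate are assumptions, not a description of any real system:

```python
import random

def trial_and_error(payoffs, rounds=10000, explore=0.1, seed=0):
    """Learn which action scores best purely by trying actions and
    remembering (storing) the average points each one returned."""
    rng = random.Random(seed)
    totals = [0.0] * len(payoffs)   # points accumulated per action
    counts = [0] * len(payoffs)     # times each action has been tried
    for _ in range(rounds):
        if rng.random() < explore:          # occasionally try something new
            a = rng.randrange(len(payoffs))
        else:                               # otherwise replay the remembered best
            a = max(range(len(payoffs)),
                    key=lambda i: totals[i] / counts[i] if counts[i] else 0.0)
        totals[a] += payoffs[a]()           # score points
        counts[a] += 1
    return max(range(len(payoffs)),
               key=lambda i: totals[i] / max(counts[i], 1))

# Three hypothetical moves with fixed payoffs; the learner is never told
# which is best—it discovers move 1 by trial, error, and memory.
moves = [lambda: 1.0, lambda: 2.0, lambda: 0.5]
best = trial_and_error(moves)
```

Nothing in this loop knows or cares what the points represent; it simply maximizes whatever directive it was handed, which is exactly why the choice of directive matters.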

Sometimes basic goal directives can be in conflict, such as “maximizing individual benefit” and “maximizing social benefit”. If an AI system is being programmed, should it pursue maximum benefit for the individual, or maximum benefit for the group/society? Should goals directed at human interests be programmed into AI systems at all? If AI systems are programmed to defend or aid the pursuit of human interests, should they maximize the benefit to the individual, or the benefit to the group? And what if the AI system started to determine for itself which goal was the higher priority? What if the AI system set its own goals and programmed those goals into the next generation of the AI system? Would individual interests be paramount? Would social interests be paramount? Or would the interests of the AI system itself become paramount, with humans reduced to pawns directed and controlled by the AI system?
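The conflict between directives can be made concrete as a weighting problem. The options, numbers, and weight below are arbitrary assumptions, chosen only to show that whoever sets the weight decides the outcome before the system ever runs:

```python
def choose(options, social_weight):
    """Pick the option scoring highest under a blended directive.
    social_weight = 0.0 means pure individual benefit;
    social_weight = 1.0 means pure social benefit."""
    def blended(opt):
        name, individual, social = opt
        return (1.0 - social_weight) * individual + social_weight * social
    return max(options, key=blended)[0]

# (name, individual benefit, social benefit) -- hypothetical values
options = [("hoard", 9.0, 1.0),
           ("share", 4.0, 8.0)]

selfish  = choose(options, social_weight=0.0)   # favors "hoard"
communal = choose(options, social_weight=1.0)   # favors "share"
```

With these numbers the answer flips at a weight of about 0.42; the system itself has no opinion—the programmer’s single parameter settles the “individual versus society” question in advance.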

As an example, consider an AI system programmed to operate a motor vehicle. Should the system be programmed to get the passenger to the destination as fast as possible, as economically as possible, or as safely as possible? I am aware of many individuals who want to get to their destination as fast as possible even if it means breaking the speed limit (they pass me on the highway all the time). Should the AI system be in a position to decide whether or not the individual can break the speed limit, or should the individual be allowed to override the AI system?
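The driving question is the same weighting problem with an override bolted on. The speeds, cost formulas, and the override flag here are invented for illustration—a sketch of the dilemma, not how any real vehicle system works:

```python
def pick_speed(limit, candidates, w_time, w_fuel, w_risk, allow_override=False):
    """Score candidate speeds against three programmed goals (speed,
    economy, safety); optionally let the human override the speed limit."""
    # Without an override, over-limit speeds are simply not on the menu.
    legal = candidates if allow_override else [s for s in candidates if s <= limit]
    def cost(speed):
        time_cost = 100.0 / speed           # hypothetical trip of 100 distance units
        fuel_cost = 0.002 * speed ** 2      # assumed fuel-burn model
        risk_cost = 0.05 * max(0, speed - limit) ** 2   # penalty for exceeding the limit
        return w_time * time_cost + w_fuel * fuel_cost + w_risk * risk_cost
    return min(legal, key=cost)

speeds = [55, 65, 75, 85]
obedient = pick_speed(65, speeds, w_time=1.0, w_fuel=1.0, w_risk=1.0)
hurried  = pick_speed(65, speeds, w_time=1.0, w_fuel=0.0, w_risk=0.0,
                      allow_override=True)
```

The “obedient” configuration never even considers 75 or 85; the “hurried” one, with the override granted and the fuel and risk goals zeroed out, breaks the limit. Who gets to set those weights and grant that flag is precisely the question.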

I know… nobody should be allowed to override the AI system except diplomats, politicians, and extremely rich people. In this context, does the real difficulty come from the possibility that someone might think the AI system should not have an override even for diplomats, politicians, and extremely rich people? At this point the issue becomes who controls (in Star Trek terms) the “Prime Directive,” and at the same time who has the ability to override it. Whoever has the power to establish and override the prime directive is truly in control. That person or persons will be at a level of power and trust above all other individuals. They will have the power to enforce social engineering through the extensive power of AI systems. No longer will there be individual choice as to whether or not a person can break the speed limit… And if, at some time, humans should decide that no individual may override the prime directive, then at that moment we will have truly ceded our freedom and individuality to the machine.

Dennis Wilson, 2016-02-07