The term Artificial Intelligence first saw use in 1955, when John McCarthy (later Professor Emeritus of Computer Science at Stanford University) used it to refer to the science of making "intelligent" machines and computers.

McCarthy credits the British mathematician Alan Turing (1912-1954) with the initial efforts to study artificial intelligence, citing Turing's 1950 article in which Turing discussed the conditions under which a machine can be considered intelligent. Turing argued then that if a machine can pretend to be human to a well-informed observer, then the machine could be considered intelligent.

In the years since, the definition of artificial intelligence has undergone continuing refinement although the basic premises established by Turing, McCarthy and others have remained.

Artificial Intelligence: Defining Characteristics

At their most basic, artificial intelligence systems are built around programmed inference machines or engines that follow an established logical pattern: certain situations ("if") lead to specific results ("then") from which action can be taken, in the form of either classifiers ("if shiny then gold") or controllers ("if shiny then separate"). Scientists note, however, that controllers must classify a situation or condition before deducing what action to take; as such, classification lies at the heart of almost all AI systems.
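As a minimal sketch of this if/then structure (the rules, labels and function names below are illustrative, not drawn from any particular AI system):

```python
# A toy rule-based inference engine: "if <condition> then <result>".

def classify(observation):
    """Classifier: maps an observed property to a label ("if shiny then gold")."""
    if observation.get("shiny"):
        return "gold"
    return "unknown"

def control(observation):
    """Controller: classifies first, then chooses an action ("if shiny then separate")."""
    label = classify(observation)  # classification precedes any action
    if label == "gold":
        return "separate"
    return "ignore"

print(classify({"shiny": True}))   # -> gold
print(control({"shiny": True}))    # -> separate
print(control({"shiny": False}))   # -> ignore
```

Note how the controller calls the classifier before acting, which is the point made above: even action-taking systems rest on classification.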

In effect, artificial intelligence rests on logic, observation and pattern recognition. An AI system "observes" something, recognizes it based on previously established parameters or patterns, and reacts based on previously programmed actions.
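This observe-recognize-react cycle can be sketched in a few lines; the stored patterns, the distance-based matching and the action table below are all assumptions invented for illustration:

```python
# A sketch of the observe -> recognize -> react cycle using simple
# nearest-neighbour matching against previously established patterns.

PATTERNS = {
    (1.0, 0.0): "circle",
    (0.0, 1.0): "square",
}
ACTIONS = {"circle": "roll it", "square": "stack it"}

def recognize(features):
    """Match the observation to the closest previously established pattern."""
    def distance(pattern):
        return sum((a - b) ** 2 for a, b in zip(pattern, features))
    return PATTERNS[min(PATTERNS, key=distance)]

def react(features):
    """Observe, recognize against stored patterns, then take the mapped action."""
    return ACTIONS[recognize(features)]

print(react((0.9, 0.1)))  # closest to the "circle" pattern -> "roll it"
```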

Artificial Intelligence and Human Intelligence

It is at this point that artificial intelligence and human intelligence part ways (or come together, depending on one's point of view). Computers perform thousands or even millions of computations to arrive at conclusions and take action, while humans appear to make intuitive leaps in logic and pattern recognition. Scientists point out that there may be no real difference between human and artificial intelligence, especially in the processes involved in recognizing patterns and taking action; the apparent gap may come down to speed, and to our limited understanding of how people see a situation, assess it, and act on it.

Artificial Intelligence in the Real World

Artificial intelligence has come a long way in the years since Turing, McCarthy and others first defined its parameters, with machines taking on various aspects of human activity. This includes robots that perform functions considered dangerous to humans, or monotonous and repetitive activities that might lead to accidents if human concentration lapses. For example, the automobile industry uses robots for assembling, welding and painting cars; hospitals have used AI systems to organize staff rotations, establish bed schedules and provide "diagnostic" services.

Other applications involve pattern recognition, with the machine sounding an alarm when something breaks an established pattern. Banks and other financial organizations, for example, have used artificial intelligence systems to monitor their activities and detect variations or actions outside the norm, with humans taking over the investigation of why such activities take place.
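A simple version of this kind of monitoring can be sketched as a statistical test: flag anything that falls more than a few standard deviations from the established norm. The transaction amounts and the three-sigma threshold below are illustrative assumptions, not a description of any real bank's system:

```python
# A sketch of pattern-based anomaly detection: learn the "established
# pattern" from past transaction amounts, then alert on outliers.

from statistics import mean, stdev

history = [120.0, 95.0, 110.0, 102.0, 99.0, 130.0, 88.0, 105.0]  # past amounts
mu, sigma = mean(history), stdev(history)

def is_anomalous(amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    return abs(amount - mu) > threshold * sigma

for amount in [101.0, 5000.0]:
    if is_anomalous(amount):
        print(f"ALERT: {amount} breaks the established pattern; hand off to a human analyst")
    else:
        print(f"{amount} fits the established pattern")
```

The hand-off at the end mirrors the division of labour described above: the machine detects the deviation, and humans investigate why it happened.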

In the end, however, AI systems remain machines or tools; whether they will ever achieve the level of human consciousness is not yet clear.