Over the last 70 years, the development of Artificial Intelligence (AI) has passed through many stages, marked by optimism and despair in turn. Its initial footprints can be traced to the 1940s, but AI made its major breakthrough at the Dartmouth Conference in 1956.

The evolution of AI can be divided into ten stages. According to Stuart Russell and Peter Norvig, these are: the gestation of AI (1943-1955); the birth of AI (1956); early enthusiasm, great expectations (1952-1969); a dose of reality (1966-1973); knowledge-based systems: the key to power? (1969-1979); AI becomes an industry (1980-present); the return of neural networks (1986-present); AI adopts the scientific method (1987-present); the emergence of intelligent agents (1995-present); and the availability of very large data sets (2001-present).

The first stage grew out of a model of networks of artificial neurons, proposed by Warren McCulloch and Walter Pitts in 1943. Each neuron in the network is either 'on' or 'off', switching on in response to sufficient stimulation from neighbouring neurons, and the connection strengths can be updated according to Hebbian learning. Much work was done during the 1940s, but it was Alan Turing who became most influential in this regard: his 1950 article 'Computing Machinery and Intelligence' introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning.
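Hebb's rule ('neurons that fire together wire together') strengthens a connection in proportion to the joint activity of the neurons it links. The following toy sketch illustrates that update for on/off neurons; the function name, learning rate, and activity vectors are illustrative choices, not taken from the original 1940s work.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen connections between co-active neurons (Hebb's rule)."""
    return weights + lr * np.outer(post, pre)

# Two binary (on/off) input neurons feeding three output neurons.
pre = np.array([1.0, 0.0])        # presynaptic activity
post = np.array([1.0, 1.0, 0.0])  # postsynaptic activity
w = np.zeros((3, 2))
w = hebbian_update(w, pre, post)
# Only the weights linking the active input to the active outputs grow;
# connections touching an inactive neuron are unchanged.
```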

The second stage proved to be the cornerstone of AI's development, and John McCarthy was its architect. McCarthy had an unparalleled curiosity and a quest to develop AI in an organized manner. After Princeton and Stanford he moved to Dartmouth College, and it was he who brought together researchers such as Marvin Minsky, Claude Shannon, and Nathaniel Rochester. They organized a two-month workshop at Dartmouth in 1956. The workshop comprised ten researchers, including Trenchard More from Princeton, Arthur Samuel from IBM, and Ray Solomonoff and Oliver Selfridge from MIT. Its core themes were automata theory, neural nets, and the study of intelligence. Herbert Simon and Allen Newell arrived with a reasoning program that stole the show: the Logic Theorist (LT), about which Simon claimed, 'we have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem.' The claim was rejected by the experts, but LT paved the way for future research on non-numeric computation. The Dartmouth workshop did not produce a major breakthrough, but it did introduce all the main figures to each other. For the next two decades, the field was dominated by these people and their students and colleagues at MIT, CMU, Stanford, and IBM.


The third stage consists of four elements: the list of X's, the footprints of GPS, the micro worlds (or blocks worlds), and the Minsky-McCarthy contributions. Early computers were oriented toward arithmetic tasks, but with the passage of time they began to perform more than arithmetic, and it was astonishing whenever a computer did anything remotely clever. Meanwhile, the intellectual elite insisted that 'a machine can never do X'. It was natural for researchers to respond by demonstrating one X after another, a period McCarthy referred to as the 'Look, Ma, no hands!' era. The General Problem Solver (GPS) of Newell and Simon was designed to imitate the human problem-solving approach: within the limited class of puzzles it could handle, it considered subgoals and actions in a human-like order, so GPS emerged as perhaps the first program to embody the human thinking approach.

Another important development was the introduction of the micro world, or blocks world. It consisted of a set of blocks placed on a table, and the task was to rearrange the blocks in a certain way using a robot hand to pick them up. This program provided the basis for the vision project of Huffman and for work on the perceptron theorem. In the second half of the 1950s, John McCarthy and Marvin Minsky moved to MIT, where they worked for a long time supervising many students, each of whom chose a limited problem that required intelligence to solve. Notable examples were James Slagle's SAINT program, which solved closed-form calculus integration problems; Tom Evans's ANALOGY program, which solved the geometric analogy problems that appear in IQ tests; and Daniel Bobrow's STUDENT program, which solved algebra story problems. These limited domains were known as microworlds.
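A blocks-world state can be captured very simply: each block rests on the table or on exactly one other block, and the robot hand may move a block only if nothing sits on it. The sketch below is a minimal illustration of that idea; the representation and function names are hypothetical, not drawn from any historical system.

```python
# Minimal blocks-world sketch. The state maps each block to what it
# rests on: the string 'table' or the name of another block.
def clear(state, block):
    """A block is clear if no other block rests on it."""
    return all(under != block for under in state.values())

def move(state, block, dest):
    """Move `block` onto `dest` (the robot-hand action) if legal."""
    if clear(state, block) and (dest == 'table' or clear(state, dest)):
        new_state = dict(state)
        new_state[block] = dest
        return new_state
    raise ValueError("illegal move")

state = {'A': 'table', 'B': 'A', 'C': 'table'}  # B sits on A
state = move(state, 'B', 'C')                   # rearrange: put B on C
```

A planner for this world searches over sequences of such moves to reach a goal configuration.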

AI researchers began to predict their future successes, with varying interpretations; hence the term 'visible future' emerged. Simon predicted that within 10 years a computer would be chess champion and that a significant mathematical theorem would be proved by machine. These predictions came true (or approximately true) within 40 years rather than 10. Simon's overconfidence was due to the promising performance of early AI systems on simple examples in almost all cases; these early systems turned out to fail miserably when tried on wider selections of problems and on more difficult problems.

During this stage, AI's development faced three kinds of difficulty. The first was that early AI programs knew nothing of their subject matter and succeeded only through simple syntactic manipulations. Machine translation failed for this reason when, in the wake of the Sputnik launch, Americans wanted exact translations of Russian text: the absence of background knowledge was the primary cause of failure. The famous retranslation of 'the spirit is willing, but the flesh is weak' as 'the vodka is good, but the meat is rotten' illustrates the difficulties encountered.

This failure resulted in the introduction of knowledge-based systems. Up to this stage, problem solving had been general-purpose (so-called weak methods). The alternative approach was to use more powerful, domain-specific knowledge to handle complex tasks. For this purpose, knowledge-based systems such as DENDRAL (inferring molecular structure from the information provided by a mass spectrometer) and MYCIN (using certainty factors to manage the uncertainty inherent in medical diagnosis) were introduced.
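MYCIN attached a certainty factor to each rule and combined factors as evidence accumulated. The widely published combination rule for two pieces of positive evidence is CF = CF1 + CF2·(1 − CF1); the sketch below illustrates that rule only, and is not MYCIN's actual implementation.

```python
def combine_cf(cf1, cf2):
    """Combine two positive MYCIN-style certainty factors.

    Each factor lies in (0, 1]; combined evidence grows toward,
    but never exceeds, full certainty (1.0). Illustrative sketch of
    the published rule for same-sign positive evidence.
    """
    return cf1 + cf2 * (1 - cf1)

# Two independent pieces of evidence, each moderately certain:
cf = combine_cf(0.6, 0.5)  # 0.6 + 0.5 * 0.4 = 0.8
```

Note that the rule is order-independent: combining 0.5 with 0.6 gives the same result, which is what one wants when evidence arrives in no particular sequence.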

The introduction of knowledge-based systems led to more specific expert systems. DEC (Digital Equipment Corporation) developed more than 100 expert systems for commercial purposes, saving an estimated $40 million per year. Within a short time, nearly every major US corporation had its own AI group and was either using or investigating expert systems. In 1981, the Japanese announced the 'Fifth Generation' project, a ten-year plan to build intelligent computers; in response, the US formed the Microelectronics and Computer Technology Corporation (MCC). Both efforts targeted the major elements of AI systems. Overall, during this stage, hundreds of companies were building expert systems, vision systems, and robots based on AI. But most companies failed to deliver what they had promised and stepped back, and the period that followed became known as the AI winter. In the next two stages, scientific methods were applied to neural networks (perceptrons) and to intelligent agents, but these did not at first bring any meaningful breakthrough.


Throughout the history of AI and computing, the major focus was on algorithms, but in recent years the study of data has gained more importance. Researchers such as Yarowsky, Hays, and Efros argue that for bringing cognition and intelligence to a machine's behaviour, very large data sets can matter more than the choice of algorithm. Hays and Efros demonstrated this with the problem of filling in a hole in a photograph: suppose one uses Photoshop to mask out an ex-friend from a group photo, and now needs to fill the masked area with something that matches the background. They defined an algorithm that searches through a collection of photos to find a matching region, and found that its performance was poor with a collection of only ten thousand photos but crossed a threshold into excellent performance when the collection grew to two million photos.
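The data-over-algorithm point can be shown with a toy version of the same idea: a fixed, simple nearest-neighbour matcher whose results improve purely because the candidate pool grows. Everything below (the random "patches", the distance function) is an illustrative stand-in, not the Hays and Efros method itself.

```python
import random

def best_match(target, collection):
    """Return the patch in `collection` closest to `target`
    under sum-of-squared-differences distance."""
    def dist(patch):
        return sum((a - b) ** 2 for a, b in zip(target, patch))
    return min(collection, key=dist)

def err(target, patch):
    """Sum-of-squared-differences between a patch and the target."""
    return sum((a - b) ** 2 for a, b in zip(target, patch))

random.seed(0)
target = [0.25, 0.75, 0.5]  # pixels around the masked hole (toy stand-in)
small = [[random.random() for _ in range(3)] for _ in range(10)]
large = small + [[random.random() for _ in range(3)] for _ in range(10_000)]

# Same algorithm, more data: the best match drawn from the larger
# pool is never worse than the best match from the smaller one.
small_err = err(target, best_match(target, small))
large_err = err(target, best_match(target, large))
```

Because the large pool contains the small one, the larger search can only tie or improve the match, which is the threshold effect in miniature.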

All these stages in the evolution of AI suggest that cognition and intelligence in machines are no longer in question; what remains uncertain is the quality and extent of that cognition. Only time will tell how far it can go.
