Artificial Intelligence (AI) was officially born in 1956, the year of a landmark summer conference held at Dartmouth College in Hanover, New Hampshire, USA. It was on this occasion that the new discipline was founded.
The scientific community agrees on this date because it was the first time the term "Artificial Intelligence" was coined. Yet the origins of AI can also be traced to the contributions of researchers in earlier years.
Any history of AI must begin in the 1940s, with the Second World War and the advent of the first computers. It was then that the human-machine analogy emerged: the idea that human intelligence could be simulated by machines, an essential precondition for the development of AI. One of the greatest thinkers to explore this theme in that period was Alan Turing.
Alan Turing's pioneering contribution
While earlier centuries of scientific and mathematical study had created the conditions for investigating human intelligence and its possible artificial reproduction, it was only with the advent of the first electronic computers in the 1940s that this interest took a concrete path. A further decisive step came with the work of Alan Turing.
The British scientist Alan Turing is considered one of the fathers of modern computer science and was among the first to take an interest in the subject. His work was not only theoretical but also had very important applications, especially in the field of cryptography. Turing made a fundamental contribution to the success of the Allies in the Second World War: thanks to his collaboration with the British government, the cipher of the Enigma machine, used for German military transmissions, was cracked.
With "On Computable Numbers, with an Application to the Entscheidungsproblem", written in 1936, he laid the foundations for concepts such as calculability, computability and the Turing machine. The Turing machine is not a physical device but a theoretical model of a machine capable of executing any computable procedure. With it, what we today call software was introduced for the first time in history.
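To give a flavor of the model, here is a minimal sketch of a Turing machine simulator. The states, symbols and transition table are purely illustrative assumptions for this example (a machine that increments a binary number), not the construction from Turing's 1936 paper:

```python
# Sketch of a Turing machine: a finite control, an unbounded tape, and a
# transition table mapping (state, symbol) -> (symbol to write, head move,
# next state). Illustrative only; not Turing's original construction.

def run(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment a binary number. The head starts on the
# leftmost digit, scans right to the end, then adds 1 with a leftward carry.
INC = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run("1011", INC))  # 1011 (11) + 1 -> prints 1100 (12)
```

The key point, and the germ of the idea of software, is that the machine's behavior is entirely determined by the transition table, a piece of data that can be swapped out without changing the machine itself.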
In 1950, Turing wrote the article “Computing machinery and intelligence”, in which he described what would become known as the “Turing Test”. The purpose of this test, also known as the “Imitation game”, was to evaluate the presence or absence of “human” intelligence in a machine. The test involved the presence of a judge in front of a terminal, through which he could communicate with two entities: a man and a computer. If the judge could not distinguish the man from the machine, then the computer had passed the test and could be defined as “intelligent”.
Turing's work (he died in 1954, two years before the event) played a fundamental role in shaping the Dartmouth Conference.
The Dartmouth Conference
In August 1955 a document known as the "Dartmouth proposal" was drawn up by four academics. It surveyed the main research themes of the period, including neural networks, computability theory, creativity, and natural language processing and recognition. These topics would be discussed the following summer.
The document begins with the statement:
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
The following summer, the group of scholars met at Dartmouth College for the "Dartmouth Summer Research Project on Artificial Intelligence". 1956 thus marks the official beginning of a new field of research, which the mathematician John McCarthy, a professor at Dartmouth and the main promoter of the event, proposed to call Artificial Intelligence.
The other organizers were all leading figures of the time: Marvin Minsky, a researcher in mathematics and neurology at Harvard; Nathaniel Rochester, director of information research at an IBM research center; and Claude Shannon, a mathematician already famous for information theory, then at Bell Telephone Laboratories.
The event, funded by the Rockefeller Foundation, had the character of a brainstorming session, that is, an open debate with little structure, aimed at understanding whether it was possible for machines to perform like humans. The challenge that drove the group was precisely the attempt to demonstrate that every aspect of learning and every characteristic of human intelligence could be simulated by a machine.
What significance should we attribute to the Dartmouth Conference?
Many different questions and ideological positions emerged from the discussions: on one side a materialistic vision of the human mind, on the other a conception that sought to highlight the peculiar and irreducible traits of humans compared to any machine.
Another problem was the machinery of the time, which lacked adequate computational capacity. As a result, the conference did not go as planned and its results were somewhat inconclusive. Furthermore, several of the invitees named by McCarthy did not show up, and in the end there were only ten participants.
Still, the historical importance of the conference is unquestionable. Many experts met at Dartmouth for the first time, and the meeting proved highly inspiring: for a long period afterwards, the main achievements in the field of AI were obtained by these same scientists or by their students.
It was therefore after the Dartmouth Conference that AI became a research field in its own right, however controversial, and from that moment on it progressed ever more rapidly.