Thought-capable artificial beings appeared as storytelling devices in antiquity,24 and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).25 These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.19

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.26 Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that “if a human could not distinguish between responses from a machine and a human, the machine could be considered ‘intelligent’”.27 The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.28
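A McCulloch–Pitts unit can be read as a simple binary threshold gate: it fires only when enough excitatory inputs are active and no inhibitory input is. The minimal Python sketch below is illustrative only (the function name, gate definitions and threshold values are modern conveniences, not notation from the 1943 paper); it shows how such units realize elementary logic operations, which is what let networks of them express formal reasoning.

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    """A McCulloch–Pitts unit: binary inputs, absolute inhibition.

    Fires (returns 1) only if no inhibitory input is active and the number
    of active excitatory inputs reaches the threshold.
    """
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0


# Elementary logic gates built from single units (illustrative parameters,
# not taken from the original paper).
AND = lambda a, b: mcp_neuron([a, b], [], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], [], threshold=1)
NOT = lambda a: mcp_neuron([], [a], threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```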
The field of AI research was born at a workshop at Dartmouth College in 1956.29 Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.30 They and their students produced programs that the press described as “astonishing”:31 computers were learning checkers strategies (c. 1954)32 (and by 1959 were reportedly playing better than the average human),33 solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.34 By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense35 and laboratories had been established around the world.36 AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved”.7

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill37 and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,9 a period when obtaining funding for AI projects was difficult.