Artificial intelligence (AI) has made its way into the public consciousness thanks to the emergence of powerful new AI chatbots and image generators. But the field has a long history that stretches back to the dawn of computing. Given how fundamentally AI could change the way we live in the coming years, understanding the roots of this fast-growing field is critical. Here are 12 of the most important milestones in AI history.
1950 – Alan Turing’s seminal paper on AI
The famous British computer scientist Alan Turing published a paper titled “Computing Machinery and Intelligence,” one of the first detailed studies of the question “Can machines think?”.
Answering this question requires first tackling the challenge of defining “machine” and “thinking,” so instead Turing proposed a game: an observer would read a conversation between a machine and a human and try to determine which was which. If the observer couldn’t do so reliably, the machine won the game. Although passing it doesn’t prove that a machine “thinks,” the Turing Test, as it has come to be known, has been an important yardstick for AI progress ever since.
1956 — The Dartmouth Workshop
AI as a scientific discipline can trace its roots back to the Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in 1956. Participants included a who’s who of influential computer scientists, including John McCarthy, Marvin Minsky, and Claude Shannon. It was the first time the term “artificial intelligence” was used, as the group spent almost two months discussing how machines could simulate learning and intelligence. The meeting spurred serious research into artificial intelligence and laid the groundwork for many of the discoveries that came in the following decades.
1966 — The first AI chatbot
MIT researcher Joseph Weizenbaum introduced ELIZA, the first-ever AI chatbot. The software was rudimentary, returning canned responses based on keywords found in the user’s input. Nevertheless, when Weizenbaum programmed ELIZA to act as a psychotherapist, people were amazed at how convincing the conversations were. The work drove growing interest in natural language processing, including from the US Defense Advanced Research Projects Agency (DARPA), which provided significant funding for early AI research.
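To give a sense of how simple the underlying mechanism was, here is a minimal sketch of an ELIZA-style keyword matcher in Python. The keywords and canned responses are invented for illustration and are not Weizenbaum’s original script.

```python
import random

# Illustrative keyword-to-response rules in the spirit of ELIZA's
# psychotherapist script; these are not Weizenbaum's original rules.
RULES = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "sad": ["Why do you think you feel sad?", "How long have you felt this way?"],
    "always": ["Can you think of a specific example?"],
}
DEFAULT = ["Please go on.", "How does that make you feel?"]

def eliza_reply(user_input: str) -> str:
    """Return a canned response triggered by the first matching keyword."""
    text = user_input.lower()
    for keyword, responses in RULES.items():
        if keyword in text:
            return random.choice(responses)
    return random.choice(DEFAULT)

print(eliza_reply("I am always sad about my mother"))
```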
1974-1980 — The First “AI Winter”
It wasn’t long before the early enthusiasm for AI began to fade. The 1950s and 1960s were a fertile time for the field, but in their enthusiasm, leading experts made bold claims about what machines would be able to do in the near future. The failure of the technology to live up to these expectations led to growing discontent. A highly critical report on the field by British mathematician James Lighthill prompted the UK government to cut almost all funding for AI research. DARPA also drastically reduced funding around this time, leading to what would become known as the first “AI winter.”
1980 – “Expert Systems” craze
Despite disillusionment with AI in many quarters, research continued, and in the early 1980s the technology began to attract the attention of the private sector. In 1980, researchers at Carnegie Mellon University built an AI system called R1 for Digital Equipment Corporation. The program was an “expert system,” an approach to AI that researchers had been experimenting with since the 1960s. These systems apply logical rules to large databases of specialized knowledge in order to reason through problems. R1 saved the company millions of dollars a year and kicked off a boom in the industrial deployment of expert systems.
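The flavor of that rule-based reasoning can be sketched in a few lines of Python. The rules and facts below are invented purely for illustration; R1’s real knowledge base was far larger and specific to configuring Digital’s computer orders.

```python
# Toy forward-chaining "expert system": repeatedly apply if-then rules
# to a set of known facts until no new conclusions can be drawn.
# Rules and facts are invented for illustration only.
RULES = [
    ({"order_includes_cpu", "no_cabinet_specified"}, "add_default_cabinet"),
    ({"add_default_cabinet"}, "add_power_supply"),
    ({"add_power_supply", "order_includes_disk"}, "add_disk_controller"),
]

def forward_chain(facts: set) -> set:
    """Fire every rule whose conditions are satisfied until nothing changes."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"order_includes_cpu", "no_cabinet_specified", "order_includes_disk"}))
```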
1986 — Fundamentals of Deep Learning
Most research up to that point had focused on “symbolic” AI, which relied on hand-crafted logic and knowledge databases. But since the birth of the field there had also been a competing stream of research into “connectionist” approaches inspired by the brain. This work continued quietly in the background and finally came to prominence in the 1980s. Instead of programming systems by hand, these techniques get “artificial neural networks” to learn rules by training on data. In theory, this leads to more flexible AI that isn’t limited by its designers’ preconceptions, but training neural networks proved challenging. In 1986, Geoffrey Hinton, who would later be called one of the “godfathers of deep learning,” published a paper popularizing “backpropagation,” the learning technique that underpins most AI systems today.
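In rough terms, backpropagation measures how much each connection in the network contributed to the output error and nudges it in the direction that reduces that error. The NumPy sketch below trains a tiny two-layer network on the classic XOR problem using hand-written backpropagation; the layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not anything taken from Hinton’s paper.

```python
import numpy as np

# Tiny two-layer network trained on XOR with hand-written backpropagation.
# Layer sizes, learning rate, and iteration count are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: how much did each weight contribute to the error?
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(output.round(2).ravel())  # should end up close to [0, 1, 1, 0]
```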
1987-1993 — Second AI winter
Drawing on their experience of the 1970s, Minsky and fellow AI researcher Roger Schank warned that hype around AI had reached unsustainable levels and that the field was in danger of another retreat. They coined the term “AI winter” in a panel discussion at the 1984 meeting of the Association for the Advancement of Artificial Intelligence. Their warning proved prescient, and by the late 1980s the limitations of expert systems and their specialized AI hardware had become apparent. Industry spending on AI plummeted, and most AI startups went bankrupt.
1997 – Deep Blue defeated Garry Kasparov
Despite repeated booms and busts, AI research made steady progress in the 1990s, largely out of the public eye. That changed in 1997, when Deep Blue, a chess-playing computer built by IBM, beat world chess champion Garry Kasparov in a six-game match. Skill at complex games had long been regarded by AI researchers as a key marker of progress, so beating the best human player in the world was seen as an important milestone and made headlines around the world.
2012 — AlexNet ushers in the era of deep learning
Despite a wealth of academic work, neural networks were long considered impractical for real-world applications. To be useful, they needed many layers of neurons, but implementing large networks on conventional computer hardware was prohibitively slow. In 2012, Alex Krizhevsky, a PhD student of Hinton’s, won the ImageNet computer vision competition by a wide margin with a deep learning model called AlexNet. The secret was to use specialized chips called graphics processing units (GPUs), which could efficiently train much deeper networks. This kicked off the deep learning revolution that has driven most AI advances since.
2016 – AlphaGo defeats Lee Sedol
While AI had already left chess in the rearview mirror, the much more complex Chinese board game Go remained a challenge. But in 2016, Google DeepMind’s AlphaGo defeated Lee Sedol, one of the best Go players in the world, in a five-game series. Experts had assumed such an achievement was years away, so the result fueled growing excitement about the pace of AI progress. That was partly due to the general-purpose nature of the algorithms underlying AlphaGo, which rely on an approach called “reinforcement learning,” in which AI systems effectively learn through trial and error. DeepMind later expanded and improved on the approach with AlphaZero, which can teach itself to play a wide variety of games.
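Reinforcement learning long predates AlphaGo, and DeepMind’s systems are vastly more sophisticated, but the trial-and-error idea can be illustrated with simple tabular Q-learning. The toy environment below, a five-cell corridor with a reward at the right end, and all of its parameters are invented for this sketch.

```python
import random

# Toy corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields a reward of 1 and ends the episode.
# Environment, learning rate, discount, and exploration rate are illustrative.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-value table: expected reward per state/action
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Trial and error: mostly exploit the best known action, sometimes explore at random
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Update the estimate of how good this action was, given what followed
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# Learned greedy policy: should be "move right" (1) in every non-goal state
print([max((0, 1), key=lambda a: q[s][a]) for s in range(GOAL)])
```

AlphaGo combined this kind of learning-from-reward with deep neural networks and tree search; the little table above is simply the smallest possible illustration of the trial-and-error principle.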
2017 – Invention of the transformer architecture
Despite significant advances in computer vision and game playing, deep learning was slower to progress on language tasks. Then, in 2017, Google researchers published a new neural network architecture called the “transformer,” which can ingest massive amounts of data and make connections between distant data points. This proved particularly useful for the complex task of language modeling and made it possible to create AIs that can handle a variety of tasks, such as translation, text generation, and document summarization. All of today’s leading AI models rely on this architecture, including image generators such as OpenAI’s DALL-E, as well as Google DeepMind’s revolutionary protein-folding model AlphaFold 2.
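At the core of the transformer is an operation called attention, which lets every position in a sequence weigh its relationship to every other position, no matter how far apart they are. Below is a minimal NumPy sketch of the scaled dot-product attention described in the 2017 paper; the tiny random inputs and sizes are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; distant positions are weighed as easily as nearby ones."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # similarity between every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # blend values according to the weights

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                                # illustrative sizes
x = rng.normal(size=(seq_len, d_model))                # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                                       # (6, 8): one updated vector per position
```

Stacking many such attention layers, with learned projections and feed-forward layers in between, is what lets transformers capture long-range structure in text.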
2022 – Launch of ChatGPT
On November 30, 2022, OpenAI released a chatbot powered by its GPT-3.5 large language model. Known as “ChatGPT,” the tool became a global sensation, amassing more than a million users in less than a week and an estimated 100 million within two months. It was the first time many members of the public could interact with the latest AI models, and most were amazed. The service set off the current AI boom, which has seen billions of dollars invested and numerous rivals launched by big tech companies and startups, and it prompted an open letter from prominent tech leaders calling for a pause in AI research to allow time to assess the technology’s implications.