Today AI is everywhere, and it seems like everyone is talking about it. From shaping our content feeds to helping us take better photos or find information, AI has grown into a massive industry with the help of countless engineers, programmers, and data scientists. But it wasn't always like this. Like every story, AI has a beginning.

The history of contemporary AI can be traced to classical philosophers' attempts to understand human thinking as a symbolic, mechanistic system. However, the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College where the term "artificial intelligence" was coined.

The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

And so the golden age of AI began. The years 1956–1974 saw many milestones: computers started solving algebra problems, proving theorems in geometry, and learning to speak English. Many of these projects were funded by government programs and universities.

In 1964, the MIT Artificial Intelligence Laboratory began developing ELIZA – the world's first chatbot, which could carry on seemingly intelligent text conversations with users. ELIZA was a success, and its creator, Joseph Weizenbaum, noted at the time how surprised he was that users were so willing to communicate with a machine in this way. In fact, many of ELIZA's users could not believe that a machine, and not a person, was responding to them.

Following this period of rapid innovation and research, the first AI winter descended on the field. In the 1970s, thanks to the boundless optimism of AI researchers and the wildly inflated expectations it produced, the field was heavily criticized and suffered financial setbacks. When the promised results failed to materialize, funding for AI research slowly dried up.

This all changed once again in the 1980s, when the business world began showing increased interest in AI's potential to manage knowledge. A form of AI program called the "expert system" was adopted by corporations around the world, and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI research through its Fifth Generation Computer Systems project, seeking to create the first truly intelligent supercomputers.

Despite further setbacks, criticisms, and doubts, the field continued to advance. During the late 1990s, web crawlers and other AI-based information-extraction programs became essential to the widespread use and adoption of the World Wide Web. One of the most famous examples of AI's success came in 1997, when IBM's Deep Blue supercomputer won a chess match against the world chess champion Garry Kasparov.

Another exceptional example of game-playing AI is Google DeepMind's AlphaGo – a computer program that plays the board game Go. In 2015, AlphaGo became the first Go program to beat a human professional player. This achievement is remarkable in the history of AI because of the staggering mathematical complexity of Go. For comparison, a single chess position offers about 20 possible moves, while in Go that number is closer to 200, and unlike chess, Go is said to rely more on intuition than on logic.
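To get a rough sense of that scale, you can compare the naive game-tree sizes of the two games, which grow as the branching factor raised to the length of the game. The branching factors (about 20 and 200) come from the comparison above; the typical game lengths used here (roughly 80 moves for chess, 150 for Go) are illustrative assumptions for this back-of-the-envelope sketch, not exact measurements:

import math

def game_tree_log10(branching_factor, game_length):
    # log10 of the naive game-tree size: branching_factor ** game_length
    return game_length * math.log10(branching_factor)

# Illustrative assumptions: ~20 moves per position and ~80-move games for chess,
# ~200 moves per position and ~150-move games for Go.
print(f"Chess: roughly 10^{game_tree_log10(20, 80):.0f} possible games")
print(f"Go:    roughly 10^{game_tree_log10(200, 150):.0f} possible games")

Even with these crude numbers, the gap is enormous – on the order of 10^104 possible chess games versus 10^345 for Go – which is why brute-force search alone was never going to conquer Go.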

The start of the 2010s saw the mainstream adoption of AI in consumer technology. AI voice assistants – Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) – took the world by storm and are still being continuously developed and refined, gaining new features and upgrades.

Today AI is used in a wide range of fields, including medical diagnosis, stock trading, robot control, law, scientific discovery, and toys. Given the technology's enormous success, it seems only a matter of time before it spreads into every aspect of our lives – and there is no telling what exciting things the future will bring.
