The History of Artificial Intelligence

Artificial intelligence is no longer a dream. It’s a technological marvel that’s reshaping the world in ways we could not have imagined before.
The field took time to mature. The real impact of artificial intelligence (AI) on industry only took off in recent years, thanks to modern computing power. Now, AI is a driving force behind major technologies all over the world.
Examples include Google’s search engine, virtual assistants (Siri, Alexa, Cortana, and others), content recommendation (the Facebook news feed, YouTube’s “Recommended” videos, etc.), data science, and autonomous cars.
It’s an amazing achievement for us humans to have such an immense implementation of artificial intelligence. With all the ongoing research and advancements in AI, it’s bound to play an even bigger role in the future.
How about checking out the background of AI? How did it all begin? What’s the current situation? Where are we headed?
A long long time ago…
The concept of a machine doing work just like a human isn’t brand new. In fact, philosophers imagined such beings long before the modern era. The ancient Greek myths, for example, incorporated the idea of intelligent automatons like Talos and artificial beings like Galatea and Pandora.
Even various sacred mechanical statues built in Egypt and Greece were believed to have wisdom and emotion.
The precursor to Artificial Intelligence (1950)
Alan Turing was an important figure in the years before artificial intelligence was even a concept. He was known for developing techniques to decipher the encrypted messages the Germans used during the Second World War.
In 1950 he proposed the Turing Test. In the test, a human judge holds written conversations and tries to determine whether the other party is a person or a computer. The test is repeated multiple times. If the judge guesses wrong about half the time or more, i.e. cannot do better than chance, the Turing Test is passed, and it is concluded that the machine exhibits a form of artificial intelligence.
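The pass criterion can be sketched as a simple scoring rule. This is only a toy illustration, not Turing’s original formulation; the `turing_test` function, the judges, and the transcript below are all made up for the example:

```python
def turing_test(judge, transcript):
    """Toy scoring of the imitation game.

    transcript: list of (reply, is_machine) pairs shown to the judge.
    judge: function that guesses, for a given reply, whether a machine wrote it.
    The machine passes if the judge is wrong at least half the time,
    i.e. cannot do better than chance.
    """
    wrong = sum(judge(reply) != is_machine for reply, is_machine in transcript)
    return wrong / len(transcript) >= 0.5

# Half the replies come from the machine, half from a human.
transcript = [("Hello!", True), ("Hi there.", False),
              ("I think so.", True), ("Probably.", False)]

# A judge who can always tell the machine apart: the machine fails the test.
perfect_judge = dict(transcript).__getitem__
# A judge who always guesses "human" is wrong on every machine reply
# (half the rounds here), so the machine passes.
confused_judge = lambda reply: False
```

In practice, of course, the judge is a person and the conversations are open-ended; the sketch only captures the “no better than chance” threshold.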
Things are getting serious (1950s to 1970s)
In 1951, the first program to successfully play checkers ran on the Ferranti Mark 1 machine. Allen Newell, J. C. Shaw, and Herbert Simon developed the Logic Theorist, a program built to prove mathematical theorems, and later generalized the approach into the General Problem Solver. The Logic Theorist project was funded by the RAND (Research and Development) Corporation, and the program is often considered one of the first AI programs.
An important event took place in 1956: the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky, where the Logic Theorist was presented. This major event helped shape the following 20 years of AI research and development.
During the 1960s, researchers focused on developing algorithms for solving mathematical problems and proving geometric theorems. In the late 1960s, scientists started working on other areas, such as machine vision, and on integrating machine learning into robots. The first “intelligent” robot, WABOT-1, was developed in Japan in 1972.
AI Winters (1970s to 1990s)
Despite the global effort and well-funded projects running for decades, creating AI proved to be incredibly difficult. Part of the reason was the computers: computer technology at the time was far weaker than today’s. Storing and processing data required an amount of power that simply wasn’t available. In the early 1950s, leasing a computer could cost up to $200,000 a month, a price almost no one could afford. Thus, only prestigious universities and giant tech companies could afford to explore these uncharted waters.
That’s why, from the mid-1970s to the mid-1990s, there was an acute shortage of funding for AI research. These years became known as the “AI winters”.
New beginning (late 1990s)
From 1957 all the way up to 1974, AI progressed remarkably. Computers could store more data and process it faster, and hardware kept getting cheaper and thus more accessible. Thanks to continuous research, machine learning algorithms improved considerably, and researchers learned which algorithm to apply to which problem.
Early demonstrations like the General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise for problem-solving and interpreting spoken language. Leading researchers in the field (including attendees of the DSRPAI) successfully persuaded government agencies such as DARPA (the Defense Advanced Research Projects Agency) to fund AI research at several institutions. The government was mostly interested in machines that could transcribe and translate spoken language and process data at high throughput.
In the 1980s, American corporations became interested in AI once again. Around the same time, the Japanese government unveiled its Fifth Generation Computer Project to advance machine learning. AI enthusiasts anticipated that computers would soon be able to carry on conversations, interpret images, translate languages, and even reason like humans.
However, computer technology was still a major limiting factor. The storage and processing capabilities of computers were still not sufficient to push through AI’s initial fog. According to McCarthy, computers were still far too weak to exhibit intelligence.

Then the picture changed. Computer hardware was progressing fast, so it became possible to meet those demands without fundamentally changing how artificial intelligence was coded. The first formidable achievement came in 1997, when IBM’s Deep Blue defeated Garry Kasparov, the then-reigning world chess champion. More recently, Google’s AlphaGo defeated the Chinese Go champion Ke Jie.
Some AI funding dried up in the early 2000s when the dot-com bubble burst. Yet AI and machine learning kept progressing through it, thanks to the continuing advancement of computer hardware.
Current age (early 2000s to 2019)
Now, computers’ processing capabilities are growing exponentially, alongside immense storage capacity. This makes it easy to store vast amounts of data and crunch it as much as necessary. With these capabilities, we now enjoy AI more than anyone could have imagined.
Let’s check out the places where AI is currently used.
Business
Over the past decades, giant companies like Google, Amazon, and Baidu have leveraged machine learning to become the titans of the tech world. For example, the AI powering Google’s search engine ensures you get exactly the information you’re looking for. Amazon’s AI drives its business, and you can get an Echo device powered by Alexa as your personal assistant.
Using machine learning, such large businesses can identify their customers’ behavior patterns. They can then fine-tune their services to each individual customer.
AI can also dramatically reduce the cost of maintaining services. Through automation, complex operations no longer require constant human intervention. Since the process is run entirely by software, both errors and the time spent on maintenance and other critical operations can be reduced.
Using AI, it’s also possible to predict hardware failures. In large industries, any hardware failure can be a disaster: repair costs combined with production downtime add up dramatically. With the help of machine learning, such failures can be predicted and the necessary preventive steps taken.
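As a hedged illustration only, the simplest form of such failure prediction is flagging a sensor reading that drifts far from its recent baseline. Real systems use trained models over large datasets; the `failure_risk` helper and the temperature numbers below are invented for the sketch:

```python
def failure_risk(readings, window=5, threshold=2.0):
    """Toy predictive-maintenance check: return True if the latest reading
    deviates from the mean of the previous `window` readings by more than
    `threshold` standard deviations."""
    history = readings[-window - 1:-1]
    latest = readings[-1]
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1e-9  # avoid division by zero on a flat history
    return abs(latest - mean) / std > threshold

normal = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1]   # stable temperatures
spiking = [70.1, 70.3, 69.9, 70.2, 70.0, 85.4]  # sudden spike: inspect!
```

A production system would replace this threshold rule with a model trained on historical failure data, but the goal is the same: catch the anomaly before the machine breaks.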
Games
Who doesn’t love gaming? Defeating all your enemies with superior skill and years of experience is a fantastic feeling, and games can be a great workout for the brain. The gaming industry has boomed over the past decade, thanks in part to the power of AI.
Games have been using AI for quite a long time. For example, in League of Legends or Dota 2 you have the option of playing against AI. Offline games like Crysis, Call of Duty, StarCraft, Far Cry, and Need for Speed, and indeed every game with smart enemy behavior, use AI for the computer-controlled opponents.
The AI enemies in these games aren’t that difficult to beat, but that doesn’t mean AI is a weak opponent. In fact, it can beat the best of the best: as previously mentioned, Google’s AlphaGo and IBM’s Deep Blue mastered Go and chess, respectively.
Take the example of OpenAI, which caused a huge stir in the gaming industry by building the first AI to defeat world champions in Dota 2. Dota 2 is a massively complex game that requires a huge investment of both time and focus to master. Yet OpenAI’s Dota 2 bot crushed a professional human team.
Medical field

Using the power of machine learning, the medical field is being revolutionized. From medicine to cures for complex diseases, every part of the field is benefiting from AI.
Using machine learning, AI can help diagnose a patient and figure out which disease or issue is present. With enough data and training, AI can offer cheaper and faster diagnosis of critical diseases.
In surgery, AI-assisted robotic systems are showing real promise. Robots can analyze data from pre-operative medical records and guide the surgeon with the necessary instructions. This can lead to fewer surgical complications and, reportedly, a 21% reduction in a patient’s hospital stay. Such surgery is regarded as “minimally invasive”, requiring the smallest possible incisions.
Other implementations
AI is being used in many other places. Besides marketing, AI is powering entertainment, banking, and medicine. In most cases, the target results can be achieved without changing the algorithms much, simply by feeding the AI big data and massive computation; modern hardware has finally become fast enough to make that possible.
The future
So, what does the future hold? Big things are on the doorstep, and some pretty big projects are already underway.
Take autonomous cars. Big companies like Apple, Tesla, BMW, Baidu, Samsung, Uber, and Volkswagen are all working to make driverless cars possible. Of course, some driverless cars are already out there, but they’re still at a very early stage. We may have to wait years before they become a stable, trustworthy solution. Can you imagine hailing a driverless taxi from Uber?
How about having a conversation with machines? There are already bots that can call people and hold simple conversations. The technology will likely become far more advanced: you could chat and talk with your AI friends anytime you want.
The overall goal of AI research is achieving general intelligence: machines matching or surpassing human cognitive abilities across all tasks. Theoretically, that may well be possible; it could be a matter of investing an immense amount of hardware power and enough training time. But do we need it? Ethically, it would create conflict, and in reality such AI would cause problems for us humans: those who control it would wield immense power. Sci-fi films love to show AI taking over mankind, and given AI’s potential, that’s not just science fiction; it is a very conceivable scenario.
Even if the capability arrives, we need a serious conversation about machine policy and ethics (ironically, both thoroughly human subjects). For now, AI is free to steadily improve and spread through society.