Artificial intelligence is no longer a dream. It's a technological marvel that is reshaping the world in ways we could not have imagined before.

The field took time to mature. The real industrial impact of AI (artificial intelligence) only took off a few years ago, driven by modern computing power. Now, AI is the driving energy behind major forces all over the world.

Examples include Google's all-powerful search engine, virtual assistants (Siri, Alexa, Cortana, and others), content recommendations (the Facebook news feed, YouTube's "Recommended" videos, and so on), data science, and self-driving cars.

It's a remarkable achievement for us humans to see artificial intelligence deployed at such scale. With all the ongoing research and advances in AI, it is guaranteed to appear in even more places in the future.

So how about checking out the background of AI? How did it all begin? What's the current situation? Where are we headed?

A long, long time ago...

The concept of a machine doing work just like a human isn't brand new. In fact, long before the modern era, philosophers imagined such beings. The ancient Greek myths, for example, included intelligent robots like Talos and artificial beings like Galatea and Pandora.

Even various sacred mechanical statues built in Egypt and Greece were believed to possess wisdom and emotion.

The precursor to Artificial Intelligence (1950)

Alan Turing was an important figure in the years before Artificial Intelligence was even a named field. He was known for developing techniques to decipher the encrypted messages the Germans used during the Second World War.

In 1950 he proposed the Turing Test. In the test, a human interrogator chats in written form and has to determine whether they are talking to a person or a computer. The test is repeated multiple times.
If the interrogator guesses wrong about as often as right (in other words, cannot reliably tell the machine from a human), the machine is said to have passed the Turing Test, and it is concluded that it exhibits a form of Artificial Intelligence.

Things are getting serious (1950s to 1970s)

In 1951, one of the first game-playing programs emerged: a checkers program written for the Ferranti Mark 1 computer. Allen Newell, J. C. Shaw, and Herbert Simon developed the Logic Theorist, a program that could prove mathematical theorems, in a project funded by the RAND (Research and Development) Corporation; they later followed it with the General Problem Solver, which tackled a wider range of problems. The Logic Theorist is often considered one of the first AI programs.

An important event took place in 1956: the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by Marvin Minsky and John McCarthy, where the Logic Theorist was presented. This major event helped shape the next twenty years of AI research and development.

During the 1960s, researchers focused on developing algorithms to solve mathematical problems and prove geometric theorems. Toward the end of the decade, scientists began working on other areas such as machine vision, and on integrating machine learning into robots. The first "intelligent" robot, WABOT-1, was developed in Japan in 1972.

AI Winters (1970s to 1990s)

Despite global effort and well-funded projects running for decades, building AI proved incredibly difficult. Part of the reason was the computers: the technology of the time was far weaker than today's, while storing and processing the necessary data demanded power that simply wasn't available. In the early 1950s, leasing a computer could cost up to $200,000 a month! That was something almost no one could afford.
Thus, only prestigious universities and giant tech companies could afford to work in these uncharted waters.

That's why, from the mid-1970s up to the mid-1990s, there was an acute shortage of funding for AI research. These years became known as the "AI Winters".

New beginning (late 1990s)

Before the winters set in, from 1957 to 1974, AI had progressed remarkably. Computers could store more data and process it faster, and hardware kept getting cheaper and therefore more accessible. Thanks to continuous research, machine learning algorithms improved substantially, and people learned which algorithm to apply to which problem.

Early demos like the General Problem Solver and Joseph Weizenbaum's ELIZA promised a bright future for problem-solving and interpreting spoken language. Leading researchers in the field (including attendees of the DSRPAI) successfully persuaded government agencies like DARPA (the Defense Advanced Research Projects Agency) to fund AI research at several institutions. The government was interested in machine learning mostly for transcribing and translating spoken language and for high-throughput data processing.

In the 1980s, American corporations became interested in AI once again, and the Japanese government unveiled its Fifth Generation Computer Project to advance machine learning. AI enthusiasts anticipated that computers would soon be able to carry on conversations, interpret images, translate languages, and even reason like humans.

However, computer technology was still a major limiting factor. The storage and processing abilities of computers were not yet sufficient to break through AI's initial fog. According to McCarthy, computers were simply too weak to exhibit intelligence.

Then the picture changed.
Computer technology was progressing fast, and without major changes to the AI algorithms themselves, the hardware could finally meet their needs. The first formidable achievement came in 1997, when IBM's Deep Blue defeated Garry Kasparov, the reigning world chess champion. More recently, Google's AlphaGo defeated the Chinese Go champion Ke Jie.

Some AI funding dried up in the early 2000s after the dot-com bubble burst. Yet AI and machine learning kept progressing through it, thanks to the advancement of computer hardware.

Current age (early 2000s to 2019)

Today, we have exponential growth in processing capability along with immense storage. This lets us store vast amounts of data with ease and crunch it as much as necessary. With all these facilities, we now enjoy AI more than anyone could have imagined.

Let's check out the places where AI is currently used.

Business

Over the past decades, giants like Google, Amazon, and Baidu have leveraged machine learning to become the monsters of the tech world. For example, the AI powering Google's search engine works to ensure you get exactly the information you're looking for. Amazon's AI boosts its business enormously, and you can get an Echo device powered by Alexa as your personal assistant.

Using machine learning, such large businesses can identify their customers' behavior patterns and fine-tune their services to each individual customer.

AI can also dramatically reduce the cost of maintaining services. Through automation, complex operations no longer require a human touch.
When a process is run entirely by a program, both the errors and the time needed for maintenance and other critical operations go down.

Using AI, it's also possible to predict hardware failure. In a giant industrial operation, any hardware failure can be a disaster: the repair cost plus the production downtime adds up dramatically. With the help of machine learning, such failures can be predicted and the necessary preventive steps taken in advance.

Games

Who doesn't love gaming? Fighting your enemies and defeating them with superior skill and years of experience is a fantastic feeling. Games are also a great workout for the brain, and the gaming industry has boomed over the past decade, thanks in part to the power of AI.

Games have used AI for quite a long time. For example, when you queue for a match in League of Legends or Dota 2, you have the option of fighting against AI. Offline games like Crysis, Call of Duty, Starcraft, Far Cry, and Need for Speed, and indeed every game with smart enemy behavior, use AI to control the opposing players.

The AI enemies in these games aren't that difficult to beat. But that doesn't mean AI is a weak opponent; it can beat the best of the best. As mentioned earlier, Google's AlphaGo and IBM's Deep Blue mastered Go and chess respectively.

Take OpenAI as an example. It created a huge ruckus in the gaming industry as the first AI to defeat world champions at Dota 2, a massively complex game that demands a huge investment of both time and focus to master. OpenAI's own Dota 2 bot simply crushed the human team.

Medical field

The power of machine learning is revolutionizing the medical field.
From medicine to cures for complex diseases, every part of the field is benefiting from AI.

Using machine learning, AI can diagnose a patient and figure out which disease or issue is present. With enough data and training, AI can offer a cheaper and faster diagnosis of critical diseases.

In surgery, AI-assisted robotic surgery is showing real promise. Robots can analyze data from pre-op medical records and guide the surgeon with the necessary instructions. This can lead to fewer surgical accidents and, reportedly, a 21% reduction in a patient's hospital stay. Such surgery is regarded as "minimally invasive", meaning it requires the fewest possible incisions.

Other implementations

AI is used in many other places. Besides marketing, it powers entertainment, banking, and medicine. In most cases, without changing the algorithm much, it's possible to achieve the target result just by feeding the AI big data and massive computation; how long that approach keeps scaling as Moore's law slows down remains to be seen.

The future

So, what will the future hold? Big things are coming, and we're only steps away from some of them. Some pretty big stuff is already underway.

Take autonomous cars. Big companies like Apple, Tesla, BMW, Baidu, Samsung, Uber, and Volkswagen are all working to make driverless cars possible. Of course, some driverless cars are already out there, but they're still at a very early stage, and we'll have to wait a few more years for a stable, trustworthy solution. Can you imagine hailing a driverless taxi from Uber?

How about having a conversation with machines? There are already bots that can call people and hold a simple conversation, and the technology is likely to become far more advanced in the future.
You will be able to chat and talk with your AI friends all you want, anytime you want.

The overall target of AI research is general intelligence: machines matching or surpassing human cognitive abilities across all tasks. Theoretically, that's very much possible; in principle, it requires investing an immense amount of hardware power and enough time for the AI to train. But do we need it? Ethically, it would cause conflict, and in reality such AI could create real problems for us humans. Whoever controls that kind of AI would have the ability to do immense things. Plenty of sci-fi films show AI taking over mankind, and given AI's potential, that's not just science fiction; such a scenario is quite conceivable.

Even before the capability arrives, we need a serious conversation about what machine policy and ethics should look like (ironically, both of these are human subjects). For now, though, AI is far too limited to run amok in society.