The A – Z of Artificial Intelligence
Artificial Intelligence. When people see these two words together, two thoughts commonly follow. The first is that computers and robots are going to take over the world. The second is usually another pair of words: Machine Learning. While the first is irrational and never going to happen (or will it?!), the second, to some outside the technology world, is just a pair of buzzwords. Many people think that Artificial Intelligence, or A.I., is just machine learning. Unfortunately, they would only be one-quarter correct. A.I. is made up of four parts: Reasoning, Natural Language Processing (NLP), Planning, and Machine Learning (ML). The other three parts are not discussed nearly as often. So, what are they? To answer this question and understand A.I. as a whole, we first need a brief history of the idea and where it came from.
In the 1950s, Alan Turing, a mathematician and computer scientist, changed the world with a paper titled “Computing Machinery and Intelligence”. It poses perhaps one of the greatest questions of the century in just three words: “Can machines think?” This would lead to the famous Turing Test. The test assesses a machine’s ability to exhibit behaviour that is indistinguishable from that of a human. The way the test is carried out is simple: a human acts as an interrogator and asks a series of questions to two respondents. One is a human while the other is a computer, and the conversation takes place through a screen rather than by voice. After a set amount of time, the interrogator must determine which respondent is the human and which is the computer, based on their answers to the questions.
[Figure: Turing Test illustration]
If the interrogator can’t distinguish the computer from the human respondent, would we consider the computer intelligent? The answer is still debated: some claim certain programs have passed the Turing Test, while others argue nothing has. This is the rough idea behind artificial intelligence.
Over the years from Alan Turing’s time to the present, A.I. has been a hot area of research, especially in the category of ML. But what are the three other components of A.I.? The first is reasoning. Computer reasoning is simple to define but complex to carry out: it is the machine making inferences from data, which is in fact one way humans reason. Early researchers in this field developed algorithms that imitated the simple step-by-step reasoning humans use to solve problems and make logical inferences. The problem, however, is that humans don’t only think this way; we solve most of our problems using fast, intuitive judgments, unlike these algorithms.
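That step-by-step style of inference can be sketched in a few lines. Below is a minimal forward-chaining example: the machine repeatedly applies if-then rules to a set of known facts until nothing new can be inferred. The facts and rules here are invented purely for illustration.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion)
    until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # a new inference
                changed = True
    return facts

# Hypothetical rules: feathers + eggs => bird; bird + flight => migrates
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "migrates"),
]
print(forward_chain({"has_feathers", "lays_eggs", "can_fly"}, rules))
```

Note how the second rule only fires because the first one added “is_bird” — exactly the chained, logical reasoning early A.I. systems relied on, and exactly what fast human intuition skips.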
The second component is Natural Language Processing, or NLP. This means giving the computer the ability to read and understand human languages, and it requires a myriad of technologies such as dictionaries, ontologies, and language models.
Think of a person who speaks English natively and learned Spanish at 15. When spoken to in Spanish, they convert what they hear into their native English to understand it, think of a response, and then convert it back to respond in Spanish. The best-known example in software today is machine translation, specifically Google Translate.
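To see why NLP needs more than a dictionary, here is a toy word-for-word “translator” with a three-word English–Spanish lexicon (the lexicon is made up for this sketch). It works on this one sentence, but real systems like Google Translate rely on language models precisely because word order, grammar, and ambiguity make lookup tables fall apart.

```python
# A deliberately naive word-for-word translator.
en_to_es = {"the": "el", "cat": "gato", "sleeps": "duerme"}

def translate(sentence, lexicon):
    # Look each word up; mark anything outside the lexicon as unknown.
    return " ".join(lexicon.get(w, f"<{w}?>") for w in sentence.lower().split())

print(translate("The cat sleeps", en_to_es))   # el gato duerme
print(translate("The cat dreams", en_to_es))   # el gato <dreams?>
```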
The next component is planning. For someone to be deemed intelligent, they must be able to set goals and accomplish them: to visualize the future and the actions needed to change it. This must be taken into account when trying to create an artificially intelligent program. The computer must be able to act autonomously and flexibly, constructing a sequence of actions to reach a final goal.
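A simple way to build such a sequence of actions is to search the space of states. The sketch below uses breadth-first search over a tiny, invented “leave the office” world: the states, action names, and transitions are all hypothetical, but the idea — find the shortest chain of actions from the current state to the goal — is the core of classical planning.

```python
from collections import deque

def plan(start, goal, actions):
    """actions: dict of action name -> {state: next_state}.
    Returns the shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, transitions in actions.items():
            nxt = transitions.get(state)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # goal unreachable

# A made-up world: get from your desk to outside the building.
actions = {
    "open_door": {"at_desk": "at_hall"},
    "take_lift": {"at_hall": "at_lobby"},
    "walk_out":  {"at_lobby": "outside"},
}
print(plan("at_desk", "outside", actions))  # ['open_door', 'take_lift', 'walk_out']
```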
The final and most well-known part of Artificial Intelligence is Machine Learning (ML), which can be defined as the study of algorithms that improve automatically through experience (for the sake of simplicity, I will not discuss specific algorithms here but in a later post). Humans learn in different ways and so do machines, so ML is divided into four subcategories: supervised learning, unsupervised learning, reinforcement learning, and deep learning.
Supervised learning can be defined simply as being given prior knowledge, such as a data set of historic data, and being trained on that data to make predictions about future data points. The machine essentially uses older data to find patterns and make inferences about new data. For example, suppose a hospital wants to estimate average wait times for each day of the week in coming years. It might train a machine learning algorithm on factors such as day of the week, patient information, and wait times from previous years, so that the algorithm can predict future average wait times for a given day. It can be argued this is the simplest way for a computer to learn.
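Here is a stripped-down sketch of that idea: fit a straight line to historical (hour, wait-time) pairs and use it to predict an unseen hour. The numbers are invented for illustration, and a real hospital model would use far more features than the hour of day, but the supervised pattern — learn from labelled past examples, predict the future — is the same.

```python
import numpy as np

# Invented historical data: hour of day -> average wait in minutes.
hours = np.array([8, 10, 12, 14, 16, 18], dtype=float)
waits = np.array([15, 25, 40, 35, 30, 20], dtype=float)

# Least-squares fit of waits ~ a * hour + b (the "training" step).
A = np.vstack([hours, np.ones_like(hours)]).T
a, b = np.linalg.lstsq(A, waits, rcond=None)[0]

def predict(hour):
    # The "prediction" step: apply the learned line to a new input.
    return a * hour + b

print(f"predicted wait at 11am: {predict(11):.1f} min")  # 26.5 min
```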
Unsupervised learning is when the machine is given raw, unlabelled data and creates its own categories by identifying patterns, using techniques such as computer vision and visual recognition. A simple way to understand this: suppose a computer has no idea what trees and houses look like. By looking at patterns in shapes, colours, and sizes, and at the differences between groups, it can still sort the trees and houses into separate categories, successfully classifying houses and trees as two distinct entities in the same picture.
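One classic unsupervised algorithm is k-means clustering, sketched minimally below. The 2-D points stand in for image features of “trees” and “houses” (the coordinates are made up); the algorithm is never told which is which, yet it separates the two groups by alternating “assign each point to its nearest centre” and “move each centre to the mean of its points”.

```python
import numpy as np

def kmeans(points, k, iters=10):
    centres = points[:k].copy()   # naive init: start from the first k points
    for _ in range(iters):
        # assign each point to its nearest centre
        dists = np.linalg.norm(points[:, None] - centres[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned points
        centres = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

# Two unlabelled blobs: "trees" near (0, 0), "houses" near (10, 10).
pts = np.array([[0, 0], [1, 0], [0, 1],
                [10, 10], [11, 10], [10, 11]], dtype=float)
labels, _ = kmeans(pts, k=2)
print(labels)  # the first three points share one label, the last three the other
```

This toy init can fail on nastier data (a cluster can end up empty); real implementations restart from several random initialisations.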
The next form of learning is reinforcement learning. It is similar to supervised learning, with one main difference: the machine does not learn from a data set but through trial and error. It is used in robotics and game development; with each successful result, the machine learns to repeat the action that worked. Take, for example, a puzzle game in which you must click ‘a’ before ‘b’. Once the machine learns this, it will always click ‘a’ before ‘b’.
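That ‘a’-before-‘b’ puzzle can be sketched with tabular Q-learning, one standard reinforcement-learning method. The environment, rewards, and learning parameters below are all invented for illustration: clicking ‘b’ first fails (reward −1), clicking ‘a’ then ‘b’ wins (reward +1), and over many trial-and-error episodes the value table steers the agent toward clicking ‘a’ first.

```python
import random

def step(state, action):
    """Toy puzzle environment: returns (next_state, reward, done)."""
    if state == "start":
        return ("a_done", 0.0, False) if action == "a" else ("start", -1.0, True)
    return ("done", 1.0, True) if action == "b" else ("a_done", -1.0, True)

Q = {(s, a): 0.0 for s in ("start", "a_done") for a in "ab"}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
random.seed(1)

for _ in range(500):                 # 500 episodes of trial and error
    state, done = "start", False
    while not done:
        # epsilon-greedy: mostly exploit what worked, sometimes explore
        if random.random() < eps:
            action = random.choice("ab")
        else:
            action = max("ab", key=lambda x: Q[(state, x)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, x)] for x in "ab")
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print(max("ab", key=lambda x: Q[("start", x)]))  # a — the agent clicks 'a' first
```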
The last form of learning is something we as humans don’t fully understand about ourselves: Deep Learning. Deep Learning uses neural networks, loosely modelled on the human brain, to find and learn patterns in unstructured data. Remember the computer vision methods mentioned earlier? In unsupervised learning we used basic forms of these techniques, but powerful methods such as instance segmentation require deep neural networks. Why? Because we are no longer telling one house apart from one tree. Here we might be given hundreds of pictures full of trees, people, cars, animals, and whatever else, and need to track the growth of a crack in a sidewalk over time, then compare it to cracks in a sidewalk in a different country to see whether they were perhaps made of the same material! That is, of course, a very random and specific question, but these techniques are what make such tough, specific questions answerable.
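To ground the idea of a neural network, here is the smallest useful example: a single hidden layer trained by gradient descent to learn XOR, a pattern no single straight line can separate. The layer size and learning rate are arbitrary choices for this sketch; real deep learning stacks many such layers and trains them on vastly more data, but the forward-pass/backward-pass loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule on the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2).ravel())   # outputs should approach [0, 1, 1, 0]
```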
I hope this post was informative and not too boring with the technical jargon.