
AV Brains & Brawn: AI Types and Inner Workings

Published: 2023-11-21

Welcome aboard! This is the second leg of our artificial intelligence (AI) journey!

For those who have not read the first part of the series: there, I explored definitions from those who were instrumental in developing AI, as well as current industry leaders’ takes on that catch-all term, and then delved into some of the most salient historical events in the evolution toward where we are today. This second installment attempts to synthesize the types of AI and how AI works; in other words, I will explore its internal workings and, fundamentally, how it does what it does. Yes, it is a bit complex (big shock), but by understanding the component parts (at least at a high level) and seeing how each one contributes its part to the AI mystery, we will see AI for what it is. In doing so, some of the awe and, in some instances, the fear will be lessened or removed.

AI: A Form of Discipline

AI is a field of computer science that tries to simulate how humans think. You feed information from data sources into an AI system and let it process that information to create trained models that use the input data as a reference. The more data they have, the better AI systems can learn, and the more they can expand the amount of information we have access to.

The key to how AI truly works is understanding that AI isn’t a single computer program or application, but an entire discipline, a science. AI systems utilize a whole series of techniques and processes, as well as a vast array of different technologies. One size and one application do not fit all! We will begin with some generalities and then work into the details and buzzwords of how AI works and the technologies that are driving it.

Types of AI

To put it in context, I want to begin by sharing four main types of AI as defined and explained by Arend Hintze, researcher and professor of integrative biology at Michigan State University. They are as follows:

1. Reactive Machines

Reactive machines are AI systems that have no memory and are task specific: a given input always delivers the same output. Many machine-learning models are reactive machines; they take customer data, such as purchase or search history, and use it to deliver recommendations to those same customers. This type of AI is purely reactive.

Reactive AI, for the most part, is reliable and works well in inventions like self-driving cars. It cannot predict future outcomes unless it has been fed the appropriate information. Compare this to our human lives, where most of our actions are not purely reactive: we rarely have all the information we need to react upon, but we have the capability to remember and learn. Based on those successes or failures, we may act differently in the future when faced with a similar situation.
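To make “reactive” concrete, here is a minimal sketch, assuming a toy product catalog of my own invention. A reactive recommender is essentially a stateless function: the same purchase history always yields the same recommendations, and nothing is remembered between calls.

    # A toy "reactive machine": output depends only on the input.
    # The catalog of product pairings below is invented for illustration.
    PAIRINGS = {
        "camera": ["tripod", "memory card"],
        "laptop": ["laptop bag", "mouse"],
        "phone": ["phone case", "charger"],
    }

    def recommend(purchase_history):
        """Same input, same output, no state kept between calls."""
        recommendations = []
        for item in purchase_history:
            recommendations.extend(PAIRINGS.get(item, []))
        return recommendations

    print(recommend(["camera", "phone"]))  # deterministic every time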

2. Limited Memory

The next type of AI in its evolution is limited memory. This kind of algorithm imitates the way our brains’ neurons work together, meaning that it gets smarter as it receives more data to train on. Deep-learning algorithms of this kind improve image recognition and other types of reinforcement learning.

Limited memory AI, unlike reactive machines, can look into the past and monitor specific objects or situations over time. These observations are then programmed into the AI so that it can act based on both past and present data. However, in limited memory AI, this data isn’t saved into the AI’s memory as experiences to learn from, the way humans derive meaning from their successes and failures. The AI improves over time only as it is trained on more data.
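As a rough illustration (a sketch under invented assumptions, not any production system), a limited-memory component might keep only a short window of recent observations, such as the speed of a vehicle ahead, and act on that window; anything older simply falls away rather than becoming remembered experience.

    from collections import deque

    # A toy "limited memory" tracker: it holds only a short window of
    # recent observations and estimates a trend from them. The window
    # size and the speed values are invented for illustration.
    class LimitedMemoryTracker:
        def __init__(self, window_size=5):
            self.window = deque(maxlen=window_size)  # old data falls away

        def observe(self, speed):
            self.window.append(speed)

        def estimated_trend(self):
            """Positive means the observed vehicle is speeding up."""
            if len(self.window) < 2:
                return 0.0
            return (self.window[-1] - self.window[0]) / (len(self.window) - 1)

    tracker = LimitedMemoryTracker()
    for speed in [52, 54, 57, 61, 66]:
        tracker.observe(speed)
    print(tracker.estimated_trend())  # 3.5: past and present data combined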

3. Theory of Mind

The first two types of AI, reactive machines and limited memory, are types that currently exist. Theory of mind and self-awareness are AI types that will be built in the future. As such, there aren’t any real-world examples yet.

If it is developed, theory-of-mind AI could have the potential to understand the world and to grasp that other entities have thoughts and emotions, which in turn affect how those entities behave toward those around them. Humans understand how our own thoughts and emotions affect others, and how others’ affect us; this is the basis of human relationships in our society. In the future, theory-of-mind AI machines could be able to understand intentions and predict behavior, as if to simulate human relationships.

4. Self-Awareness

The grand finale for the evolution of AI would be to design systems that have a sense of self, a conscious understanding of their own existence. This type of AI does not exist yet. It goes a step beyond theory-of-mind AI and understanding emotions: these machines would be aware of themselves and their state of being, and able to sense or predict others’ feelings. For example, “I’m hungry” becomes “I know I am hungry” or “I want to eat lasagna because it’s my favorite food.”

We are a long way from self-aware AI because there is still so much to uncover about the human brain’s intelligence and how memory, learning and decision-making work.

AI Categorization


Artificial Narrow Intelligence is the most common form of AI in the market today. Image credits: Denis / stock.adobe.com.

Let’s explore the topic of AI types further. What follows are the three most commonly recognized terms when categorizing AI:

Artificial Narrow Intelligence (ANI)

This is the most common form of AI in the market now, and the only kind that exists today. ANI systems are designed to solve one single problem and can execute a single task really well; by definition, they have narrow capabilities. They are able to come close to human functioning in very specific contexts, and even to surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters.

Artificial General Intelligence (AGI)

AGI is still a theoretical concept in the research phase: an AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning, reasoning and so on. We’re still a long way away from building a ubiquitous AGI system. An AGI system would need to comprise thousands of ANI systems working in tandem and communicating with one another to mimic human reasoning. This speaks both to the immense complexity and interconnectedness of the human brain, and to the magnitude of the challenge of building an AGI with our current resources.

Artificial Super Intelligence (ASI)

Today, this is still science fiction, but ASI is seen as the logical progression from AGI. An Artificial Super Intelligence (ASI) system would be able to surpass all human capabilities, including decision-making and rational reasoning, and would even extend to complex capabilities such as making better art and building emotional relationships.

As one subject matter expert on AI explained to me, “Once we achieve Artificial General Intelligence, AI systems would rapidly be able to improve their capabilities and advance into realms that we might not even have dreamed of. While the gap between AGI and ASI would be relatively narrow (some say as little as a nanosecond because that’s how fast AI would learn), the long journey ahead of us towards AGI itself makes this seem like a concept that lies far into the future.”

Delving Into The Workings of AI

AI may be easier to understand if we look at how it works, step by step, at the 3,000-foot level. These steps are as follows (a simplified code sketch of the full loop appears after the list):

  1. Data collection: The first step of AI is data collection for input. Data input can be text but can also be images or speech. No matter the source, the algorithms (explained later) must be able to read input data. It is also necessary to clearly define the context and the desired outcomes in this step.
  2. Data processing: Next is processing where AI takes the data and decides what to do with it. Here, AI interprets the pre-programmed data and uses the behaviors it has learned to recognize the same or similar behavior patterns in real-time data, depending upon the particular AI technology.
  3. Data outcomes: Data outcomes are then predicted after the AI technology has processed the data. This step determines if the data and its given predictions are a failure or a success.
  4. Data adjustments: Adjustments in algorithms take place if the data set produces a failure. AI technology can learn from the mistake and repeat the process differently to reflect a more desired outcome.
  5. Data assessment: The final step is assessment. This phase allows the technology to analyze the data and make inferences and predictions. It can also provide necessary, helpful feedback before running the algorithms again.
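To make those five steps concrete, here is a deliberately tiny sketch in Python. The data points, error threshold and learning rate are all invented for illustration; real systems replace each stage with far more sophisticated machinery, but the loop is the same.

    # Collect -> process -> outcomes -> adjust -> assess, fitting y = w * x.

    # 1. Data collection: gather input/output examples (invented here).
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]

    w = 0.0              # the model's single adjustable parameter
    learning_rate = 0.01

    for step in range(200):
        # 2. Data processing: run the current model over the data.
        # 3. Data outcomes: score the predictions (mean squared error).
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)

        # 4. Data adjustments: if the outcome is still a failure, nudge
        #    the parameter and repeat the process.
        if error < 0.02:
            break
        gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * gradient

    # 5. Data assessment: inspect the result and decide what to do next.
    print(f"learned w = {w:.2f}, final error = {error:.4f}")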

To a first order, it is all about data, computing power and capacity. Data is the ‘fuel’ for AI systems; AI wouldn’t have any functionality without great data sets at its disposal. Companies use these data sets to train AI models. However, not all data sets are equal: a flaw or missing piece in a data set reduces, or even negates, the value that can be extrapolated from it.

A noted AI subject matter expert tells us that good AI training data has several characteristics: it is complete, with no missing data; consistent with the AI system’s function; accurate, with no incorrect data; and up to date, with no outdated information. Think of the degree of difficulty this imposes; in the end, it is “garbage in, garbage out” (and vice versa) relative to the quality of what we receive.
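As a minimal sketch of that checklist, assuming tabular data in a pandas DataFrame, with column names, valid ranges and the cutoff date all invented for illustration:

    import pandas as pd

    # Checking the "good training data" traits: complete, accurate,
    # up to date. Every column name, range and date here is an assumption.
    df = pd.DataFrame({
        "sensor_id": [1, 2, 3, 4],
        "reading": [20.5, None, 19.8, 250.0],  # None = missing data
        "recorded_at": pd.to_datetime(
            ["2023-10-01", "2023-10-02", "2019-01-01", "2023-10-03"]),
    })

    complete = df["reading"].notna().all()                    # no gaps
    accurate = df["reading"].dropna().between(0, 100).all()   # plausible values
    up_to_date = (df["recorded_at"] >= "2023-01-01").all()    # nothing stale

    print(f"complete={complete}, accurate={accurate}, up_to_date={up_to_date}")
    # Any row failing a check would be fixed or dropped before training.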

Training AI Systems

At the risk of getting granular, there are also several types of data used to train AI systems, falling into three categories: structured, unstructured and semi-structured. Structured data has a predefined, standard format for every piece of data entered into an AI system. Unstructured data, meanwhile, lacks any specific format and relies on the AI to find patterns in the data on its own. Semi-structured data has no predefined model but carries some organizing tags, giving you the flexibility of unstructured data sources along with the structured ability to store your training data set more easily.
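A quick sketch may help. The customer-feedback record below is invented, but it shows the same information in all three shapes:

    import json

    # Structured: a predefined format; every record has the same fields.
    structured = {"customer_id": 1042, "rating": 4, "total": 59.99}

    # Unstructured: free text with no format; the AI must find patterns.
    unstructured = "Delivery was quick, but the box arrived slightly dented."

    # Semi-structured: no rigid model, but self-describing tags (JSON here)
    # make it easier to store and query than raw text.
    semi_structured = json.dumps({
        "customer_id": 1042,
        "comment": "Delivery was quick, but the box arrived slightly dented.",
        "tags": ["delivery", "packaging"],
    })

    print(structured, unstructured, semi_structured, sep="\n")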

Machine Learning and AI


As AI tools continue to evolve, they’re only going to become more powerful. GreenButterfly/stock.adobe.com

One of the buzzwords of AI is machine learning (ML). This is the foundation of how AI systems learn: the data sets you feed into machine-learning tools teach the AI to make decisions and predictions without being programmed to perform specific tasks.

However, while machine learning allows AI systems to learn from data, they still need programming and algorithms to process that data and generate meaningful insights. Machine learning works with a large amount of data, which is then processed to create a mathematical model usable for handling AI tasks. Essentially, it allows AI apps to perform tasks the way human beings do.

These apps are based on algorithms, the backbone of AI. Algorithms are mathematical procedures that tell AI how to learn, improve decision-making and handle problem-solving; they turn raw data into insights you can use every day. An algorithm works by taking in input data and processing it step by step. The more high-quality data (there it is again) you provide, the easier it is for algorithms to find patterns and transform them into actionable insights.
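Here is a minimal sketch of that idea using scikit-learn (assuming that library is available; the tiny study-hours data set is invented). The algorithm, not a hand-written rule, turns the examples into a predictive model:

    from sklearn.linear_model import LogisticRegression

    # Hours studied and prior score -> passed (1) or failed (0).
    X = [[1, 55], [2, 60], [8, 85], [9, 90], [3, 50], [7, 88]]
    y = [0, 0, 1, 1, 0, 1]

    # The algorithm finds the pattern; nobody programmed the pass/fail rule.
    model = LogisticRegression().fit(X, y)
    print(model.predict([[6, 80]]))  # prediction for an unseen student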

Neural Networks

Another buzzword you will run into is ‘neural networks,’ which analyze data sets over and over again to find associations and interpret meaning from undefined data. Neural networks operate like the networks of neurons in the human brain, allowing AI systems to take in large data sets, uncover patterns among the data and answer questions.

A neural network is a type of machine-learning algorithm that processes information to build AI models. Neural networks are made up of nodes (or artificial neurons) connected to one another. These nodes adapt based on the information coming into the network, which allows it to find relationships and patterns in data. The nodes are arranged in several layers, each with its own function: the input layer receives the data, the hidden layer processes it and the output layer produces the results.
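A bare-bones forward pass shows those three layers in action. The layer sizes are arbitrary and the weights are random here, purely to show the structure; training would adjust the weights as data flows in:

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.random(4)            # input layer: receives 4 features
    W1 = rng.random((4, 3))      # connections into 3 hidden nodes
    W2 = rng.random((3, 1))      # connections into 1 output node

    hidden = np.tanh(x @ W1)                   # hidden layer processes data
    output = 1 / (1 + np.exp(-(hidden @ W2)))  # output layer produces result

    print(output)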

Deep-Learning Algorithms

Deep learning is a type of neural network with multiple hidden layers, so it can learn more complex relationships in the data. To avoid confusion, we first need to understand that there is a difference between machine learning and deep learning. Machine learning is less complex; it uses algorithms to parse and learn from structured data in order to predict outputs and discover patterns in that data.

Deep learning, on the other hand, structures the algorithms in multiple layers to create a highly complex “artificial neural network” that mimics the way a human brain works to detect patterns in large unstructured data sets. Deep learning is great for handling very detailed and complicated tasks, but it requires more data and computing power.
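In code, ‘deep’ simply means stacking more hidden layers, as in this sketch (layer sizes and the untrained random weights are illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    layer_sizes = [8, 16, 16, 16, 1]   # input, three hidden layers, output

    activation = rng.random(layer_sizes[0])        # the raw input data
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        W = rng.random((n_in, n_out)) - 0.5        # untrained weights
        activation = np.maximum(0.0, activation @ W)   # ReLU at each layer

    print(activation)  # the final layer's output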

Technology Trends and AI

I feel that these are the basics of AI’s types, operation and terminology; however, that raises the question of which technologies are “driving” AI. As one expert recently noted, “the explosive growth of AI’s scale and value is closely related to recent technological improvements.” I have thus compiled a list of the technological advancements now driving AI forward:

  • Larger, more accessible data sets: AI thrives on data, and it has grown in importance alongside the rapid increase in the amount of data and improved access to it. Without developments like the Internet of Things (IoT), which produces a huge amount of data from connected devices, AI would have far fewer potential applications.
  • Graphical Processing Units: GPUs are one of the key enablers of AI’s rising value, as they provide the computing power AI systems need to perform the millions of calculations required for interactive processing and to rapidly process and interpret big data.
  • Intelligent data processing: New and more advanced algorithms allow AI systems to analyze data at multiple levels simultaneously, helping them understand complex systems and predict rare events far more quickly.
  • Application Programming Interfaces: APIs allow AI functions to be added to traditional computer programs and software applications, essentially making those systems and programs smarter by enhancing their ability to identify and understand patterns in data. (A sketch of this pattern follows the list.)
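To show the API pattern the last bullet describes, here is a sketch using Python’s requests library. The endpoint, key and response fields are entirely hypothetical; every real AI service defines its own, but the shape is the same: send data over HTTP, get a model’s interpretation back.

    import requests

    response = requests.post(
        "https://api.example.com/v1/sentiment",      # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"text": "The new conference room system works flawlessly."},
        timeout=10,
    )
    print(response.json())  # e.g. {"sentiment": "positive", "score": 0.97}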

This is a wrap on the second installment of this three-part AI series. While it will take some time to absorb, I can guarantee you that it is well worth your time. The concluding segment of the series will be the “good, bad and the ugly” of AI; in it, I will explore the credibility of AI output and applications, as well as the hopes, concerns and warnings for the future.
