The Reality Of Artificial Intelligence

As you peruse trends in commercial AV and digital signage, one common denominator is the frequent mention of artificial intelligence.

Alan C. Brawn

Our regular readers know that artificial intelligence is a call to action I cannot ignore. I will try to explain the concept and its use, as we have previously with big data, IoT, analytics and content management software.

Consider the following overview an attempt to break AI down into its basic parts and what it potentially means to us in real world applications.

One basic definition is that artificial intelligence is the science of training computer systems to emulate human tasks through learning and automation. The key elements here are that an AI system requires learning, and only once it has been taught can it automate or replicate what it has ingested.

Read: AI May Be Smarter Than Ever Before…But It Can Still Confuse a Bald Head for a Ball

At the core is the capability of a machine to learn how to apply logic and reason to gain an understanding from complex data. It is not standalone; AI requires initial data to teach it. It can then apply the algorithms developed by the learning process to patterns and relationships found in other sets of data.

Fundamentally, an AI is a set of algorithms inside a computer that are fed a set of known data to “learn” from and guided on how to interpret this data. It can then take what it has learned and perform tasks, further learning from each data set. This is where the “garbage in and garbage out” factor comes into play.
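To make that loop concrete, here is a bare-bones sketch in plain Python. The numbers and the “comfortable versus too hot” rule are invented purely for illustration, but the shape is the same: known, labeled data goes in, a rule is learned, and that rule is then applied to readings the system has never seen.

```python
# Known, labeled data: (room temperature in °F, "comfortable" or "too hot").
# The readings and labels are made up for illustration.
training_data = [(68, "comfortable"), (70, "comfortable"), (72, "comfortable"),
                 (78, "too hot"), (80, "too hot"), (84, "too hot")]

def learn_threshold(examples):
    """'Learn' a cutoff: the midpoint between the warmest comfortable reading
    and the coolest too-hot reading."""
    comfortable = [t for t, label in examples if label == "comfortable"]
    too_hot = [t for t, label in examples if label == "too hot"]
    return (max(comfortable) + min(too_hot)) / 2

def predict(temp, threshold):
    """Apply what was learned to data the system has never seen."""
    return "too hot" if temp > threshold else "comfortable"

threshold = learn_threshold(training_data)   # the "learning" step
print(predict(75, threshold))                # -> "comfortable"
print(predict(79, threshold))                # -> "too hot"
```

Mislabel a few of those training examples and the learned threshold shifts, and every prediction that follows shifts with it.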

AI is only as good as the quality of the data it learns from, but quality data is not enough. Think of data as food that now must be digested. Depending upon the application, massive amounts of data can be ingested, but then comes the analysis phase, the extraction of key data points and, finally, writing the code that makes the output processes automated in some way.

Read: Facebook AI Can Teach Itself Without as Much Human Input

File all of this under the umbrella of still requiring humans to make it all work. Without data and the analytics of what to do with it, AI is just a couple of words that sound impressive. At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI.

  • Narrow AI represents the vast majority of what we encounter. This is what we see all around us: computers containing intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed to do so. A good example might be one of the virtual assistants so many of us rely on. Keep in mind, though, that unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.
  • General AI is very different and is the type of adaptable intellect found in humans. It is a flexible form of intelligence capable of learning how to carry out vastly different tasks based on the data from its accumulated experience. This is the sort of AI more commonly seen in sci-fi movies; it doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

Some say we will see artificial general intelligence (AGI) by 2050, yet other prominent experts opine that it is much farther off than that, perhaps centuries away.

As noted, AI requires a lot of data, and from the analysis it learns the patterns or features of the data and applies them. While AI is the umbrella, there are subfields. Here are a few of the ones you will most likely encounter:

Machine Learning

Machine Learning is the technique that gives computers the ability to learn without being explicitly programmed, and it is actively used in daily life. Fundamentally, it is the science that enables machines to translate, execute and investigate data to solve real-world problems.
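For readers who want to see the fit-then-predict cycle in code, here is a minimal sketch using the open-source scikit-learn library (assuming it is installed). The tiny dataset is invented: a handful of hour-of-day and occupancy readings used to decide whether room lights should be on.

```python
# A minimal machine-learning loop: fit on known data, predict on held-out data.
# Requires scikit-learn (pip install scikit-learn); the data below is synthetic.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical readings: [hour_of_day, occupancy_count] -> lights on (1) / off (0)
X = [[8, 0], [9, 12], [12, 30], [18, 5], [22, 0], [23, 1], [10, 20], [20, 0]]
y = [0, 1, 1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)            # the "learning" step
predictions = model.predict(X_test)    # applying learned patterns to new data
print("accuracy:", accuracy_score(y_test, predictions))
```

Nothing in the script spells out the lighting rule; the model infers it from the examples, which is the “learning without being explicitly programmed” the definition refers to.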

Neural Network

In simple terms, a neural network is a set of algorithms used to find the elemental relationships across large amounts of data via a process that imitates how the human brain operates. The bigger the neural network, the closer we get to the way humans think.
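A toy example may help. The forward pass below, written with NumPy and using made-up weights, shows the layered structure that definition describes: each “neuron” is simply a weighted sum of its inputs pushed through a nonlinearity, and layers of them are stacked. In a real network, training would adjust those weights from data rather than hard-coding them.

```python
import numpy as np

# A toy two-layer neural network forward pass (weights invented for illustration).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, 0.8])             # two input features

W1 = np.array([[0.2, -0.4],           # weights from 2 inputs to 3 hidden units
               [0.7,  0.1],
               [-0.3, 0.5]])
b1 = np.array([0.1, 0.0, -0.2])

W2 = np.array([[0.6, -0.1, 0.3]])     # weights from 3 hidden units to 1 output
b2 = np.array([0.05])

hidden = sigmoid(W1 @ x + b1)         # first layer of "neurons" fires
output = sigmoid(W2 @ hidden + b2)    # output layer combines them
print(output)                         # a value between 0 and 1
```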

Expert Systems

This is a program designed to solve problems that require human expertise and experience. A good example is medicine. A collection of data fed into an expert system can offer knowledge of similar cases, or it can be used as a reference or a self-check tool.
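To keep the example close to home for AV readers, here is a deliberately tiny, hypothetical rule base in Python. The troubleshooting rules are simplified inventions, but they show the basic pattern of an expert system: human expertise encoded as rules, applied to observed facts.

```python
# A miniature rule-based "expert system": expertise captured as if/then rules.
# The display-troubleshooting rules here are simplified hypotheticals.
RULES = [
    (lambda f: not f.get("power_led"),
     "Check the power supply and outlet."),
    (lambda f: f.get("power_led") and not f.get("signal_detected"),
     "Check the source device and cabling."),
    (lambda f: f.get("signal_detected") and f.get("image_flickers"),
     "Inspect the cable for interference or replace it."),
]

def diagnose(facts):
    """Fire every rule whose condition matches the observed facts."""
    return [advice for condition, advice in RULES if condition(facts)]

observed = {"power_led": True, "signal_detected": False}
print(diagnose(observed))   # -> ['Check the source device and cabling.']
```

Real expert systems hold thousands of such rules, plus an inference engine that chains them together, but the principle is the same.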

Natural Language Processing

NLP is the part of computer science and AI that helps computers and humans communicate in natural language. It is the computational processing of human languages.
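One of the first steps in most NLP pipelines is simply turning free text into something a machine can count and compare. The plain-Python sketch below, using made-up example sentences, does exactly that; real NLP systems of course go much further into grammar, meaning and intent.

```python
# A small first step in natural language processing: turning free text into
# countable word tokens (a "bag of words"). Example sentences are invented.
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

doc1 = "The new display looks fantastic and the audio is clear."
doc2 = "The audio is muffled and the display flickers."

counts1 = Counter(tokenize(doc1))
counts2 = Counter(tokenize(doc2))

# Words the two comments share: a crude measure of topical overlap.
shared = set(counts1) & set(counts2)
print(shared)   # e.g. {'the', 'display', 'audio', 'is', 'and'}
```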

NLP enables a computer to read and understand data by mimicking natural human language. Of course, there are also technologies that enable AI.

We live in the era of the internet of things (IoT), where appliances of all types are connected, producing huge amounts of raw data.

Read: Johnson Controls and Pelion Partner on Artificial Intelligence/Internet of Things (AIoT)

Graphics processing units (GPUs) inside a computer provide more than just the ability to render graphics; they have immense computing horsepower to help process data. Advanced algorithms are being developed in new ways to analyze that data.
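As a rough sketch of why that horsepower matters, the snippet below (which assumes the PyTorch library is installed) runs the same matrix math on a GPU when one is available and falls back to the CPU when it is not. It is exactly this kind of massively parallel arithmetic that machine-learning workloads lean on.

```python
# Sketch of why GPUs matter: the same matrix math runs on CPU or GPU.
# Assumes PyTorch is installed; falls back to CPU if no CUDA GPU is present.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b            # millions of multiply-adds executed in parallel
print(device, c.shape)
```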

APIs, or application programming interfaces, as one AI expert notes, “are portable packages of code that make it possible to add AI functionality to existing products and software packages.

“They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.”
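To show the general pattern rather than any particular vendor’s product, here is a hypothetical sketch. The endpoint, credential and response fields are invented, but the flow (send media to an AI service, get labels back, act on them in your own code) is what such APIs typically look like.

```python
# Sketch of calling an image-recognition API from existing software.
# The endpoint, API key and response fields are hypothetical placeholders;
# every vendor's real API differs, but the pattern is the same.
import requests

API_URL = "https://api.example.com/v1/recognize"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                           # placeholder credential

with open("lobby_camera_frame.jpg", "rb") as image:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image},
    )

for label in response.json().get("labels", []):    # hypothetical response shape
    print(label["name"], label["confidence"])
```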

Is AI a trend? Of course it is, but it is already ubiquitous today. Type a query into a search engine and AI is used to recommend what you should buy next. It is used to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to spot spam and to detect credit card fraud. The list goes on and on.

Limitations Of AI

As wonderful as AI is (opinions vary on this), and as much as it is going to change every industry, we must understand its limits.

The principal limitation of AI is that it requires data to learn. It does not think on its own. As noted before, this means any inaccuracies in the data will be reflected in the results, and any additional layers of prediction or analysis have to be added separately.

Today’s AI systems are narrow AI, trained to do a clearly defined task. One expert points out that “the system that plays poker cannot play solitaire or chess. The system that detects fraud cannot drive a car or give you legal advice. In fact, an AI system that detects health care fraud cannot accurately detect tax fraud or warranty claims fraud.”

In other words, these systems are very (I do mean very) specialized. They are focused on a single task and are far from behaving like humans. Likewise, self-learning systems are not autonomous systems.

The imagined AI technologies that you see in movies and TV are still science fiction. What is not science fiction are the advances in data collection, algorithms and analytics, and computers that can probe complex data to learn and perfect specific tasks. These are becoming quite common.

Read: Artificial Intelligence Program Helps History Buffs Connect with Holocaust Survivors—Even Dead Ones

All the major cloud platforms – Amazon Web Services, Microsoft Azure and Google Cloud – provide access to GPU arrays for training and running machine-learning models. All the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data and prepare it for analysis, visualization tools to display the results clearly, and software that simplifies the building of models.

AI will only get bigger and become of increasing importance in our personal and business lives. Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95%.

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where they can match faces with 99% accuracy. In healthcare, AI has helped in responding to the pandemic by aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

The point for us in commercial AV is to think about how AI can benefit our clients. We won’t provide the AI per se, but we will provide many of the technologies that use AI to their best advantages.

As in all things AV, it is our job to know, suggest and assist our clients in meeting their objectives. Count on the fact that AI will be part of that.
