AV Brains & Brawn: Past, Present and Future of AI

In the first of a three-part series, Alan C. Brawn explores the various definitions and key milestones that have shaped the realm of artificial intelligence.


Image credits: IPOPBA/Stock.adobe.com.

I am embarking on a writing journey that is (in a sense) bigger than all of us…and yet, when broken down into its component parts, it can ultimately be understood. I am speaking about artificial intelligence (AI). Suffice it to say that, today, it is omnipresent in our lives, our business endeavors and seemingly every article we read. It is simply unavoidable, and it can potentially transform many aspects of technology. For one who seeks to learn and understand, this topic has taken on a life of its own. If we are to believe what we see and read, AI is touted as the best group of technologies introduced in modern times…or, at the other extreme, a path to the elimination of humanity. As with most things, the truth is a grayscale between the extremes.

After collecting and reading (so far) over 200 articles, books, and research papers on AI, I feel obligated (and somewhat prepared) to make some semblance of sense out of the concept of AI by eliminating the hype and tangents and dealing with the reality of what it is, and how it relates to where we are. Hence my decision to write a trilogy of pieces, coming out over three months, starting now with definitions of AI from various sources, a brief history (yes, this is relevant), and key milestones.

Then, in part two, we will cover the types of AI and how they work, followed by part three: applications, citing the good, the bad and the ugly of AI as we know it today. The target audience is the AV integrator who needs a data-driven fast track and the foundational knowledge from which to grow their own experience with, and utilization of, AI.

Diversity of Thinking About AI

I want to begin by looking at several definitions to illustrate the diversity of thinking about AI depending on the source. One size does not fit all.

John McCarthy, who coined the term “artificial intelligence” in 1955, originally defined it as “…the science and engineering of making intelligent machines and making a machine behave in ways that would be called intelligent if a human were so behaving.” In 2004, he further defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

In 1995, decades before most of the definitions that follow, Stuart Russell and Peter Norvig published Artificial Intelligence: A Modern Approach (now in its fourth edition and used in more than 1,400 universities worldwide), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, differentiating computer systems based on rationality and on thinking versus acting:

  • Human approach:
    • Systems that think like humans
    • Systems that act like humans
  • Ideal approach:
    • Systems that think rationally
    • Systems that act rationally

AI Mimics the Human Mind


Image credit: Stock.adobe.com/ A Stockphoto.

IBM defines AI this way: “Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.”

Google’s AI division claims: “Artificial intelligence is a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze. AI is a broad field that encompasses many different disciplines, including computer science, data analytics and statistics, hardware and software engineering, linguistics, neuroscience, and even philosophy and psychology.”

Salesforce defines AI as “… a field of computer science that focuses on creating machines that can learn, recognize, predict, plan, and recommend — plus understand and respond to images and language.”

Jair Ribeiro, a visionary thought leader in artificial intelligence, remarks, “AI is the field of computer science that enables machines to perform tasks requiring human-like intelligence. It involves creating intelligent agents that can sense, comprehend, learn, and act in a way that extends human capabilities.”

With the exponential increase in computing power we have experienced over the last decade, AI can process large amounts of data in ways that humans cannot. AI sits at the convergence of computing power and the vast quantities of data fed into these systems. The goal for AI is to be able to do things like recognize patterns, make decisions and exercise judgment the way humans do. On an operational level for business use, AI is a set of technologies, based primarily on machine learning and deep learning, used for data analytics, predictions and forecasting, object categorization, natural language processing, recommendations, intelligent data retrieval and more.

Key Chronology and Events in the History of AI

It may seem like AI is a recent development in technology…but the concept of “artificial intelligence” (although not the name) goes back thousands of years. Inventors made things called “automatons,” mechanical devices that moved independently of human intervention. One of the earliest automatons on record comes from 400 BCE: a mechanical pigeon created by Archytas of Tarentum, a friend of the philosopher Plato. Many years later, one of the most famous automatons was created by Leonardo da Vinci around the year 1495.

While the idea of a machine being able to function on its own is not new, we will focus on the 20th century, with developments that have led toward our modern-day AI. The following are a few of the major milestones leading to the AI of today:

In 1940, Isaac Asimov (a science fiction author and biochemist) began writing stories about robots (later collected in I, Robot). In his depiction of the 21st century, “positronic” robots operate according to the Three Laws of Robotics:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

By developing a set of ethics for robots and rejecting previous conceptions of them as marauding metal monsters, Asimov greatly influenced other writers’ treatment of the subject and ultimately some modern AI thinking.

The Technical Concept of AI

The technical concept of AI began in 1943 with Norbert Wiener, a pioneer in cybernetics. The aim was to unify mathematical theory, electronics, and automation as “a whole theory of control and communication, both in animals and machines.”

In 1950, John von Neumann and Alan Turing (considered technological founding fathers) formalized the architecture of our contemporary computers and demonstrated that such a machine is universal, capable of executing whatever is programmed. That same year, Turing published his paper “Computing Machinery and Intelligence.” In it, Turing (often referred to as the “father of computer science”) asks the following question: “Can machines think?”

From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

In 1956, at a conference at Dartmouth College, John McCarthy and Marvin Minsky, among others, formally launched the field of “AI.” They defined it as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning.”

From a sociological point of view, the 1968 Stanley Kubrick-directed film “2001: A Space Odyssey” had a profound effect on thinking about AI. In it, a computer — HAL 9000 — takes on a “life” of its own. The prospect of a real HAL 9000 sums up the ethical questions posed by AI: will its high level of sophistication be a good for humanity or a danger? The film’s impact was not scientific in itself, but it popularized the theme and made us wonder whether, one day, machines would experience emotions.

It was with the advent of the first microprocessors at the end of the 1970s that AI took off again and entered the golden age of expert systems. In 1979, the American Association for Artificial Intelligence, now known as the Association for the Advancement of Artificial Intelligence (AAAI), was founded.

Predictions of an ‘AI Winter’

In 1984, the AAAI warned of a coming “AI winter,” in which funding and interest would decrease and make research significantly more difficult. The AI winter came in 1987 and lasted until 1993. This was a period of low consumer, public and private interest in AI, which led to decreased research funding, which, in turn, led to few breakthroughs. Both private investors and the government lost interest in AI and halted their funding due to high cost versus seemingly low return. It should be recalled that, in the 1990s, the term artificial intelligence had almost become taboo, and more modest variations, such as “advanced computing,” had even entered university language.

Notable dates following the “AI winter” include the following:

  • In May 1997, Deep Blue (developed by IBM) beat the world chess champion, Garry Kasparov, in a highly publicized match, becoming the first program to beat a human chess champion. This was also the year that desktop, Windows-based speech-recognition software (developed by Dragon Systems) was released to the general public.
  • In 2000, Professor Cynthia Breazeal developed the first robot that could simulate human emotions with its face, which included eyes, eyebrows, ears, and mouth. It was called Kismet.
  • In 2003, NASA launched two rovers to Mars (Spirit and Opportunity); after landing in early 2004, they navigated the surface of the planet without human intervention.
  • In 2006, companies such as Twitter, Facebook, and Netflix started utilizing AI as a part of their advertising and user experience (UX) algorithms.
  • In 2010, Microsoft launched the Xbox 360 Kinect, the first gaming hardware designed to track body movement and translate it into gaming directions.
  • In 2011, Watson (created by IBM), a natural-language-processing (NLP) computer programmed to answer questions, won Jeopardy! against two former champions in a televised match.
  • Also, in 2011, Apple released Siri, the first popular virtual assistant.

AI Enables Problem-Solving


As AI tools continue to evolve, they’re only going to become more powerful. Image credit: GreenButterfly/stock.adobe.com.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
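To make the idea of “predictions or classifications based on input data” concrete, here is a minimal sketch of one of the oldest machine-learning techniques: a one-nearest-neighbor classifier. The dataset, labels and numbers are invented for illustration (imagining sensor readings an AV system might classify); this is not any vendor’s implementation, just a toy example of learning from examples.

```python
import math

# Toy "training data": each sample is (ambient_noise_db, motion_events_per_min)
# paired with a known label. Real systems would use far more data and features.
training_data = [
    ((30.0, 0.0), "empty"),
    ((45.0, 2.0), "meeting"),
    ((55.0, 5.0), "meeting"),
    ((32.0, 0.5), "empty"),
]

def predict(sample, data):
    """1-nearest-neighbor: label a new sample like its closest known example."""
    nearest = min(data, key=lambda pair: math.dist(sample, pair[0]))
    return nearest[1]

# A new, unseen reading is classified by similarity to past examples.
print(predict((50.0, 4.0), training_data))  # -> "meeting"
```

Even this tiny example captures the core loop the paragraph describes: the system is never given explicit rules for “meeting” versus “empty”; it generalizes from labeled examples. Deep learning replaces the distance comparison with layered neural networks, but the input-data-to-prediction pattern is the same.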

Stay tuned for part two of this AI trilogy, in which we will explore types of AI and how they do what they do.
