Features

AI – It’s Probably Not What You Think

By Jeff Goldfinger, XtraMile Training and Development

As a computer scientist by education, I tend to follow the latest technology trends. As an aerospace, defense, and security industry consultant, I restrict myself to those that are meaningful in our B2B and B2G domains. As a business development educator and professional development coach, I then must convert the often cryptic language of tech products into actionable recommendations for my clients.

Since the debut of ChatGPT and Tesla’s recent preview of version 12 of its Full Self-Driving (FSD v12) software, the narrative around incorporating AI into the customs community has reached a fever pitch. Yet, while reading the wide spectrum of questions being leveled at the private sector by public sector program managers and procurement officials worldwide, I felt compelled to write this article to both educate and (re)set expectations.

A Brief History of AI Time
The first formal mention of computer-generated intelligence came from Britain’s Alan Turing 75 years ago. Although the science fiction of the 1950s and ’60s anticipated a future full of AI assistants (e.g., Lost in Space, Star Trek, 2001: A Space Odyssey), the decades since Minsky and Edmonds built the first neural network computer in 1951 have seen only small, evolutionary steps.

Until, of course, ChatGPT, built on GPT 3.5, went viral upon its release in November 2022.

To truly understand and exploit the power of AI, it might first be useful to compare how computers learn to how humans learn. In both cases, there are three essential components—signals from the outside world (data), thinking to make sense of the signals (algorithms), and a processor that performs the sensing calculations (compute).

For all of society’s rapid development over the past 5,000 years of recorded history, the algorithms in our brains have hardly changed since the emergence of imaginative humans around 50,000 years ago. Nor has our brain’s compute power changed in the 250,000 years or more since our species first emerged on the African plains.

Not so for computers. Since the invention of the modern transistor-based computer, processing power has doubled roughly every 18 to 24 months, in accordance with Moore’s Law. And, while human memory capacity remains stagnant, digital data storage has become exponentially cheaper in accordance with Wright’s Law, allowing all manner of public and private institutions to collect huge volumes of user data.

The Human and AI Learning Ladder
As infants, we first learn to make sense of the world by observing our surroundings without doing much more than collecting all the inputs from our five senses and putting them into distinctive chunks—how many things move or don’t move, taste good or bad, make us cry or make us laugh. This is our brain’s way of describing the world we inhabit.

Business operations are no different. The infant stage of computers in business allowed the development of “dashboards” to display everything that had happened: how many things were sold or not sold, made better or made worse, broken vs. repaired, cost money vs. made money. Today, there are all manner of computer-based tools to organize your dashboards, such as Microsoft’s Power BI, Salesforce’s Tableau, and Google’s Looker. Just about every profitable business of any size has taken this first step on the learning ladder, called “Descriptive” analytics: “What happened in my organization?”
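To make “Descriptive” concrete, here is a minimal sketch in Python (using pandas, with invented products and figures) of what any dashboard ultimately does: aggregate records of what has already happened and display the totals.

```python
import pandas as pd

# Hypothetical sales records -- the raw "signals" a business collects
sales = pd.DataFrame({
    "product": ["scanner", "scanner", "software", "software", "service"],
    "region":  ["EU", "US", "EU", "US", "US"],
    "units":   [12, 30, 150, 220, 45],
    "revenue": [600_000, 1_500_000, 75_000, 110_000, 90_000],
})

# Descriptive analytics: summarize what happened, nothing more
dashboard = sales.groupby("product").agg(
    total_units=("units", "sum"),
    total_revenue=("revenue", "sum"),
)
print(dashboard)
```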

Sidenote: why do you think they call these business tools dashboards? A car’s instrument panel, its dashboard, is a BI tool. It’s telling you what has happened with your car: how many miles you’ve driven, how much fuel you’ve burned, how hot or cold the engine is, etc.

As children, after we learned to walk, our brains started anticipating what would happen next. If I touch that hot stove, I’ll burn my hand. If I run too fast, I might fall. If I eat that rotten tomato, I’ll get a stomachache. There are now tools in everyday life and business that are likewise designed to predict the future. Cars can tell us not only how many miles we’ve driven but how many more miles we can drive until the gas tank runs dry or the battery pack is exhausted. Retailers feed their Descriptive BI data into algorithms that predict how many items to stock and at what price. Publicly traded companies routinely forecast their future earnings and make that guidance public.

This is called the Predictive stage of AI learning: “What is likely to happen?” Predictive algorithms are rarely, if ever, 100% accurate, so the goal of every organization is to train the algorithms to keep shrinking the error bars. This can be done manually, by financial analysts tweaking the formulas in their spreadsheets, or through machine learning, where the algorithms learn on their own from feedback loops fed by the Descriptive BI tools.
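As a rough sketch of that feedback loop (assuming Python with scikit-learn and a made-up demand series), the cycle is: fit a model on the past, measure the error of its forecasts against new actuals, then fold those actuals back in so the error bars keep shrinking.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical monthly demand history (the Descriptive data)
months = np.arange(24).reshape(-1, 1)
demand = 100 + 5 * months.ravel() + np.random.default_rng(0).normal(0, 8, 24)

# Predictive analytics: fit on the first 18 months, forecast the rest
model = LinearRegression().fit(months[:18], demand[:18])
forecast = model.predict(months[18:])

# The feedback loop: compare forecasts to actuals, then retrain on everything
print("error before retraining:", mean_absolute_error(demand[18:], forecast))
model = LinearRegression().fit(months, demand)  # fold new actuals back in
```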

As teenagers, we start developing a sense of autonomy: not only can we predict what might happen, we can start planning which choices to make. For example, my son’s descriptive brain tells him that he has had a test every month in science class, so his predictive algorithm lets him know he will likely have a test next month and should probably study to improve his outcome: a prescription to achieve a better result (while avoiding his father’s ranting to do the same).

Data scientists call this the Prescriptive stage of AI learning, where the algorithm answers the question: “What should I do about what’s going to happen?” As a simple example, many cars have a “Get Service Now” light that illuminates when the car’s Descriptive and Predictive algorithms detect signals that are abnormal and troubling. As a more sophisticated example, online retailers and streaming video services now regularly offer “Recommended for You” choices based on your prior buying habits (Descriptive BI), how accurate their prior predictions were, and a combing-through of similar data from others just like you (age, level of education, where you live), drawn from the personal profiling data they collect or purchase from data aggregators.
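A toy illustration of that prescriptive step (all names and purchase histories invented, with a deliberately crude similarity measure): find the customer most like you, then recommend what they bought and you haven’t.

```python
# Toy prescriptive step: recommend items bought by similar customers.
# All names and purchase histories here are invented for illustration.
purchases = {
    "you":   {"gloves", "scanner_manual"},
    "alice": {"gloves", "scanner_manual", "calibration_kit"},
    "bob":   {"boots", "flashlight"},
}

def recommend(target: str) -> set[str]:
    mine = purchases[target]
    # Similarity = size of the overlap between purchase histories
    peers = sorted(
        (p for p in purchases if p != target),
        key=lambda p: len(purchases[p] & mine),
        reverse=True,
    )
    # Prescribe what the most similar peer bought that the target hasn't
    return purchases[peers[0]] - mine

print(recommend("you"))  # -> {'calibration_kit'}
```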

As adults, around age 25 or so, our brains are fully formed and the “Executive Function” part of our prefrontal cortex is finally able to, as neuroscientists would say, “do the hard but right thing,” such as not driving too fast on snow-covered roads or, in my case, eating more salad instead of pasta. For computers, the best example of this level of AI is the development of fully autonomous, driverless automobiles. While some cars can now automatically parallel park themselves, a few companies like Tesla, Cruise, and Waymo have demonstrated the capability to enter a destination and have the car navigate there with little or no intervention.

We have now reached the Semantic level of AI, where the computer reacts to a command within a particular social context. The car cannot just drive straight from point A to point B; it must observe traffic lanes, stop signs, crossing traffic, obstacles, etc. This is what made ChatGPT v3.5’s debut so significant. It was the first time you could ask the computer to do something practical (write me a business plan, correct my resume) in the Semantic context of feeding it your business idea or your employment history.
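In practice, that Semantic interaction looks something like the sketch below, shown here with the openai Python client as one example; the model name and prompts are placeholders, not a recommendation. The point is that the context you supply is what turns a generic command into a useful, situated answer.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Semantic AI: the same generic command ("write a business plan") plus
# *your* context (the business idea) yields a situated, practical answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a business-planning assistant."},
        {"role": "user", "content": "Write a one-page business plan for a "
                                    "mobile cargo-scanning service at land "
                                    "border crossings."},
    ],
)
print(response.choices[0].message.content)
```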

But just as fully grown humans make mistakes, so does Semantic AI. Waymo and Cruise vehicles still get stuck in the middle of roads when their software gets confused, and Tesla’s FSD still requires occasional intervention. We humans (mostly) learn from our mistakes, as do the more sophisticated AI algorithms. GPT version 4.0 has already arrived, correcting some of version 3.5’s errors, and Tesla’s FSD v12 entirely abandoned a rules-based approach in favor of 100% neural networks, far more akin to human learning than any previous version.

I See an AI in Your Future
As we humans mature from infancy to adulthood, learning becomes increasingly complex, requiring access to more and more data and more and more energy (best to budget now for your teenager’s voracious appetite). While our brains represent only about 2% of our body’s weight, they consume nearly 20% of its energy (via oxygen and glucose). Similarly, at each stage of the learning ladder (Descriptive BI, Predictive, Prescriptive, and Semantic), the amount of data consumed and the computational resources required grow exponentially. This is why we can find hundreds of Descriptive BI tools on the market but fewer than a handful of Prescriptive ones.

What, then, does all this mean for the customs community? There are four key challenges that must be addressed.

First, the computational power problem is not going to be solved by the limited resources of government agencies and customs equipment manufacturers. Let the chip manufacturers and cloud computing providers solve that for us.

Second, leverage the extraordinary advantage computers have over humans in sensing. While the five human senses have a very narrow range and limited sensitivity, digital sensing systems can scan the entire electromagnetic spectrum from radio waves to gamma rays, cover sound from infrasound to ultrasound, and employ airborne particle sniffers more sensitive than even a bloodhound’s nose (which is already 10,000 times more sensitive than a human’s). Vast amounts of text and imagery at ports of entry are already readily available. The key question is: what is being done with it all?

The community consensus seems to be to keep the data in its existing silos. Vendors stick with proprietary data formats, and customs agencies are timid about cross-border data sharing agreements. This has led to suboptimal resource utilization at POEs, many of which still operate under the assumption of assigning one agent to one scanner. That, in turn, has led to fractional inspection rates (1-5% at most crossings) or bottlenecks that severely delay the flow of commerce.

Third, AI algorithms at any of the three higher levels of autonomy (Predictive, Prescriptive, and Semantic) require not only large amounts of data; the data must also be curated. Imagine training a car’s autonomous driving on examples from crappy drivers. When Tesla first allowed consumers to install beta versions of FSD, it required drivers to achieve a specific “safety score” before accepting their payment, so that the algorithms trained on high-quality, real-world driving examples.

Fourth is the matter of setting expectations. While there are many Descriptive BI tools in use today by customs agencies, there are far fewer Predictive tools, perhaps only a handful of Prescriptive tools, and zero Semantic tools. By some estimates, the ability to replace human agents with Semantic AI is at least 3-5 years away and more likely a decade or more. Don’t believe any vendor that tells you otherwise.

Accelerating the Ascent up the AI Learning Ladder
This is where I, as an industry consultant, tip my hat to US company S2 Global and their CertScan AI integration platform. They have made, in my opinion, the absolutely correct strategic decision to produce an AI-enablement platform. To borrow from an old ad campaign: “They don’t make the AI algorithms; they make them better.” By providing vendors with access to millions of quality integrated data packages (text + imagery) and then validating the AI algorithms once in use, they have demonstrated that 100% inspection rates, increased revenue collections, and improved operator performance can be achieved today.