by Tristam Constant, International Mission Operations, Anduril
Recent advances in the manufacturability, sophistication, and affordability of small unmanned aerial systems (sUAS) – “drones” in more colloquial parlance – have complicated the border security mission. On one hand, unmanned – even autonomous – aerial systems can radically expand the situational awareness of border security forces, a particular boon for those agencies experiencing manpower shortages and those tasked with patrolling austere environments. On the other hand, the government hardly has a monopoly on the use of these drones: drone users range from low-level hobbyists to sophisticated criminal organizations. The most sophisticated actors have demonstrated an ability to conduct surveillance missions with drones to discern the operational schedules, inventory, and practices of government agencies, as well as to transport goods (e.g., narcotics) across borders in clandestine fashion. The power of small drones to shape the battlefield has become readily apparent in recent years, but the threat they pose in a border environment has more recently come to light.
The drone threat is made especially pernicious by a variety of factors, including:
• Attritability: sUAS are inexpensive, unmanned, and hence attritable. A good-quality small DJI will set you back a thousand pounds or less, a trivial sum compared to the value it can provide. This means that the threat can be near-constant and the value of detecting and disabling an individual drone only minor. It also means that malign actors might utilize large numbers of drones simultaneously for a given purpose – in the understanding that a large fraction of those drones will be detected and potentially destroyed – to stretch their adversaries’ resources.
• Adaptability: Commercially available drones, which come in a wide variety of shapes and sizes, can be customized with relative ease by technically savvy actors. As a result, the capabilities of the unmanned aerial systems a border force must counter can evolve on a monthly, weekly, or even daily basis.
• Resistance to legacy technology: The small form factor, low-altitude flight, and manoeuvrability of small drones render most traditional air defence systems, designed for larger targets at higher altitudes, relatively ineffective. Even when traditional systems succeed in defeating a drone threat, it is often at asymmetric expense: several hundred thousand pounds for a surface-to-air missile versus a few thousand pounds for a drone. Of course, border forces are typically neither equipped with nor legally authorised to use exquisite military systems like surface-to-air missiles, but this underlines the ineffectiveness of existing technology in neutralizing the threat.
United States Marine Corps General Kenneth F. McKenzie Jr. expressed the gravity of the threat when he lamented to the U.S. Congress that “For the first time since the Korean War, we are operating without complete air superiority.”
Border security entities wishing to keep up with the drone threat are left with one solution: leverage emerging technology. Thankfully, while drone technology has improved in recent years, so has the technology capable of neutralizing it: advanced software. Specifically, advances in artificial intelligence and sensor fusion have radically expanded our ability to manage a multifarious, evolving threat.
Because much of this technology is new, government entities are grappling with complex regulatory questions governing its use. Industry and government alike should welcome such conversations and work collaboratively through them. Our shared mission – to empower border forces to regain control over the air domain – will benefit from mutual understanding of the technological possibilities available to governments, not in some abstract future, but today.
Artificial Intelligence: More Than a Buzzword
Artificial intelligence (AI) enables the most effective solutions to the drone threat available today. AI is a term used to mean many things in many contexts, but most fundamentally, an artificially intelligent system is one that can learn or be trained to carry out tasks without direct human intervention (i.e., autonomously). An example of highly sophisticated artificial intelligence would be a self-driving car, while a more rudimentary piece of AI might be a software program that can “read” basic handwriting. AI has come an incredible distance in recent years: in 2012, Google was celebrated for teaching a neural network to recognize cats in videos. Today, AI can wipe the floor with human beings at complex games like chess and Go, recognize objects with far greater fidelity than human beings, and conduct highly complex tasks like piloting planes. In the context of the drone threat, an example AI system might be able to “see” and “recognize” a drone in the distance and alert a human operator to the drone’s presence. A more complex AI-enabled system might be able to differentiate between different types of drones.
The value of AI in combating the drone threat is staggering. Consider, for example, the aforementioned problem of attritability. The attritability of small drones might embolden a criminal actor, who figures that she can launch a dozen drones at once to deliver narcotics. She might even include decoy drones that aren’t carrying a “payload” to distract the border force and waste its resources. Without artificial intelligence, manually detecting, following, and neutralizing a dozen simultaneous threats would be extremely difficult, not to mention manpower-intensive. AI can take on virtually the entire burden: detecting, tracking, and warning of drone threats. As long as it is fed ample sensor data, an AI-enabled system could track a dozen or even a hundred drones simultaneously without issue. Such a system would not get tired or bored: it would vigilantly scan the skies with the same hawkish resolve by day or night, in heat or cold.
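To see why track count is cheap for software, consider a deliberately simplified multi-target tracker – a toy sketch, not any deployed system’s algorithm. The gate distance, coordinates, and greedy nearest-neighbour association below are invented for illustration; adding a hundred more drones just adds iterations to a loop.

```python
# Toy illustration of multi-target tracking: each radar scan is a list of
# (x, y) detections in metres, and each detection is associated with the
# nearest existing track within a "gate" distance, or opens a new track.
# All numbers are invented; real systems use far more robust association.
import math

class Tracker:
    def __init__(self, gate=50.0):
        self.gate = gate        # max metres a drone can move between scans
        self.tracks = {}        # track id -> last known (x, y) position
        self._next_id = 0

    def update(self, detections):
        """Greedily match each detection to its nearest track, else open one."""
        for x, y in detections:
            nearest, best = None, self.gate
            for tid, (tx, ty) in self.tracks.items():
                d = math.hypot(x - tx, y - ty)
                if d < best:
                    nearest, best = tid, d
            if nearest is None:           # no track close enough: new drone
                nearest = self._next_id
                self._next_id += 1
            self.tracks[nearest] = (x, y)
        return self.tracks

tracker = Tracker()
tracker.update([(0, 0), (500, 500)])   # two drones appear on scan one
tracker.update([(10, 5), (510, 495)])  # both move; each updates its own track
print(len(tracker.tracks))             # prints 2: two tracks, not four
```

The point is not the algorithm but the scaling: where a human operator saturates at a handful of contacts, the software above is indifferent to whether the scan contains two detections or two hundred.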
This is but one example of how AI can shape drone defences. Private companies, ranging from social media giants to defence start-ups, are pumping billions of pounds into artificial intelligence research and development every year. Just as drone technology is improving rapidly, so is AI. And those improvements don’t require expensive installations. One of the most powerful aspects of a software-first, AI-enabled system – e.g., a Tesla car – is its ability to receive software updates remotely and continuously. A developer in London can push software updates to systems deployed in Dover with the push of a button.
Hardware Shaped by Software
AI has not only reshaped how we think about the capabilities of advanced software – it has prompted a revolution in hardware design, too. The commercial sector has realized this far more rapidly than the government. Take any significant commercial technology from the past ten years, and chances are that you will find that everything from sensor selection to control surface design is optimized for software-driven capabilities, such as distributed computing, data collection and analytics, machine-to-machine command and control, autonomy, and sensing.
Why is this? Take our AI-enabled system above, for example. It is autonomously detecting drones – but how is it doing this? It is presumably ingesting sensor data and spitting out predictions as to whether the data it sees corresponds to what it knows about a drone. There’s no licking your finger and holding it to the wind: it is a highly calibrated system trained on huge volumes of raw data. Because of this, it needs substantially lower-quality data than a human being to make the same determination. A camera running an advanced computer vision program, for example, might need a fraction of a second’s glance at a drone a mile away to correctly identify it. A human operator – even assuming they are not tired or distracted – might need an extended look with a high-definition camera to determine if that little blip in the distance is indeed a drone threat. In a time-sensitive situation with a drone delivering contraband, that delay could be extremely costly.
But even this scenario is putting humans and machines on an artificially “fair” playing field: one sensor vs. one sensor. In reality, whereas a human being might be able to look at two, possibly three data feeds at once, an AI-enabled system can ingest data from dozens, hundreds, or thousands of sensors at once. And because the costs of high-end sensors scale super-linearly with capability (i.e., a radar three times better than another radar will be more than three times as expensive), networks of cheap sensors are cost-savers compared to exquisite systems. These sensors can be of different kinds, too: an AI-enabled system could algorithmically corroborate a radar reading with a computer vision hit, a radio frequency detection, and an infrared reading all at once. Such a system could completely relieve a border force of the need to have humans staring at sensor feeds and empower it to reallocate personnel where they are most effective.
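One simple way to corroborate readings from different sensor types is to treat each sensor’s detection confidence as independent evidence and combine them probabilistically. The sketch below is a minimal naive-Bayes odds-product fusion, assuming each sensor reports an independent confidence between 0 and 1; the sensor names and values are illustrative, not drawn from any real system.

```python
# Minimal sketch of multi-sensor corroboration: fuse independent per-sensor
# detection confidences into one probability via a naive-Bayes odds product
# against a uniform prior. Sensor names and confidences are invented.

def fuse_detections(confidences):
    """confidences: dict of sensor name -> detection confidence in (0, 1)."""
    odds = 1.0
    for p in confidences.values():
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid division by zero
        odds *= p / (1 - p)              # multiply independent evidence
    return odds / (1 + odds)             # convert odds back to probability

readings = {
    "radar": 0.70,      # plausible track, small radar cross-section
    "camera": 0.80,     # computer-vision hit on a distant object
    "rf": 0.60,         # possible control-link signature
    "infrared": 0.55,   # weak thermal return
}

fused = fuse_detections(readings)
print(f"fused confidence: {fused:.3f}")  # prints: fused confidence: 0.945
```

Note how four individually inconclusive readings combine into a high-confidence detection: that is the value of corroboration across modalities, and it is a calculation software can run continuously across every track in the sky.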
To manage a rapidly evolving hardware threat in the sky, border forces must think software-first. That isn’t just a matter of buying software and AI-enabled systems – it goes to the core of how governments procure technology. There is a reason that commercial technology companies are running laps around companies building technology for the government: they can fluidly change how they develop, price, and deliver new systems. Hence, just as the ways in which technology is developed have changed, so must the way the government purchases it. Manifold improvements could be made to antiquated acquisition processes, which are broadly similar across geographies and government entities, including:
• Experimenting with pricing and delivery models in line with commercial standards (e.g., rewarding continuous delivery of software updates, using subscription and “as-a-service” pricing, discouraging “cost-plus” contracts, and more);
• Designating software companies as prime contractors on large programs;
• Modernizing intellectual property requirements to account for how software is developed;
• Most importantly, taking software seriously. The most innovative companies in the world, which hoover up our brightest science and engineering students, are software-first. Building world-class software that can manage a complex UAS threat is an engineering challenge equivalent to building exquisite hardware systems – the government should price it as such.
Indeed, the AI and other advanced software developed in the commercial sector is capable today of managing the UAS threat. Industry is eager and ready to respond to the call. The government, however, must show it is willing to learn. Falling back on the hardware-centric model of the 20th century cannot address a 21st century threat like drones.