The U.S. Air Force Research Laboratory’s (AFRL) Rome, N.Y., branch is seeking innovative ideas from small businesses on artificial intelligence (AI) and “next generation distributed command and control” to aid the Air Force in “contested environments.”

An AFRL business notice laid out an umbrella contract worth $99 million through 2028. The plan budgets $40.9 million in awards for fiscal 2024 and 2025.

“Much of the DoD’s AI is currently designed and built by data scientists in pristine ‘lab-like’ environments with low-stress settings, where compute resources are plentiful, environments are static, and response timelines are akin to those in academia and industry,” AFRL said. “In these settings, there are few competing factors that engineers must reconcile – the mantra is always ‘the more, the merrier’ with regards to data, GPUs (graphics processing units), epochs, and performance. In contrast, battle managers of AI must operate in austere environments with limited resources. The team must quickly analyze a complex trade space of engineering options to balance model performance with available power, compute cycles, policy restrictions, and response deadlines. For example, operators might choose to train smaller models with fewer parameters in order to accommodate short timelines at the expense of robustness and generality.”
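The trade space AFRL describes can be sketched as a constrained selection problem. The following is a minimal, purely illustrative Python sketch; the model names, training times, and robustness scores are assumptions for illustration, not drawn from the AFRL notice.

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    parameters_m: int        # millions of parameters
    train_hours: float       # estimated retraining time on available compute
    robustness_score: float  # 0..1, e.g., accuracy under distribution shift

def select_model(candidates, deadline_hours, power_budget_hours):
    """Return the most robust model that can be retrained within both
    the mission deadline and the available power/compute budget."""
    limit = min(deadline_hours, power_budget_hours)
    feasible = [m for m in candidates if m.train_hours <= limit]
    if not feasible:
        raise RuntimeError("no model fits the mission constraints")
    return max(feasible, key=lambda m: m.robustness_score)

candidates = [
    CandidateModel("detector-large", 300, train_hours=48, robustness_score=0.95),
    CandidateModel("detector-small", 25, train_hours=6, robustness_score=0.80),
]

# With only 8 hours before mission execution, the smaller, less robust
# model is the only feasible choice: robustness traded for timeliness.
chosen = select_model(candidates, deadline_hours=8, power_budget_hours=24)
print(chosen.name)  # detector-small
```

With a relaxed deadline the same selection would return the larger model, which is the "more, the merrier" lab default the notice contrasts against.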

U.S. Navy researchers recently discussed major roadblocks to the service’s effective use of AI in future conflicts, including incomplete and misleading AI models/data that can stem from enemy deception (Defense Daily, Feb. 28).

“AI is usually sandwiched within software stacks or embedded within complex hardware systems,” AFRL said. “Although end users may have direct access to AI inferences (output), such as bounding boxes for object detection and blobs of natural language in large language models, the underlying models are not typically available for inspection or replacement. Therefore, battle management of AI requires a new kind of software architecture that embraces portability and composability of AI models. Operators need white-box visibility into AI-based systems and new software interfaces to query, publish, and deploy ad hoc models onto platforms during mission execution. With the right user training, interfaces, and control processes that provide white-box insight for both operators and engineers, we propose that AI can be managed much like other physical assets.”
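The query/publish/deploy interface the notice calls for might look, in spirit, like a model registry with inspectable metadata. This is a hypothetical sketch, not AFRL's design; the class, task names, and metadata fields are invented for illustration.

```python
class ModelRegistry:
    """Toy registry giving operators white-box visibility into which
    model is deployed for a task, plus the ability to hot-swap it."""

    def __init__(self):
        self._models = {}    # task name -> callable model
        self._metadata = {}  # task name -> inspectable provenance info

    def publish(self, task, model, metadata):
        """Make a model available for a task, with inspectable metadata."""
        self._models[task] = model
        self._metadata[task] = metadata

    def query(self, task):
        """White-box inspection: what is deployed and how it was trained."""
        return self._metadata.get(task)

    def infer(self, task, observation):
        return self._models[task](observation)

registry = ModelRegistry()
registry.publish("object-detection",
                 model=lambda img: ["bounding-box"],  # stand-in model
                 metadata={"trained_on": "clear-weather imagery"})

# Mid-mission, an ad hoc model for degraded imagery replaces the incumbent.
registry.publish("object-detection",
                 model=lambda img: ["bounding-box", "low-light"],
                 metadata={"trained_on": "low-luminosity imagery"})
print(registry.query("object-detection")["trained_on"])
```

The point of the sketch is the composability the notice asks for: the underlying model is addressable and replaceable through the interface rather than buried in a hardware stack.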

The AFRL business notice said that the military likely requires AI Courses of Action (COAs), as real-world wars do not conform to AI models. For example, bad weather may degrade a drone's imagery and thereby the performance of an AI object detection model. Under an AI COA, a battle management AI interface officer would monitor the model and spot the anomaly, and an AI safety officer would analyze the risk of continued use of the AI model and any needed changes, possibly a new AI object detector "to handle low luminosity and noisy images resulting from inclement weather," AFRL said.

Such AI interface officers would monitor drones’ computer vision models and try to spot “AI drift” anomalies.
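One simple way such monitoring could work is to track a running statistic of the model's output confidence and flag drift when it sags below a baseline. The sketch below is an assumption about mechanism, not a description of any AFRL system; the window size and thresholds are illustrative.

```python
from collections import deque

class DriftMonitor:
    """Flags suspected AI drift when the windowed mean of inference
    confidence falls more than `tolerance` below the expected baseline."""

    def __init__(self, baseline_confidence, window=50, tolerance=0.15):
        self.baseline = baseline_confidence
        self.window = deque(maxlen=window)  # rolling window of confidences
        self.tolerance = tolerance

    def observe(self, confidence):
        """Record one inference's confidence; return True if drift suspected."""
        self.window.append(confidence)
        mean = sum(self.window) / len(self.window)
        return mean < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_confidence=0.9)

# Clear weather: confidences stay near baseline, no drift flagged.
clear = [monitor.observe(0.88) for _ in range(50)]
print(any(clear))  # False

# Inclement weather degrades imagery; confidence sags and drift is flagged.
stormy = [monitor.observe(0.55) for _ in range(50)]
print(any(stormy))  # True
```

A windowed mean rather than a single reading keeps one noisy frame from triggering a false alarm, which matters if a flagged anomaly kicks off the heavier-weight COA review described below.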

“Modern warfare has become increasingly dependent on AI-based warfighting systems that use trained models to perform tasks at speed and scales beyond human capacity,” AFRL said. “These AI-based systems can support a variety of functions such as classification of targets for ISR and control of autonomous vehicles for combat. Because models are trained a priori on data (and simulations) in an anticipatory fashion, AI-based systems encounter situations in the real world that are incompatible with training feature distributions and parameterization of employed algorithms.”

“The result is degradation to model performance that can negatively impact mission effectiveness and safety,” the lab’s notice said. “Therefore, the Air Force requires new battle management processes to monitor performance of AI-based systems and update incumbent models in response to changing battlespace conditions. In the trivial case, operators will simply repurpose a pretrained model that fortuitously fulfills unanticipated mission requirements. In the extreme case, operators will coordinate a distributed workflow, known as an AI COA, to retrain, test, and deploy new models in line with mission execution, so that dependent systems can continue to function as intended with minimal loss of service. This process to detect shifts in performance of AI-based systems and adapt models for new environments is analogous to traditional battle management during conflict, where assets are provisioned and dynamically revectored to prosecute new targets in short order.”
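The detect, retrain, test, deploy workflow the notice calls an AI COA can be outlined in a few lines. Everything below is a hedged sketch under stated assumptions: the accuracy floor, model names, and step functions are placeholders for whatever a real battle management process would supply.

```python
def run_ai_coa(current_model, field_accuracy, retrain, test, deploy,
               accuracy_floor=0.7):
    """Return the model left in service after the COA runs.

    Mirrors the notice's workflow: if fielded performance has shifted
    below tolerance, retrain a candidate, test it, and deploy it only
    if it passes, so the dependent system keeps functioning."""
    if field_accuracy >= accuracy_floor:
        return current_model          # incumbent model still adequate
    candidate = retrain()             # e.g., a low-luminosity detector
    if not test(candidate):
        return current_model          # failed validation: keep incumbent
    deploy(candidate)
    return candidate

deployed = []
result = run_ai_coa(
    current_model="detector-clear-weather",
    field_accuracy=0.42,                  # degraded by bad weather
    retrain=lambda: "detector-low-light",
    test=lambda m: True,                  # candidate passes validation
    deploy=deployed.append,
)
print(result)    # detector-low-light
print(deployed)  # ['detector-low-light']
```

The test-before-deploy gate is the part that makes this battle management rather than blind hot-swapping: a candidate that fails validation leaves the incumbent in place, trading freshness for continuity of service.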