The Navy’s top officer recently said he thinks the U.S. needs more regulation of how artificial intelligence (AI) is deployed in order to help build trust and manage risk in the new technology.

“So in the applications that we’ve used in the Navy, whether it’s been business systems, or manpower, or logistics, or operationally, particularly with unmanned most recently, the bottom line for us has been the issue of trust,” Chief of Naval Operations Adm. Mike Gilday said during a panel discussion with the other service chiefs at the Council on Foreign Relations on May 22.

The U.S. military service chiefs talk during a panel discussion at the Council on Foreign Relations on May 22, 2023. (Image: Screenshot of CFR livestream event)

In response to recent conversations with people on the business side of AI who are sometimes frightened by its possibilities, he said that understanding how the algorithms work and what the potential pitfalls are is important to building trust through governance.

“I think we’re going to need more governance, whether it’s inside the U.S. government or certainly within DoD, in terms of making decisions, how we apply those capabilities, what kind of risk we’re assuming. And that’s, I think, a worthwhile approach.”

Gen. Charles Q. Brown, chief of staff of the Air Force, agreed on the importance of trust, managing the risk of AI being used for “nefarious purposes,” and establishing norms for its use.

“I think about when we first started doing cyber, and we put cyber in a PowerPoint slide and everybody talked about it, it was going to be the panacea for everything. But it took us about 10 to 15 years to really understand it. For AI, it’s been on PowerPoint slides probably a lot less time, when it’s moving so quickly today, that words are going to go and that’s the thing I think we got to be concerned about, of how it can be used against us or how our AI, if you don’t trust it, it could give you a bad vector on an area.”

However, Brown also saw AI as a “two-sided coin,” with the opportunity to make decisions faster or cull through data more quickly.

“So you can actually, where do you need to prioritize, you know, as an individual to make decisions or employ weapons.”

Commandant of the Marine Corps Gen. David Berger agreed that the information-processing capability of AI will be very useful in the future.

He said tactical commanders have known for years that they are becoming oversaturated with information, which ends up slowing decision making.

“Not because you didn’t have enough awareness, because you had too much, couldn’t sort through it. So the ability to sort through that quickly, and get to the key elements, figure out what they are, looking at the options that make sense. Those are things that for tactical units make absolute sense.”

Gen. James McConville, chief of staff of the Army, said he sees AI helping especially in logistics and predictive maintenance, as well as in sorting through data, as Berger described.

“Artificial intelligence can take a lot of data then kind of take a look at it when parts need to be done in an assistance way.”

Given all the information coming into operations centers, McConville predicted that in the future multiple sensors will take in information on swarms of enemy systems attacking U.S. forces.

“They use algorithms to help us sort that out and then get through an integrated battle command system and pick the right arrow, if you will, to engage those systems. That is where I think artificial intelligence, we’re working at right now, is going to help. And it’s going to converge,” he said.

However, McConville argued that he still sees humans staying in the loop in most cases to make those kinds of decisions about how to engage threats.

“I still see having a person in the loop in most cases, making those types of decisions. The difference is that it won’t be that person having to work all the way through the data, they will get a solution, they’ll be able to take a look at it and go, yes, we want to do that, or no, we don’t want to do that. And even to aircraft and some of the things we do where you can start to get to a pilot’s associate or a driver’s associate where you’re no longer doing all the work to either fly, drive or target the system. But you still have a person in the loop to make those type of decisions.”