The Pentagon said Monday it has adopted a set of ethical principles for artificial intelligence capabilities, with officials viewing the new guidelines as an opportunity to bolster collaboration with non-traditional partners on AI-related projects.

Defense Secretary Mark Esper approved the new ethical guidelines proposed by the Defense Innovation Board (DIB), which includes executives from Google [GOOG], Facebook [FB], and Microsoft [MSFT] and advises the Pentagon on major technology efforts, after a 15-month study gathering input from the private sector, defense experts and academia.

U.S. Department of Defense Chief Information Officer Dana Deasy and the Director of the Joint Artificial Intelligence Center, U.S. Air Force Lt. Gen. John N.T. Shanahan, hold a round table meeting at the Pentagon in Washington, D.C., Feb. 12, 2019. (DoD photo by U.S. Army Sgt. Amber I. Smith)

“Today’s announcement by the secretary is a significant milestone. It lays the foundation for the ethical design, development, deployment and the use of AI by the Department of Defense. These principles build upon the department’s long history of ethical adoption of new technologies,” DoD CIO Dana Deasy told reporters.

The Pentagon has stood up an executive steering committee to lead implementation efforts for the five broad principles, which Deasy said are intended to encompass “responsible, equitable, traceable, reliable and governable” AI initiatives.

Air Force Lt. Gen. Jack Shanahan, director of DoD’s Joint Artificial Intelligence Center (JAIC), said the guidelines offer the opportunity for the U.S. to set international norms in the AI space as it competes with countries such as China and Russia in fielding new capabilities.

“I would suggest that some authoritarian nations are less concerned about high performance of algorithms and more okay with accepting they’ll make some mistakes and then move on. We will not field an algorithm until we are convinced it meets our level of performance and standards. If we don’t believe it can be used in a safe and ethical manner, then we won’t field it,” Shanahan said.

Shanahan noted the guidelines’ ability to act as a “conversation starter” with companies that don’t typically do business with the DoD and referenced his experience leading Project Maven, which faced scrutiny after Google dropped its contract following pushback from employees about the company’s tools being used for an AI drone imaging project.

“What our team also found in talking with the big companies is nobody is very far along in this area of ethics implementation. There’s been a lot of great talk about it, but everybody finds there are some challenges when you actually take principles and apply them to every aspect of the AI field,” Shanahan said. “I hope it shows that we have more in common than most people might suspect by hearing some of the stories about what we did in the past with Project Maven.”

The National Security Commission on Artificial Intelligence (NSCAI) in November submitted a report to Congress that detailed the likelihood of DoD falling behind China in AI if the department did not better harness industry’s technological progress (Defense Daily, Nov. 5).

Deasy and Shanahan said the final version of the principles remains largely unchanged from the guidelines the DIB submitted in October, and specifically addressed a debate the group had over specific language on having an “off switch” for AI systems that start performing unintended actions (Defense Daily, Oct. 31).

“Yes, it is preserved in there. And in some ways, it’s even a little broader than it was in the DIB language. It says now ‘possessing the ability to detect and avoid unintended consequences and to disengage or deactivate deployed systems that demonstrate unintended behavior.’ That could be either human or automated ways of doing that,” Shanahan said.