Lawmakers are likely to weigh the National Security Commission on Artificial Intelligence’s (NSCAI) recommendation to incorporate a classified technology annex in the National Defense Strategy, though that proposal may not be included in this year’s fiscal 2021 National Defense Authorization Act (NDAA).

On July 22, NSCAI submitted its second-quarter recommendations, of which the classified annex was one of 35, to the White House and Congress. The House passed its version of the NDAA, H.R. 6395, a day earlier, and the Senate passed its version, S. 4049, on July 23.

The classified annex recommendation “is not in this year’s bill because it was part of a set of recommendations that came out too late to be included,” according to a Democratic staffer with the House Armed Services Committee (HASC).

“We will definitely look closely at this and other recommendations for next year’s bill,” the staffer wrote in an email.

The NSCAI recommendations “in general are a top priority” for Rep. Jim Langevin (D-R.I.), the chairman of the HASC panel on intelligence and emerging threats and capabilities, per the staffer.

In general, conferees on a final NDAA bill only consider the provisions in the House and Senate versions and resist efforts to include proposals that were not in either bill.

The Washington, D.C.-based Electronic Privacy Information Center (EPIC) said that a classified technology annex would reduce needed oversight of DoD’s employment of AI.

“Transparency is a bedrock principle of trustworthy AI,” John Davisson, EPIC’s senior counsel, wrote in a Sept. 9 email. “The public has a right to know the details of any AI system that collects and uses personal data. These systems do not belong in a classified annex. Regardless of classification level, AI systems must adhere to strict privacy and human rights limitations. The NSCAI should urge Congress to enact robust AI safeguards prior to the rapid adoption of AI tools—classified or otherwise. These restrictions should be based on the Universal Guidelines for AI and the OECD Principles that the United States endorsed last year.”

“The NSCAI has already made numerous legislative recommendations,” Davisson wrote. “If the commission is serious about the ‘ethical and responsible’ use of AI, it should tell Congress to establish meaningful AI safeguards too.”

EPIC has been waging a campaign since last year to obtain the full record of NSCAI deliberations and has prevailed in some of its Freedom of Information Act (FOIA) requests to have those records released.

NSCAI said that such a classified annex would allow DoD to accelerate the development of needed AI and other disruptive technologies.

“DoD must have enduring means to identify, prioritize, and resource the AI-enabled applications necessary to fight and win,” per the recommendation. “To meet this challenge, the NSCAI recommends that DoD produce a classified technology annex to the National Defense Strategy [NDS] that outlines a clear plan for pursuing disruptive technologies that address specific operational challenges. We also recommend establishing mechanisms for tactical experimentation, including by integrating AI-enabled technologies into exercises and wargames, to ensure technical capabilities meet mission and operator needs.”

NSCAI said that a classified technology annex to the NDS “focused on development and fielding is more than a simple list of technologies.”

“The annex should identify emerging technologies and applications that are critical to enabling specific capabilities for solving the operational challenges outlined in the strategy,” per NSCAI. “The main objective of the annex should be to chart a clear course for identifying, developing, fielding, and sustaining those critical emerging and enabling technologies, and to speed their transition into operational capability.”

The Secretary of Defense, supported by the Director of National Intelligence, “should develop a comprehensive classified technology annex to the NDS focused on development and fielding by January 2021,” per the recommendation. “The annex should lay out roadmaps for designing, developing, fielding, and sustaining critical technologies and applications necessary to address the specific operational challenges identified in the NDS. DoD should have primary ownership of the document. The department should also establish a reporting structure and metrics to monitor implementation of the annex to ensure each effort is resourced properly and progressing sufficiently. The annex should be reviewed annually and ensure both guidance and implementation iterate at the pace of rapidly changing technologies.”

Martijn Rasser, a former CIA analyst and a senior fellow in the technology and national security program at the Center for a New American Security, agreed with NSCAI that a classified annex could accelerate key technologies and said that such an annex would not necessarily weaken federal oversight of DoD’s use of AI.

“The NSCAI recommendation for a classified annex is key because it aligns strategic priorities with investment decisions for disruptive capabilities,” Rasser wrote in a Sept. 9 email. “That, in combination with the emphasis on multi-agency coordination and collaboration should help to accelerate technology development. This is all about implementation. Doing so in a classified format is important because you don’t want to tip off your adversaries on how you look to address operational challenges and vulnerabilities, particularly when you create technology breakthroughs.”

Rasser said that “the relevant congressional committees will have proper oversight if the NSCAI recommendation is enacted as written in the Q2 memo because the commission provided detailed guidelines for designing processes, requirements-setting, and execution.”

“There should be no cause for concern about transparency into the use of AI for military purposes if DoD demonstrates the work covered by the annex conforms with the department’s ethical principles for AI and DODD 3000.09,” per Rasser.

DoD Directive 3000.09, signed in 2012 by Ashton Carter, then the deputy secretary of defense, outlines Pentagon policy on the use of autonomous technologies in manned and unmanned weapon systems. While DoD officials have said that there are no plans for fully autonomous weapons, a Congressional Research Service (CRS) analyst said that the directive does not prohibit U.S. development and use of Lethal Autonomous Weapons Systems (LAWS).

“Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS,” Kelley Sayler, a CRS analyst, wrote in a report last December. “Although the United States does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the United States may be compelled to develop LAWS in the future if potential U.S. adversaries choose to do so. At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.”

U.S. Air Force acquisition chief Will Roper said that the service integrated a flagship DoD AI effort, Project Maven, into the service’s latest Advanced Battle Management System (ABMS) “on-ramp” exercise on Sept. 3. The exercise featured a homeland defense scenario in which a dummy cruise missile was shot down after AI rapidly shortened the targeting cycle, the Air Force said.

“The on-ramp showed the ability to integrate AI tools, such as Maven, to generate courses of action, which were going to commanders and decision makers to use,” an Air Force spokesman said on Sept. 9. “Combatant commanders were impressed with the capability, and it seemed they were happy to take that capability as it is today. We were using a variety of intel-type data feeds and AI aids, like Project Maven, to be able to go through that data very quickly at machine speed, and throughout the day we saw that they were able to build up that pattern of life of a simulated adversary who might then go on and try to attack the homeland with a cruise missile. The importance of that is it’s giving us long lead preparation of the battlespace in terms of understanding our adversary and their actions.”

Project Maven has looked to develop an AI tool to analyze full-motion video (FMV) surveillance footage collected by unmanned aircraft and decrease the workload of intelligence analysts.

Google [GOOGL] was the prime contractor for Project Maven but dropped out in 2018 after pushback from employees over the company’s tools being used for an AI drone imaging effort. California-based big data analytics company Palantir Technologies, co-founded and chaired by billionaire venture capitalist Peter Thiel, has assumed Google’s role, Business Insider has reported.

At Project Maven’s inception on May 20, 2017, then-Deputy Defense Secretary Robert Work tasked the Algorithmic Warfare Cross-Functional Team (AWCFT), under the Undersecretary of Defense for Intelligence, with automating the Processing, Exploitation, and Dissemination (PED) of tactical and mid-altitude full-motion video from drones in support of operations to defeat ISIS insurgents.

A recent paper for the Modern War Institute at West Point by Tufts University Prof. Richard Shultz and the commander of U.S. Special Operations Command, Army Gen. Richard Clarke, said that Project Maven could serve as a springboard to transform DoD “from a hardware-centric organization to one in which AI and ML [machine learning] software provides timely, relevant mission-oriented data to enable intelligence-driven decisions at speed and scale. When that happens, U.S. commanders will be able to gain decisive advantage over current and future enemies.” (Defense Daily, Aug. 25).