The three witnesses at an Apr. 19 hearing of the Senate Armed Services Committee’s (SASC) cybersecurity panel said that a pause in the development of cutting-edge artificial intelligence would be neither practicable nor militarily advisable.

Sen. Mike Rounds (R-S.D.), the subcommittee’s ranking member, asked the witnesses their views on an Apr. 12 open letter by the nonprofit Future of Life Institute, a watchdog group monitoring artificial intelligence, biotechnologies, nuclear technology, and climate change.

The letter, whose signatories include Elon Musk, the CEO of SpaceX, and Steve Wozniak, the co-founder of Apple, Inc. [AAPL], called for a pause in the development of artificial intelligence more powerful than GPT-4.

“I think it would be very difficult to broker an international agreement to hit pause on AI in a way that is actually verifiable,” Jason Matheny, the CEO of RAND and a former member of the National Security Commission on Artificial Intelligence, said in response to Rounds’ question. “I think that would be close to impossible. I think we’re taking appropriate first steps to create a governance system in which we could at least delay China’s access, for example, to very high performance computing chips thanks to the October of 2022 export controls on AI chips and the subsequent controls on semiconductor manufacturing equipment.”

Matheny said that DoD could establish guardrails: ensuring that Pentagon cyber red teams track AI developments affecting cyber defense and offense, including automated cyber weapons; imposing export controls on leading-edge AI chips; using the Defense Production Act to require companies to report the development or distribution of large AI computing clusters, training runs, and trained models; and including in DoD contracts with cloud-computing providers a requirement that they implement “know your customer” screening before training large AI models.

“China has already said that these generative [AI] models must display socialist characteristics,” Shyam Sankar, the chief technology officer of Palantir Technologies [PLTR], testified in response to Rounds’ question. “It must not enable the overthrow of the state, so these sorts of constraints that are being baked in, to the extent that becomes the standard AI for the world, is highly problematic, and I would double down on the idea that a democratic AI is crucial, now that we will continue to build these [AI] guardrails…but I think ceding our nascent [AI] advantage here may not be wise.”

Josh Lospinoso, the co-founder and CEO of the Virginia-based cybersecurity firm Shift5, told the SASC cybersecurity panel that “it’s impractical to take some kind of [AI development] pause.”

“I think if we did that, our adversaries would continue development and we’d end up ceding or abdicating leadership on ethics and norms on these matters, if we’re not continuing to develop,” he said.

The Future of Life Institute’s Apr. 12 open letter called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“This pause should be public and verifiable, and include all key actors,” the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

While Matheny said that a pause in AI development would not be practical, he qualified that assessment in later testimony.

“What keeps me up at night is AI being applied to development of new cyber weapons and bio weapons for which we don’t have reliable defenses,” he said. “And I worry that right now, the most likely scenario is one in which those [AI] models were either stolen from the United States [or] were built with U.S. tech/U.S. chips/U.S. chip-making equipment. I think the strongest argument for a pause is our own labs’ need to get their cybersecurity together to reduce the likelihood that the [AI] models that they’re building will be stolen by our adversaries.”

DoD and U.S. industry officials are hoping that the CHIPS and Science Act, signed into law by President Biden last August, will be a building block for reshoring the semiconductor supply chain, including packaging and testing, necessary for U.S. weapon systems, among them those enabled by AI (Defense Daily, Aug. 9, 2022).

The U.S. leads in the design of semiconductors but builds just 12 percent of the world’s chips and packages and tests even less. In addition, AI relies on semiconductors smaller than 14 nanometers (nm), and just two companies, South Korea’s Samsung Electronics and Taiwan Semiconductor Manufacturing Co. (TSMC), have built chips below 10 nm. TSMC has finished construction of a $12 billion chip plant, or “fab,” in Arizona.

Lospinoso, a West Point graduate, Rhodes scholar, and former U.S. Army cyber officer, told the SASC cybersecurity panel that government ownership of data rights will be key.

“When I was in uniform, it drove me absolutely crazy that we can operate an aircraft or ground vehicle or a submarine in a combat environment and not be able to collect or own the data that came off that platform,” he testified. “That is just a massive national security issue.”

“Unfortunately, we struggle mightily with extracting even the simplest data streams off the vast majority of our major weapon systems,” Lospinoso said. “In some cases, that’s just because we haven’t made the investment. In other cases, it’s because the defense primes, frankly, lock that data up, and they don’t want the government to have access to it because they want to build additional products and services on top of that platform, and I think that, if we’re going to win in a ‘near peer’ conflict, the DoD needs to own the data that its weapon systems are generating in a combat environment.”