Flexible neural networks will be a feature of the Airbus-Dassault Aviation Future Combat Air System (FCAS), an Airbus official said last week, according to a report in sister publication Avionics International.
On May 14, Airbus and the Germany-based Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE) held a virtual working group meeting, featuring members of an independent panel of experts, on the responsible use of new technologies in the design of FCAS, Europe’s largest defense project.
Established last year, the panel includes members from Airbus, the German Ministry of Defense, the German Ministry of Foreign Affairs, foundations, universities, and think tanks. The panel is to aid in the development of guidelines for the ethical use of artificial intelligence and autonomy in the FCAS program, which is to feature a sixth-generation manned fighter and unmanned “remote carrier” platforms controlled by the pilot of the manned fighter. Such requirements are to ensure meaningful human control of FCAS functions.
Enabling the manned and unmanned teaming of FCAS will be an “Air Combat Cloud,” which is to integrate sensor data. Civil functions are also to benefit from FCAS down the line. FCAS, which thus far involves France, Germany, and Spain, is to replace Dassault’s Rafale fighter and the Airbus/BAE Systems/Leonardo-built Eurofighter.
“I have clear requirements on the table, how to design such kind of a product [FCAS] to fly safely in airspace, but I have very limited requirements which are driven by our ethical compliance,” Thomas Grohs, chief architect of FCAS at Airbus Defence and Space, said during the May 14 virtual meeting. “I’m really looking forward to have such kind of a requirements listing established together with my colleagues and this forum and others participating–a requirements list that allows me to design the system to be compliant with such kind of requirements.”
Such requirements will set up a framework for FCAS features such as neural networks and human control of FCAS functions–a so-called human “circuit breaker” to head off potentially fatal machine errors.
“I have to make the system flexible from a neural network point of design because I need to train such neural networks on their specific behavior,” Grohs said during the May 14 meeting. “However, this behavior may differ from the different users that may use the equipment from their ethical understanding. This is for me then driving a design requirement that I have to make the system modular with respect to neural network implementation, that those are loadable, pending one that uses this from his different ethical understanding. Such are the things I need to look at and to see can we find proper solution to make this happen.”
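The modular, "loadable" design Grohs describes can be sketched in miniature: the aircraft software stays fixed while the behavior model is a swappable module selected per operator, so each nation can load a network trained to its own ethical requirements. The sketch below is purely illustrative; the names (`PolicyModule`, `load_policy`, the operator IDs) are assumptions, not actual FCAS interfaces, and a plain function stands in for a trained neural network.

```python
# Illustrative sketch, not FCAS code: a swappable behavior module per operator.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class PolicyModule:
    """Stand-in for a trained neural network governing engagement behavior."""
    name: str
    decide: Callable[[dict], str]  # maps a sensor picture to a recommendation

# Each operator certifies its own module against its ethical requirements.
_REGISTRY: Dict[str, PolicyModule] = {
    "operator_A": PolicyModule(
        "A",
        lambda s: "recommend" if s["hostile"] and s["confirmed"] else "defer_to_pilot",
    ),
    "operator_B": PolicyModule(
        "B",
        lambda s: "defer_to_pilot",  # stricter policy: always defer to the human
    ),
}

def load_policy(operator_id: str) -> PolicyModule:
    """Swap in the behavior module for whoever is operating the aircraft."""
    return _REGISTRY[operator_id]

picture = {"hostile": True, "confirmed": True}
print(load_policy("operator_A").decide(picture))  # -> recommend
print(load_policy("operator_B").decide(picture))  # -> defer_to_pilot
```

The point of the modularity is visible in the last two lines: the same sensor picture yields different recommendations depending on which operator's module was loaded.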
ANSYS and Airbus Defence and Space told Avionics International last June that the companies are developing an AI design tool to create the embedded flight control software for FCAS. Airbus has said that it is also creating a new configuration of the ANSYS SCADE aerospace systems simulation software. The upgraded version of the tool will use artificial intelligence algorithms as a replacement for traditional model-based systems development to facilitate FCAS manned-unmanned teaming and the safe flight of FCAS “remote carriers.” An ANSYS official said that most of the academic and industry research behind the use of AI for software development involves the use of convolutional neural network (CNN) input and output layers.
In terms of human “circuit breakers” for FCAS, “not everything I could realize from a technical perspective to be fully automated…should be automated,” Grohs said. “I should have decisive break points in there that could be activated from an ethical perspective of the human ‘in the loop’ or, at least, ‘on the loop’ to be able to take proper decisions from an ethical perspective. Those requirements need to be laid out and be plotted against each of the functional chains for the potential users that later on will use the product.”
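The "decisive break points" Grohs describes can be sketched as a gate inside each functional chain: automation halts until the human in (or on) the loop authorizes. The sketch below is an assumption-laden illustration, not FCAS code; `run_chain`, `CircuitBreakerTripped`, and the example chains are all hypothetical names.

```python
# Illustrative sketch, not FCAS code: a human "circuit breaker" in a
# functional chain. Chains flagged as requiring human authorization stop
# at a break point until the human in (or on) the loop confirms.
from typing import Callable

class CircuitBreakerTripped(Exception):
    """Raised when the human in the loop withholds authorization."""

def run_chain(action: Callable[[], str], *, requires_human: bool,
              human_confirms: Callable[[], bool]) -> str:
    """Run an automated function chain, halting at the break point if the
    chain is flagged as requiring human authorization."""
    if requires_human and not human_confirms():
        raise CircuitBreakerTripped("break point: human withheld authorization")
    return action()

# A navigation chain may run fully automated; a targeting chain may not.
print(run_chain(lambda: "course updated", requires_human=False,
                human_confirms=lambda: False))  # -> course updated
print(run_chain(lambda: "engagement authorized", requires_human=True,
                human_confirms=lambda: True))   # -> engagement authorized
```

Plotting such a flag against each functional chain, as Grohs suggests, then becomes a design-time requirements exercise rather than a runtime afterthought.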
Ulrike Franke, a member of the FCAS experts panel and a policy fellow at the European Council on Foreign Relations, said that thus far there have been “pronounced divergences” in European views on the employment of military AI and autonomous weapons systems. Franke said that “France appears to be more open” to such use, while Germany is “more cautious” and that one challenge for FCAS will be “how to reconcile these differences.” Possible resolutions include the establishment of “red lines” for machine decision making or providing measures for how much autonomy FCAS sub-systems can have.
For its part, Germany wants to retain human decision making in FCAS targeting. German Air Force Brig. Gen. Gerald Funke, the FCAS project leader for the German Ministry of Defense, has written that Germany “will not accept any technical concept that would give any system the possibility to authorize the death of another person solely on the basis of the logic of an algorithm.”
“Human beings will remain the sole determinants, responsible for decisions and all their consequences!” Funke wrote.
During the May 14 virtual working group meeting, Funke said that it is still too early in the FCAS concept phase to know whether the FCAS manned fighter will be a one-seat or two-seat design to guarantee sufficient human control, and that such a decision will become clearer “when we know what are the roles of the human in the vehicle.”
“So far, I would guess it’s more one-seater than a two-seater, but we leave it open,” Funke said. “We have not decided it yet, apart from my side.”
Rüdiger Bohn, the deputy federal government commissioner for disarmament and arms control at the German Foreign Ministry, said that the Airbus/Fraunhofer FKIE initiative “is an excellent opportunity for Europe to influence the global policy debate on international arms control solutions by developing industry standards, for instance on the military use of AI and on how human control can be programmed into the design of new weapons systems.”