VIENNA: Calling it the “Oppenheimer moment” of the modern era, a global conference on Tuesday said that regulations governing AI weapons should be established while the technology is still in its early stages of development.
Analysts say that artificial intelligence (AI) has the potential to transform combat much as gunpowder and the atomic bomb did, and could make human conflict fundamentally different and far deadlier.
At the conclusion of the two-day conference in Vienna, the summary stated, “This is our generation’s ‘Oppenheimer moment,’ where geopolitical tensions threaten to lead a major scientific breakthrough down a very dangerous path for the future of humanity.”
US physicist Robert Oppenheimer oversaw the development of the first nuclear weapons during World War II. The conference, organised and hosted by Austria in Vienna, drew some 1,000 participants from more than 140 nations, including political leaders, experts, and members of civil society.
In its concluding statement, the group asserted its “strong commitment to work with urgency and with all interested stakeholders for an international legal instrument to regulate autonomous weapons systems.”
The summary, which will be forwarded to the UN secretary general, stated: “We have a responsibility to act and to put in place the rules that we need to protect humanity… Human control must prevail in the use of force.”
AI could be used to turn a wide range of weapons into autonomous systems, thanks to advanced sensors governed by algorithms that let a computer “see.” This would allow such weapons to find, select, and strike targets, including human targets, without human intervention.
Most such weapons are still at the concept or prototype stage, but Russia’s war in Ukraine has offered a glimpse of what they could do. Remotely operated drones are not new, but both sides are using them, and they are becoming increasingly autonomous.
In his opening remarks at the conference on Monday, Austrian Foreign Minister Alexander Schallenberg predicted that “autonomous weapons systems will soon fill the world’s battlefields.”
Now was the “time to agree on international rules and norms to ensure human control,” he said. Austria, a neutral country keen to promote disarmament in international forums, introduced the first UN resolution on autonomous weapons systems in 2023, which won the backing of 164 states.
“Irreparable mistakes”
ChatGPT, the flagship AI tool, “hallucinates” and produces incorrect answers that its creator OpenAI cannot correct, according to a Vienna-based privacy campaign group, which announced it would file a complaint against the chatbot in Austria.
According to NOYB, or “None of Your Business,” there was no way to ensure that the program delivered correct information. The group released a statement saying, “ChatGPT keeps hallucinating – and not even OpenAI can stop it.”
The group said the firm has publicly acknowledged that it cannot correct false information generated by its generative AI tool, and that it has not explained where the data comes from or what personal information ChatGPT retains.