According to a report released on Wednesday, state-sponsored hackers from China, Iran, and Russia have been honing their craft and deceiving their targets by using tools from Microsoft-backed OpenAI.
In its report, Microsoft said it had tracked hacking groups connected to Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments as they worked to refine their hacking campaigns using large language models.
These computer programs, often referred to as artificial intelligence, draw on vast volumes of text to generate human-sounding responses. Microsoft announced the findings as it imposed a blanket ban on state-backed hacking groups using its AI products.
Microsoft Vice President for Customer Security Tom Burt told Reuters in an interview ahead of the report's release: "We just don't want those actors that we've identified, that we track and know are hackers of various kinds, to have access to this technology, regardless of whether there's any violation of the law or any violation of our terms of service."
Diplomatic officials from Iran, North Korea, and Russia did not immediately respond to requests for comment on the allegations.
Liu Pengyu, a spokesperson for the Chinese embassy in the United States, said the country opposes "baseless smears and accusations against China" and supports the "safe, reliable, and controllable" application of AI technology to "enhance the common well-being of all mankind."
The finding that state-sponsored hackers have been using artificial intelligence (AI) tools to sharpen their espionage capabilities is likely to deepen concerns about the technology's rapid spread and its potential for abuse. Senior cybersecurity officials in the West have warned since last year about rogue actors misusing such tools, but until now few concrete details have been available.
Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, said: "This is one of the first, if not the first, instances of an AI company coming out and discussing publicly how cybersecurity threat actors use AI technologies."
The hackers used “early-stage” and “incremental” AI techniques, according to OpenAI and Microsoft.
Burt said neither company had observed any significant breakthroughs by the cyber spies.
“We observed them utilizing this technology in the same manner as any other user,” he stated.
The report described the hacking groups' use of the large language models in differing ways.
Microsoft said hackers it believes work for Russia's military spy agency, the GRU, used the models to research "different satellite and radar systems that may relate to conventional army operations in Ukraine."
According to Microsoft, North Korean hackers used the models to generate content "that would likely be for use in phishing campaigns" against regional experts.
Iranian hackers also used the models to write more convincing emails, Microsoft said. In one instance, they used the models to draft a message meant to lure "prominent feminists" to a booby-trapped website.
The software giant said Chinese state-sponsored hackers were likewise experimenting with large language models, asking questions about rival intelligence agencies, cybersecurity issues, and "notable individuals."
Burt and Rotsted declined to say how much activity they had observed or how many accounts had been suspended. Burt defended the zero-tolerance ban on hacking groups, which does not extend to other Microsoft products such as its Bing search engine, by citing the novelty of artificial intelligence and the concerns surrounding its use.
"This technology is both new and incredibly powerful," he said.
SOURCE: DAWN NEWS