Bluetek IT Solutions Blog

Bluetek IT Solutions has been serving the Pennsylvania area since 2005, providing IT Support such as technical helpdesk support, computer support and consulting to small and medium-sized businesses.

AI Wrote Better Phishing Emails Than Humans in a Recent Test

Natural language processing continues to find its way into unexpected corners. This time, it's phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor-intensive to compose, though. That's where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform, to 200 of their colleagues. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, by a significant margin.

“Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model,” says Eugene Lim, a Government Technology Agency cybersecurity specialist. “But once you put it on AI-as-a-service it costs a couple of cents and it’s really easy to use—just text in, text out. You don’t even have to run code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly every single email on a mass scale can be personalized for each recipient.”

The researchers used OpenAI's GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues' backgrounds and traits. Machine learning focused on personality analysis aims to predict a person's proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

“There are lots of variables to account for,” says Tan Kee Hock, a Government Technology Agency cybersecurity specialist.

Still, the findings spurred the researchers to think more deeply about how AI-as-a-service may play a role in phishing and spearphishing campaigns moving forward. OpenAI itself, for example, has long feared the potential for misuse of its own service or other similar ones. The researchers note that it and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, or even try to verify user identities to some degree. 

“Misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI,” OpenAI told WIRED in a statement. “We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve the accuracy and effectiveness of our safety tools.”

OpenAI does its own studies on anti-abuse measures and the Government Technology Agency researchers notified the company about their work.

The researchers emphasize, though, that in practice there's a tension between monitoring these services for potential abuse and conducting invasive surveillance on legitimate platform users. And what's even more complicated is that not all AI-as-a-service providers care about reducing abusive uses of their platforms. Some may ultimately even cater to scammers.

“Really what surprised us was how easy it is to get access to these AI APIs,” Lim says. “Some like OpenAI are very strict and stringent, but other providers offer free trials, don’t verify your email address, don’t ask for a credit card. You could just keep using new free trials and churning out content. It’s a technically advanced resource that actors can get access to easily.”

AI governance frameworks like those in development by the Singaporean government and European Union could aid businesses in addressing abuse, the researchers say. But they also focused a portion of their research on tools that could potentially detect synthetic or AI-generated phishing emails—a challenging topic that has also gained attention as deepfakes and AI-generated fake news proliferate. The researchers again used deep learning language models like OpenAI's GPT-3 to develop a framework that can differentiate AI-generated text from that composed by humans. The idea is to build mechanisms that can flag synthetic media in emails to make it easier to catch possible AI-generated phishing messages.
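The article doesn't describe the internals of that detection framework, but one common heuristic is to score how statistically predictable a message is under a language model, since machine-generated text tends to be more predictable than human writing. The sketch below is a minimal illustration of that idea, not the researchers' actual tool: it assumes the Hugging Face transformers and PyTorch packages, uses GPT-2 as a stand-in scoring model, and the flagging threshold is purely illustrative.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: GPT-2 stands in for whatever scoring model a real detector would use.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on the text (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return its average
        # cross-entropy loss; exponentiating that loss gives perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def looks_synthetic(email_body: str, threshold: float = 40.0) -> bool:
    # Hypothetical cutoff: unusually predictable text gets flagged for review.
    # A production system would calibrate this on known human and AI samples.
    return perplexity(email_body) < threshold

In practice a score like this would be only one signal in a mail-filtering pipeline, combined with the usual indicators such as sender reputation and link analysis.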

The researchers note, though, that as synthetic media is used for more and more legitimate functions, like customer service communications and marketing, it will be even more difficult to develop screening tools that flag only phishing messages.

“Phishing email detection is important, but also just generally be prepared for messages that are coming that may be extremely appealing and then also convincing,” Government Technology Agency cybersecurity specialist Glenice Tan says. “There's still a role for security training. Be careful and remain skeptical. Unfortunately, those are still important things.”

And as Government Technology Agency researcher Timothy Lee puts it, the impressive human mimicry of AI-generated phishing emails means that the challenge for potential victims remains the same even as the stakes grow ever higher.

“They still only need to get it right once, it doesn’t matter if you receive thousands of phishing messages written all different ways,” Lee says. “Just one that caught you off guard—and boom!”
