A Single Cloud Compromise Can Feed an Army of AI Sex Bots (krebsonsecurity.com)
An anonymous reader quotes a report from KrebsOnSecurity: Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape. Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.
Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled logging (it is off by default), and thus they lacked any visibility into what attackers were doing with that access. So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be. Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.
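Since Bedrock model invocation logging is off by default, the honeypot technique described above depends on turning it on first. A minimal sketch of doing so with boto3 is below; the bucket, log group, and role ARN are placeholders, not values from the report, and the call requires credentials permitted to change the account's Bedrock logging configuration.

```python
def build_logging_config(bucket: str, log_group: str, role_arn: str) -> dict:
    """Assemble the loggingConfig payload expected by Bedrock's
    PutModelInvocationLoggingConfiguration API."""
    return {
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": role_arn,
        },
        "s3Config": {
            "bucketName": bucket,
            "keyPrefix": "bedrock-invocations/",
        },
        # Deliver full prompts and completions, which is what let the
        # researchers see exactly what the attackers were asking for.
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }


def enable_invocation_logging(config: dict) -> None:
    """Apply the logging configuration to the account (one per region)."""
    import boto3  # imported lazily; needs valid AWS credentials to run

    client = boto3.client("bedrock")
    client.put_model_invocation_logging_configuration(loggingConfig=config)


if __name__ == "__main__":
    cfg = build_logging_config(
        bucket="my-bedrock-logs",                                   # placeholder
        log_group="/bedrock/invocations",                           # placeholder
        role_arn="arn:aws:iam::123456789012:role/BedrockLogging",   # placeholder
    )
    print(cfg)
    # enable_invocation_logging(cfg)  # uncomment to apply in a real account
```

The same setting can be toggled from the Bedrock console; the point is that until someone does this, model invocations leave no record of their content.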
"After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked," Permiso researchers wrote in a report released today. "Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse," they continued. "Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature."
Oh shit (Score:5, Funny)
Still better than... (Score:2)
Given the previous FA, I guess I'd prefer an army of physical AI sex bots versus a 3.8 Tbps DDoS attack. /s
Re: (Score:2)
Probably not if they go around and rape people (and children)...
Hmm. There may be some valid applications for AI in warfare here. I mean, let the poor soldiers from barbaric country XYZ rest after a day full of killing enemy soldiers, instead of then having to rape the local civilians all evening! "Rape-Bot" as an anti-civilian terror weapon. AI has so many great potential applications!
(Yes, some simply rape civilians as a personal criminal choice, but in many wars it gets ordered as a war crime to terrorize civilians.)
Re: (Score:2)
Probably not if they go around and rape people (and children)...
You can't rape anyone with pixels, and since there is no child involved, it isn't "child exploitation," even if the customer thinks it is.
Funny thing (Score:1)
About 35 years ago, while studying CS, I talked with a friend about the possibility of AI-powered sex chats as one of the few probably viable mainstream AI applications. We both agreed that it would probably be possible, but we were not interested in actually building something like that. I am surprised it took so long to become a thing.
Which raises a question (Score:2)
What was in the training data and where did it come from?