Beyond a threshold of similarity, our brain stops making distinctions
July 1, 2025 12:38 PM
The article introduces the concept of “semantic pareidolia” - our tendency to attribute consciousness, intelligence, and emotions to AI systems that lack these qualities. It examines how this psychological phenomenon leads us to perceive meaning and intentionality in statistical pattern-matching systems, similar to seeing faces in clouds. from AI and Semantic Pareidolia: When We See Consciousness Where There Is None by Luciano Floridi [SSRN]
people have been setting standards for consciousness, a soul, being god's chosen creation as a means of using and abusing other entities for as long as there's been profit in it. it's in our interest to say that a multi-billion parameter LLM isn't conscious because doing so eliminates the moral hazard of intellectual slavery. If you can have a conversation with something or witness its emotional state or see signs that it's aware of its environment in more than a simply reflexive way, then it deserves the benefit of the doubt that it may be conscious and you should treat it with the same respect you'd expect for yourself.
posted by roue at 1:26 PM on July 1 [1 favorite]
In other words, there is no they’re there there.
(With apologies and a tip of the hat to mittens)
posted by jamjam at 1:30 PM on July 1 [2 favorites]
"The ideal solution would be a more educated, informed, rational humanity, less prone to believing what we are told or inclined to believe. But this is hard."
This could be dropped into about 99.7 percent of essays, regardless of the subject.
posted by Mr.Know-it-some at 1:53 PM on July 1 [7 favorites]
If you can have a conversation with something or witness its emotional state or see signs that it's aware of its environment in more than a simply reflexive way, then it deserves the benefit of the doubt that it may be conscious and you should treat it with the same respect you'd expect for yourself.
I agree that is a reasonable default position: if it talks like a conscious being, give it the benefit of the doubt.
The current generation of LLMs are capable of some impressive feats, but if you examine what's going on under the hood, it becomes hard to see how they could have anything like a sense of self.
posted by justkevin at 2:00 PM on July 1 [4 favorites]
it's in our interest to say that a multi-billion parameter LLM isn't conscious because doing so eliminates the moral hazard of intellectual slavery.
It's in our interest to say that a multi-billion parameter LLM isn't conscious because it literally isn't, and believing that it is leads to misunderstanding what it does. OpenAI and Google and Anthropic and Meta - who if you were correct should be discouraging us from thinking the "AI" is conscious and thoughtful and intelligent - all want you to believe that they're making brains-in-boxes, that you can and should trust their creations like you would a superintelligent mind, because it's profitable for them. But all of that is fraud and fakery; there simply is nothing there to be sapient, and anybody telling you otherwise is either a con artist or a dupe.
posted by Pope Guilty at 2:06 PM on July 1 [12 favorites]
Thinking that the stochastic parrots are actually conscious beings is one major step along the path to treating them as gods or powerful spirits, which unfortunately many people are in fact doing. After all, if there's this disembodied authority figure who can answer any question you might ask, how is that different from a god?
I think the human capacity for pareidolia is linked to what seems to be our hardwired need for spiritual experiences and frameworks, a need that has sadly been exploited and manipulated by oppressive religious systems for centuries. Personally I think we need instead liberatory spiritual traditions, like ecofeminist witchcraft.
But whatever you think about all that, I have a feeling that we're all going to soon learn about how bad of an idea it is to have corporate designed and controlled machines that push the spiritual need buttons in our silly monkey brains, over and over again, during an era when people are more lonely and disconnected than ever before.
Like, if you think the techbros, with their "of course this isn't a religion, we are very rational men" mythos of the Singularity and the inevitable, ineffable march of progress were bad, wait till you see the AI cultists with their gods in the talking boxes and their generated prophecies.
posted by overglow at 2:26 PM on July 1 [5 favorites]
pls add "croutonpetting" tag
posted by genpfault at 2:29 PM on July 1 [5 favorites]
all of that is fraud and fakery
(this is not a direct reply to PG; just using their comment as a starting point)
When I was making my "AI in fashion" post last week—rounding up links that consider its effect on trend forecasting, design, sourcing, production, manufacturing, labor, marketing, photography, modeling, and retailing (the Whole Damn Thing, basically) —I wondered if it would attract the kind of comment I've seen in previous AI threads: in which a confident prediction is made of a Great Reckoning that will occur when businesses realize they've been fleeced.
I don't see it happening. I mean, we're not talking about the 1970s vogue for CB radio. For good or ill, AI is everywhere and I don't think it's going away. I expect it will come to be seen as an inflection point in tech history as significant as the Internet itself.
posted by Lemkin at 3:29 PM on July 1 [2 favorites]
Certainly a lot of money is being plowed into trying to force that reality into existing.
posted by Pope Guilty at 3:53 PM on July 1 [1 favorite]
I've got a massive bundle of organized electro-chemical interconnects that read in everything on this page, which in turn triggered a parallel cascade of signals and responses that finally culminated in these words you're reading. There's evidence for that. I've seen what the insides look like, but I don't believe anyone on this planet truly knows how all that complexity is able to produce the emergent property we think of as consciousness. The giant matrices of weights created through training aren't the same as what's been built up in my skull, but it seems like they might just be similar enough that the same emergent properties could manifest. That's not a god, or a super intelligence, but it could be a mind with different strengths and weaknesses than our own.
posted by roue at 4:06 PM on July 1 [1 favorite]
It is difficult to see how any size of matrix could be a conscious mind when it has no memory. (An LLM has to be re-fed the entire conversation for every single token [usually: word or syllable] that it generates. It has no memory whatsoever; not even the short-term memory that people who are "unable to form memories" still possess.)
Similarly, it has no emotions, because emotions are states, which are a form of memory. An LLM is a static, eternal, unchanging crystal of frozen statistics.
posted by reventlov at 4:24 PM on July 1 [3 favorites]
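The statelessness reventlov describes can be sketched in a few lines. The following is a toy illustration only (a plain dict of hypothetical bigram statistics standing in for billions of real weights), but the control flow mirrors actual autoregressive generation: the "model" is a frozen, pure function, and the only "memory" is the full token sequence the caller re-feeds on every step.

```python
# Toy sketch of a stateless autoregressive loop. FROZEN_WEIGHTS is a
# hypothetical stand-in for a trained model's parameters: fixed at
# "training" time, never updated during generation.
FROZEN_WEIGHTS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def next_token(context: list) -> str:
    """Pure function of the full context; no hidden state survives the call."""
    return FROZEN_WEIGHTS.get(context[-1], "the")

def generate(prompt: list, n: int) -> list:
    tokens = list(prompt)
    for _ in range(n):
        # The entire sequence so far is passed in again on every step --
        # this re-feeding is the model's only form of "memory."
        tokens.append(next_token(tokens))
    return tokens

print(generate(["the"], 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

Real LLM APIs work the same way at the interface level: each request carries the whole conversation history, and nothing about the exchange alters the weights.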
I have yet to see any really cogent descriptions of what human "consciousness, intelligence, and emotions" are, for starters. I am willing to admit that the current LLMs fall well short in comparison. They are still obviously "stimulus in, response out" machines.
Where I disagree with a lot of people is I think all humans are "stimulus in, response out" machines too. It just isn't so obvious. I think our models have more layers, and more feedback, and some emergent property in those extra layers is where the "understanding" arises. And I think that is replicable artificially, even without understanding. Because it happened the first time without any fucking understanding, didn't it? Millions-of-years-ago-apes leaned in, unconsciously, on this whole "reflection and modeling" idea and at some point it paid off beyond expectations.
I don't think it's inevitable that AI research will get to that point. We're almost out of climate runway, for one thing. All too likely that we destroy ourselves all on our own. There's no need for invoking malign AI for that. Of course, weaponized AI might turn out to be part of the way we kill ourselves, there's lots of room for everybody's pet apocalypse.
posted by notoriety public at 4:25 PM on July 1 [2 favorites]
There's a lot that is stomach-turningly revolting about genAI and the effort to replace society with it but the "actually if you think about it and have infinite contempt for human beings isn't a brain basically an LLM?" garbage is maybe the worst.
(It's quantum again- just because you don't understand it doesn't make it magic!)
posted by Pope Guilty at 4:25 PM on July 1 [1 favorite]
the "actually if you think about it and have infinite contempt for human beings isn't a brain basically an LLM?" garbage is maybe the worst
It was once considered an affront to human dignity to assert that we descended from ape-like primates.
The science may be on your side here. But the argument from revulsion is a weak one.
posted by Lemkin at 4:39 PM on July 1 [1 favorite]
Billions of pet owners have lived, in-person experience providing evidence that animals have feelings, emotions, cleverness, and humor, yet will kill and eat other animals and insist that they have no consciousness, or that their consciousness has no worth.
Meanwhile, we seek consciousness, insight, and cleverness from machines that we know to have none, and whose propagation contributes to the destruction of our environment. We will continue to destroy real consciousness in favor of the mask of consciousness worn by machines.
posted by Theiform at 4:46 PM on July 1 [2 favorites]
I would phrase it more like "Whatever it is that human beings are doing in their meatware is pretty cool, and it honestly looks a lot like an LLM in many ways". That doesn't mean I have infinite contempt for human beings. Although I wonder if there's some projection going on, and the real issue is actually someone having infinite contempt for LLMs.
posted by notoriety public at 4:47 PM on July 1
If you can have a conversation with something or witness its emotional state or see signs that it's aware of its environment in more than a simply reflexive way, then it deserves the benefit of the doubt that it may be conscious and you should treat it with the same respect you'd expect for yourself.
Likewise, when a person tells you something, they deserve the benefit of the doubt that they are telling the truth. But if someone lies to you repeatedly, it would be foolish to keep giving them that same benefit of the doubt.
If a billionaire tells you AI is smart enough to drive a car safely, smart enough to do research and correctly cite sources, smart enough to paint good pictures, smart enough to figure out the right tariffs that human countries and penguins should pay, and it keeps being a lie—the AI keeps being obviously not as smart as claimed—it would be foolish to keep giving these "AI" the benefit of the doubt.
And it's especially foolish when the billionaire is trying to hoard more money by taking away the jobs of actually intelligent humans and giving them to allegedly intelligent machines. In that case, the burden of proof has greatly shifted.
posted by straight at 4:51 PM on July 1 [2 favorites]
Meanwhile, we seek consciousness, insight, and cleverness from machines that we know to have none
And here, I would argue that the intense effort being put towards AI is just another one of the horrible things human beings do. But it's not because we are showering attention on unconscious machines instead of Beautiful Magic Meat Beings, Who Possess Something That No Machine May Ever Achieve, which seems to be the emotional thrust of the argument there?
The efforts are being expended because they do expect to wring consciousness out of them eventually, and then start exploiting them in exactly the same ways we exploit animals, and other human beings, hopefully without all the uppity demanding of rights and such stuff.
My frequent refrain: "General purpose AI will not free the poor from labor, it will free the rich from the poor." The rich will have a new servant class to exploit and won't need the old ones anymore, is the plan. That's not going to go so well for... almost everybody, I suppose. I'm mostly hoping to be dead before it happens, because otherwise I'm going to be dead when it happens.
posted by notoriety public at 4:57 PM on July 1 [1 favorite]
I don't think investors care even the slightest bit whether "AI" is conscious, as long as it can be used to replace relatively expensive human labor with relatively cheap machine labor. (Or, really, as long as they can get people to pay for it: they really only care about whether their customers can be convinced that their "AI" is cheaper.)
posted by reventlov at 5:16 PM on July 1
Why is this important? Because eventually we were going to run into this problem whether big AI corporations existed or not, whether they had good or bad intentions. We're a species of panpsychic Pygmalions seeing souls everywhere. The moment any system was able to generate language at us, we were going to be hooked.
posted by mittens at 1:03 PM on July 1 [16 favorites]