The public conversation is confused and chaotic.
May 24, 2025 12:07 PM
AI 2027: "The CEOs of OpenAI, google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.”
What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.
We wrote two endings: a “slowdown” and a “race” ending. However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy."
One of the authors of these scenarios is Daniel Kokotajlo, formerly at OpenAI. He previously authored an AI prediction scenario in August 2021. It got many things wrong, but was still surprisingly successful: he predicted the rise of chain-of-thought, inference scaling, sweeping AI chip export controls, and $100 million training runs—all more than a year before ChatGPT appeared.
In light of his scenarios, Kokotajlo proposes more transparency in frontier AI development.
Kevin Roose commentary at NYT on AI 2027.
Make GPUs For Graphics Again
posted by AlbertCalavicci at 12:20 PM on May 24 [16 favorites]
Anyone else just want to stuff all these nerds into a locker
posted by theodolite at 12:23 PM on May 24 [15 favorites]
The fantasy here is a fictional company called OpenBrain that trains a model two orders of magnitude larger than GPT-4. Except there’s nothing left to train it on (we’ve already fed literally all digital information into the latest models), and it’s not at all clear that a model that much bigger will ever be practically feasible, assuming it were possible to train it at all. In any case it supposes a rate of progress in development that we just haven’t seen. The difference in capability between GPT-4, from two years ago, and the latest models is marginal. They’re basically the same, at the cost of billions of dollars. Where will the leap come from? The piece doesn’t say, of course, because if anyone knew they would quickly become one of the richest people on earth.
posted by dis_integration at 12:26 PM on May 24 [4 favorites]
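[For the scale-up this comment objects to, a minimal back-of-the-envelope sketch in Python, assuming the widely cited Chinchilla heuristic of roughly 20 training tokens per parameter; the parameter counts and the web-text figure are illustrative placeholders, not numbers from AI 2027 or any lab.]

```python
# Rough arithmetic behind the data-wall objection above: how many training
# tokens would a compute-optimal model need if it were 100x the size of a
# GPT-4-class model? Uses the Chinchilla-style ratio of ~20 tokens per
# parameter. All figures below are illustrative placeholders.

TOKENS_PER_PARAM = 20      # Chinchilla-style compute-optimal ratio
WEB_TEXT_TOKENS = 5e13     # rough order-of-magnitude guess at usable web text

def tokens_needed(params: float) -> float:
    """Training tokens for a compute-optimal run at `params` parameters."""
    return params * TOKENS_PER_PARAM

for label, params in [("GPT-4-class model (assumed ~1e12 params)", 1e12),
                      ("model 100x larger", 1e14)]:
    need = tokens_needed(params)
    print(f"{label}: needs ~{need:.0e} tokens, "
          f"~{need / WEB_TEXT_TOKENS:.1f}x a rough web-text estimate")
```

With these placeholder numbers, the 100x model would want tens of times more text than rough estimates of what exists, which is the commenter's point about there being nothing left to train on.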
And they'll be powered by fusion reactors, which I'm sure will exist within a similar timeline.
posted by rhooke at 12:30 PM on May 24 [6 favorites]
Given that we still can't actually define what 'intelligence' is, the goalposts are conveniently already placed on rails.
posted by Ickster at 12:34 PM on May 24 [5 favorites]
"—or relaxes and collects an incredibly luxurious universal basic income."
This seems very unlikely.
posted by the Real Dan at 12:39 PM on May 24 [5 favorites]
MetaFilter: The public conversation is confused and chaotic
posted by Lemkin at 12:53 PM on May 24 [2 favorites]
This was the same thing a year ago (approximately)!
posted by demonic winged headgear at 12:55 PM on May 24
optimistic of them not to also write a Butlerian Jihad ending
posted by allegedly at 12:57 PM on May 24 [8 favorites]
Given that we still can't actually define what 'intelligence' is
Didn't Altman define it as "when my company hits $1T market cap, we'll have it" or some similar inanity?
posted by scruss at 1:01 PM on May 24 [5 favorites]
Anybody can say anything. People who stand to get rich(er) offa this disaster are saying what you'd think.
posted by Sing Or Swim at 1:21 PM on May 24 [5 favorites]
I am finding Gary Marcus, a PhD-level neural scientist, increasingly indispensable in being informed about developments in artificial "intelligence."
In his thorough response to the AI 2027 Scenario, he points out that:
[t]hey don’t describe any other scenarios; they don’t give any estimates of the likelihood of other scenarios. It is, again, speculation...
He goes on to explore one specific prediction in detail:
“It is our best guess about what that might look like” is a very subjective claim, but I would hazard a guess that the 2027 scenario was chosen not by the aid of a detailed mathematical estimating exercise, such as a decision tree in decision analysis with probabilities assigned (if one exists it was not shown), but rather was the product of a different kind of process: trying to render vivid one particular nightmare they had, in order to make plausible the notion that superintelligent machines could soon cause mayhem. As an exercise in rendering things vivid it is masterful; as a scientific analysis of a range of scenarios and which might be most likely, it’s dead on arrival. There is no serious analysis of alternative scenarios at all.
By late 2025, their fictional company OpenBrain has finished “training Agent-1, a new model under internal development, it’s good at many things but great at helping with AI research.”
Some of what they describe therein, roughly six months later, is fully plausible. I have no doubt that all of the major companies are working on agents that try to help with AI research. But will they be “great” at that by the end of the year? And will they have resolved the tendency towards errors that the stumbling prototype agents of mid-2025 had made? That much progress in 6 months would truly be phenomenal. But I seriously doubt it.
Remember how we were told in 2023 that hallucinations would be solved in a matter of months? They are still here. Remember how we were told in 2012 that we would all have driverless cars by 2017? That hasn’t come to pass, either, except in about 10 of the world’s 20,000 cities. Google Duplex, a widely hyped and now mostly forgotten system for “Accomplishing Real-World Tasks Over the Phone” from 2018 still hasn’t fully materialized, exactly as Ernest Davis and I projected at the time. The failure of the AI 2027 team to reckon with the immense history of broken promises and delays in the AI field is, in a team that styles itself as forecasters, inexcusable.
Very few major advances take just six months from conception to full implementation. In reality, the chance that we will have reliable AI agents that truly advance AI research by the end of the year is small.
But every further statement in the AI 2027 essay rests on that longshot happening and happening then. The errors in unrealistic projections are cumulative. If “great” AI research agents don’t arrive by the end of 2025, everything in the rest of the essay gets moved back.
posted by overglow at 1:52 PM on May 24 [9 favorites]
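[For readers wondering what Marcus means by "a decision tree in decision analysis with probabilities assigned," here is a minimal sketch of such an exercise. The branch probabilities are hypothetical placeholders chosen purely for illustration; neither AI 2027 nor Marcus supplies these numbers.]

```python
# Minimal sketch of a probability-weighted scenario analysis, the kind of
# exercise Marcus notes is absent from AI 2027. Each step of the scenario
# is treated as an assumption that must hold; the headline outcome happens
# only if every step comes through. All probabilities are hypothetical.

steps = [
    ("reliable AI research agents by end of 2025", 0.15),
    ("agents meaningfully accelerate AI R&D",      0.30),
    ("acceleration compounds into AGI by 2027",    0.20),
]

p_chain = 1.0
for assumption, p in steps:
    p_chain *= p
    print(f"{assumption}: p = {p:.2f}, cumulative = {p_chain:.3f}")

print(f"Probability the whole chain holds: {p_chain:.1%}")
# With these placeholder numbers the conjunction is under 1%; the point is
# that stacked longshots need explicit quantification, not that these
# particular values are correct.
```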
I suspect overconfidence among the partisans on both sides of the issue.
posted by Lemkin at 2:20 PM on May 24 [1 favorite]
Remember how we were told in 2012 that we would all have driverless cars by 2017?
Except in Phoenix, Austin, LA, San Francisco (and maybe Boston soon) - rollout is slow due to manufacturing pace, Google caution, and regulatory nervousness (one robot killing is worth thousands of drunk-driver deaths in terms of publicity).
Now if a researcher develops a cognitive machine, it'll require vast quantities of GPUs, TPUs, Ultra pewpew Us, and that's one brain. Now how many human brains are there? Can one supercomputer brain do more than a lot of human brains? Maybe, but there are a whole bunch of geniuses just at MIT who have not solved all our problems or taken control of the entire world (or even their department's procurement procedures), so one super GPU brain may not actually be all that powerful.
posted by sammyo at 2:33 PM on May 24
Yeah, Gary Marcus can really only be trusted to say what Gary Marcus says: that data-driven deep learning is a dead end, and the grand rise of symbolic computation is right around the corner. He's been continuously saying that the limit of current methods has been reached for as long as I can recall.
posted by kaibutsu at 2:33 PM on May 24 [1 favorite]
Heh. If you are an AI company, you want everyone to be afraid of the coming AGI apocalypse so that they'll think AI is doing so well that this is actually a possibility. Better buy that stock before it skyrockets when the company takes over the world!
posted by eye of newt at 3:07 PM on May 24 [4 favorites]
Some people read Science News while they’re on the can, some read Popular Mechanics, and some, Omni.
I used to think for sure SN was the way to go, but then the PM folks made a really strong push. Never would have thought the Omni people would win, but here we are.
posted by Ice Cream Socialist at 3:20 PM on May 24 [2 favorites]
A while back, I AskMeFi'ed whether anyone could source an anecdote for me, in which someone settled a long, rambling debate about how horses stand up, with a practical demonstration.
With that in mind I'm tempted to say 'just wait 5 years; now link me some dancing parrots', but that dooms me to enduring 5 more years of panegyric with no confidence that the target won't just recede again when it goes unhit.
For me it's all about elegance. At the pool, there was one guy who made decent progress in the water, but did so by thrashing the water into submission. It was how you'd imagine a hydrophobe swimming. His progress up the lane looked as though the water were boiling around him. He probably got great exercise from it but you wouldn't enter him in any contests. Data-based attempts to create AI are that guy. If they're the solution, even allowing for incremental optimisations, then the world got uglier.
posted by BCMagee at 3:39 PM on May 24 [1 favorite]
In 1950, mathematician Alan Turing proposed the Imitation Game. In this game a human evaluator holds text conversations with both a human and a machine.
Turing theorized that if the human evaluator was unable to tell which was the human and which was the computer, then the computer had been shown to have human-level intelligence.
It seems to me that recent history has proven Turing kinda got it backwards. A human evaluator's inability to tell the difference between the computer and the human doesn’t prove that the computer is intelligent; it proves that the observer believes the computer to be intelligent, even when presented with significant evidence to the contrary.
Also, anytime someone tells you something is "5 years away" they're making it up. We've been "5 years away" from AGI, from self-driving cars, and from room-temperature fusion (to name just three) for a long, long time.
posted by Frayed Knot at 3:40 PM on May 24 [2 favorites]
I’m gonna be honest, I’m not looking forward to the rollout of self-driving cars, what with being a lady who rides a bicycle everywhere around town, but “they’re in ten of the world’s 20k cities” sure sounds like a significant step towards them being a thing that might actually work for the most part. Hopefully I will not become part of a news item about them disastrously failing, or become a silent statistic once we get used to a new baseline of carmurder, but it does feel like “five years away” might finally be down to “four”.
I hear Waymo has a few test cars going around New Orleans with human drivers, trying to learn how to deal with our shitty, shitty roads.
posted by egypturnash at 4:00 PM on May 24 [3 favorites]
Heads-up, storybored, that your NYT link doesn’t go to the article that I think you mean it to go to - right now it opens something from last year about Kokotajlo’s departure from OpenAI.
In other news, this Potemkin nonprofit is yet another instance of Bay-Area-rationalist-brainrot propped up by paranoid kajillionaire money. The four main authors of this paper include:
- someone who started a philosophy PhD but dropped out to work at OpenAI, and whose only other jobs have been at other rationalist/EA p(doom) “nonprofits”
- someone who graduated with a CS/Econ degree in 2020 and then worked for one year at a business, after which he switched to rationalist/EA p(doom) “nonprofits”
- someone who graduated with a CS/math degree in 2022 and who has *only* worked at rationalist/EA p(doom) “nonprofits”
- someone who has not yet graduated from college
For people who trust this analysis, why?
posted by rrrrrrrrrt at 4:19 PM on May 24 [3 favorites]
I hear Waymo has a few test cars going around New Orleans with human drivers, trying to learn how to deal with our shitty, shitty roads.
I don’t believe we’re anywhere near the level of AI where a car can drive autonomously in New Orleans. Besides the high number of reckless human drivers, there are too many unexpected weird road conditions, especially sinkholes and flooding, that a robot isn’t going to be able to judge right. There are also a lot of situations with unexpected road and lane closures (parades, police activity, construction, car crashes) that I think a robot would struggle with.
Which means a Waymo car there will need to be remotely supervised by a human, maybe one looking at a grid of screens representing different cars.
And I do wonder if agentic AI will need something similar to prevent the situation of that Washington Post reporter where the bot used his credit card without permission and spent $31 on a dozen eggs. The internet is at least as treacherous as New Orleans streets.
posted by smelendez at 4:41 PM on May 24
If I recall correctly, Altman's AGI criterion is $100B in revenue.
It would be nice if any of these people were serious instead of hucksters. If they do achieve AGI, I firmly believe it will be by accident in a "throw enough compute at anything and sentience sometimes happens" way.
posted by tclark at 4:44 PM on May 24
The idea that we just need to do LLMs as hard as possible and then AGI will magically happen is both completely ridiculous and necessarily true if a bunch of companies' valuations aren't a few more years of accomplishing nothing away from cratering.
posted by Pope Guilty at 5:18 PM on May 24 [1 favorite]
All that ever actually matters is what people will pay for, because anything else is posturing.
I am very comfortable with the idea that within the next five years there will be significant revenue shifted from paying people to think to paying companies for AI services that we now would regard as manifestations of AGI in some limited, or maybe even broad, sense. Whether the computers whose runtime is being sold actually are “generally intelligent” is a perfect example of what doesn’t matter.
If you aren’t preparing to profit from this you are making a very serious mistake.
posted by MattD at 5:21 PM on May 24 [1 favorite]
Heh. AGI has been 5 years away since 1970. It's been very reliable that way.
posted by Tell Me No Lies at 12:17 PM on May 24 [15 favorites]