cross-posted from: https://lemmy.world/post/44699253
This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.
Hopefully, it is the start of the AI bubble bursting.
As someone who named their daughter Sora in 2021, this is the best news I’ve gotten this year.
Congrats! 🥳🥳🥳
Chime in if you disagree, but there are really only two reasons a company like OpenAI shuts down a core service like Sora:
- The service is hemorrhaging money to the point of financial unsustainability.
- The service is not popular enough to drive investor hype as a “loss leader”.
We already know that OpenAI is losing money on their generative “AI” products across the board, to the tune of billions of dollars per year. The economic woes that come from rising hardware prices, oil and gas shortages, and another pointless war in the Middle East only make the situation worse for them money-wise.
And so that really just leaves me to conclude that Sora has not maintained the level of popularity and growth needed to impress investors as Q1 comes to a close. Whether it’s users, subscriptions, or time, they must have looked at the numbers and really didn’t like what they saw.
Hopefully this is the beginning of the end of the ridiculous “AI” bubble, and the start of a new tech sector correction.
There’s a third option this time.
It uses a lot of resources they can redirect immediately to the military contract that will now inevitably form the backbone of the company and will effectively mean they have won the AI war. Anthropic fumbled by not doing what the military wanted immediately, and by showing a minimal backbone publicly.
I listened to an episode of Vox’s Today Explained that tackled this whole contract. What was said there was that Anthropic had asked for some very minor stipulations about AI and war, but was rejected. OpenAI came in with their offer, and after they got it, the contract they signed had the very wording Anthropic had been asking for.
It basically came down to this: Altman was the favorite of the Trump administration and got the contract because of behind-the-scenes bullshit, and because Dario was/is super critical of Trump when it comes to AI safety.
In addition, marketing AI with image generation is a much easier way to impress the public than the more technical applications, or the frightening prospect of the “security” applications. But image generation is only a good use of resources as advertisement, and the introductory phase is over.
I think this might ignore something else video generation is good for: propaganda.
Fake or heavily edited video of strikes in Iran, the “Netanyahu hand” videos circulating online, and random videos of Israeli strikes on Palestine (which I assume are meant to discredit actual video of the atrocities happening there) have been going viral for a while now.
Advertising is probably one of the few industries that could use AI image and video generation in a way that would actually cut costs. The downside is that people are increasingly militantly against ads, and they are against AI-generated content, including ads, so this isn’t likely to become the reality any time soon.
If the McDonald’s ad and others like it had been better vetted for uncanny-valley artifacts and hallucinations that cause trucks to transform into short-bus versions of themselves mid ad spot, etc., the public might not have paid attention at all.
And lots of those same advertising firms are using AI to their benefit behind the scenes to purchase ad space. But using AI in ads in a public facing way is a dream out of reach for them for now because they bungled it so bad.
You’re right. I should have specified “publicly accessible” image generation.
Useful for propaganda videos. That’s what Trump and other conservatives want: to trick people into believing that what’s being shown is true.
Fingers crossed, my friend! Fingers crossed.
Or 3: a massive liability / lawsuit / investigation about to be announced.
The market for professional video is fairly small, and most of the cost is in sales, i.e. the advertising agency, or the movie/show pitch that demands the producers get rich independent of production costs.
AI companies want to replace all YouTube and TikTok creators with AI video content farms, capturing the creator market and, if it scales, the streaming market. Instead of taking a cut, gen AI would let platforms eat the whole pie alone.
A significant number of smaller creators I watch have drastically increased the quality of their animations and B-roll by using AI tools.
These tools are a big deal to a big market.
I heard Sora uses several MW of electricity just for a short 5–20 second video. Extremely high cost.
I think they intend to IPO this year and need to slow the hemorrhaging funds quickly to be more appealing to the stock market.
I guess. But shrinking the scope of their products doesn’t exactly inspire the idea of infinite growth.
I suspect it’s that they got eclipsed by ByteDance with Seedance 2.0.
The video for that model is really good and makes Sora look pretty meh, and it may have been that current work on a next gen Sora wasn’t going to be competitive enough.
The worst thing a lab can do right now is look like they are falling behind (i.e. Meta), especially with OpenAI planning for an IPO.
So on top of the lackluster “social media” offering tied to Sora they decided to shutter the entire product line of video and pivot to enterprise (where they’ve already lost significant market share to Anthropic).
They’re in a pretty meh place at the moment overall tbh. I’m skeptical they’ll recover.
(But I wouldn’t mistake their fumbling for an industry wide shift on AI in general or even video AI.)
Best news I’ve heard all day! POP THE SLOP
Finally, some good news
It’s so they can repurpose that capacity for developing robots. It’s not good at all.
OpenAI told the BBC on Wednesday that it has discontinued Sora so that it can focus on other developments, such as robotics “that will help people solve real-world, physical tasks”.
Robots aren’t like software; it’s immediately obvious when they don’t work the way they’re advertised, whereas chatbots can trick people into thinking they’re way more useful than they actually are. The “fake it till you make it”, “move fast and break things” ethos of tech doesn’t work when there’s actual, physical evidence that shit’s busted.
Unpopular Opinion Incoming
I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.
Most of the “AI is broken and doesn’t work” on here is solid echo chamber cope. It’s more competent than several of my coworkers, though it’s thankfully not ready to replace knowledge workers as it requires a knowledge baseline to best direct it and evaluate its answers.
I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.
Mind telling us what it is that you do? I heard similar things being said in the Plain English podcast last week (and the host was pretty anti-AI before) and I’m starting to wonder if certain jobs are going to be more affected than others.
Or are your coworkers just bad at what they do? :P When I was working tech support, there were people that were worse at their jobs than the bots of the time, let alone LLMs, I swear.
Electrical engineering. My mentioned coworkers are competent but more junior in the field. We did a miniature internal study and found the best models provided accurate, relevant information on the first prompt about 90% of the time when asked to explain or verify concepts. The remainder consisted of hallucinations or misunderstood queries.
They struggled with questions that instead required complex problem-solving, providing some mixture of appropriate solutions, overly complex but still functional solutions, and hallucinated shite.
I recommended that we not move forward with adopting AI in any capacity. While it has some utility for basic information retrieval and fact checking, it still requires someone with sufficient knowledge to quickly evaluate the quality of its output. Helpful for someone who knows what they’re doing, dangerous 10% of the time for someone who does not. I also highlighted the ethical concerns, many of which my peers were unaware of.
Cool anecdote. Every time we actually see real data, though, the numbers don’t reflect much in the way of productivity gains or increased efficiency or better output. People say that LLMs are useful because it feels useful, but we aren’t seeing actual usefulness. The most recent study out of Duke University observes “a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”
A delay. Sure.
I really appreciate your dismissive, arrogant tone. Your casual dismissal of my anecdote really added to how you provided even less substance to support your point.
But hey, it got you those “supporting the echo chamber by dunking on dissent” up votes, and that’s what we’re all here for, right?
I directly quoted a study from Duke University; how is that “even less substance” than your anecdote?
Correct, though there is still good news in a way: OpenAI is running out of money rapidly. So much so that they have to pick and choose one thing over the other.
They would have done the robot thing anyway, but the fact that they had to shut something else down for it shows that the massive deficit is starting to affect them pretty heavily.
Maybe I’m just coping, but IMO the cracks are getting bigger and bigger.
So many people seem to have no idea what they’re talking about. This isn’t ending AI video creation, it just cost them a lot of money to offer it. You can generate a video on your own computer already. AI video isn’t going away because one company isn’t letting people do it on their servers for free any more.
Didn’t realise you could do it locally; just checked online and there are several options. So why are these fuckers building huge, resource-greedy data centres…?
Real answer: Because they want to own the world.
Because they want to do a lot of it, and faster than a home PC could, so they can offer it as a service.
What you can do locally is slower and with much smaller models.
So they can charge you to do it on your phone…
you mean giving away billions of dollars of compute with no monetisation strategy was bad? man, who would have thought. not sam, apparently. if only there were, like, some way to have realised that the goal of a business is to earn money
Let me get this straight: Disney was supposed to license OpenAI their characters, and on top of that invest billions of dollars in OpenAI? The money literally went the wrong way.
Not really. Disney management has drunk the same Kool-Aid as every other management right now: they believe they can fire large parts of their staff and replace them with “AI”, allowing them to achieve similar or even greater productivity at a fraction of the cost (i.e. whatever fee "open"AI charges). To achieve that, they need to give Sora access to their characters (so it can be trained to produce Disney movies) and invest in the company (as a down payment; money that would be recuperated by eliminating workers from the equation).

Could this mean fewer wholly AI-generated videos on YouTube? Please be so.
People will just switch to other tools like Google’s Veo.
Doesn’t that require a subscription though? It may not eliminate the slop videos, but that subscription is going to be a pretty substantial barrier to entry.
Those pathetic AI YouTube commercials where some fake, over-muscled geriatric talks about a miracle cure are the worst.
I just close them out. I’m hoping that somewhere in YouTube’s algorithm of suck, they are paying attention to how much those ads are hated.
I think one of the reasons why consumer facing AI content is failing so bad is because we have had good video content for decades so it’s super obvious when a video is just off.
I think this relates to the main reason why AI is failing (or at least not popular with consumers): it automatically means the product has less quality than what you’ve been used to for your entire life. It hasn’t really provided anything new to consumers.
In the dotcom era, the push was to create lots of free services. Once you had enough users, you wanted to see how many would be willing to pay for it. There was a formula that justified getting more investment (it varied by domain). Back then, almost nobody other than Amazon survived the hard shaking of the tree.
We may be coming up to the point where customer acquisition through free service ends. Whatever is left standing will move on to the next round.
Everybody else gets dropped on the floor.
All people ever did with Sora was make doorbell-cam footage of dogs water-gunning old ladies and gorillas getting sucked into tornadoes. AI image and video generation is just a tool for making a funny joke; it’s incapable of doing anything serious in its current state, and with the amount of processing power it needs just to be a digital circus clown, it’s unlikely to become anything more.
There’s also 100 YouTube channels of “real life Pokémon”. What will we ever do without those???
AI image generation is amazing for replacing stock photos, and not bad at replacing clipart and porn images.
AI video generation is ok at replacing very simple videos without continuity or physics, but their only real applications are for spreading misinformation or mindless scrolling, there’s just no real way to get anyone to pay for them.
That’s aside from the fact that Sora could’ve been great for generating generic stock footage/B-roll, but the way they implemented it was to generate a script, then audio, then video, which meant it really struggled to generate anything without a focal point, i.e. what it would actually be useful for.
… but their only real applications are for spreading misinformation …
There’s a ton of that going around with Sora, but for all I know it could be a small group of people. According to their pricing page, 140 bucks a month normally (50% off right now, lol) will get you almost 5,000 videos a year. Seems plenty to spread a bunch of shit.
So youtube will be worth watching again right?
Right?
It still is for the creators there. Instead of browsing the algorithm, I start on the subscriptions page, to only see uploads from people I actually want to watch.
There’s sometimes complaints about “I thought you were dead” when the channel has been uploading regularly the entire time. People just never got recommended the videos despite hitting all the buttons.
For example, did you know both Physics Girl and Tom Scott have returned this month - hopefully a sign that the world can still heal.
Some unsolicited add-on recommendations -
uBlock Origin - beyond the ad blocking, I use the picker tool to filter all the extra sections like “news”, “trending”, “you might like”, etc.
Unhook - toggles to disable a bunch of features like comments, home screen, end screen etc.
Enhancer for YouTube - theming and a bunch of extra settings, like setting defaults for each video: speed, volume, resolution, fill screen (which is different from full screen), PiP while you scroll comments. (The author just did a rework, so it can be a little bugged sometimes - reinstalling it fixed it for me last time it went wonky.)
Edit 2: I checked the contact page for the dev and went through an archive site to verify, and yep, I did recall correctly: it is the same extension. Glad the dev is back maintaining the Firefox extension again! Best YouTube debullshittificator extension IMO.
Edit: I’ve checked the version history for the Firefox extension and now I’m doubting myself. Am I crazy? (Yes, I am.) Was it another extension I’m thinking of?
Wait, hold on. Enhancer for YouTube is back on Firefox? Correct me if I’m wrong, but IIRC the dev didn’t update the Firefox extension for a long time due to some policies Mozilla has for extensions, and IIRC it had been stuck at 2.5-something whereas the Chromium version got 3.0.
I’ve had to make do with another YouTube extension since it kept bugging out.
this is great news!
I’m pretty certain, though I could be remembering wrong, that they paused development when YouTube/Google was going nuts breaking ad blockers, which broke the extension. The dev eventually removed their ad blocking and made everything else work.
There’s sometimes complaints about “I thought you were dead” when the channel has been uploading regularly the entire time.
Every once in a while, even if you have “notifications” on for a creator, it’ll just randomly turn them off. I have a creator that I pay a $1/mth subscription to, which you would think YouTube would take as a suggestion that I like that channel and show me when they upload something new. Nope! Instead the algorithm thinks I want to watch a fucking neo-Nazi musician.
Since Tom Scott is coming back soon, that’s a clear yes.
Openai is the canary
Don’t worry, I’m sure we’ll have other tools for quickly and cheaply creating falsified videos and the like. Faith in the veracity of video evidence probably won’t be coming back.