Absolutely needed: to get high efficiency for this beast … as it gets better, we’ll become too dependent.
“all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job,”
Does the article answer the question of what is the footprint of a prompt?
Basically nothing worth getting angry about
At some point, someone said the same thing about:
- electricity
- books
- cars
- computers
- medicine
- houses
Is this /c/technology or /c/anti_technology? Because it’s hard to tell most of the time.
I’m genuinely excited about the possibilities of AI, just not in the hands of a bunch of self-serving, amoral cunts.
I completely agree. However, the genie is out of the bottle. There’s not much we can do to prevent it at this point, but there is plenty we can do to learn about it and defend against its abuse against us.
We could nationalise it. Unless the government is also a bunch of self-serving, amoral cunts, of course.
Is this /c/technology or /c/anti_technology? Because it’s hard to tell most of the time.
People here are generally anti-anything. That’s what echo chambers are for.
Sounds like most of Lemmy. Honestly sometimes I feel it’s worse than Reddit with the constant bashing on anything except Linux, Firefox, or - for some reason - Steam. Still glad I left Reddit though.
I can hate on Firefox if it’ll make you feel better.
It’s much better to be a critical thinker than mindlessly accepting whatever BS from some grifter just because it’s “accepted wisdom” in a completely brainwashed society.
Cars are literally privileged garbage that’s destroying the planet. Great comparison on that one.
Is this /c/technology or /c/anti_technology? Because it’s hard to tell most of the time.
Well only one of those is allowed to exist so you figure it out.
A better analogy for AI is the discovery of asbestos or the invention of single-use plastics. Terrible fucking idea.
Well, it’s a bit better than that, simply because you can train AI with solar power. Probably nobody does that currently, since it’s easier, faster to market, and probably (for whatever corrupt reason) cheaper for businesses to run it on fossil fuels or nuclear. There’s currently an insane amount of waste: often thousands of models are trained and only the best-performing one is deployed - and then it’s just a fancy autocomplete. The better uses are prediction of material failure, new medicines, protein folding, and generally improved processes.
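A toy sketch of that “train thousands, deploy one” waste pattern (the run count and the per-run energy figure are both made up for illustration; real sweep sizes and costs vary enormously):

```python
# Hyperparameter sweep: many candidate models trained, one deployed.
# Both constants below are invented for illustration only.
import random

random.seed(0)

N_CANDIDATES = 1000    # models trained during the sweep (assumed)
KWH_PER_RUN = 50.0     # assumed energy cost of one training run

# Pretend each run produces a validation score; only the best model ships.
scores = [random.random() for _ in range(N_CANDIDATES)]
best = max(range(N_CANDIDATES), key=scores.__getitem__)

wasted = N_CANDIDATES - 1
print(f"Deployed candidate #{best}; {wasted} runs (~{wasted * KWH_PER_RUN:.0f} kWh) thrown away")
```

Everything except the single deployed model is sunk energy, which is the waste being described.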
With asbestos you get some convenience, but it’ll be a pain for eternity to find a waste management facility that will accept it.
I think it’s probably a bit early to tell for certain on that assessment. There are definitely pros and cons to all technology. Electricity production causes environmental damage; building wooden houses requires logging. Plastics are a byproduct of a withering industry. Asbestos might have saved more lives than it took, but there were probably much better ways to make buildings fire-resistant.
Why all these destructive things? Capitalism requires maximizing profits above all else. So really the question is: how will capitalism fuck us over with AI? So, so many ways. That’s why it’s important that we build community understanding of this technology in order to combat it. It’s not going away; it’s here to stay. So we can either put our heads in the sand and pretend it’s not here, or we can embrace it, learn how it works and how to defeat it, and come up with open source tooling to combat it.
I’m in the latter camp. I love technology breakthroughs and want to learn first hand the capabilities to understand how it will be used against me and how I can use it.
Plastics are great, what are you smoking, plastics?
The energy issue almost feels like a red herring, distracting everyone from the actual problems with AI, and Lemmy is just gobbling it up every day. It’s so tiring.
Partly, yep. It seems like every time I try to pin down an AI on a detail worth asking about - a math question, or a date in history - it’ll confidently reply with the first answer it finds … right or wrong.
I don’t think accuracy is the issue either. I’ve been on the web since its inception, and we’ve always had a terribly inaccurate information landscape. It’s really about the individual’s ability to assemble found information into an accurate world model, and LLMs are a tool just like any other.
The real issues, imo, are the effects on society, be it information manipulation or breaking our education and workforce systems. But all of that is overshadowed by meme issues like energy use or inaccuracy, since those are easy for any person to understand, while sociology, politics, and macroeconomics are really hard.
That’s because it IS an issue, together with many other issues like disinformation, over reliance, wrong tools for wrong (most) jobs, etc.
You know what I don’t hear on Lemmy? People complaining that the crypto world consumes more energy than the AI world, and one of those is far more useless in the grand scheme of things.
So how come it’s an issue for AI, but everyone has seemingly forgotten about crypto?
Last I heard, securing one transaction on chain is equivalent to powering a US household for many days (feel free to fact-check). In comparison, generating LLM text for an entire hour on your PC is pretty much the same as gaming for two hours (very approximate; your GPU is unlikely to be at 100% load), which means the gaming world is far more destructive energy-wise. Are you getting triggered yet?
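That comparison can be put into a quick back-of-envelope script. All figures below are assumed order-of-magnitude estimates, not measurements - the per-transaction number in particular varies a lot by source and year:

```python
# Rough energy comparison: one Bitcoin transaction vs. local LLM use vs. gaming.
# All constants are assumed ballpark figures, not measurements.
BTC_TX_KWH = 700.0             # energy attributed to one on-chain transaction (estimates vary widely)
HOUSEHOLD_KWH_PER_DAY = 30.0   # rough average US household daily electricity use
GPU_WATTS = 350.0              # a consumer GPU under heavy load

household_days_per_tx = BTC_TX_KWH / HOUSEHOLD_KWH_PER_DAY
llm_hour_kwh = GPU_WATTS / 1000.0      # one hour of local LLM generation at full load
gaming_kwh = 2 * GPU_WATTS / 1000.0    # two hours of gaming at the same load

print(f"One transaction ~ {household_days_per_tx:.0f} days of household power")
print(f"One hour of LLM text ~ {llm_hour_kwh:.2f} kWh; two hours of gaming ~ {gaming_kwh:.2f} kWh")
print(f"One transaction ~ {BTC_TX_KWH / llm_hour_kwh:.0f} hours of local LLM generation")
```

Swap in whatever per-transaction estimate you trust; the transaction-to-LLM ratio stays in the hundreds-to-thousands range either way.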
What stone did you live under? The huge power consumption of crypto was debated all the time. You just don’t hear much about it now because people don’t really talk about crypto anymore. Now it’s mostly just pro-crypto people who discuss it, and obviously they tend to talk less about the downsides.
… Oh, so the terawatt-hours of energy wasted are no longer a problem because people don’t really speak of it anymore and cryptobros tried to debate it away.
If I’m living under a stone, you’re straight up stoned.
Fire bad, who cook with fire, fire burn, fire pollute, fire baaaaad
What’s your point here?
They’re trying to compare “AI” to fire. If you don’t see the point, I can’t blame you.
Yes, “AI” is literally contributing to the burning of the planet.
https://www.cleanairfund.org/news-item/wildfires-climate-change-and-air-pollution-a-vicious-cycle/
as it gets better
Bold assumption.
Yeah, I think there were some efforts, until we found out that adding billions of parameters to a model would let it both write the useless part of emails that nobody reads and strip out the useless part of emails that nobody reads.
Historically, AI has always gotten much better - usually after the field collapsed in an AI winter and several years went by searching for a new technique, to then repeat the hype cycle. Tech bros want it to get better without that winter stage, though.
Each winter marks the beginning and end of a generation of AI. We are now seeing more progress, and as long as there is no technical limit, it seems its progress will not be interrupted.
What progress are we seeing?
In what area of AI? Image generation is improving in leaps and bounds; video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is getting generational improvements almost every year. AI chatbots are getting much, much smarter seemingly every month.
What’s one main application of AI that hasn’t improved?
Which chatbots are getting smarter?
I know AI has potential, but specifically LLMs (which most people mean when talking about AI) seem to have hit their technological limits.
Copilot, ChatGPT, pretty much all of them.
Smarter how? Synthetic benchmarks?
Because I’ve heard the opposite from users and bloggers.
Advanced Reasoning models came out like 4 months ago lol
Advanced reasoning? Having LLM talk to itself?
AI usually got better when people realized it wasn’t going to do all it was hyped up for but was useful for a certain set of tasks.
Then it turned from world-changing hotness to super boring tech your washing machine uses to fine-tune its washing program.
Like the cliché goes: when it works, we don’t call it AI anymore.
The smart move is never calling it “AI” in the first place.
Unless you’re in comp sci, where AI is a field, not a marketing term. And in that case everyone already knows that’s not “it”.
The major thing that killed 1960s/70s AI was the Vietnam War. MIT’s CSAIL was funded heavily by DARPA. When public opinion turned against Vietnam and Congress started shutting off funding, DARPA wasn’t putting money into CSAIL anymore. Congress didn’t create an alternative funding path, so the whole thing dried up.
That lab basically created computing as we know it today. It bore fruit, and many companies owe their success to it. There were plenty of promising lines of research still going on.
Pretty sure “AI” didn’t exist in the 60s/70s either.
Yes, it did. Most of the basic research came from there. The first section of the book “Hackers” by Steven Levy is a good intro.
The perceptron was created in 1957, and a physical model was built a year later.
Historically “AI” still doesn’t exist.
The matrix is getting more and more real every day