

More like they just got their Anthropic bill.
Cloud compute is gonna be cheap compared to the API costs for LLMs they use/offer.
Damn, yeah. My Pixel will drain 50% or more in a day, just sitting in my pocket. Brutal
I too love a heavy dose of whataboutism with my science.
I think we all know petrol is worse by a huge margin. More knowledge about electric vehicles and their effects is simply useful to engineers.
It means there is more room to improve and make things better.
Dafuq.
This is the craziest reaction to knowledge
Knowing something new we didn’t before means… We know more now.
Stop trying to politicize this.
This just means there is room to improve, this is a good thing.
Why too narrow of a use case?
Imagine federation with text linked to other text, that’d be crazy, right?
Wait, it’s actually more complicated than that 🤔
But for real, using existing federated protocols to build something like this is EXACTLY what the protocols are for. You don’t need to implement the federation yourself; you can use an existing network
When the feds come for you for using it
It’s probably not a honeypot. But it’s also likely to be negligent enough in implementation that it might as well be.
Lol, called it.
Incompetence and false bravado are all but guaranteed with development teams. Especially when it’s closed source, not audited, and has minimal room for feedback loops.
Samesies
The non-technical public is scared of the word “AI”, when it actually has a whole spectrum of meanings and implications.
AI has been in use in medicine, engineering, municipal infrastructure, etc. long before LLMs/GenAI.
Even new products today (like those assistive exoskeleton legs) use (non-LLM) AI to interpret and extrapolate bodily functions, and wouldn’t work without it.
It’s closed source, and the build and publishing pipeline isn’t transparent.
For me that makes this no different from a potential ICE honeypot
Only if you don’t have the critical thinking to understand how information management is a significant problem and barrier to medical care.
Being able to research and find material relevant to a patient’s problem is an arduous task, and the time it demands is often too high a barrier for doctors given their regular workloads.
Which leads to a reduction in effective care.
By providing a more efficient and effective way to dig up information that saves a ton of time and improves care.
It’s still up to the doctor to evaluate that information, but now they’re not slogging away trying to find it.
These are all holes in the Swiss cheese model.
Just because you and I cannot immediately think of ways to exploit these vulnerabilities doesn’t mean they don’t exist or are not already in use (including other vulnerable endpoints not listed here)
This is one of the biggest mindset gaps in technology, and it tends to result in a whole internet filled with exploitable services and devices. Those are more often than not used as proxies for crime or traffic rather than being directly exploited.
Meaning that unless you have incredibly robust network traffic analysis, you won’t notice a thing.
There are so many Sonarr and similar instances out there with minor vulnerabilities being exploited in the wild because of the same “Well, what can someone do with these vulnerabilities anyways?” mindset. Turns out all it takes is a common deployment misconfiguration in several seedbox providers to turn one into an RCE, which wouldn’t have been possible if the vulnerability had been patched.
Which is just holes in the Swiss cheese model lining up. Something as simple as allowing an admin user access to their own password when they are logged in enables an entirely separate class of attacks. Excused because “if they’re already logged in, they know the password”. Well, not if there’s another vulnerability with authentication…
See how that works?
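To make the chaining concrete, here’s a toy sketch. Everything in it is hypothetical (invented function and token names, not any real app’s API); it just shows how two individually “excusable” flaws compose into account takeover:

```python
# Toy model of two "minor" flaws chaining together.
# All names here are hypothetical -- not any real application's API.

# Flaw 1: a logged-in user can read their own password back.
# Excused as harmless: "if they're logged in, they already know it."
def get_own_password(db, session_token):
    user = db["sessions"].get(session_token)  # trusts the token blindly
    return db["passwords"].get(user)

# Flaw 2: a separate authentication bug -- session tokens are
# predictable, so an attacker can mint one for any username.
def forge_session(db, victim):
    token = f"sess-{victim}"            # attacker can guess/construct this
    db["sessions"][token] = victim      # simulates the broken auth check
    return token

db = {"passwords": {"admin": "hunter2"}, "sessions": {}}

# Neither flaw alone leaks a credential; chained, they hand the attacker
# the admin's plaintext password.
stolen = get_own_password(db, forge_session(db, "admin"))
```

Patch either hole and the chain breaks, which is the whole point of fixing “harmless” vulnerabilities.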
Please see: https://github.com/jellyfin/jellyfin/issues/5415
Someone doesn’t necessarily have to brute force a login if they know about pre-existing vulnerabilities that may be exploited in unexpected ways
Fail2ban isn’t going to help you when jellyfin has vulnerable endpoints that need no authentication at all.
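As a toy illustration of why (hypothetical log lines and endpoint path; this assumes fail2ban’s usual behavior of banning IPs whose log entries repeatedly match a failure regex):

```python
# Toy model of fail2ban's blind spot: it only acts on log lines that
# match a failure pattern. A request to an endpoint that skips
# authentication entirely never produces such a line.
import re

FAIL_PATTERN = re.compile(r"Authentication failed .* from (\S+)")
MAX_RETRIES = 3

def fail2ban(log_lines):
    """Return the set of IPs that would be banned."""
    failures = {}
    for line in log_lines:
        m = FAIL_PATTERN.search(line)
        if m:
            ip = m.group(1)
            failures[ip] = failures.get(ip, 0) + 1
    return {ip for ip, count in failures.items() if count >= MAX_RETRIES}

log = [
    "Authentication failed for 'admin' from 10.0.0.9",
    "Authentication failed for 'admin' from 10.0.0.9",
    "Authentication failed for 'admin' from 10.0.0.9",
    # Abuse of an unauthenticated endpoint (hypothetical path) looks
    # like normal traffic -- nothing for fail2ban to match.
    "GET /some/unauthenticated/endpoint 200 from 10.0.0.66",
]

banned = fail2ban(log)  # the brute-forcer gets banned; the attacker doesn't
```

The brute-forcer trips the ban, but the attacker hitting the unauthenticated endpoint never appears in the auth failure log at all.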
Jellyfin has a whole host of unresolved and unmitigated security vulnerabilities that make exposing it to the internet a pretty poor choice.
And it won’t scale at all!
Congratulations, you made more AI slop, and the problem is still unsolved 🤣
Current AI solves 0% of difficult programming problems. Zero. It’s good at producing the lowest common denominator, and protocols sit at the 99th percentile of difficulty. You’re not going to develop anything remotely close to a new, scalable, secure, federated protocol with it.
Never mind the interoperability, client libraries, etc., or the proofs and protocol documentation, which exist before the actual code.
Wayyyyyy less than 20%.
Even after removing incredibly liberal bot-percentage estimates from Reddit’s numbers, Lemmy is still < 0.001% of the audience
It’s a solution to a problem Lemmy will soon have in that case.
Which is bots.
Lemmy isn’t flooded with bots and astroturfing because it’s essentially too small to matter. The audience is something like < 0.001% that of Reddit.
Once it grows the problem comes here as well, and we have no answers for it.
It’s a shitty situation for the internet as a whole, and the only solution is verifying humans. And corporations CANNOT be trusted with that kind of access/power
Are you a software engineer who has made use of these and similar tools?
If not, this is epic level armchairing.
The tools are definitely hyped, but they are also incredibly functional. They have many problems, but they also work and achieve their intended purpose.