

Meaningful change is not happening because of this paper either; I don’t know why you’re playing semantic games with me, though.
I’m an anarcho-communist; all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet/reptiles in general, I have a lot of recommendations for that!
It does need to do that to meaningfully change anything, however.
That’s very true. I’m just saying this paper did not eliminate the possibility, and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; this will not meaningfully change anything, however.
Also, it’s not as unreasonable as that, because these models are automatically assembled bundles of simulated neurons.
It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.
Essentially, they did not prove the issue is fundamental. And they have pretty similar architectures; they’re all transformers trained in similar ways. I would not say they have different architectures.
Those particular models. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one; that’s what they need to prove wrong.
This proves the issue is widespread, not fundamental.
That indicates that this particular model does not follow instructions, not that the architecture is fundamentally incapable of it.
I think it’s important to note (I’m not an LLM; I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step toward that assertion.
Do we know that they don’t and cannot reason, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs to be answered. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
If someone can objectively answer “no” to that, the bubble collapses.
Hilarious chaos is widely regarded as a Nazi server, so a lot of instances have defederated from it; they post a lot of anti-trans content and it’s not against the rules there. You may want to try another instance.
No, it won’t hold up for 50 years, but if you don’t want one, don’t get one?
That’s where regulators step in. Do you honestly believe Elon Musk would not be implanting healthy people with Neuralinks if regulators would allow it? They won’t. For a very, very long time, this is tech for people whose lives are so awful that not having one is worse than the things that may go wrong.
Why does it have to? All current BCIs are designed for the disabled; why would this one be an exception?
This isn’t for you; you’re not a paraplegic, are you?
You live long enough to help paraplegics game?
That’s not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn’t Apple.