

Computers are better at logic than brains are. We emulate logic; they do it natively.
It just so happens there’s no logical algorithm for “reasoning” a problem through.
I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.
Fair, but the same is true of me. I don’t actually “reason”; I just have a set of memorized algorithms: I propose a pattern that seems like it might match the situation, or use a different pattern to break the situation down into smaller components and apply patterns to those. I keep the process up for a while. If at some point I hit a “nasty logic error” pattern match, I “know” I’ve found a “flaw in the argument” or “bug in the design”.
But there’s no from-first-principles method by which I developed all these patterns; it’s just things that have survived the test of time when other patterns have failed me.
I don’t think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.
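To make the loop described above concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical and purely illustrative (the Pattern class, find_flaw, the decompose step, and the “therefore without because” heuristic are all made up for the example, not anyone’s actual method): try patterns against the situation, and if nothing matches, break the situation into components and recurse until a “nasty logic error” pattern fires or patience runs out.

```python
# Toy sketch of "pattern match, decompose, recurse" -- illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Pattern:
    name: str
    predicate: Callable[[str], bool]  # True when the pattern "seems to match"


def find_flaw(situation: str,
              patterns: List[Pattern],
              decompose: Callable[[str], List[str]],
              depth: int = 0,
              max_depth: int = 4) -> Optional[str]:
    """Recursively look for a flaw-shaped pattern match in a situation."""
    if depth > max_depth:
        return None  # keep the process up "for a while", not forever
    for p in patterns:
        if p.predicate(situation):
            return p.name  # a "flaw in the argument" / "bug in the design"
    # No direct match: break the situation into smaller components and retry.
    for component in decompose(situation):
        hit = find_flaw(component, patterns, decompose, depth + 1, max_depth)
        if hit:
            return hit
    return None


# Toy usage: "situations" are sentences, components are comma-separated clauses,
# and the single pattern flags a conclusion ("therefore") with no stated reason.
patterns = [Pattern("nasty logic error",
                    lambda s: "therefore" in s and "because" not in s)]
decompose = lambda s: [part.strip() for part in s.split(",") if part.strip()]

print(find_flaw("we shipped because tests passed, therefore nothing can break",
                patterns, decompose))  # -> "nasty logic error"
```

The point of the sketch is only that nothing in it derives anything from first principles; it is pattern proposal plus decomposition, which is the comparison being drawn with LLMs.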
humanoid robot: dances
amazon: shock
humanoid robot: makes coffee
amazon: shock
humanoid robot: delivers package
amazon: friendly shock
They aren’t bullshitting, because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.