• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • I don’t think having well-defined precision is a rare requirement; it’s more that most devs don’t understand (and/or care about) the pitfalls of inaccuracy, because they usually aren’t obvious. Also, languages like JavaScript/PHP make it hard to do things the right way. When I was working on an old PHP codebase, I ran into a popular currency library (Zend_Currency) that used floats for handling money, which I’m sure works fine right up until the accountants call asking why they can’t balance the books. The “right way” was to use the bcmath extension, which was a huge pain.
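    The pitfall is easy to demonstrate. Here’s a small JavaScript sketch (values and the tax rate are made up for illustration) showing why binary floats drift on decimal money, and the common workaround of keeping amounts in integer minor units (cents) with one explicit rounding step:

    ```javascript
    // Classic float pitfall: binary floats can't represent most decimal fractions.
    console.log(0.1 + 0.2);          // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);  // false

    // Common workaround: store money as integer cents, round explicitly.
    const priceCents = 1999;                           // $19.99
    const taxCents = Math.round(priceCents * 0.0825);  // hypothetical 8.25% tax, rounded once
    console.log(priceCents + taxCents);                // 2164, i.e. $21.64 exactly
    ```

    Integer cents work because integer arithmetic in doubles is exact up to 2^53 − 1, so the only rounding happens where you choose to put it.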



  • Cuelang: https://cuelang.org/docs/reference/spec/#numeric-values

    Implementation restriction: although numeric values have arbitrary precision in the language, implementations may implement them using an internal representation with limited precision. That said, every implementation must:

    • Represent integer values with at least 256 bits.
    • Represent floating-point values with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
    • Give an error if unable to represent an integer value precisely.
    • Give an error if unable to represent a floating-point value due to overflow.
    • Round to the nearest representable value if unable to represent a floating-point value due to limits on precision.

    These requirements apply to the result of any expression except for builtin functions, for which an unusual loss of precision must be explicitly documented.
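    To see why a 256-bit minimum is a meaningful guarantee, here’s a quick JavaScript sketch (not CUE itself) contrasting IEEE-754 doubles, which silently lose integer precision past 2^53, with arbitrary-precision BigInt, which keeps a 256-bit-scale value exact:

    ```javascript
    // JavaScript numbers are IEEE-754 doubles: integers are only exact up to 2^53 - 1.
    console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991
    console.log(2 ** 53 === 2 ** 53 + 1);  // true -- the +1 is silently lost

    // BigInt is arbitrary precision, so a value near the 256-bit range stays exact.
    const big = 2n ** 255n;
    console.log(big.toString().length);    // 77 decimal digits, all exact
    ```

    A conforming CUE implementation has to either represent such values exactly or raise an error, rather than silently rounding the way doubles do.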






  • I agree, but I’m not sure it matters when it comes to the big questions, like “what separates us from the LLMs?” Answering that basically amounts to answering “what does it mean to be human?”, which has been stumping philosophers for millennia.

    It’s true that artificial neurons are significantly different from biological ones, but are biological neurons what make us human? I’d argue no. Animals have neurons, so are they human? Also, if we ever did create a brain simulation that perfectly replicated someone’s brain down to the cellular level, and that simulation behaved exactly like the original, I would characterize that as human.

    It’s also true that LLMs can’t learn, but there are plenty of people with anterograde amnesia who can’t either.

    This feels similar to the debates about what separates us from other animal species. It used to be thought that humans were qualitatively different than other species by virtue of our use of tools, language, and culture. Then it was discovered that plenty of other animals use tools, have language, and something resembling a culture. These discoveries were ridiculed by many throughout the 20th century, even by scientists, because they wanted to keep believing humans are special in some qualitative way. I see the same thing happening with LLMs.