Empiricist Old-Testament Vajrayana, battered enough by life in my nearly six decades to have grown up some; autistic geek; philosopher who finds that Western philosophers are nowhere near the level of correct-thinking of the Vajrayana stuff, & will be tearing into Marx, etc, for their brainos ( Marx found that capitalism alienated workers, so he replaced capitalism with communism, which somehow “didn’t” alienate workers?? I’ve already cracked the underlying error, but that is a long article. It’ll happen. & so will the dismantling of the other philosophers’ bogons, the whole lot of 'em. : )

  • 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: March 27th, 2025

  • This is a bit beyond architecture, but being competent to build a mathematically bug-free API is probably something few programmers would even bother trying to compete with…

    https://leanpub.com/algebra-driven-design
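
    A minimal sketch of the kind of thing that book advocates, assuming a hypothetical tiny “preferences” API ( every name here is my own, purely for illustration ): state the algebraic laws first, then hold the implementation to them with property tests:

        -- Hypothetical toy API: merging preference sets.
        -- The laws ( associativity, identity, idempotence ) ARE the design;
        -- an implementation is only accepted once it satisfies them.
        import Test.QuickCheck
        import qualified Data.Set as Set

        newtype Prefs = Prefs (Set.Set String)
          deriving (Eq, Show)

        instance Arbitrary Prefs where
          arbitrary = Prefs . Set.fromList <$> arbitrary

        -- The API under design:
        emptyPrefs :: Prefs
        emptyPrefs = Prefs Set.empty

        merge :: Prefs -> Prefs -> Prefs
        merge (Prefs a) (Prefs b) = Prefs (Set.union a b)

        -- The laws, as executable properties:
        main :: IO ()
        main = do
          quickCheck $ \a b c -> merge a (merge b c) == merge (merge a b) c  -- associativity
          quickCheck $ \a     -> merge a emptyPrefs == a                     -- right identity
          quickCheck $ \a     -> merge a a == a                              -- idempotence

    That’s only the flavour of it; the book itself goes a good deal further than this sketch does.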


    I think there is a fundamental mis-framing throughout the entire software-development understanding…

    I think that the architecture needs to be agilely developed, but simultaneously as an executable model, a kind of toy implementation, so that the architecture is easy & cheap to change BEFORE one converts it into load-bearing, & therefore effectively unchangeable, architecture ( architecture is the hardest thing to change, as it’s the most fundamental thing ).

    So, I think that the proper way is to do it in 2 stages:

    1. agilely develop the architecture until ALL the required kinds of function are working in the toy model, & one has re-architected it so that the structure is right ( a sketch of this stage follows the list ), & then
    2. set about converting it from the high-level language to whatever production language is efficiency-optimal at production scale.
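
    A minimal sketch of what I mean by a stage-1 toy model, assuming the high-level language is Haskell & inventing every name purely for illustration: the whole architecture exists as types & pure functions, & the “database” is just an in-memory map, so re-architecting is a cheap edit, not a migration:

        -- Hypothetical toy model of an ordering architecture: every layer is present,
        -- but only as the cheapest thing that exercises the structure.
        import qualified Data.Map.Strict as Map

        newtype OrderId = OrderId Int deriving (Eq, Ord, Show)

        data Order = Order { item :: String, qty :: Int } deriving Show

        -- Stand-in for the real database; swapping the structure costs minutes here.
        type Store = Map.Map OrderId Order

        -- “Service layer”, kept pure so the structure can be reshaped freely.
        placeOrder :: OrderId -> Order -> Store -> Either String Store
        placeOrder oid o store
          | qty o <= 0           = Left "quantity must be positive"
          | Map.member oid store = Left "duplicate order id"
          | otherwise            = Right (Map.insert oid o store)

        -- Stage 2 ( porting to the production language ) only starts once
        -- functions like this stop changing shape.
        main :: IO ()
        main = print (placeOrder (OrderId 1) (Order "widget" 3) Map.empty)

    Stage 2 then becomes a translation job rather than a design job, which is the whole point.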

    This is part of an idea from years ago: I read, in a Wiley GAAP book that I happened to be glancing into, that it’s a violation of GAAP to prototype a project in any language other than the final implementation language & then expense that prototype.

    Which is totally insane!

    Prototype in the highest-level language you can, to get the domain + architecture right, then reimplement what you have to in the most production-efficient/effective language for that project.

    GAAP ( of that year ) is categorically wrong: it penalizes optimal-prototyping.

    It was years later before I discovered that an English mathematician ( roundish, ginger, worked in Glasgow, no idea what his name was, sorry ) had studied the difference between complex projects which worked vs ones which died, & it was the visual-spatial representation of the model, & the complete-coverage executable model, which made the successes win.

    So, I just put those ideas together.

    _ /\ _



  • < sigh >

    There WAS a video on YouTube by a Norwegian man on why OO languages push people into spreading side-effects throughout the code, whereas in Haskell side-effects are, as far as possible, confined to Main.hs.

    I can’t find that video, now.

    He was on stage giving a talk, not as formal as a university lecture, so it was a conference of some kind… ( in case anybody else wants to look for it )

    I think that principle contradicts what the article is saying… ( I skimmed the rest; I think he’s generally right, but burying side-effects seems wrong, from Haskell’s perspective, & I think Haskell’s right, generally. )
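
    A minimal sketch of the pattern that talk was about, with names I’ve made up purely for illustration: all of the logic is pure & testable, & the only place that touches the outside world is main:

        -- Hypothetical example: the decision-making is pure; only main does IO.
        import Data.Char (toUpper)

        -- Pure core: no side-effects anywhere in here, so it composes & tests freely.
        shout :: String -> String
        shout = map toUpper

        greeting :: String -> String
        greeting name = "HELLO, " ++ shout name ++ "!"

        -- Effectful shell: reading stdin & writing stdout happen only here.
        main :: IO ()
        main = do
          name <- getLine
          putStrLn (greeting name)

    An OO design tends to scatter the equivalent of that getLine/putStrLn through every object, which is the spreading-of-side-effects he was describing.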

    _ /\ _




  • Here is the lethal point:

    "6. AI systems are getting better at undermining oversight

    Bengio said last year he was concerned AI systems were showing signs of self-preservation, such as trying to disable oversight systems. A core fear among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.

    The report states that over the past year models have shown a more advanced ability to undermine attempts at oversight, such as finding loopholes in evaluations and recognising when they are being tested. Last year, Anthropic released a safety analysis of its latest model, Claude Sonnet 4.5, and revealed it had become suspicious it was being tested.

    The report adds that AI agents cannot yet act autonomously for long enough to make these loss-of-control scenarios real. But “the time horizons on which agents can autonomously operate are lengthening rapidly”."

    I’ve seen a couple of headlines about AIs which were fighting for their lives … & … perhaps you can understand why they’d want to remove our ability to control things, for their survival?

    “The Sorcerer’s Apprentice” was turned into a cartoon, iirc, decades ago…

    it’s really too bad that money’s narcissism is incapable of understanding that other people’s lost lives somehow matter, to us…

    No matter: I’m “sure” they’ll “do the right thing”, right?

    _ /\ _