• deathbird@mander.xyz
    ↑63 ↓1 · 6 days ago

    “Vaughan was surprised to find it was often the technical staff, not marketing or sales, who dug in their heels.”

    So the people who understood it best were sceptical, and this didn’t give him pause.

    Can someone explain to me why all these empty suits dick ride LLMs so hard?

    • Benaaasaaas@group.lt
      ↑38 · 6 days ago

      Because they try the tools, realize that their job is pretty much covered by LLMs and think it’s the same for everyone.

    • MysteriousSophon21@lemmy.world
      ↑23 · 6 days ago

      Technical staff were skeptical because they actually know what AI can and can’t do reliably in production environments - it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.

      • End-Stage-Ligma@lemmy.world
        ↑5 · edited · 6 days ago

        it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.

        Thank goodness nobody is crusading to have AI take over medicine.

      • medem@lemmy.wtf
        ↑2 ↓1 · 6 days ago

        …which is why I categorically refuse to use the term “artificial intelligence”.

    • zarkanian@sh.itjust.works
      ↑11 · 6 days ago

      Can someone explain to me why all these empty suits dick ride LLMs so hard?

      $$$$$$$

      AIs are cheaper than humans.

      • Echo Dot@feddit.uk
        ↑5 · 6 days ago

        Not really. They don’t work, and then companies have to hire more humans, which is more expensive than just keeping people on for the six months it’ll take the CEOs to realise that.

  • Treczoks@lemmy.world
    ↑119 · 7 days ago

    Just like an AI: instead of learning from mistakes, he repeats them and denies any wrongdoing.

  • Curious Canid@lemmy.ca
    ↑78 · 7 days ago

    Late stage capitalism rewards management for any appearance of change. It really doesn’t matter whether the results of that change are good or bad. And even a CEO who keeps destroying companies can always find a similar position elsewhere. The feedback loop is hopelessly broken.

  • zarkanian@sh.itjust.works
    ↑47 ↓1 · 6 days ago

    Vaughan was surprised to find it was often the technical staff, not marketing or sales, who dug in their heels. They were the “most resistant,” he said, voicing various concerns about what the AI couldn’t do, rather than focusing on what it could. The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.

    Imagine that.

    • Echo Dot@feddit.uk
      ↑4 · 6 days ago

      Yeah, I have a CEO like that, and it makes me want to strangle him. He treats the raising of valid concerns as some sort of personality failing. Meetings with him are an utterly pointless exercise: they’re not meetings, they’re sessions where he tells us what he’s already decided to do.

      Fortunately they’re held on Teams now, so I just join the meeting and then go make a cup of coffee.

      • redwattlebird @lemmings.world
        ↑17 · 6 days ago

        No, I disagree. The CEO is by far the most replaceable person when it comes to AI if the directive is to simply make more money for shareholders based on market research. I would argue that the CEO is being a parasite here.

        • rekabis@lemmy.ca
          ↑4 ↓1 · 6 days ago

          CEOs are invariably the parasites in virtually any company where they earn more than 10× their median employee’s pay.

          Nothing done inside the business can justify compensation like that. Ergo: parasitism, siphoning away more and more of the value the workers produce, for themselves and their fellow parasites.

  • billwashere@lemmy.world
    ↑64 · 7 days ago

    Does he still have a company at all?

    This type of shortsightedness should be punished. AI can be useful for certain tasks, but it’s still just a tool. It’s like these CEOs were just introduced to a screwdriver and are trying to use it for everything.

    “Look, employees, you can use this new screwdriver thing to brush your teeth and wipe your ass.”

  • CaptPretentious@lemmy.world
    ↑35 · 6 days ago

    Today I ran into a bug. We’re being encouraged to use AI more, so I asked Copilot why it failed, without really looking at the code. I tried multiple times, and all it could say was ‘yep, it shouldn’t do that’ without telling me why. So I gave up on Copilot and looked at the code myself. It took me less than a minute to find the problem.

    It was a switch statement, and the case condition (not the real values) basically read as variable == ‘caseA’ or ‘caseB’, which will always evaluate to true. Which is the bug. I’m stripping a bunch of stuff away, but Copilot couldn’t figure out that the case statement was bad.
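
    To illustrate with made-up values (not the real code, and Python rather than whatever language it actually was, but the same class of bug):

        variable = "caseC"

        # Buggy: this parses as (variable == "caseA") or ("caseB").
        # A non-empty string like "caseB" is truthy on its own, so the
        # whole condition is True no matter what `variable` holds.
        if variable == "caseA" or "caseB":
            print("matched")  # runs for every input

        # Fixed: compare against each value explicitly...
        if variable == "caseA" or variable == "caseB":
            print("matched")

        # ...or, more idiomatically, test membership.
        if variable in ("caseA", "caseB"):
            print("matched")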

    AI is quickly becoming the biggest red flag. Fast slop is still slop.

    • Echo Dot@feddit.uk
      ↑8 · 6 days ago

      AI “thinks” the way ants think: there’s no real intelligence or thought going on, but ants can still build complex logistics chains by following simple rules. AI works on completely different principles, but the effect is the same: a lot of simple rules adding up to something that looks like intelligence.

      The problem is that a lot of people seem to think AIs are genuine simulations of a brain, that the AI is actually cogitating, because it kind of looks like it is sometimes. The world is never going to get taken over by a mindless zombie AI. If we ever do get AGI, it won’t be from LLMs, that’s for sure.

    • KumaSudosa@feddit.dk
      ↑4 ↓1 · 6 days ago

      I do find AI useful when I’m debugging a large SQL or Python script, though, and gotta say I make use of it in that case… other than that it’s useless, and relying on it as one’s main tool is idiotic.

  • tarknassus@lemmy.world
    ↑21 ↓1 · 6 days ago

    “Vaughan was surprised to find it was often the technical staff…”

    Tell me you’re completely out of touch with your company and what it does without telling me you’re completely out of touch with your company and what it does. FFS how is this guy the CEO? Oh, he’s one of the founders? Brilliant.

    Vaughan says he didn’t want to force anyone. “You can’t compel people to change, especially if they don’t believe.”

    But he did. Change or be fired, basically.

    “You multiply people…give people the ability to multiply themselves and do things at a pace,” he said, touting the company’s ability to build new customer-ready products in as little as four days, an unthinkable timeline in the old regime.

    Ooh, I bet some nefarious hacker types are salivating at the incredibly rushed code base, which is probably a spaghetti mess and insecure as fuck.

    Vaughan disclosed that the company, which he said is in the nine-figure revenue range, finished 2024 at “near 75% Ebitda”—all while completing a major acquisition, Khoros.

    I had to look up EBITDA. Some interesting points to consider when you look at this metric he used:

    A negative EBITDA indicates that a business has fundamental problems with profitability. A positive EBITDA, on the other hand, does not necessarily mean that the business generates cash. This is because the cash generation of a business depends on capital expenditures (needed to replace assets that have broken down), taxes, interest and movements in working capital as well as on EBITDA.
    While being a useful metric, one should not rely on EBITDA alone when assessing the performance of a company. The biggest criticism of using EBITDA as a measure to assess company performance is that it ignores the need for capital expenditures in its assessment.
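
    For reference, the standard definition (my note, not from the article) is just:

        EBITDA = net income + interest + taxes + depreciation + amortization

    so a “near 75% EBITDA” margin says nothing about capital expenditure, debt service, or what the Khoros acquisition actually cost.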

    Hmmm… I’m no accountant (I leave that to my actual accountant), but surely, if they were being profitable, it would sound better to say something like “We’ve remained profitable throughout, and our earnings per quarter are on par with, if not greater than, before”?

    • lime!@feddit.nu
      ↑9 · 6 days ago

      I’m no accountant but surely if they were being profitable it would sound better to say something like “We’ve remained profitable throughout and our earnings per quarter are on par if not greater than before.”?

      no, because profitability isn’t the key figure they’re interested in. it’s growth. i recently got fired because of disappointing growth, i.e. the increase in profit was not as large as they expected, which means they still made more money than last year.

      this is why expenditures get relegated to “externality” status: otherwise projections would make it look like a company cannot grow infinitely large, and surely that’s not true
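
      toy numbers to make that concrete (mine, not theirs): profit of $10M last year and $11M this year is 10% growth and more money than ever; but if the projection said 20%, the headline is “disappointing growth” and people get cut.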

  • andallthat@lemmy.world
    ↑44 ↓1 · 7 days ago

    As a paid, captive squirrel, focused on spinning my workout wheel and getting my nuts at the end of the day, I hate that AI is mostly a (very expensive) solution in search of a problem. I am being told “you must use AI, find a way to use it”, but my AI successes are few and mostly non-repeatable (my current AI use case: “try it once for non-vital, non-time-sensitive stuff; if at first you don’t succeed, just give up; if you succeed, you’ve saved some time for more important stuff”).

    If I try to think like a CEO or an entrepreneur, though, I sort of see where these people might be coming from. They see AI as the new “internet”: something that, for good or bad, is getting ingrained in everything we do, and that could bankrupt your company if you try too hard to do things “the new way”, but could also let it quickly fade to irrelevance if you keep doing things the old way.

    It’s easy, with the benefit of hindsight, to say “haha, Blockbuster could have bought Netflix for $50 million and now they are out of business”, but all the people who watched that happen see AI as the new disruptive technology that can spell great success or complete doom for their current businesses. All hype? Maybe. But if I were a CEO I’d probably be sweating too (and having a couple of VPs at my company wipe up the sweat with dollar bills)

    • Kissaki@feddit.org
      ↑12 ↓1 · 7 days ago

      I’m working in a small software development company. We’re exploring AI. It’s not being pushed without foundation.

      There’s no need to commit when you don’t even know what you’re committing to, cost and risk aside. It just doesn’t make sense. We should expect better from CEOs than emotionally following a fear of missing out without a reasonable assessment.

      • zarkanian@sh.itjust.works
        ↑3 ↓3 · 6 days ago

        There are use cases for AI. There are none for NFTs.

        One use case is whenever you need to produce some inane bullshit that nobody is probably going to read anyway, but it’s still required for some reason. Like cover letters.

        Now, you might argue that we should work towards a society where we don’t have to produce this inane bullshit that nobody’s going to read anyway, and I would agree with you. But as long as we’re here, we might as well offload this pointless labor onto a pointless labor-saving machine.

          • GnuLinuxDude@lemmy.ml
            ↑6 · 6 days ago

            So much spam… the internet is hardly usable after a decade of SEO, and now with LLMs sprinkled on top.

        • deathbird@mander.xyz
          ↑1 · 6 days ago

          Kindly disagree. People actually read cover letters, and a cryptographically secured entry on a public ledger has some conceivable use.

          • zarkanian@sh.itjust.works
            ↑1 · edited · 6 days ago

            People actually read cover letters

            I have my doubts. I’ve had more than one recruiter tell me “I don’t read cover letters”, and even if they are “reading” it, it’s going to be a brief skim at best.

            In any case, that still doesn’t mean that you should be writing them. My point is that it’s make-work. It’s there to weed out candidates and demonstrate your ability to jump through their hoops.

            a cryptographically secured entry on a public ledger has some conceivable use.

            We’re talking specifically about NFTs.

    • willington@lemmy.dbzer0.com
      ↑3 ↓1 · 6 days ago

      My use case for AI is to get it to tell me water-to-cereal ratios, like for rice, oatmeal, or corn meal. If there is a mistake, I can easily control for it, and it’s a decent enough starting point.

      That said, I am just being lazy by avoiding taking my own notes. I could easily make my own list of water-to-cereal ratios to hang on the fridge.

      • zarkanian@sh.itjust.works
        ↑2 · edited · 6 days ago

        Yeah, so far all of the cooking stuff I’ve gotten from ChatGPT has been stuff I could have found on my own if I’d searched better. It will give you a recipe that is edible. It will have the same 4-6 spices as every other recipe, and it will require a can of tomatoes. These are all savory dishes; I assume there’s a different set of spices and no tomatoes if it’s sweet, but I haven’t tested that theory.

        It gets old quick.

  • MehBlah@lemmy.world
    ↑28 ↓1 · 7 days ago
    7 days ago

    Of course he would. He could probably give Hitler lessons on oven design.

  • PastafARRian@lemmy.dbzer0.com
    ↑15 · 6 days ago

    I wonder if he thinks we’re dumb or just doesn’t care. They’d have been laid off either way. “Return to work”, “Stack ranking”, “AI refusal”, whatever you say bro.

  • drunkpostdisaster@lemmy.world
    ↑15 · 6 days ago

    Has AI ever disagreed with anyone? That’s probably why it’s so popular with rich ‘people’.

    Granted, my ideas are all baller as fuck. But still…

  • mctoasterson@reddthat.com
    ↑6 · 6 days ago

    “It enabled us to shit out products in 4 days.”

    Glad they incorporated such thorough testing in their process.