• x00z@lemmy.world · ↑3 · 22 hours ago

    Well, I might get disliked for this opinion, but in some cases it’s perfectly fine for a computer to make a management decision. However, this should also mean that the person in charge of said computer, or the one putting the computer’s decision into action, should be the one held responsible. It should also be questioned how responsible it even is to consider a computer’s management decisions in a given field. What I’m saying is that there’s no black-and-white answer here.

  • melsaskca@lemmy.ca · ↑7 · 1 day ago

    A complete one-eighty nowadays… “As a highly paid ‘business’ exec I have no ideas… computer, tell me what to do.”

  • BlameTheAntifa@lemmy.world · ↑1 · 22 hours ago

    I feel this way about things like companies, too. It must always be human beings that bear the personal responsibility for an organization’s crimes, not “the company” alone. When money can pay in lieu of personal responsibility, then there is no justice or accountability.

  • onnekas@sopuli.xyz · ↑6 ↓1 · edited · 1 day ago

    I generally agree.

    Imagine, however, that a machine makes objectively better decisions than any person. Should we then still trust the human’s decision just to have someone who is accountable?

    What is the worth of having someone who is accountable, anyway? Isn’t accountability just an incentive for humans not to fuck things up? It’s also nice for pointing fingers when things go bad, but is there actually any value in that?

    Additionally: there is always a person who either made the machine or deployed it. IMO the people who deploy a machine and decide that it will now be making decisions should be accountable for those decisions.

    • Maroon@lemmy.world · ↑8 ↓1 · 1 day ago

      Imagine however, that a machine

      That’s hypothetical. In the real world, in human society, the humans who are part of corporations and profit from making and selling these computers must also bear the responsibility.

      • calcopiritus@lemmy.world · ↑2 · 1 day ago

        Tbf that leads to the problem of:

        A company or individual makes a program that is in no way meant for making management decisions.

        Someone else comes and deploys that program to make management decisions.

        The ones that made that program couldn’t stop the ones that deployed it from deploying it.

        Even if the maker aimed to make a decision-making program and marketed it as such, whoever deployed it is ultimately responsible for it. As long as the maker doesn’t fake tests or certifications, of course; I’m sure that would violate many laws.

        • ZombiFrancis@sh.itjust.works · ↑2 · 1 day ago

          The premise is that a computer must never make a management decision. Making a program capable of management decisions already violates that premise. The deployment and use of that program to that end is built upon that failure.

      • onnekas@sopuli.xyz · ↑2 · 1 day ago

        I believe those who deploy the machines should be responsible in the first place. The corporations who make/sell those machines should be accountable if they deceptively and intentionally program those machines to act maliciously or in somebody else’s interest.

    • petrol_sniff_king@lemmy.blahaj.zone · ↑1 · 24 hours ago

      Imagine, however, that a machine makes objectively better decisions than any person.

      You can’t know if a decision is good or bad without a person to evaluate it. The situation you’re describing isn’t possible.

      the people who deploy a machine […] should be accountable for those actions.

      How is this meaningfully different from just having them make the decisions in the first place? Are they too stupid?

  • csm10495@sh.itjust.works · ↑2 · 1 day ago

    I’ve thought about this wrt AI and work. Every time I sit in a post mortem, it’s about human errors and process fixes.

    The day a post mortem ends with “well the AI did it so nothing we can do” is the day I look towards… with dread.

  • limer@lemmy.ml · ↑3 · 1 day ago

    I asked the computer if I should read the article. It said no. Am I in an abusive relationship?

    That is ridiculous, clearly. I’ll use a mainstream search engine, tailor-made to my needs, to make sure that cannot happen.

    • sleepundertheleaves@infosec.pub · ↑2 · edited · 1 day ago

      Unfortunately, what’s actually happening is humans are being kept in the loop of AI decisions solely to take the blame if the AI screws up.

      So the CEOs who bought the AI, the company that sold the AI, and the AI tool itself all get to dodge responsibility for the AI’s failures by blaming a human worker.

      For example, see this discussion of an AI-generated summer reading guide that hallucinated a bunch of non-existent books:

      The freelance writer who authored this giant summer reading guide with all its lists had been tasked with doing the work of literally dozens of writers, editors and fact-checkers. We don’t know whether his boss told him he had to use AI, but there’s no way one writer could do all that work without AI.

      In other words, that writer’s job wasn’t to write the article. His job was to be the “human in the loop” for an AI that wrote the articles, but on a schedule and with a workload that precluded his being able to do a good job. It’s more true to say that his job was to be the AI’s “accountability sink” (in the memorable phrasing of Dan Davies): he was being paid to take the blame for the AI’s mistakes.

      https://doctorow.medium.com/https-pluralistic-net-2025-09-11-vulgar-thatcherism-there-is-an-alternative-f1428b42a8fd

      • Croquette@sh.itjust.works · ↑1 · 2 hours ago

        I understand that there is always a fall guy. Even before AI was shoved everywhere, those truly responsible for the problems they created were not held accountable; they put the blame on a fall guy.

  • ZILtoid1991@lemmy.world · ↑44 · 2 days ago

    Executives today:

    This means if we put AI somewhere in our decision making, we can no longer be held accountable.

    • Wojwo@lemmy.ml · ↑1 · 1 day ago

      You know “accountability”: it’s when an executive fucks up and gets to retire early with a multimillion-dollar golden parachute.

    • InputZero@lemmy.world · ↑18 · 2 days ago

      Yup!

      “I’m sorry, but your contract is terminated because our management software designated your position as redundant and unnecessary. It wasn’t our decision to let you go, but it was our decision to begin using that software and it was our decision to program it to try to fire as many employees as possible, but it’s not our decision and therefore we can’t be held responsible. Goodbye.”

      • calcopiritus@lemmy.world · ↑3 · 1 day ago

        It’s the same argument cartels make: “We didn’t all increase our prices to the exact same amount; we just paid a consulting company to tell us which price we should use. Of course our competitors used the exact same company, but that’s just a coincidence.”

  • ruuster13@lemmy.zip · ↑163 · 3 days ago

    And when computers make all management decisions, remember that managers told them to do so, lest we forget whom to hold accountable.

  • Salvo@aussie.zone · ↑108 · 3 days ago

    Managers aren’t being held accountable for their management decisions either.

    “Oh, I sacked our entire workforce and sold all the company assets, so the figures will look amazing this month.”

    <one month later>

    “Oh, the figures are down this month, a golden handshake!? Thank you very much.”

    • SaveTheTuaHawk@lemmy.ca · ↑6 · edited · 2 days ago

      In most industries, management fails upward. Definitely true in Pharma.

      There are CEOs with a 20-year string of development failures, but they bring “vast experience”.

    • HobbitFoot @thelemmy.club · ↑4 · 2 days ago

      It depends, though.

      There are cases where a struggling company as a whole is worth less than the sum of its parts. At that point, the fiscally prudent option is to sell it off, either in one piece or in multiple pieces. There are plenty of cases in American corporate history where the best option was to cut losses and leave a market.

      That being said, I’m surprised that private equity is still allowed to be a thing, given the massive disparity between how a lot of private equity companies run their companies and their fiduciary responsibilities to those companies’ stockholders and bondholders.