• sacredfire@programming.dev
    23 hours ago

    That was an interesting read… The company I currently work for doesn’t allow AI tools to be fully integrated into our code base. I tinker around with them on my own time, but I’m left wondering what the profession is turning into for other people.

    Here on lemmy, we are definitely in the naysayers' camp, but this article tries to paint a picture in which almost everyone in tech is on board and convinced these tools are the way, and that writing code by hand is a thing of the past. The author certainly went to great lengths to recount interviews with people who share this opinion, many of whom, I will note, have a vested interest in AI. Yet they didn't really ask anyone who specifically held the opposing viewpoint, only tangentially mentioning that opponents exist and dismissing them as perhaps deluded.

    I did appreciate that they touched on the difference between greenfield projects and brownfield projects and reported that Google only saw about a 10% increase in productivity with this kind of AI workflow.

    Still I wonder what the future holds and suppose it’s still too early to know how this will all turn out. I will admit that I’m more in the naysayers camp, but perhaps that’s from a fear of losing my livelihood? Am I predisposed to see how these tools are lacking? Have I not given them a fair chance?

    • Kissaki@programming.dev
      4 hours ago

      It’s a tool that adds yet more complexity to our profession. More choice, more cost-benefit analysis, more risk assessment, more shitty stuff to inherit and fix, more ability for shitty code producers to hide incompetence, more product and data policy analysis, more publisher trustworthiness and safety analysis, more concerns about what tooling sends into the cloud and what it can access and do locally, a significant “cheap and fast solution” you will be compared against requiring more communication, explanation, and justification, new attack vectors to protect against, …

      My team and some others [can] use Copilot at my workplace. I haven’t had or seen significant gains. Only very selectively. Some other senior devs I trust are also skeptical/selective. We see the potential opportunities, but they’re not new solutions. Some other colleagues are more enthusiastic.

      What it does do is make me question bad code from review request authors: whether they missed it, are that unobservant or incapable, or used AI. Quality, trustworthiness, and diligence were concerns anyway, but now I can’t really assess how much care and understanding they actually bring, whether they’re being misled or taking shortcuts, and how that changes over time - other than by asking, of course.

      I’m not scared for my job. AI has already changed the field and industry, but not with a net gain in quality or productivity, and it will continue to change them one way or another. There are many parts of software development, and of the work surrounding it, that it can’t do well.

    • TehPers@beehaw.org
      12 hours ago

      I’m left wondering what the profession is turning into for other people.

      All the code I review looks good at first glance and makes shit up as it goes once you read into it more. We use two different HTTP libraries - one sync, one async - in our asynchronous codebase. There’s a directory full of unreadable, obsolete markdown files that are essentially used as state. Most of my coworkers don’t know what their own code does. The project barely works. There’s tons of dead code, including dead broken code. There are barely any tests. Some tests assert true with extra steps. Documentation is full of obsolete implementation details and pointers to files that no longer exist. The README has a list of all the files in the repo at the top of it for some reason.

      I will admit that I’m more in the naysayers camp, but perhaps that’s from a fear of losing my livelihood?

      People are being laid off because of poor management and a shitty economy. No software devs are losing their jobs because AI replaced them. CEOs are just lying about that because it’s convenient. If software devs truly were more effective with these tools, you’d hire more.

      Am I predisposed to see how these tools are lacking? Have I not given them a fair chance?

      That’s up to you to decide. Try using them if you want, but don’t force yourself to become obsessed with them. If you find yourself more productive, then that’s that; if not, that’s that too. It’s just a tool, albeit a fallible one.

    • jubilationtcornpone@sh.itjust.works
      21 hours ago

      Still I wonder what the future holds and suppose it’s still too early to know how this will all turn out. I will admit that I’m more in the naysayers camp, but perhaps that’s from a fear of losing my livelihood?

      It’s all just conjecture at this point. I vividly remember how “the cloud” was allegedly going to help organizations eliminate the IT department, dramatically lower operating costs, and basically put every system admin out of a job.

      It succeeded at none of those things. It did help some organizations shift costs from CapEx to OpEx. But it also effectively made data centers available to organizations (and individuals) who didn’t have access to that kind of technology before. It didn’t live up to the hype but it has had a major impact.

      Personally, I figure a lot of these “AI” companies are going to fold. There’s just not much value in cramming LLMs into every product. Not to mention we’ve spent the better part of 30 years trying to get away from making users type when they want the computer to do something. Moving back away from a point-and-click interface, which has hardly reached its best general state, could be a steep uphill battle.

      Again, all conjecture.