Mniot

  • 0 Posts
  • 11 Comments
Joined 3 months ago
Cake day: March 10th, 2025

  • This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

    The whole paragraph is kinda FUD except for this. Normal research practice is to (get ready for a shock) do research, not just copy a high-level summary of what other people have done. If your professors were saying, “don’t cite encyclopedias, which includes Wikipedia,” then that’s fine. But my experience was that Wikipedia was specifically called out as being especially unreliable, and that’s just nonsense.

    I personally use ChatGPT like I would Wikipedia

    Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you how to read sources. Maybe because they didn’t know themselves. :-(



  • I think the academic advice about Wikipedia was sadly mistaken. It’s true that Wikipedia contains errors, but so do other sources. The problem was that it was a new thing, and the idea that someone could vandalize a page startled people. It turns out, though, that Wikipedia has pretty good controls for this over a reasonable time window. And there’s a history of edits. And most pages are accurate and free from vandalism.

    Just as you shouldn’t uncritically read any of your other sources, you shouldn’t uncritically read Wikipedia. But if you are going to read uncritically, Wikipedia is far from the worst thing to blindly trust.



  • I don’t understand how you think this works.

    If I say, “now we have robots that can build a car from scratch!” the automakers will be salivating. But if my robot actually cannot build a car, then I don’t think it’s going to cause mass layoffs.

    Many of the big software companies are doing mass layoffs. It’s not because AI has taken over the jobs. They always hired extra people as an anti-competitive tactic. Now they’re doing layoffs to drive salaries down. That sucks, and tech workers would be smart to unionize (we won’t). But I don’t see any radical shift in the industry.


  • To be honest, you sound like you’re only just starting to learn to code.

    Will coding forever belong to humans? No. Is the current generative-AI technology going to replace coders? Also no.

    The reaction you see is frustration: it’s obvious to anyone with decent skill that AI isn’t up to the challenge, but it’s not obvious to people who don’t have that skill, so we now spend a lot of time telling bosses, “no, that’s not actually correct.”

  • Someone else referenced Microsoft’s public work with Copilot. There, Copilot made 13 PRs over 5 days and only 4 ever got merged. You might think, “30% success is pretty good!” But compare that with human-generated PRs and you can see that 30% fucking sucks. And that’s not even looking inside the PRs, where the bot wastes everyone’s time making tons of mistakes. It’s just a terrible coworker, and instead of getting fired it’s getting an award for top performer.