- 8 Posts
- 20 Comments
HaraldvonBlauzahn@feddit.org to Programming@programming.dev • Trusting your own judgement on 'AI' is a huge risk • 22 · 3 days ago
Ah, still rolling out the old “stochastic parrot” nonsense, I see.
It is a bunch of stochastic parrots. It just happens that, frequently, the words they are parroting were originally written by a bunch of intelligent people who were knowledgeable in their fields.
Note that this doesn’t make the parrots intelligent - in the same way that a book written by Einstein to explain special relativity does not have any intelligence of its own. Einstein was intelligent, and his words transport his intelligent ideas, but the book conveying them to other people (that is, the printed pages with a cardboard cover) is as dumb as a stone. You would not ask a piece of cardboard to solve a math problem, would you?
HaraldvonBlauzahn@feddit.org to Programming@programming.dev • Trusting your own judgement on 'AI' is a huge risk • 6 · 4 days ago
Responding to another comment in [email protected]:
Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.
What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of mathematics, and mathematics is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see if the result happens to match an addition task, that is certainly not doing science - and no, the dice still can’t think logically and certainly don’t do math, even if the result sometimes happens to be correct.
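As a toy illustration of that last point - a sketch with made-up trial counts and a minimal hand-rolled PRNG standing in for the dice - a random guesser is sometimes right without doing any arithmetic at all:

```rust
// A process that guesses answers can be right by chance without "doing math".
// Minimal linear congruential generator standing in for the octal dice.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

fn main() {
    let (a, b) = (3u64, 4u64); // the "addition task": 3 + 4
    let mut state = 42u64;
    let mut hits = 0;
    let trials = 8000;
    for _ in 0..trials {
        // Roll two octal dice to form a uniform guess in 0..63.
        let guess = (lcg(&mut state) % 8) * 8 + lcg(&mut state) % 8;
        if guess == a + b {
            hits += 1;
        }
    }
    // Roughly 1 in 64 guesses match - correctness by chance, not computation.
    println!("{} of {} guesses matched {}", hits, trials, a + b);
}
```

The occasional matches say nothing about the dice "knowing" addition; they are exactly the base rate of a uniform guess.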
For the dynamic vs static typing debate, see the article by Dan Luu:
https://danluu.com/empirical-pl/
But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases via the Barnum effect. Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.
And the quibbling about what “thinking” means just shows that the pro-AI arguments have degraded into a debate about belief - the argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in reality that LLMs apply logical rules. They cannot derive logical facts, cannot explain their output by reasoning, are not aware of what they ‘know’ and don’t ‘know’, and cannot optimize decisions for multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).
What would be needed here are objective, controlled experiments testing whether developers equipped with LLMs can produce working and maintainable code any faster than those not using them.
And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.
So what do you do with a file object?
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • Lukas Atkinso: Net-Negative Cursor • 13 · 14 days ago
You are right about this. But still, in Rust, a vector of u8 is different from a sequence of Unicode characters. This would not work in Python 3 either, while it would work in Python 2.
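To make the distinction concrete (a minimal sketch; the byte values are just examples): in Rust, going from `Vec<u8>` to `String` is an explicit, fallible conversion, because not every byte sequence is valid UTF-8.

```rust
fn main() {
    // A vector of bytes is not text: it must be validated as UTF-8 first.
    let valid: Vec<u8> = vec![0xC3, 0xA9]; // the UTF-8 encoding of 'é'
    let s = String::from_utf8(valid).unwrap();
    assert_eq!(s, "é");
    assert_eq!(s.chars().count(), 1); // one character...
    assert_eq!(s.len(), 2);           // ...but two bytes

    // An arbitrary byte sequence is not necessarily text at all.
    let invalid: Vec<u8> = vec![0xC3]; // a lone UTF-8 lead byte
    assert!(String::from_utf8(invalid).is_err());
}
```

This is the same split Python 3 made: `bytes` and `str` are distinct types, whereas Python 2 blurred them.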
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • Lukas Atkinso: Net-Negative Cursor • 2 · 14 days ago
Thanks, I fixed it!
What I find interesting is that move semantics silently add something to C++ that did not exist before: invalid objects.
Before, if you created an object, you could design it so that it kept all invariants until it was destroyed. I’d even argue that the true core of OOP is that you get data structures with guaranteed invariants - a vector or hash map or binary heap never ceases to guarantee its invariants.
But now, you can construct complex objects and then move their data away with std::move() .
What happens with the invariants of these objects?
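For contrast, a sketch of how Rust handles the same situation: moving out of a variable invalidates it at compile time, so there is no observable "moved-from" state, and `std::mem::take` is the explicit way to leave a valid, empty object behind instead of an unspecified one.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // std::mem::take moves the contents out and leaves a valid default
    // (an empty Vec) in place, so every invariant of `v` still holds.
    let taken = std::mem::take(&mut v);
    assert_eq!(taken, vec![1, 2, 3]);
    assert!(v.is_empty());

    // A plain move would instead invalidate `v` at compile time:
    // let moved = v;
    // println!("{:?}", v); // error[E0382]: borrow of moved value: `v`
}
```

In C++, by contrast, the moved-from object still exists and is destructible, but its invariants are only whatever the move constructor happened to leave behind.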
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • Lukas Atkinso: Net-Negative Cursor • 22 · 14 days ago

```rust
let mut bytes = vec![0u8; len as usize];
buf.read_exact(&mut bytes)?;
// Sanitize control characters
let sanitized_bytes: Vec<u8> = bytes.into_iter()
    .filter(|&b| b >= 32 || b == 9 || b == 10 || b == 13) // Allow space, tab, newline, carriage return
    .collect();
```
This implicitly, and wrongly, swaps the interpretation of the input from UTF-8 text to pure ASCII.
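A UTF-8-aware alternative might look like this (a sketch with a hypothetical helper name: decode first, then filter characters, so that multi-byte sequences and the two-byte C1 control codes are handled correctly):

```rust
// Hypothetical helper: sanitize control characters without breaking UTF-8.
fn sanitize(input: &[u8]) -> String {
    String::from_utf8_lossy(input) // decode, replacing invalid bytes with U+FFFD
        .chars()
        .filter(|&c| !c.is_control() || c == '\t' || c == '\n' || c == '\r')
        .collect()
}

fn main() {
    // The two-byte character 'é' survives; the BEL control character is dropped.
    assert_eq!(sanitize("héllo\u{0007}\n".as_bytes()), "héllo\n");
}
```

Filtering on `char::is_control` also catches DEL and the C1 range (U+0080–U+009F), which a per-byte filter lets through.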
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • Cognitive Debt (A term to describe the costs of skipping thinking) • 91 · 14 days ago
Have you ever noticed that when intelligent engineers talk about designs (or, quite generally, when intelligent people talk about consequential decisions they have taken), they talk about their goals, about the alternatives they had, about what they knew about the properties of these alternatives and how those evaluated against their goals, about which alternative they chose in the end, and about how they addressed the inevitable difficulties they encountered?
For me, this is a very telling sign of intelligence in individuals. And truly good engineering organizations collect and treasure that knowledge - it is path-dependent, and you cannot quickly and fully reproduce it when it is lost. More importantly, some fundamental reasons for your decisions and designs might change, and you might have to revise them. Good decisions also have a quality of stability: the route taken does not change dramatically when an external factor changes a little.
Now compare that to letting a routing app automatically plan a route through a dense, complex suburban train network. The route you get will likely be the fastest one, with the implicit assumption that this is of course what you want - but any small hiccup or delay in the transport network can well make it the slowest option.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • Cognitive Debt (A term to describe the costs of skipping thinking) • 6 · 15 days ago
“Cognitive Debt is where you forgo the thinking in order just to get the answers, but have no real idea of why the answers are what they are.”
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 22 days ago
A big part of the changed software job market in the US is caused by the rise in interest rates and, in consequence, a large part of the high-risk venture capital money drying up. That money was financing a lot of start-ups without any solid product or business model. And this began very clearly before the AI hype.
The trope that AI is actually replacing jobs is a lie that AI companies want you to believe.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 22 days ago
A million drunk monkeys on typewriters can write a work of Shakespeare once in a while!
But who wants to pay $50 for a front-row theater ticket to see a play written by monkeys?
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 22 days ago
If you walk around in my city and open your eyes, you will see that half of the bars and restaurants are closed because there is a shortage of even unskilled staff - the restaurants didn’t pay people enough, and they now work in other sectors.
And yes, software developers are leaving jobs with unreasonable demands and shitty work conditions - not least because preserving mental health is more important. Go, for example, to the news.ycombinator.com forum and just search for the keyword “burnout”. That’s becoming a massive problem for companies, because rising complexity is not matched by adequate organizational practices.
And AI is not going to help with that - it is already massively increasing technical debt.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 22 days ago
It’s the Dunning-Kruger effect.
And it’s fostered by a massive amount of spam and astroturfing coming from “AI” companies, lying that LLMs are good at this or that. Sure, algorithms like neural networks can recognize patterns. Algorithms like backtracking can play chess, or solve and transform algebraic equations. But these are not LLMs, and LLMs will not and cannot replace software engineering.
Sure, companies want to pay less for programming. But they don’t pay software developers to generate gibberish in source-code syntax; they need working code. And this is why software engineers and good programmers will not only remain scarce but become even scarcer.
And companies that don’t pay six-figure salaries to developers will find that experienced developers will flat out refuse to work on AI-generated codebases, because they are unmaintainable and lead to burnout and brain rot.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 0 · 23 days ago
The early stages of a project are exactly where you should think hard and long about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you test them, and what the UI should look like. And from that, you derive the architecture.
AI is fucking useless at all of that.
In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You wouldn’t ask a bricklayer’s apprentice to draw the plans for a new house.
And if your difficulty lies in lacking detailed knowledge of a programming language, it might be - depending on the case! - best to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 0 · 23 days ago
The average coder is a junior, due to the explosive growth of the field (similar to how, in some fast-growing nations, the average age is very young). Thus the average is far below what good code looks like.
On top of that, good code cannot be automatically identified by algorithms. Some very good codebases might look bad at a superficial glance. For example, the code base of LMDB is very different from what common style guidelines suggest, but it is actually a masterpiece that is widely used. And vice versa: it is not difficult to make crappy code look pretty.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 2 · 23 days ago
OMG, this is gold! My neighbor must have wondered why I am laughing so hard…
The “reverse centaur” comment citing Cory Doctorow is so true it hurts - they want people to serve machines, and not the other way around. That’s exactly how Amazon’s warehouses work, with workers being paced by factory-floor robots.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 24 days ago
When did you last decide to buy a car that barely drives?
And another thing, there are some tech companies that operate very short-term, like typical social media start-ups of which about 95% go bust within two years. But a lot of computing is very long term with code bases that are developed over many years.
The world only needs so many shopping list apps - and there exist enough of them that writing one is not profitable.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 22 days ago
People seem to think that the development speed of any larger and more complex software depends on the speed at which the wizards can type in code.
Spoiler: this is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, looking up documentation, or talking with people - and the most-used key on the keyboard doesn’t need a Dvorak layout, because it is the delete key. In fact, you don’t need to know touch-typing to be a good programmer; what you need is to think clearly and logically, and to be able to weigh many different options against a variety of complex goals.
Which LLMs can’t.
HaraldvonBlauzahn@feddit.org OP to Programming@programming.dev • If AI is so good at coding - where are the open source contributions? • 1 · 24 days ago
Well, sometimes I think the web is flooded with advertising and spam praising AI. For these companies it makes perfect sense, because billions of dollars have been sunk into them, and they are trying to cash in before the tide might turn.
But do you know what is puzzling (and you do have a point here)? Many posts that defend AI do not engage in logical argumentation; they argue beside the point, appeal to emotions, use the short-circuited argument that “new” always equals “better”, or claim that AI is useful for coding as long as the code is not complex (compare that to the claim that mathematics is simple as long as it is not complex - a red herring and a laughable argument). So, many thanks for pointing out the above points and giving, in few words, a bunch of examples which underline that one has to think carefully about this topic!
It is very interesting to see how, with Rust and Guix, there is some convergence between programming worlds which so far have been rather separate universes. For example, Rust makes it easy to write modern system libraries which previously would have been written in C, the Linux kernel is slowly adopting Rust, and Guix makes it easy to use such libraries from strongly but dynamically typed languages like Guile, Racket, or Python.
For the general programming community, the promise is that Guix essentially solves the packaging and dependency resolution problem for multi-language projects. And it is making good strides - Guix now contains over 50,000 packages, not counting the nonguix channels, which add e.g. non-free firmware. (Just for convenience, here is how to install the Guix package manager on Arch.)