Thank you! You do you.
Artwork
…cogito, ergo sum…
- 0 Posts
- 8 Comments
deleted by creator
“Self-hosted mind atrophy with skills degradation running in parallel.”
No. Absolutely not. You should code with your mind, and stay creative. Related: Using AI Generated Code Will Make You a Bad Programmer
Artwork@lemmy.world to Technology@lemmy.world • How Hackers Are Fighting Back Against ICE (English)
1 · 3 days ago
Thank you, but I do disagree. You cannot know whether the “result” from that LLM includes all the required context, and you won’t re-clarify it, since the output already doesn’t contain what’s relevant; in the end you miss the knowledge and waste the time, too.
How can you be sure the output includes what’s relevant? Will you ever re-submit the question to an algorithm without even knowing that re-submitting is required, since there’s no indication for it? I.e., the LLM just did not include what you needed, did not include the important context surrounding it, and did not even tell you which authors to question further. No attribution, no accountability, no sense, sorry.
Artwork@lemmy.world to Technology@lemmy.world • How Hackers Are Fighting Back Against ICE (English)
143 · 6 days ago
Please no. Absolutely not. An LLM is absolutely not “nice for dealing with confusion” but the very opposite.
Please do consider people’s effort, articles, and attributions, and actually learning and organizing your knowledge. Please do train your mind and your self-confidence.
Artwork@lemmy.world to Technology@lemmy.world • Stack Overflow in freefall: 78 percent drop in number of questions (English)
287 · 7 days ago
Thank you, but I am sorry, I will not read the output of the LLM. I’ll recheck the grammar manually.
Artwork@lemmy.world to Technology@lemmy.world • Stack Overflow in freefall: 78 percent drop in number of questions (English)
1913 · 7 days ago
It’s worth mentioning that the Stack Overflow survey referenced does not include many countries that also have great/genius developers, including Belarus, Russia, China, Iran…
There are related cases raised in the Meta scopes: Developer Survey 2025 is, apparently, region blocked…
Apparently, while I’ve been employed in security as a software engineer for at least 19 years now, I’ve never ever taken these trendy LLM/“AI” tools at all seriously, and still do not.
Sorry, I have literally no interest in any of it that makes you dependent on it, atrophies the mind, degrades research and social skills, and negates self-confidence with respect to other authors, their work, and attributions. Nor do any of my colleagues in the military, or those I know better in person.
Constant research, plus general IDEs like JetBrains’, IDA Pro, Sublime Text, VS Code, etc., backed by forums, chats, and communities, is absolutely enough for accountable and fun work in our teams, which manage to keep to reasonable deadlines.
Nor will I use any LLM in my work, art, or research… I prefer people, communication, discoveries, effort, creativity, and human art…
I just disable it everywhere possible, and will do so all my life. The closest case to my environment was VS Code, and hopefully there’s no reason to build it from source, since they still leave built-in options to disable it: https://stackoverflow.com/a/79534407/5113030 (How can I disable GitHub Copilot in VS Code?..). Isn’t it just inadequate to not think and develop your mind, let alone pass control of your environment to yet another model or “advanced T9” of unknown source and unknown iteration?
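As a side note on those built-in options, here is a minimal sketch of what they can look like in a user-level settings.json; the keys are taken from the Copilot extension and current VS Code settings and may differ between versions, so treat this as an illustration rather than a definitive list (the linked answer covers more cases):

```jsonc
// settings.json (user level) — assumed keys, may vary by VS Code / Copilot version
{
  // Turn off Copilot inline completions for every language
  "github.copilot.enable": {
    "*": false
  },
  // Hide the chat/Copilot button from the command center in the title bar
  "chat.commandCenter.enabled": false
}
```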
In pentesting, random black-box I/O, experimental unverified intel in medicine, log-data approximation? Why not. But in environment control, education, art or programming, fine art… No, never ^^
Meanwhile… so freaking, incredibly many developers and artists are left without attribution, respect, gratitude…
So many people atrophy their skills for learning, contributing, researching, accumulating, self-organizing…
So much human precious time is wasted…
So much gets devalued… Time will tell… and probably only the few who are actually accountable will recover…
This is such a heartache… so sorrowful…
Artwork@lemmy.world to Hardware@lemmy.world • OpenAI's first Jony Ive-designed AI hardware might just be a pen (English)
1 · 10 days ago
No, sorry. I do not need and will never need your “support” for my mind, to atrophy it and make it depend on your “tool”. My mind is my pen, one I want to train and learn about. This is one of the reasons I was born, I believe: to learn, discover, and communicate with people to learn from. That “pen” will only damage all of this instead, ruin my limited time, and devalue art and the whole fun and purpose of art, I believe.
Not only that, but… who in their dear mind… what actual artist would want to “draw”/“write” with someone else’s art or words coming from that pen’s LLM/model, from some unknown actual artists…? What is the… reason… then? Where is the art then?

No, thank you. Sorry, never.
Not only that, but the huge probability of mistakes is just deafening. The last time I used an LLM was in 2023, when someone recommended it for a paperwork task, and I got a literal headache within 10 minutes… Since then I will never ever use that sorrow for anything other than black-box pentesting or the kind of experimental, unverified generated data you may find in isolated medical or military solutions.
That deafening feeling that every single bit of output from that LLM or void machine may contain a mistake no soul is accountable for or can be asked about… A generated bit of someone’s work you just cannot verify, since no source nor human is available… How would you trace the rationale that resulted in the output shown?
Faster? Is that so… Doesn’t verifying every output require even more time to test it and consider it stable, to prove it is correct, to stay accountable for the knowledge and actions you perform as a developer, artist, researcher… human?
Your mind is to be trained to do research, to remember, and not to depend on someone’s service to the level of predominance/replacement.
Meanwhile, the effort, passion, creativity, empathy, and love you carry support you in the long term.
You may not care now, but you do you. It’s your mind and memory you develop.