

I want to write gnocchi code, where each little nugget is good on its own and they still blend together perfectly in the sauce. But I still end up with mashed potato-code if I don’t watch myself.
You replied to only one of my points, and that’s not even what I said…
They train new models on base models, and I’m talking about how they scraped the internet without permission, how websites sold their users’ data without compensation, and how no one was ever given any opportunity to opt out of their work and their words being used to train these base models.
Without that grand-scale theft we would have no base models anywhere near what we have now.
I’m not opposed to willingly sharing, I’m opposed to profiting from stealing.
I noticed how quickly my own skills started deteriorating when trying to work with it. I’m trying to build my skills, not outsource them.
I also don’t love the environmental impact, nor the immorality of how they got/get their training sets for the base models.
If my work tried to force me to use it, I would be looking to change employers. Or lie and say I use it. But our AI use is heavily regulated and generally discouraged, so luckily no issues there.
Of course.
My reasons for not using AI are the same as they were four months ago and will be the same in four months, regardless of what the models can or can’t do.
Ask again in four years.
Nightmarish. Can you make the AI write up new documentation every time you want to push a change, so it looks like you’re using it frequently but you still get to write the code yourself?
I would love letting an LLM deal with documentation. It’s the bane of my existence.