

Something happens Americanly in America
Americans: “What are we, a bunch of Untermensch Asians???”
It’s not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts
But ERP is not a cool buzzword, so it can fuck off; we’re in 2025
You’re misunderstanding tool use: the LLM only issues a request for something to be done, then the actual system executes it and returns the result. You can also have the LLM summarize the result, and hallucinations in that workload are remarkably low (though without tuning it can drop important information from the response).
The place where it can hallucinate is in generating the steps for your natural language query, i.e. the entry stage. That’s why you need to safeguard like your ass depends on it. (Which it does, if your boss is stupid enough)
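To make the split concrete, here’s a minimal sketch of that workflow. All the names (`ALLOWED_TOOLS`, `get_invoice_total`, the dict shape of the tool call) are made up for illustration, not any specific framework’s API; the point is that the LLM only *requests* an action, a deterministic backend does the work, and the entry stage is validated before anything real happens:

```python
# Safeguard at the entry stage: a whitelist of tools the LLM may request.
ALLOWED_TOOLS = {"get_invoice_total"}

def get_invoice_total(invoice_id: str) -> float:
    # Stand-in for the rigid, non-hallucinating backend (e.g. your real DB).
    ledger = {"INV-001": 149.99, "INV-002": 80.00}
    return ledger[invoice_id]

def handle_llm_tool_call(call: dict) -> float:
    # Validate the LLM's structured request BEFORE touching the real system.
    if call.get("tool") not in ALLOWED_TOOLS:
        raise ValueError(f"rejected tool call: {call}")
    return get_invoice_total(call["args"]["invoice_id"])

# Pretend the model turned a natural-language query into this structured request:
llm_output = {"tool": "get_invoice_total", "args": {"invoice_id": "INV-001"}}
print(handle_llm_tool_call(llm_output))  # 149.99
```

The result the LLM sees came from the ledger, not from its own weights, which is why hallucination is largely confined to the request-generation step.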
The model ISN’T outputting the letters individually; byte-level models (as I mentioned) do that, not transformers.
The model output is more like Strawberry <S-T-R><A-W-B>
<S-T-R-A-W-B><E-R-R>
<S-T-R-A-W-B-E-R-R-Y>
Tokens can be a letter, part of a word, any single lexeme, any word, or even multiple words (“let be”)
Okay, I did a shit job demonstrating the time axis. The model doesn’t know the underlying letters of the previous tokens, and this process only moves forward in time
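A toy encoder makes this obvious. The vocabulary below is completely made up (real tokenizers learn their vocab from data and don’t split “Strawberry” this way), but it shows the key point: the model receives opaque token IDs, not letters, so “how many Rs?” is simply not visible in its input:

```python
# Made-up three-entry vocabulary; real BPE vocabs have ~100k entries.
TOY_VOCAB = {"Str": 101, "aw": 102, "berry": 103}

def toy_encode(text: str) -> list[int]:
    # Greedy longest-match split against the toy vocabulary.
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in TOY_VOCAB:
                ids.append(TOY_VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return ids

print(toy_encode("Strawberry"))  # [101, 102, 103] -- three IDs, zero letters
```

From the model’s point of view, “Strawberry” is the sequence `[101, 102, 103]`; counting the Rs would require knowledge of the characters hidden behind each ID.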
No, this literally is the explanation. The model understands the concept of “Strawberry”. It can output it (and that itself is very complicated) in English as Strawberry, in Persian as توت فرنگی, and so on.
But the model does not understand how many Rs exist in Strawberry or how many ت exist in توت فرنگی
For usage like that you’d wire an LLM into a tool use workflow with whatever accounting software you have. The LLM would make queries to the rigid, non-hallucinating accounting system.
I still don’t think it would be anywhere close to a good idea, because you’d need a lot of safeguards; one slip will fuck your accounting, and then you’ll have some unpleasant meetings with the local equivalent of the IRS.
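What a bare-minimum safeguard could look like: never post an LLM-drafted journal entry directly; run deterministic checks on it first. Everything below (the dict shape, field names, the `validate_entry` helper) is hypothetical, just to show the shape of the idea:

```python
from decimal import Decimal  # never do money math in floats

def validate_entry(entry: dict) -> bool:
    # Double-entry bookkeeping invariant: total debits must equal total credits.
    debits = sum(Decimal(l["amount"]) for l in entry["lines"] if l["side"] == "debit")
    credits = sum(Decimal(l["amount"]) for l in entry["lines"] if l["side"] == "credit")
    return debits == credits

# Imagine the LLM drafted this from "record a $500 office chair purchase":
draft = {
    "lines": [
        {"account": "furniture", "side": "debit", "amount": "500.00"},
        {"account": "cash", "side": "credit", "amount": "500.00"},
    ],
}
print(validate_entry(draft))  # True -- and even then a human should review before posting
```

An invariant check like this catches a whole class of hallucinated entries mechanically, but it’s only one layer; you’d still want human review before anything hits the real ledger.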
This is because autoregressive LLMs work on high-level “tokens”. There are experimental LLMs that can access byte-level information and correctly answer such questions.
Also, they don’t want to support you, omegalul. Do you really think call centers are hired to give a fuck about you? This is intentional
Lol, LMAO even
Have an infographic
Honestly? They’re based for being so easy to make
For the record, I am a C/Dart/Rust native dev, 2+ years deep in a pretty big project full of highly async code. This shit would’ve been done a year ago if the stack had been web-based instead of 100% native code
Download “LM Studio”, and you can download models and run them through it
I recommend something like an older Mistral model (FOSS model) for beginners, then move on to Mistral Small 24B, QwQ 32B and the likes
First, please answer: do you want everything FOSS, or are you OK with a little bit of proprietary code? Because we can do both
Fuck ClosedAI
I want everyone here to download an inference engine (use llama.cpp) and get on open source and open data AI RIGHT NOW!
My man, this is literally what they just did. This isn’t a strawman. At least google the meaning of your catchphrase, ffs