Not simply repairable
I learned something new! It's almost obvious in hindsight, but it hadn't been clear to me: right now we can't simply fix a faulty large language model when it makes mistakes or spreads false information. That's because LLMs are built on neural networks. A neural network learns on its own: during training it repeatedly processes data and nudges an enormous number of numerical weights (which end up encoding probabilities) by small amounts. So if you want to change something specific in an LLM, you'd have to adjust countless such weights consistently. That process is so labor-intensive that it's easier to train a new model, which is why GPTs are released as larger, major version updates. ...
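To picture why there is no single "dial" to turn, here is a minimal toy sketch in Python with NumPy. The layer sizes are invented purely for illustration and have nothing to do with any real GPT; the point is only that even a tiny network spreads its behavior across a huge number of weights.

```python
# Minimal sketch (hypothetical toy network, not any real LLM):
# knowledge in a neural network is spread across many numeric weights,
# so there is no single entry to "correct" when the output is wrong.
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 512-dim input -> 1024 hidden units -> 512-dim output.
W1 = rng.normal(size=(512, 1024))
W2 = rng.normal(size=(1024, 512))

total_weights = W1.size + W2.size
print(f"Even this toy network has {total_weights:,} weights")  # 1,048,576

# Whatever the network outputs is the result of all these weights interacting:
x = rng.normal(size=512)      # some input representation
hidden = np.tanh(x @ W1)      # every weight in W1 contributes
output = hidden @ W2          # every weight in W2 contributes

# To change one specific output, you would have to nudge vast numbers of
# weights consistently (which is what training does); there is no single
# stored "fact" or probability you can look up and edit by hand.
```

A real GPT-scale model has billions of such weights rather than a million, which is why targeted corrections are so hard and new knowledge tends to arrive with a freshly trained model.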