Nov 1, 2024
An interesting thought, Roberto. More and more content will be generated by AI, and it will become harder to distinguish from human-generated content. Then new LLMs get trained on that material, which accumulates more and more errors, and you have a nice little feedback loop yielding ever more crap. Thanks for pointing this out.