by fionapass | Aug 11, 2024 | AI
Every supposedly impenetrable LLM can be jailbroken. And every service agreement that guarantees the safety of your data will promise that anything entered into a prompt window won't be used to train future models. All of this can be broken, loopholed, or hacked. Once you give your...