Article on popular science: prompt injection attacks are a new risk associated with the interation if llms into other services. •prompt injection attacks imply the use of a prompt that bypass safety restrictions of a given ai / llm, which cannot differentiate between illicit instructions and inputs. •a proper prompt injection attacks can thus use an assistant to interact with a service and complete a sets of instructions. I’d like to hear what you think about this
These are good examples:
It’s not that hard to trick many users, that’s why corporations require their employees to take regular cybersecurity trainings. LLMs can be even easier to manipulate.