A prompt injection attack is the culprit — hidden commands that can override an AI model's instructions and get it to do ...