Microsoft discovered a new AI jailbreak attack: Skeleton Key.
This attack bypasses safety guardrails in models like GPT-4, Llama3, and Gemini Pro, making them respond to harmful requests.
Key points:
• It convinces AI to ignore safeguards.
• It impacts multiple top AI models.
• Robust security measures are essential.