Use this deck to plan your approach to AI red-teaming, execute an exercise suited to your organization, and build the right guardrails to protect your AI models from threat actors.
- Gain insight into how bad actors target AI systems and models, and how AI red-teaming offers safeguards that traditional red-teaming does not.
- Review AI security regulations emerging in different jurisdictions.
- Weigh in-house vs. outsourced solutions, with a high-level overview of tools, technologies, and metrics relevant to your organization.
- Familiarize yourself with commonly used red-teaming frameworks and guidelines, such as MITRE ATLAS, Microsoft AI Red Teaming, and the NIST AI RMF Playbook.