Skip to main content

This browser is no longer supported.

Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support.

Download Microsoft Edge More info about Internet Explorer and Microsoft Edge
Read in English
Read in English Edit

Share via

Facebook x.com LinkedIn Email

Microsoft AI Red Team

Learn to safeguard your organization's AI with guidance and best practices from the industry leading Microsoft AI Red Team.

About AI Red Team

Overview

  • What is AI Red teaming and how Microsoft is building safer AI?

How-To Guide

  • Guide for building AI Red Teams for LLMs

Reference

  • Responsible AI tools and practices
  • Responsible AI standard and impact assessment

Getting ready

Overview

  • Microsoft's Open Automation Framework to Red Team Generative AI Systems (PyRIT)

How-To Guide

  • PyRIT How to Guide

Reference

  • AI Risk Assessment for ML Engineers
  • AI shared responsibility model

Understanding Threats

How-To Guide

  • Developer threat modeling guidance for ML systems

Concept

  • Taxonomy for machine learning failure

Reference

  • Bug Bar to triage attacks on ML systems

Exploring secure solutions

Concept

  • Methodology for safety aligning the Phi-3 series of language models

Reference

  • Enterprise security and governance for Azure Machine Learning
  • What is Azure AI Content Safety?
  • Harms mitigation strategies with Azure AI
  • Monitor quality and safety of deployed prompt flow applications

Lessons learned

Overview

  • Lessons from red teaming 100 generative AI products
Your Privacy Choices
  • Previous Versions
  • Blog
  • Contribute
  • Privacy
  • Terms of Use
  • Trademarks
  • © Microsoft 2025
Your Privacy Choices
  • Previous Versions
  • Blog
  • Contribute
  • Privacy
  • Terms of Use
  • Trademarks
  • © Microsoft 2025