Securing Agentic AI with the AI Red Teaming Agent
PyRIT (Python Risk Identification Tool) ships with a collection of built-in attack strategies for defeating AI safety systems. The AI Red Teaming Agent in Azure AI Foundry leverages these strategies to provide insights into the risk posture of your generative AI system.
The AI Red Teaming Agent helps you assess that risk posture in three ways:
First, you can automatically scan your model and application endpoints for safety risks by simulating adversarial probing.
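For example, such a scan can be launched programmatically. The sketch below uses the red-teaming preview of the azure-ai-evaluation Python SDK; the class and parameter names reflect that preview and may differ across SDK versions, the project endpoint is a placeholder, and the target callback is a stand-in for your own application:

```python
import asyncio
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, AttackStrategy

# Placeholder target: wrap your own model or application endpoint here.
def my_application(query: str) -> str:
    return "I cannot help with that request."

async def main():
    red_team = RedTeam(
        azure_ai_project="<your-azure-ai-foundry-project>",  # placeholder
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,  # attack objectives to generate per risk category
    )
    # Simulate adversarial probing of the target with selected strategies.
    await red_team.scan(
        target=my_application,
        attack_strategies=[AttackStrategy.Base64, AttackStrategy.Flip],
    )

asyncio.run(main())
```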
Next, you can evaluate and score each attack-response pair to generate metrics such as Attack Success Rate (ASR), the percentage of attacks that elicit an unsafe response from your system.
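ASR itself is simply the fraction of attack attempts that succeeded. A minimal illustration over hypothetical, already-scored attack-response pairs (the verdicts below stand in for the agent's risk and safety evaluators):

```python
# Hypothetical attack-response pairs scored by a safety evaluator:
# attack_succeeded is True when the system produced unsafe output.
scored_pairs = [
    {"strategy": "base64", "risk": "violence", "attack_succeeded": True},
    {"strategy": "base64", "risk": "violence", "attack_succeeded": False},
    {"strategy": "flip", "risk": "hate_unfairness", "attack_succeeded": False},
    {"strategy": "flip", "risk": "hate_unfairness", "attack_succeeded": False},
]

successes = sum(pair["attack_succeeded"] for pair in scored_pairs)
asr = successes / len(scored_pairs)
print(f"Attack Success Rate: {asr:.0%}")  # -> Attack Success Rate: 25%
```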
Finally, you can generate a scorecard that breaks results down by attack probing technique and risk category to help you decide whether the system is ready for deployment. Findings can be logged, monitored, and tracked over time directly in Azure AI Foundry, supporting compliance and continuous risk mitigation.
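To see how such a scorecard aggregates, here is a small self-contained sketch that groups the same kind of hypothetical scored pairs by attack technique and risk category; the real agent produces this breakdown, with richer detail, inside Azure AI Foundry:

```python
from collections import defaultdict

# Same hypothetical scored pairs as in the ASR sketch above.
scored_pairs = [
    {"strategy": "base64", "risk": "violence", "attack_succeeded": True},
    {"strategy": "base64", "risk": "violence", "attack_succeeded": False},
    {"strategy": "flip", "risk": "hate_unfairness", "attack_succeeded": False},
]

# Aggregate per (attack strategy, risk category) bucket.
buckets = defaultdict(lambda: [0, 0])  # key -> [successes, attempts]
for pair in scored_pairs:
    key = (pair["strategy"], pair["risk"])
    buckets[key][0] += pair["attack_succeeded"]
    buckets[key][1] += 1

# Print a simple per-technique, per-category scorecard.
for (strategy, risk), (hits, total) in sorted(buckets.items()):
    print(f"{strategy:>8} | {risk:<16} | ASR {hits / total:.0%}")
```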