Ing. Pablo Lorenzatto, Lic. Carlos Giudice, Lic. Matías Grinberg
As AI systems continue to evolve and become integrated into day-to-day life, we are beginning to encounter new kinds of risks that were not as prevalent in previous Machine Learning systems. For the purpose of this work we identify two broad categories of risks:
The potential harm of these risks is explored in the NIST AI RMF [1], which highlights:
The first step in addressing these dangers is understanding the attack surface at each phase of the AI lifecycle. The OWASP AI guidelines [2] highlight:
AI systems are composed of hardware, software, processes, and artifacts. OWASP recommends both traditional cybersecurity controls and AI-specific controls, of which we highlight the following two: