The biggest challenge facing AI today is the lack of guardrails to protect citizens from unethical or even dangerous machine learning models built without proper process and standards. Increasingly, organizations are taking seriously the need for AI and machine learning to be developed with the same rigor as modern software. This requires firm model development governance standards that ensure accountability during model development, rather than leaving model governance teams to infer, after the fact, whether a model was built correctly. AI model development must be supported by formal standards, algorithms, and tools that both enable model transparency and produce assets from development that can be passed as operating parameters for monitoring once the AI is in use.

This presentation focuses on the three keystones of Responsible AI: explainability, ethics, and auditability. We will discuss novel interpretable latent-feature neural networks that provide transparency, enabling explainability, while also driving ethics testing by exposing learned latent features for bias testing. Auditability is accomplished through a model development governance blockchain that codifies and enforces the corporation's model development standard, and further drives the monitoring of models to ensure continued responsible use.
Session Summary
The Three Keystones of Responsible AI – Explainability, Ethics, and Auditability
MLconf 2023 New York City
Scott Zoldi
Chief Analytics Officer
FICO