Interesting. Yes, we need new ways of engendering trust in AI: not misplaced trust, but rigorous trust that the AI works the way we think it should, knows its own limitations, and is robust to changes in the world. Please take a look at my preprint on self-explaining AI; it offers a new approach. Interpretability isn't going to be feasible, so we need this new paradigm. Ultimately, self-awareness will be important: the AI needs to be aware of its own limitations so it can warn its users when it's being asked to do something it can't.