D. C. Elton
1 min read · Feb 14, 2020


Interesting. Yes, we need new ways of engendering trust in AI: not misplaced trust, but rigorous trust that the AI works the way we think it should, knows its own limitations, and is robust to changes in the world. Please take a look at my preprint on self-explaining AI; it offers a new approach. Interpretability isn't going to be feasible, so we need this new paradigm. Ultimately, self-awareness will be important: the AI needs to be aware of its own limitations so it can warn its users when it's asked to do something it can't.




