Communications of the ACM

ACM Opinion

Scientists Increasingly Cannot Explain How AI Works

Illustration: a printed circuit board in the shape of a human brain.

Though there is already a subset of AI known as Explainable AI (XAI), the general techniques it promotes are often reductive and inaccurate.

Credit: Getty Images

Rarely do we question the basic decisions we make in our everyday lives, but if we did, we might realize that we cannot pinpoint the exact reasons for our preferences, emotions, and desires at any given moment. A similar problem exists in artificial intelligence (AI).

The people who develop AI are increasingly having trouble explaining how it works and determining why it produces the outputs it does. Deep neural networks often seem to mirror not just human intelligence but also human inexplicability.

From Vice