The problem of bias—particularly racial and gender bias—in algorithms and artificial intelligence systems is bleeding into the public consciousness through a steady flow of unwelcome stories. According to IBM Research's A.I. ethics chief, fixing the problem is no simple matter.
"Addressing this issue is really a process," Francesca Rossi said Tuesday at the Fortune Global Forum in Paris. "When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice, not just training data, can bring unconscious bias."
Rossi was responding to the suggestion that IBM's AI Fairness 360 toolkit—a set of metrics that the company released last year in order to detect and remove hidden bias in datasets and machine learning models—could encourage a "checklist mentality" in companies that use it.
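To make the toolkit's purpose concrete, here is a minimal sketch of one kind of group-fairness metric that AI Fairness 360 computes: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. The function and the toy loan data below are invented for illustration and are not the toolkit's actual API.

```python
# Sketch of a group-fairness metric of the sort AI Fairness 360 provides.
# All names and data here are illustrative, not the library's real interface.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity between groups; values well
    below 1.0 flag potential bias against the unprivileged group.
    """
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy loan-approval outcomes: 1 = approved, 0 = denied.
outcomes = [1, 0, 1, 1, 0, 0, 1, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group B is approved at 0.5, group A at 0.75 -> ratio 0.67.
print(round(disparate_impact(outcomes, groups, "B", "A"), 2))
```

A real audit would, as Rossi notes, go well beyond a single number like this: the toolkit pairs such metrics with mitigation algorithms applied before, during, or after model training.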
"There is not a moment where you check something and you're done," she said. "For us, these toolkits are a way for the community of researchers to understand together the best way to address the problem."
But Rossi stressed that the solutions would not just come from A.I. researchers. "It's important to understand that any one of these issues is not something that can be even identified and resolved by A.I. experts alone," she said. "It tends to be addressed in a very multidisciplinary context and a very multi-stakeholder context."
Rossi and Antoine Bordes, the co-managing director of Facebook's A.I. Research unit, also discussed the issue of ensuring that the decisions made by algorithms were explainable—something that regulators, particularly in Europe, are increasingly keen on making happen.
"Is it explainability or interpretability?" mused Bordes. "Where we pivot more is interpretability… it's more internal diagnostics."
IBM's Rossi, however, had a different take.
"Our two companies have completely different business models," said Rossi, explaining that the users of enterprise A.I. see the issue as fundamental.
"You may inject A.I. into companies where the decisions that are made are very high-stake, like the public sector, the judicial system [or] health care," Rossi said. "You need to provide the possibility for accountability and redress." In health care, she added, "for each decision you may want to provide different explanations to different recipients of the decision, such as doctors, patients and relatives."