Experts affiliated with McGill University, the City University of New York, Harvard University, and Stanford University took issue with Google Health's artificial intelligence (AI) model for predicting breast cancer.
Google asserted that the algorithm—trained on more than 90,000 mammograms—could catch more of the cancers that previous models had missed as false negatives, but the critics said its lack of transparency "undermines its scientific value."
They cited the study's dearth of detail, including the absence of a description of model development and of the data-processing and training pipelines employed. Google also failed to disclose the definitions of several hyperparameters in the model's framework and the variables used to augment the training dataset.
The rebuttal's co-authors said that while AI methods in medicine hold high potential, "ensuring that these methods meet their potential ... requires that these studies be reproducible."
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA