One of the central uses of artificial intelligence (AI) is to make predictions. The ability to learn statistical relationships within enormous datasets enables AI, given a set of current conditions or features, to predict future outcomes, often with exceptional accuracy. Increasingly, AI is being used to make predictions about individual human behavior in the form of risk assessments. Algorithms are used to estimate the likelihood that an individual will fully repay a loan, appear at a bail hearing, or safeguard children. These predictions are used to guide decisions about whether vital opportunities (to access credit, to await trial at home rather than while incarcerated, or to retain custody) are extended or withdrawn.
An adverse decision—for instance, a denial of credit based on a prediction of probable loan default—has negative consequences for the decision subject, both in the near term and into the quite distant future (see the sidebar on credit scoring for an example). In an ideal world, such decisions would be made on the basis of a person's individual character, on their trustworthiness. But forecasting behavior is not tantamount to assessing trustworthiness. The latter task requires understanding reasons, motivations, circumstances, and the presence or absence of morally excusing conditions.9 Although a behavioral prediction is not the same as an evaluation of moral character, it may well be experienced that way. Humans are highly sensitive to whether others perceive them as trustworthy.25 A decision to withhold an opportunity on the basis that a person is "too risky" is naturally interpreted as a derogation of character. This can lead to insult, injury, demoralization, and marginalization.