
Communications of the ACM

Viewpoint

Can Universities Combat the 'Wrong Kind of AI'?


[Figure: An instructor and two students in a university research laboratory. Credit: Andrij Borys Associates, Shutterstock]

The May 20, 2021 issue of the Boston Review hosted a Forum on the theme "AI's Future Doesn't Have to Be Dystopian," with a lead essay by the MIT economist Daron Acemoglu and responses from a range of natural and social science researchers.1 While recognizing the great potential of AI to increase human productivity and create jobs and shared prosperity, Acemoglu cautioned that "current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society." In this he followed a series of recent books centered on the disruptive effects of AI and automation on the future of jobs.6,7,8

In a previous paper, Acemoglu and Restrepo2 caution against the advance of what they term the "wrong kind of AI." What is the "wrong kind of AI"? According to Acemoglu and Restrepo, "the wrong kind of AI, primarily focusing on automation, tends to generate benefits for a narrow part of society that is already rich and politically powerful, including highly skilled professionals and companies whose business model is centered on automation and data." On its current trajectory, the "wrong kind of AI" "automates work to an excessive degree while refusing to invest in human productivity," and if unchecked, "further advances will displace workers and fail to create new opportunities … and the influence of these actors may further propagate the dominant position of this type of AI."1

Excessive focus on automation is thus a central defining characteristic of "the wrong kind of AI." Several factors suggest that automation will accelerate following the COVID-19 pandemic. An article in the New York Times (Apr. 20, 2020) observed that "society sees the benefits of restructuring workplaces in ways that minimize close human contact. The pandemic is prompting some stores to adopt even more aggressive 'contactless' options. While fully automated stores, such as Amazon Go, might have seemed like a technological curiosity a few months ago, they are likely to become a more viable option for retailers." The Times continues, "A new wave of automation could also mean that when companies start hiring again, they do so in smaller numbers … you may see fewer workers when the recovery does come."

While people argue about how far AI and automation will eliminate jobs entirely, in discussing the "wrong kind of AI" we must also consider what is happening to existing jobs. In their responses to Acemoglu's essay, Kate Crawford and Rob Reich highlight the pernicious effects of AI technologies on workers.1 AI-driven "bossware" creates a surveillance panopticon that constantly monitors labor. Automated scheduling systems optimize for the company at the cost of workers, who face uncertain hours, risks, and compensation. The "gig" economy denies workers the benefits of full-time employment, such as healthcare and retirement plans. Crawford's recent book4 is an extensive atlas of the human costs of AI.

How do we combat the "wrong kind of AI"? Acemoglu1 warns that combating this trend and redirecting AI for the wider social good "requires a massive redirection of technological change … We cannot expect this redirection to be led by today's corporate giants, whose profit incentives and business models have centered on automation and monitoring" and on "removing the (fallible) human element from the production process." That large corporations cannot be expected to redirect AI toward social good was put into stark focus by the recent testimony before the U.S. Congress of Facebook whistleblower Frances Haugen, which revealed that Facebook executives were fully aware of the damage their "algorithm" was causing, yet chose to ignore it, putting "astronomical profits before people." As my co-author Shalom Lappin and I argue in a recent article in the Guardian, "the view that private companies can be trusted to regulate themselves out of a sense of social responsibility is entirely without merit" and "any company that prioritizes social benefit over profit will quickly cease to exist."5

The central leadership role in combating the "wrong kind of AI" should thus fall to the university: Rob Reich's response to Acemoglu's essay1 is titled "The Frontier of AI Science Should Be in Universities." The university, in the classical ideal of Humboldt, Newman, and Veblen, is concerned with the dispassionate pursuit of knowledge, both for its own sake and to cultivate critical thinking for the wider benefit of society.3 AI research in universities could thus be expected to progress in directions that benefit society as a whole. Universities command a range of talent across research fields, from engineering and the natural sciences to the social sciences and humanities, that even the Big Tech titans cannot match. They could bring this talent together in collaborative settings to create interdisciplinary AI initiatives. Reich gives Stanford's Human-Centered AI Center (where he is affiliated) as an example.


There are, however, several challenges to making this vision more than lip service among university researchers. As Reich points out, there is a large brain drain of talent from academia to industry: the most recent AI Index reports that in North America, 65% of graduating Ph.D.s in AI went to industry in 2019, as opposed to 44.4% in 2010. The reason is not just that industry compensation far exceeds what universities can afford, but also, increasingly, access to computing power and data resources, especially at the Big Tech titans, that no university could hope to remotely match. Frontier research in fields such as NLP is impossible today without access to the monster models of the BERT and GPT-3 lineage. This is accentuated by a research culture in which papers are accepted at prestigious venues simply for beating previous performance records by a fraction of a percentage point after deploying this raw compute power. As if this were not enough, " … vast resources of several leading companies are pouring into academia and shaping the teaching and research missions of leading universities. It is no surprise that the best minds in the current generation are gravitating toward computer science, AI and machine learning, but with a heavy emphasis on automation."2


Reich's proposal for making universities the research hubs again is twofold. The first is the obvious suggestion that governments should invest massively to make this raw compute and data power available to university research; in Sweden, there is such an effort to create a national compute infrastructure for AI research. Equipping universities with such resources can make them less dependent on corporate influence, so they can set their own independent research goals and priorities. His other proposal is for universities to use their advantage of having a wide pool of expertise across disciplines, integrating it into powerful and stimulating interdisciplinary environments in close contact with the needs of wider society. We should also change the values prioritized in research contributions, from a single-minded focus on simply "beating benchmarks" to work that contributes long-lasting insight and fundamental advances.

To at least partially reverse the brain drain in AI from academia to corporations, we should also return to the classical ideals of a university. T.H. Huxley declared in 1894 that "the primary business of universities has to do with pure knowledge … and not with the increase of wealth," and Ernest Rutherford in 1927 warned of "an unmitigated disaster [that is] the utilization of university for research bearing on industry." In contrast to the mantra of some Silicon Valley titans to "move fast and break things," the traditional university spirit has been to "move slow and build": to take the time to ruminate on fundamental problems in science and engineering and build up long-lasting knowledge and insight. This long-term view brings a stability and calm that is attractive to bright young researchers, as opposed to the constant disruption and reorganization of corporations. Recent attempts to import corporate practices into universities3 are thus extremely misguided: far too many performance evaluations based on "metrics" of perceived "impact," and far too much time wasted constantly writing and evaluating research proposals. Collini points out that some of the changes in the U.K. are based on the U.S. system and its purported successes; unfortunately, these practices seem to be spreading rapidly across the world.

The university does not live in isolation, of course, but is part of the wider society and inevitably reflects the values prevailing there. While the contributors to the Boston Review Forum were all convinced of the need to redirect AI for social good, this goes against the Zeitgeist of the time, which is centered on individual choice and on leaving the market free of allegedly "distorting" government interventions. If the only valid model of human endeavor is business, then students are customers, and lecturers and researchers are sellers. Changing this worldview is the bigger war that must be won before the smaller battles of how to implement a different vision within universities are tackled.


References

1. Acemoglu, D. Redesigning AI: Work, Democracy, and Justice in the Age of Automation. Boston Review Forum 18 (2021).

2. Acemoglu, D. and Restrepo, P. The wrong kind of AI? Artificial intelligence and the future of labour demand. Cambridge Journal of Regions, Economy and Society 13, 1 (2020).

3. Collini, S. Speaking of Universities. Verso, 2017.

4. Crawford, K. Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.

5. Dubhashi, D. and Lappin, S. Scared about the threat of AI? It's the big tech giants that need reining in. Guardian (Dec. 16, 2021).

6. Ford, M. Rise of the Robots: Technology and the Threat of a Jobless Future. Basic Books, 2015.

7. Frey, C.B. The Technology Trap: Capital, Labor, and Power in the Age of Automation. Princeton University Press, 2019.

8. Susskind, D. A World Without Work: Technology, Automation and How We Should Respond. Allen Lane, 2020.


Author

Devdatt Dubhashi (dubhashi@chalmers.se) is a professor in the Division of Data Science and AI, Department of Computer Science and Engineering, Chalmers University of Technology, Sweden.


Copyright held by author.
Request permission to (re)publish from the owner/author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.


 
