
Communications of the ACM

Computing ethics

What To Do About Deepfakes


Two mirrored face images, illustration. Credit: Getty Images

Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called "deepfakes" (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakesa creates a new challenge to the trustworthiness of visual experience, and has already led to negative consequences such as nonconsensual pornography,11 political disinformation,20 and financial fraud.3 Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by causing reputational damage, and harm society by undermining societal values such as trust in institutions.7 What can be done to mitigate these harms?

It will take the efforts of many different stakeholders, including platforms, journalists, and policymakers, to counteract the negative effects of deepfakes. Technical experts can and should play an active role. They must marshal their expertise (their understanding of how deepfake technologies work and their insights into how the technology can be further developed and used) and direct it toward solutions that preserve the beneficial uses of synthetic media technologies while mitigating the negative effects. Although successful interventions will likely be interdisciplinary and sociotechnical, technical experts should contribute by designing, developing, and evaluating potential technical responses and by collaborating with legal, policy, and other stakeholders in implementing social responses.


The Responsibilities of Technical Experts

Deepfakes pose an age-old challenge for technical experts. Often, as new technologies are being developed, their dangers and benefits are uncertain and the dangers loom large. This raises the question of whether technical experts should even work on or with a technology that has the potential for great harm. One of the best-known and weightiest versions of this dilemma was faced by scientists involved in the development and use of the atomic bomb.18 The dilemma also arose for computer scientists as plans for the Strategic Defense Initiative were taking shape,14 as well as when encryption techniques were first debated.13

Although some technical experts may decide not to work on or with the synthetic media technologies underlying deepfakes, many will likely attempt to navigate more complicated territory, trying to avoid doing harm while reaping the benefits of the technology. Those who take this route must recognize that they may nonetheless enable negative social consequences, and must take steps to reduce this risk.

Figure. A deepfake video from a December 25, 2020, posting, "Deepfake Queen: 2020 Alternative Christmas Message" (source: https://youtu.be/IvY-Abd2FfM).

Responsibility can be diffuse and ambiguous. Any deepfake involves multiple actors: those who create the deepfake, develop the tool used to make it, provide the social media platform for amplification, redistribute it, and so on. Because multiple actors contribute, accountability is unclear, setting the stage for a dangerous blame game in which no one is held responsible. Legal interventions will also be stymied by difficulties in determining jurisdiction for punishing deepfake creators,5 and by the need to strike a balance with free speech concerns around platform publication.19 Still, ethically, each actor is responsible for what they do as well as what they fail to do, particularly if a negative consequence might have been averted. Technical experts have an ethical responsibility to avoid or mitigate the potential negative consequences of their contributions.

Consider DeepNude, an app that converts images of clothed women into nude images. It is not only end users who do harm with the app. The developer is reported to have said that he did not expect the app to go viral, and he later withdrew it from the marketplace.6 In the developer's defense, some might consider him thoughtless but not ill-intentioned. This, however, misses the fact that the tool was designed for a purpose that inherently objectifies women. The negative outcome of the app was not difficult to foresee, and the designer bears some responsibility for the harm caused.

Many technical experts will work on more generic synthetic media technologies that have diverse applications and uses that even they cannot foresee. But despite the uncertainty of future uses, they are still not entirely off the hook ethically. Responsibility in this case is less about blame than about making conscientious efforts to identify the potential uses of their creations in the hands of a variety of users with ill as well as good intent.4 NeurIPS, a premier conference in the field of AI, is trying to enforce this ethical responsibility by requiring submissions to include a "Broader Impact" section that addresses both potential positive and negative social impacts.b Technical experts must go a step further, though: not just to think or write about social impacts, but to design tools and techniques that limit the possibility of harmful or dangerous use.


How to Be Part of the Solution

Individually and collectively, the behavior of technical experts in the field of synthetic media is coming under scrutiny. They should be expected to, and should expect one another to, behave in ways that diminish the negative effects of deepfakes. Research and development of synthetic media will be better served if technical experts see themselves as part of the solution, and not the problem. Here are three areas where technical experts can make positive contributions to the development of synthetic media technologies: education and media literacy, subject defense, and verification.

Education and Media Literacy. Technical experts should speak out publicly (as some already have) about the capabilities of new synthetic media. Deepfakes have enormous potential to deceive viewers and undermine trust in what they see, but the possibility of such deception is diminished when viewers understand synthetic media and what is possible. For example, if individuals were taught to spot the characteristic flaws that can give deepfakes away, they would be empowered to use their own judgment about what to believe and what not to believe. More broadly, media-literate people can verify and fact-check the media they consume and are, therefore, less likely to be misled. While many stakeholders, from journalists to platforms and policymakers, can contribute to increased education and media literacy, technical experts are crucial.


Because of their knowledge, technical experts are in the best position to identify the limitations of deepfakes and to recommend ways that viewers and fact-checkers can learn to recognize them. For example, some of the early deepfake methods could not convincingly synthesize eyes, so individuals could be taught to examine eyes and blinking carefully. Of course, the technology is changing rapidly (newer methods can synthesize eyes accurately), so technical experts must be at the forefront of translating the latest technical capabilities into guidelines. Technical experts could also promote a norm that those who publish new methods for media synthesis include a section specifying how synthesis using the new method could be detected. Making this information publicly available would further facilitate media literacy.
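
To make the idea of such guidelines concrete, here is a minimal, purely illustrative Python sketch of the kind of heuristic the early blink-based advice relied on. It assumes per-frame eye landmarks are supplied by some external face-landmark detector, and the eye-aspect-ratio threshold and "normal" blink rate are illustrative assumptions, not parameters of any published detector; as noted above, newer synthesis methods defeat this particular cue.

import numpy as np

def eye_aspect_ratio(eye):
    # eye: (6, 2) array of landmark coordinates around one eye,
    # ordered corner, upper lid (x2), corner, lower lid (x2).
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

def blinks_per_minute(per_frame_eyes, fps, ear_threshold=0.2):
    # per_frame_eyes: list of (left_eye, right_eye) landmark arrays, one per frame.
    closed = [
        (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0 < ear_threshold
        for left, right in per_frame_eyes
    ]
    # Count open-to-closed transitions as blinks.
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    minutes = len(closed) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A typical adult blinks roughly 15-20 times per minute; a long clip with a
# rate near zero would be one (weak, and now dated) signal worth a closer look.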

Subject Defense. Technical experts should contribute to the development of technical strategies that help individuals avoid becoming victims of malicious deepfakes. While viewers can be deceived by deepfakes, those who are depicted in deepfakes can also be harmed. Their reputations can be severely damaged when they are falsely shown to be speaking inappropriately or engaged in sordid behavior. As well, the subjects of deepfakes have their persona (their likeness and voice) taken and used without their consent, resulting in misattribution that either exploits or denigrates their reputation according to the goals of the deepfake creator. Deepfakes may also be used to threaten and intimidate subjects.

There are a variety of technical approaches that experts could take. They can develop more sophisticated identity monitoring technology that could alert individuals when their likeness appears online. An individual could enroll using a sample photo, video, or audio clip, and be notified if their likeness (real or synthetic) appeared on particular platforms. Of course, this type of response would come with difficult sociotechnical challenges, including obtaining the cooperation of platforms to provide data for monitoring and addressing the resulting privacy implications. Other approaches to subject defense could involve everything from watermarking and blockchain to new techniques to limit the accessibility, usability, or viability of training data for deepfake model development. Chesney and Citron5 suggest the development of immutable life logs tracking subjects' behavior so that a victim can "produce a certified alibi credibly proving that he or she did not do or say the thing depicted." These are only a few suggestions; the point is that technical experts should help develop ways to counteract the negative effects of deepfakes for individuals who may be targeted.
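
As a rough illustration of the enrollment-and-matching idea behind such monitoring (and only that; a real sociotechnical system would be far more involved), the following Python sketch compares face embeddings with cosine similarity. The embed_face function is a stand-in for any face-embedding model, and the similarity threshold is an assumption that would need careful tuning and evaluation.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class IdentityMonitor:
    """Toy enrollment-and-matching loop; embed_face is a placeholder for any
    model mapping a face image to a fixed-length embedding vector."""

    def __init__(self, embed_face, threshold=0.7):
        self.embed_face = embed_face
        self.threshold = threshold        # illustrative value, not tuned
        self.enrolled = {}                # subject id -> reference embedding

    def enroll(self, subject_id, sample_image):
        self.enrolled[subject_id] = self.embed_face(sample_image)

    def check(self, image):
        # Return the enrolled subjects whose likeness plausibly appears,
        # so that they (rather than the platform alone) can be notified.
        probe = self.embed_face(image)
        return [sid for sid, ref in self.enrolled.items()
                if cosine_similarity(probe, ref) >= self.threshold]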

Verification. Technical experts should develop and evaluate verification strategies, methods, and interfaces. The enormous potential of deepfakes to deceive viewers, harm subjects, and challenge the integrity of social institutions such as news reporting, elections, business, foreign affairs, and education, makes verification strategies an area of great importance.

Verification techniques can be a powerful antidote because they make it possible to identify when video, audio, or text has been manipulated. While state-of-the-art detection systems may reach accuracy in the 90%+ range,1 they are typically limited in scope: they may work well on familiar datasets but struggle to achieve comparable accuracy on unseen data or media "in the wild."8 For instance, a reduction in visual encoding quality or the fine-tuning of a model on a new dataset may challenge a detector.2,16 Technical research on automated detection continues, with the recent Deepfake Detection Challenge drawing thousands of entries and resulting in the release of a vast dataset to help develop new algorithms.8 To spur work in this area, NIST has organized the Media Forensics Challenge over the past several years,c and other workshops on media forensics have also convened to advance research and share best practices.d Another avenue for technical work is building human-centered interactive tools to support semiautomated detection and verification workflows.9,10,17
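
To give a sense of what the last step of a verification pipeline might look like, here is a small Python sketch of a common pattern: run a frame-level classifier over a video and aggregate its scores into a single video-level judgment. This is a sketch under stated assumptions, not any particular published system; score_frame stands in for any trained detector returning an estimated probability of manipulation per frame, and the top-k aggregation and threshold are illustrative choices.

import numpy as np

def video_fake_score(frames, score_frame, top_k=10):
    # Average the k highest per-frame scores so that a mostly clean video
    # containing a few heavily manipulated frames can still be flagged.
    scores = np.array([score_frame(frame) for frame in frames])
    if scores.size == 0:
        return 0.0
    k = min(top_k, scores.size)
    return float(np.sort(scores)[-k:].mean())

def verdict(frames, score_frame, threshold=0.5):
    score = video_fake_score(frames, score_frame)
    return ("possible manipulation, route to human review"
            if score >= threshold else "no manipulation detected")

Routing borderline or positive outputs to human review, rather than issuing automatic verdicts, reflects the semiautomated workflows discussed in this section.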


In practice, a combination of automated and semiautomated detection may be most prudent.15 Ultimately, once verification tools are developed, there will be yet another layer of sociotechnical challenges for tool deployment, from adversarial scenarios and access issues to output explanations and integration with broader media verification workflows.12

There is no doubt that synthetic media can be used for beneficial purposes, such as entertainment, historical reenactment, education, and training. The pressing challenge is to reap the positive uses of synthetic media while preventing, or at least minimizing, the harms. We are encouraged by efforts in industry and academia to grapple directly with ethics and societal impact as new innovations in synthetic media advance.e And, as we have laid out in this column, there are numerous opportunities to direct effort toward buttressing against some of the worst outcomes. The challenge can only be met with the sustained efforts of technical experts. Let's get to it!


References

1. Agarwal, S. et al. Protecting world leaders against deep fakes. Workshop on Media Forensics at CVPR (2019).

2. Bakhtin, A. et al. Real or fake? Learning to discriminate machine from human generated text. (2019) https://bit.ly/3iyl5Q9

3. Bateman, J. Deepfakes and synthetic media in the financial system: Assessing threat scenarios, cyber policy initiative working paper series. Carnegie Endowment for International Peace, July 2020; https://bit.ly/3sN2DYM

4. Brey, P. Anticipatory ethics for emerging technologies. NanoEthics 6, 1 (2012), 1–13.

5. Chesney, R. and Citron, D.K. Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review 107 (2019).

6. Cole, S. Creator of DeepNude, app that undresses photos of women, takes it offline. Motherboard (June 27, 2019); https://bit.ly/393MMgy

7. Diakopoulos, N. and Johnson, D. Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society (2019).

8. Dolhansky, B. et al. The DeepFake Detection Challenge Dataset (2020); https://bit.ly/3sNMGRN

9. Gehrmann, S., Strobelt, H., and Rush, A.M. GLTR: Statistical Detection and Visualization of Generated Text. (2019).

10. Groh, M. et al. Human detection of machine manipulated media. (2019); https://bit.ly/3p6L1ot

11. Harris, D. Deepfakes: False pornography is here and the law cannot protect you. Duke L. & Tech. Rev. 17 (2019), 99.

12. Leibowicz, C., Stray, J., and Saltz, E. Manipulated Media Detection Requires More Than Tools: Community Insights on What's Needed. July, 2020; https://bit.ly/3iCsUV2

13. Levy, S. Battle of the Clipper chip. New York Times Magazine (1994), 44.

14. Parnas, D.L. SDI: A violation of professional responsibility. In Weiss, E.A., Ed. A Computer Science Reader. Springer, New York, NY, 1988.

15. Partnership on AI. A Report on the Deepfake Detection Challenge. (2020); https://bit.ly/39STDZo

16. Rössler, A. et al. FaceForensics++: Learning to Detect Manipulated Facial Images. In Proceedings of IEEE International Conference on Computer Vision (ICCV) (2019).

17. Sohrawardi, S.J. et al. DeFaking Deepfakes: Understanding journalists' needs for deepfake detection. In Proceedings of the Computation + Journalism Symposium (2020).

18. Schweber, S. In the Shadow of the Bomb: Bethe, Oppenheimer, and the Moral Responsibility of the Scientist 39. Princeton University Press, 2000.

19. Tsukayama, H., McKinney, I., and Williams, J. Congress should not rush to regulate deepfakes. Electronic Frontier Foundation (June 24, 2019); https://bit.ly/396rW03

20. Vaccari, C. and Chadwick, A. Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society 6, 1 (2020).


Authors

Deborah G. Johnson (dgj7p@virginia.edu) is Olsson Professor of Applied Ethics, Emeritus, in the Department of Engineering and Society at the University of Virginia in Charlottesville, VA, USA.

Nicholas Diakopoulos (nad@northwestern.edu) is an Associate Professor in Communication Studies and Computer Science (by courtesy) at Northwestern University in Evanston, IL, USA.


Footnotes

a. See https://bit.ly/3qY0Lua

b. See https://bit.ly/3qh8AuC

c. See https://bit.ly/3qduL5c

d. Workshop on Media Forensics; https://bit.ly/2KCWVYb

e. For industry, see for example: https://bit.ly/3iDIVdk; for academia, see for example Fried, Ohad, et al. Text-based editing of talking-head video. ACM Transactions on Graphics 38, 4 ACM (2019), 1–14; doi:10.1145/3306346.3323028


Copyright held by authors.



 
