
Communications of the ACM


Regulating Information Technology

[Figure: globe-shaped shackles. Credit: Getty Images]

In the spring of 2018, Facebook CEO Mark Zuckerberg was called to testify before Congress, largely in response to news that the British consulting firm Cambridge Analytica had, without the social media giant's knowledge or consent, captured and used the data of more than 87 million Facebook users to influence elections, as well as Facebook's admission that its platform had been used by Russian operatives to spread fake news and propaganda. During his testimony, Zuckerberg laid out his views on both the need for, and the inevitability of, regulation of technology companies and their products and services.

Zuckerberg has since published a call for governments to regulate the Internet by limiting harmful content, addressing long-standing privacy concerns, securing the integrity of elections, and ensuring data portability. However, as of this writing, the U.S. federal government has taken little to no substantive action to address these and other IT-related concerns.


Duncan Hall

"On 15 March 2019, people looked on in horror as, for 17 minutes, a terrorist attack against two mosques in Christchurch, New Zealand, was live streamed. 51 people were killed and 50 injured and the live stream was viewed some 4,000 times before being removed."
"Two months later to the day, on 15 May 2019, New Zealand Prime Minister, Jacinda Ardern, and French President, Emmanuel Macron brought together Heads of State and Government and leaders from the tech sector to adopt the Christchurch Call. The Christchurch Call is a commitment by Governments and tech companies to eliminate terrorist and violent extremist content online. It rests on the conviction that a free, open and secure internet offers extraordinary benefits to society. Respect for freedom of expression is fundamental. However, no one has the right to create and share terrorist and violent extremist content online."
For more information:

Keith Kirkpatrick

Thank you for reading and for your comments. One of the challenges faced by technology companies is being able to quickly scan for and identify objectionable content online. The use of computer vision and machine learning/deep learning algorithms is likely to help in this process, but humans will still be required to write the algorithms, as well as determine the parameters of what may or may not be considered objectionable content.
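The human-in-the-loop moderation pipeline described above can be sketched in a few lines. This is a purely illustrative example, not any platform's actual system: the thresholds stand in for the human-set "parameters" of what counts as objectionable, and the violation score would in practice come from a trained computer-vision or deep-learning model.

```python
# Hypothetical moderation triage: a model's violation-probability score
# determines whether content is removed automatically, routed to a human
# moderator, or allowed. The threshold values are illustrative.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: remove automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases: escalate to human moderators

def triage(score: float) -> str:
    """Map a model's violation probability to a moderation action."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(triage(0.99))  # remove
print(triage(0.75))  # human_review
print(triage(0.10))  # allow
```

The key design point matches the comment: the algorithm only scores content, while humans both build the model and choose where the thresholds sit, i.e., what is and is not considered objectionable.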

