In 2020, Tyler, a popular Mumbai-based street artist, asked Instagram users to call out public figures guilty of spreading propaganda. Tyler then painted a Walk of Shame (aping Hollywood Boulevard’s Walk of Fame) on a sidewalk—each name encased in a golden circle above an illustrated pile of excrement. Instagram suspended Tyler’s account, and images of the artwork were swiftly removed by the very platform that had enabled its creation. Tyler was not the first artist to fall victim to digital censorship on Instagram: Australian street artist Lushsux was suspended after posting a photo of his mural of Hillary Clinton wearing a swimsuit, while artist Betty Tompkins’s Instagram account was deactivated after she posted an exhibition catalogue that included a reproduction of a photorealistic work depicting sexual intercourse, titled Fuck Painting #1 (1969).
In the digital age, artists often rely on a social media presence to share and sell their work. Opaque content moderation policies not only suppress certain forms of artistic expression but can also do lasting damage to both emerging and established artists. Artists and art advocates have called for greater transparency about when posts are removed and why some accounts are given less visibility on the platform’s Explore and hashtag features, yet social media platforms routinely take such actions without providing any reason.
Since their inception, social media platforms have relied on user-generated content, from artworks to make-up tutorials. But openness to user contributions also invites unwanted material, including hate speech, harassment, and graphic violence. In response, platforms developed content moderation processes, which have become increasingly automated, relying on artificial intelligence and machine learning to determine whether user-generated content violates a platform’s community guidelines.
A 2019 Pew Research Center study found that about 66 percent of Americans say social media companies have a responsibility to remove offensive content from their platforms, but just 31 percent have at least “a fair amount” of confidence in these companies to determine what offensive content should be removed. This wide margin reflects how difficult it is to define what is offensive. Take, for instance, US Supreme Court Justice Potter Stewart’s oft-quoted attempt to define hardcore pornography: “I know it when I see it.” If articulating a workable standard for offensive content is difficult for a justice of the nation’s highest court, one can imagine the complexity of building a machine for the task. Technologists warn that even the most sophisticated AI-powered tools struggle to determine what constitutes “offensive” or “sexual” content because automated tools are limited in their ability to discern context. Image recognition software can flag an instance of nudity, such as a post containing an image of a breast, but it cannot tell whether the post depicts pornography or breastfeeding. As a result, female nudity is removed disproportionately more often than male nudity.
In the United States, the legal debate around companies’ content moderation practices is rooted in Section 230 of the Communications Decency Act, a federal law passed in 1996. Section 230 largely immunizes tech companies from legal liability for content created by their users while permitting them to develop their own moderation policies without being subject to legal scrutiny. This broad protection gives tech companies the power to choose what to moderate and how to moderate it.
That tech companies have proven to be poor arbiters of content on their own platforms, combined with growing recognition of Big Tech’s outsized power, has motivated legislators to pursue a major overhaul of Section 230. Republicans accuse Facebook and Twitter of disproportionately suppressing conservative political speech, while Democrats criticize the platforms’ lack of accountability amid the rise of hate speech, revenge porn, and misinformation targeting vulnerable populations. The decision by Twitter and Facebook to ban former President Donald J. Trump after the Capitol riots reignited discussions about content moderation and Section 230 reform.
Despite the law’s pitfalls, however, free speech advocates warn that Section 230 is what prevents the government from coercing platforms into curating content that aligns with the government’s political interests. The Electronic Frontier Foundation, a nonprofit advocating for online civil liberties, calls Section 230 “the most important law protecting internet speech.” Proponents of Section 230 reform counter that giving Big Tech free rein to moderate content on the internet endangers citizens’ speech: without mechanisms in place to monitor how companies and their machines moderate content, there is no way for citizens to challenge companies that remove their content.
For the foreseeable future, content moderation practices will be scrutinized by both the courts and lawmakers. As new rules governing content moderation are introduced, governments and corporations alike must incorporate the views of artists and art advocates. Artistic expression will thrive under thoughtful policies that rein in censorship of art while focusing enforcement on internet-based violence, crime, and harassment.
Juyoun Han is a partner at Eisenberg & Baum LLP based in New York City where she leads the firm’s Artificial Intelligence Fairness & Data Privacy Department. Han’s litigation practice also includes art law and anti-discrimination cases. She thanks her artist clients for opening her eyes to the world of art.
Patrick K. Lin is a second-year law student at Brooklyn Law School, where he is the executive notes editor of the Brooklyn Law Review. In his spare time, he enjoys writing, oil painting, and going to art exhibitions. Lin is currently writing a book about bias in artificial intelligence and how it impacts civil liberties, policing, and criminal justice.
ORDER the print edition of the May/Jun 2021 issue, in which this article is printed, for USD 21.