Social media companies would have to make public their policies for removing problem content and give detailed accounts of how and when they remove it, under a proposal being considered by California legislators who blame online chatter for encouraging violence and undermining democracy.
The bipartisan measure stalled last year over free speech concerns, but Democratic Assemblyman Jesse Gabriel said Tuesday he hopes to revive his bill by adding amendments he said will make it clear that lawmakers don’t intend to censor or regulate content.
“We think we’ve found a way to thread that needle,” Gabriel said during a news conference promoting what he said is first-of-its-kind legislation. “We’re not telling you what to do—but tell policymakers and tell the public what you are doing.”
“There’s nothing in this bill that requires companies to censor speech,” he added. “There’s nothing that requires them to silence certain voices or to amplify other content. It simply requires them to be honest and transparent about when they are amplifying certain voices and when they are silencing others.”
The digital rights group Electronic Frontier Foundation, a nonprofit promoting free expression online, was among those opposing the bill on free speech grounds. But the group said it can't judge whether Gabriel has addressed those concerns until he formally amends his bill.
The proposal sailed through the state Assembly more than a year ago on a 64-1 vote, then ground to a halt in the Senate Judiciary Committee. It faces a crucial hearing in that committee next week, days before its deadline for moving bills to the full Senate.
The California Chamber of Commerce opposed the bill along with trade groups including the Consumer Technology Association, Internet Association, Internet Coalition, NetChoice and TechNet.
The bill requires such complete disclosure that it would provide “bad actors with roadmaps to get around our protections,” a coalition of the opponents told lawmakers. Its requirement of detailed quarterly reports to the state attorney general is “unworkable and unreasonable,” even with proposed amendments, the coalition said.
And the enforcement allowed under the bill is “onerous and problematic,” subjecting companies to possible civil penalties and investigation over the filing of a report. The potential for lawsuits would be counterproductive, the groups said, and could “suppress ongoing efforts to protect users from harmful content online.”
Companies would be required to say whether their policies cover various categories of online content and, if so, how they enforce those policies in each category, how quickly and how often, all broken down by how the content was shared, for instance by text, video or images.
Categories would include hate or racist speech; extremism or radicalization; disinformation or misinformation; harassment; and foreign political interference. They would have to say how many items were flagged, how many resulted in some action, and how often those items were viewed and shared.
“Consumers—all of us as we’re looking at our social media feeds—deserve to know how social media is amplifying and spreading hate and misinformation and disinformation, and unfortunately even fomenting violence in our society,” said Democratic Sen. Richard Pan, who heads the California Asian & Pacific Islander Legislative Caucus that has seen harassment grow during the coronavirus pandemic.
“They know what their algorithms do. We don’t know,” Pan added. “We need to know what’s going on inside that black box. We need to know what those billions of dollars they’ve invested in researching how we think actually drives the decisions they make.”
Gabriel said that as the home to many social media companies, “California has a special obligation and a special opportunity to lead” at a time when federal politicians “can’t even agree on what day of the week it is.”
Advocates and the companies themselves agree that whatever California does will become a model for the rest of the nation, as have many of its other policies, Gabriel said.
The bill is one of several addressing social media that are moving through the Legislature this year. Among them is one that would allow parents to sue social media platforms alleging harm to children who have become addicted to online content. Another would require online companies to meet certain standards if they market to children.
Another would allow Californians targeted in violent social media posts to seek a court order to have the posts removed.
© 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Citation: California may make social media firms report enforcement (2022, June 22) retrieved 22 June 2022 from https://techxplore.com/news/2022-06-california-social-media-firms.html