About a year ago, after Mark Zuckerberg floated the idea on a podcast with Ezra Klein, I argued that Facebook needed some kind of Supreme Court. “A perfect content moderation regime likely is too much to hope for,” I wrote at the time. “But Facebook could build and support institutions that help it balance competing notions of free speech and a safe community. Ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.”
In the months since, Facebook’s vision for its Supreme Court — which it has decided to give the rather less glamorous name of independent oversight board — has rapidly come into focus. Today, the company offered us the most details we’ve had on the plan to date. In a series of blog posts, the company unveiled its draft charter for the organization, summarized some of the key decisions that went into it, and described some of the rationale for its design. Finally, Mark Zuckerberg published a letter in which he reiterated the need for a kind of judicial branch for Facebook:
We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don’t believe private companies like ours should be making so many important decisions about speech on our own. That’s why I’ve called for governments to set clearer standards around harmful content. It’s also why we’re now giving people a way to appeal our content decisions by establishing the independent Oversight Board.
Facebook’s independent oversight board is a subject that I find very exciting, but I understand if the very idea of it makes your eyes glaze over. Viewed from a far enough remove, the idea can seem rather quaint. The board’s primary authority will be to decide which Facebook posts stay up and which come down, and you can imagine all manner of petty disputes on which the board will be asked to weigh in.
But we also know now that Facebook and its moderators currently police the boundaries of speech on an enormous portion of the internet. And for those who feel that the company made the wrong decision about a post, there has historically been very little recourse. You could fill out a little text box and pray, but you were unlikely to ever receive much more than an automated message in response. The system might work in the majority of cases, but it never felt particularly just — which is to say, open and accountable.
As laid out in today’s materials, the board is designed to create a feeling of justice where none has existed before. The board will have meaningful independence from Facebook, and while its decisions will not be legally binding, Facebook is highly incentivized to follow its recommendations. Its decisions will be public, and will serve as precedents — meaning that a kind of case law will develop over time. And the board will be able to go beyond decisions to offer advice on policymaking — Facebook will obligate itself to respond in public.
“I have no idea if the Board will gain legitimacy. Maybe it will disappear overnight like Google’s AI Ethics Board,” said Kate Klonick, a law professor who has spent the past several months studying Facebook’s board plan, in a Twitter thread. “But at the very least, so far, it’s a bigger & more rigorous commitment of time, money, & platform power than anything that’s come before.”
Why would Facebook put itself through this? I think Zuckerberg is being sincere when he says he doesn’t want to make important speech decisions by himself. There’s very little upside in doing so — when you run a platform that the whole world uses, every high-profile speech decision that you make can alienate millions of people. Better to entrust those decisions to a board, and give it just enough independence that you can credibly say you had nothing to do with the decision.
Then again, we live in a time when trust in institutions is on the decline. Many people are already disinclined to trust Facebook, for a variety of reasons; it’s not clear how an entity as strange as Facebook’s oversight board can gain legitimacy in the eyes of the public. And even if it does, the highly political nature of many board decisions will make it a lightning rod for controversy. It’s hard to imagine that Facebook won’t continue to take collateral damage.
All that said, the design of Facebook’s board is thoughtful and even clever. Board members with domain expertise, rendering decisions in public, could bring a legitimacy to Facebook’s content moderation operations that it has never had before. And even if it falls short of Facebook’s highest ideals, on the surface this board charter looks much better than the system we live under today.
Today in news that could shape public perception.
Trending up: Snap might pay publishers for news content, bringing high-quality content to an app whose news offerings have largely been lowbrow and shallow.
Trending down: The fall guy for some of Facebook’s policy missteps over the past two years never actually took the fall.
⭐ The Facebook page ‘Vets for Trump’ was taken over by a North Macedonian businessman and the owners couldn’t get it back for months. After taking over the page, the Macedonians began asking the page’s 100,000 followers for donations. (Craig Timberg / The Washington Post)
Foreign actors — some seeking profit, some seeking influence and some seeking both — haven’t flagged in their efforts to reach U.S. voters through online information sources such as Facebook, Twitter and YouTube. Veterans and active-duty military personnel are especially valuable targets for manipulation because they vote at high rates and can influence others who admire their records of service.
“Veterans as a cohort are more likely than others to participate in democracy. That includes not only voting but running for office and getting others to vote,” said Kristofer Goldsmith, chief investigator for Vietnam Veterans of America. He was the first to discover the takeover of Vets for Trump during research for a report to be released Wednesday that documents widespread, persistent efforts by foreign actors to scam and manipulate veterans over Facebook and other social media.
Facebook updated its policy for dangerous individuals and organizations in response to the Christchurch massacre in New Zealand. The company will now target content from hate groups with the same AI techniques used against ISIS and al-Qaeda. Facebook also expanded its definition of “terrorist organization” to include groups that even attempt acts of violence against civilians — such as white supremacist groups. (Facebook)
Facebook removed 244 accounts and 269 Pages for engaging in coordinated inauthentic behavior originating in Iraq and Ukraine. The people behind the scheme used fake accounts to amplify content and manage pages. In Iraq, they typically posted about religion, Saddam Hussein, and US military action. In Ukraine, they posted about celebrities and sports. (Nathaniel Gleicher / Facebook)
Moderating Facebook content continues to be highly traumatic for some contractors — months after the company committed to improve working conditions. This story includes conversations with current and former moderators in Berlin. (Alex Hern / The Guardian)
Facebook executive Elliot Schrage stepped down as policy chief in the wake of the Cambridge Analytica scandal — but he never left the company. Schrage has remained as vice president of special projects, where he is now working on the Libra mess. (Kurt Wagner / Bloomberg)
President Trump returned to the Bay Area for the first time since he was elected, for a fundraising event in Palo Alto. Only 5 percent of donations tech workers have made to presidential candidates have gone to Trump since 2017. (Rebecca Ballhaus and Chad Day / The Wall Street Journal)
Russia undertook a ‘stunning’ breach of FBI communications, resulting in diplomats’ expulsion from the country in 2016. Among other things, the Russians were trying to stop the bureau from tracking spies. (Zach Dorfman, Jenna McLaughlin, and Sean D. Naylor / Yahoo)
Surveillance in the U.K., already much greater than in most western democracies, is ramping up even further thanks to facial recognition software installed in some of the country’s many public surveillance cameras. In May, San Francisco went the opposite route and banned this technology altogether. (Adam Satariano / New York Times)
⭐ Facebook is partnering with Ray-Ban parent company Luxottica to develop augmented-reality glasses. The company hopes to have something to sell to customers by 2023, CNBC’s Salvador Rodriguez reports:
The glasses would allow users to take calls, show information to users in a small display and live-stream their vantage point to their social media friends and followers.
Facebook is also developing an artificial intelligence voice assistant that would serve as a user input for the glasses, CNBC previously reported. In addition, the company has experimented with a ring device that would allow users to input information via motion sensor. That device is code-named Agios.
The company has hundreds of employees at its Redmond offices working on technology for the AR glasses, but thus far, Facebook has struggled to reduce the size of the device into a form factor that consumers will find appealing, a person who worked on the device told CNBC.
Snapchat rolled out 3D Camera Mode, which adds a new dimension to photos. Users with an iPhone X or newer can apply 3D effects, lenses, and filters to their photos. The feature replicates an upcoming capability in the next iteration of Spectacles, which are going on sale soon. (Ashley Carman / The Verge)
Snapchat is also exploring a new bet on news, courting publishers for a dedicated news tab in the app. My dream of high-quality news publishers being paid what are essentially carriage fees by the platforms is rapidly coming into focus. (Facebook is doing something similar.) (Alex Heath and Jessica Toonkel / The Information)
In a beautiful essay, Tavi Gevinson interrogated her own rise to fame, beginning with a fashion blog at age 12 and a magazine at age 15, and the conflicted relationship with Instagram she developed along the way. Make time for this one. (Tavi Gevinson / The Cut)
We reviewed Apple’s new phones and concluded that the iPhone 11 is the phone most people should buy (if they’re planning to upgrade). I got the green iPhone 11 Pro, though. (Nilay Patel / The Verge)
Repo men are scanning and uploading the locations of every car they drive by into Digital Recognition Network — a surveillance database of 9 billion license plate scans accessible by private investigators. Although the network isn’t run by the government, law enforcement agencies have access to it. (Joseph Cox / Vice)
Scientists predict sea waters could rise 4 feet or more by 2100, inundating tech headquarters in Silicon Valley. But Google and Apple are among those still investing heavily in real estate in the area. (Marketplace)
Indiana University’s Observatory on Social Media introduced a tool that claims to instantly detect the use of fake accounts to manipulate public opinion. It’s called BotSlayer, and I’m curious to see how well it works — detecting bots is notoriously difficult. (Indiana University)
Eve Peyser talks to people who modify their bodies in extreme ways for Instagram followers:
Earlier this year, Louise — who ultimately aspires to be a reality television star — made a bombastic appearance on Dr. Phil, wherein she played a cartoon of herself, proclaiming herself a “skinny legend,” and remarking, “I’d rather die hot than live ugly.” She’s earned over 70,000 followers since her TV appearance, and now claims to earn around $3,000 per month hawking products like Flat Tummy Tea.
“I could say that social media puts a lot of pressure on me, but I’m thankful for that. I wonder if Instagram didn’t exist, what I would look like at this point?” she mused. Ultimately, she’s in it for the money. “Back when I had, like, 200 followers, I was still on the app all day looking at people,” she explained. So why not capitalize on that?
I can think of some reasons!