In the interview, the CEO likened fake news to “being wrong on the internet.”
However, Zuckerberg also showed that he’s grappling with the darker side of his creation.
He discussed tools Facebook has put in place to help people and reduce harm, as well as what Facebook is doing with the lessons it has learned.
Differentiating between intentions
First and foremost, Zuckerberg said that the company’s approach to false news isn’t to say “you can’t say something wrong on the internet.”
“I think that would be too extreme. Everyone gets things wrong,” Zuckerberg said. Specifically, the CEO explained that taking something down just because it’s wrong doesn’t allow people to have a voice. That said, Zuckerberg also noted that Facebook stops “provably wrong” content from going viral.
Zuckerberg also discussed the issue of intent. He explained that Facebook will take something down if someone posted it with the intent to cause harm.
The issue is that Facebook doesn’t differentiate between news outlets that try to post factually correct stories and sometimes get things wrong, and people who post blatantly false information. Intention is clearly important to the CEO, but seemingly not in the case of false news.
Using the platform for good
One issue of intent the CEO clearly cares about is how people use the platform.
“I want to make sure that our products are used for good,” he told Swisher. Zuckerberg acknowledged that the platform is a tool and that people can use tools for good or evil. “I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad.”
An example Zuckerberg used was Facebook Live. When the service first launched, a small number of people used it to broadcast themselves engaging in self-harm. There were also a few cases of suicide.
Zuckerberg said Facebook built AI tools and hired a team of 3,000 people to respond to those live videos within 10 minutes. By comparison, most content on the platform gets a response within a few hours to a day.
“If someone’s gonna harm themselves, you don’t have a day or hours,” said Zuckerberg.
The AI system can quickly flag potential content for review teams. According to Zuckerberg, these teams helped first responders get to more than 1,000 people in the last six months.
The mistake with Cambridge Analytica
The CEO took a moment to acknowledge that the Cambridge Analytica issue was a mistake. It stemmed from a certification that Facebook trusted.
According to Zuckerberg, Facebook has systems that check whether developers request information in unusual ways. Facebook also does spot checks and audits of developers’ servers. Additionally, Facebook acts on flags from the community and law enforcement.
In the case of Cambridge Analytica, when Facebook was made aware of the issue, it immediately shut down the app and removed the developer’s profile. The company also demanded that the developer, Alexander Kogan, provide certification that the data had been deleted.
The problem is, according to Zuckerberg, Facebook believed the certification.
“You know, we didn’t know what Cambridge Analytica was there, it didn’t strike us as a sketchy thing. We just had no history with them. Knowing what I know now, we obviously would not have just taken their certification at its word and would’ve gone in and done an audit,” said Zuckerberg.
Interestingly, Zuckerberg said that if Facebook were to fire someone over the incident, it should be him.