In recent years, there’s been a massive clampdown on comments at news publications. The spewing of hate speech and vitriol became too much for these sites, and in an effort to stop the spread of hatred, many shut down their comments sections entirely. But plenty of readers – the decent ones – lost the opportunity to share their thoughts and become part of a community. And so these sites have sought better options. That’s where Google AI comes into the picture.
The New York Times is the first to use an artificial intelligence tool, developed by Google AI and known as Moderator, to enable comments on more articles. The software identifies “toxic” language, allowing the publication’s human moderators to quickly move on to non-offensive posts. The intention is to have eight times as many articles with comments enabled by year-end. Currently, just 10% of articles allow comments.
Google AI beats the expense of moderators
Popular Science, Motherboard, Reuters, National Public Radio, Bloomberg, and The Daily Beast are just some of the sites that have shut down comments. This is partly due to the expense of keeping a full team of moderators on staff.
The software’s algorithms have been trained on 16 million posts that previously underwent human comment moderation – machine learning at its finest, and a showcase of what this type of technology can do. In this way, Google AI learns by watching what human moderators do.
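The core idea – learning to score comments from examples that humans have already moderated – can be illustrated with a toy sketch. Everything below (the labeled comments, the word-counting approach, the `toxicity_score` function) is invented for illustration; the real models are trained on millions of posts with far more sophisticated techniques.

```python
from collections import Counter

# Toy illustration: learn word frequencies from comments that human
# moderators have already labeled, then score new comments by how many
# of their words were seen more often in rejected posts.
labeled = [
    ("thanks for this thoughtful article", "approved"),
    ("great reporting as always", "approved"),
    ("you are an idiot and should shut up", "rejected"),
    ("this garbage writer is a moron", "rejected"),
]

counts = {"approved": Counter(), "rejected": Counter()}
for text, label in labeled:
    counts[label].update(text.split())

def toxicity_score(comment):
    """Crude score: fraction of words seen more often in rejected posts."""
    words = comment.split()
    flagged = sum(
        1 for w in words
        if counts["rejected"][w] > counts["approved"][w]
    )
    return flagged / len(words) if words else 0.0

print(toxicity_score("shut up you idiot"))           # high: 1.0
print(toxicity_score("thoughtful article thanks"))   # low: 0.0
```

As more moderated posts are added to the training set, scores like this improve – which is exactly Cohen’s point below about models getting better with time.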
“The best part about machine learning models is that they get better with time,” Jared Cohen, chief executive of Jigsaw, told The Times.
Labor intensive moderation process
Bassey Etim, the community editor for nytimes.com, said that the publication was forced to close its comments section far sooner than it would have liked due to the labor-intensive moderation process. With Google AI, that was about to change.
"As The Times gains more confidence in this summary score model, we are taking our approach a step further – automating the moderation of comments that are overwhelmingly likely to be approved.
"In the long run, we hope to reimagine what it means to 'comment' online. The Times is developing a community where readers can discuss the news pseudonymously in an environment safe from harassment, abuse, and even your crazy uncle. We hope you join us on the journey."
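The auto-approval step Etim describes could, in principle, be as simple as routing on a score threshold. This is a hypothetical sketch – the threshold values and the routing logic are assumptions for illustration, not The Times’s actual pipeline:

```python
# Hypothetical threshold-based routing: comments whose summary score is
# overwhelmingly low are approved automatically, overwhelmingly high
# scores are rejected, and everything in between goes to a human.
AUTO_APPROVE_BELOW = 0.1
AUTO_REJECT_ABOVE = 0.9

def route_comment(score):
    if score < AUTO_APPROVE_BELOW:
        return "auto-approved"
    if score > AUTO_REJECT_ABOVE:
        return "auto-rejected"
    return "human review"

queue = [("Nice piece!", 0.02), ("You moron", 0.97), ("Hmm, not sure", 0.4)]
for text, score in queue:
    print(text, "->", route_comment(score))
```

The payoff is in the middle band: human moderators only ever see the genuinely ambiguous comments, which is what lets the paper open comments on many more articles with the same staff.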
Wikipedia has also been involved in testing Google’s AI: during the tool’s development, it was fed millions of comments from Wikipedia’s editorial discussions. Five other unnamed partners also took part in this testing phase.
Experimenting with the system
Both the UK’s Guardian and the Economist are experimenting with the system to see how it can be used to improve their comment sections.
"Ultimately, we want the AI to surface the toxic stuff to us faster," said Denise Law, the Economist's community editor. "If we can remove that, what we’d have left is all the really nice comments. We’d create a safe space where everyone can have intelligent debates."
Perspective, as Google describes it, is an API that makes it easier to host conversations – and better ones at that.
“The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give real-time feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information... We’ll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as ‘toxic’ to a discussion.”
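To make the quoted description concrete, here is a sketch of the request and response shapes involved in asking Perspective for a TOXICITY score, based on the API’s public documentation. The response object below is a made-up example, not a real API reply; a live call would require an API key and a POST to Google’s comment-analyzer endpoint.

```python
import json

def build_request(comment_text):
    # Request body: the comment to score and which attributes to compute.
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    # The summary score is a probability-like value between 0 and 1.
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

payload = build_request("What a thoughtful article.")
print(json.dumps(payload))

# Invented example of the response shape (not a real API reply):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.03, "type": "PROBABILITY"}}
    }
}
print(extract_toxicity(sample_response))  # 0.03
```

A publisher’s moderation dashboard would feed each incoming comment through a call like this and sort or flag the queue by the returned score.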
Follow the example
More news publications are expected to follow The New York Times’ example, but it’s unclear for now which will be next. And if the use of machine learning in comment moderation piques your interest, it’s worth considering why more students should focus on artificial intelligence: young people entering this field stand a better chance of employment and career success.