How Comment Sections Can Be Cleaned Up with AI
- Written by Compudata
- Published: 28 Dec 2020
Artificial intelligence has countless potential practical applications. One that you may not have initially considered: acting as a content filter for Internet comments. With trolling, spamming, and other misuse of these platforms getting far out of hand, let’s consider how some platforms are using AI to fight back.
The State of the Comments Section
You could navigate to just about any article, news piece, or video online and find the same result: a comments section littered with spam, bigotry, and/or arguments. Obviously, this is far from what the developers of these platforms intended when the comments section was first implemented.
The organizations providing the content aren’t so pleased about it, either, and many have taken various routes to try and eliminate the problem. While some have gone with the nuclear option and disabled comments entirely, others have sought a solution in the advanced technologies we have access to today… like, for instance, artificial intelligence.
How AI is Being Used to Moderate Comments
Seeing as Google is a sizable player online, it makes sense to look to its efforts. These have included the development of an AI-driven comment analysis tool known as the Perspective API. Teaming up with OpenWeb to evaluate its real-time functionality as part of a study, Google put Perspective to work cleaning up the comment sections of some news platforms.
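To give a concrete sense of what plugging into Perspective can look like, here is a minimal Python sketch that asks the API's public REST endpoint for a toxicity score on a single comment. The API key is a placeholder, and nothing here reproduces how Google actually wired Perspective into OpenWeb's platform.

```python
# Minimal sketch: ask the Perspective API for a TOXICITY score on one comment.
# The API key is a placeholder; real use requires registering for access.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(comment_text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        PERSPECTIVE_URL, params={"key": api_key}, json=payload, timeout=10
    )
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example usage: scores near 1.0 suggest a comment most readers would find toxic.
# print(toxicity_score("you are a poopyhead", "YOUR_API_KEY"))
```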
How Perspective API Performed
As part of their study, Google tested a few different responses to comments that violated the community standards these websites had in place. When the API flagged a potentially problematic comment, the poster was given a suggestion before their comment was published: “Let’s keep the conversation civil. Please remove any inappropriate language from your comment,” or “Some members of the community may find your comment inappropriate. Try again?” Some commenters served as a control group and would see no intervention upon posting.
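Stitched together with the scoring helper sketched above, the pre-publish flow might look something like the following hypothetical outline. The toxicity threshold, the control-group share, and the random assignment are illustrative assumptions, not figures from the study.

```python
# Hypothetical sketch of the pre-publish nudge flow described above.
# Reuses toxicity_score() from the previous sketch; the threshold and the
# control-group share are illustrative assumptions, not the study's values.
import random
from typing import Optional

NUDGES = [
    "Let's keep the conversation civil. Please remove any inappropriate "
    "language from your comment.",
    "Some members of the community may find your comment inappropriate. "
    "Try again?",
]

TOXICITY_THRESHOLD = 0.8   # assumed cutoff for "potentially problematic"
CONTROL_GROUP_SHARE = 0.2  # assumed fraction of flagged posters who see nothing

def pre_publish_feedback(comment_text: str, api_key: str) -> Optional[str]:
    """Return a nudge to show the poster before publishing, or None."""
    if toxicity_score(comment_text, api_key) < TOXICITY_THRESHOLD:
        return None  # not flagged: publish as usual
    if random.random() < CONTROL_GROUP_SHARE:
        return None  # flagged, but this poster is in the control group
    return random.choice(NUDGES)
```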
By the time the study concluded, the feedback had prompted about a third of commenters to go back and make changes; however, those changes varied somewhat.
Half of those who went back and made revisions accepted the feedback and removed the inflammatory part of their comment. The other half was split: a quarter of them apparently didn’t understand what about their original statement was deemed problematic and edited the wrong part, while the remaining quarter revised their comment just enough to slip past the filter without actually changing its message, or adopted a new term as a kind of code the filter wouldn’t pick up on.
For instance, if one of them were to be flagged for using a word like “poopyhead”, they would instead write it “p o o p y h e a d,” or perhaps substitute in something more innocuous that people knew meant something else.
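As a purely illustrative aside (this has nothing to do with Perspective's internals, which rely on machine-learned models rather than word lists), one crude way a filter could catch that sort of spaced-out evasion is to strip the separators before checking a blocklist:

```python
# Illustrative only: a crude normalization pass that collapses spaced-out or
# punctuated evasion ("p o o p y h e a d", "p.o.o.p.y.h.e.a.d") before a
# blocklist check. Real moderation systems use far more robust techniques.
import re

BLOCKLIST = {"poopyhead"}  # stand-in term for illustration

def contains_blocked_term(text: str) -> bool:
    # Lowercase and drop everything except letters and digits, so the
    # separators the commenter inserted no longer hide the word.
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower())
    return any(term in cleaned for term in BLOCKLIST)

print(contains_blocked_term("p o o p y h e a d"))  # True
print(contains_blocked_term("Have a nice day!"))   # False
```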
Another study, in which Google teamed up with Coral, reflected many of these results: problematic language was removed from comments in about 36 percent of cases. Yet another proved very promising, with The Southeast Missourian seeing such feedback contribute to a 96 percent reduction in “very toxic” commentary.
Google isn’t alone in these efforts, either. Instagram uses machine learning to improve the content filtering that users can opt into.
Whichever study you consider, the ultimate result is clear: the number of people who posted their flagged comment with no changes, or simply didn’t post a comment at all, indicates that these kinds of gentle reminders are only effective up to a certain point, and mainly with people who sincerely don’t want to make waves.
Of course, these studies primarily reveal that there are far fewer Internet “trolls” than the vocal minority would suggest, a conclusion that yet another study, conducted by Wikipedia, supports. In its findings, most offensive comments were isolated incidents, posted in direct reaction to something else.
It is also important to consider that the 400,000 comments OpenWeb and Google sampled for their study are a vanishingly small sample relative to the scale of the Internet.
Are Internet Comments Such a Big Deal?
Simply stated, yes.
Comment sections have long been problematic, giving scammers and cyberbullies an additional outlet. While these sections were meant to encourage conversation and discourse, it is no longer rare to see articles published with comments disabled, or social media accounts with commenting actively turned off. Many people now make a point of not reading comments at all.
Due to the economics of the Internet, this is a very real problem for platforms like Google and the rest. The Internet is financially supported by advertisements, which means that the longer a user spends on a website that presents ads, the more money that website can make. If there is content that is alienating users or otherwise upsetting them, those users are less likely to return, meaning that fewer eyes will be on the advertisements. User experience is a huge part of a website’s financial stability, which makes comments that take away from that experience a liability… and with an issue of this scale, the most effective means to resolve it lies in the right technology solutions.
This is just one example of how AI is being used to automatically process massive amounts of information. There are countless business applications for this kind of technology, and we’ll be seeing them in new software that helps you get more done.
As an MSP serving the Ontario area, we know all about solving business problems by applying the right technology. If you’re interested in learning about how our solutions can benefit your operations moving forward, make sure you reach out to us by calling 1-855-405-8889.
While you’re here, leave a comment and give us your take on the situation. All we ask is that you keep it civil!