Why AI As Social Media Content Moderators Is Not Yet Feasible

With the torrent of content posted to social media every second, moderation is a serious challenge. Right now, the process is partly automated using artificial intelligence (AI), with humans still manually screening content.

It makes you wonder whether there will ever be a time when humans are completely out of the picture, an era of full-blown AI content moderation, so to speak. While that might be a great thing, the odds of it arriving in the foreseeable future are slim.

The need for manual screening

Human moderators can weigh factors that current AI technology cannot take into account, and in the future still might not be able to.

First among these factors is context. While AI is fully capable of analyzing the content at hand, it cannot gain a full grasp of the context around it, which can be as simple as cultural differences or as complicated as national laws.

Given how much these cultures and laws vary around the world, it is difficult, if not impossible, to account for all of them when training AI systems. To human moderators, however, these things are clear as day.

Another is the ease of analysis. Photos and pre-recorded videos posted online are easy targets for AI to analyze. Pit it against humans on live content, though, and AI lags behind, since even seemingly harmless scenes can suddenly turn disturbing.

A painful process

This need for human moderation is a bigger problem than you might think. For starters, the millions of pieces of content pouring into social media platforms are more than humans can monitor on their own. This is why, for the most part, AI systems still do the initial flagging, and humans then provide the context and nuance that AI cannot take into consideration.
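To make that division of labor concrete, here is a minimal Python sketch of a flag-then-escalate pipeline. Everything in it is a hypothetical stand-in: the toy violation_score classifier, the AUTO_REMOVE and HUMAN_REVIEW thresholds, and the review queue are illustrative, not any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Toy stand-in for an ML classifier that estimates P(violation)."""
    flagged_terms = {"attack", "gore"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.6 * hits)

# Hypothetical thresholds: act automatically only on near-certain cases,
# and route the uncertain middle band to human reviewers.
AUTO_REMOVE = 0.98
HUMAN_REVIEW = 0.60

def triage(post: Post, review_queue: list[Post]) -> str:
    score = violation_score(post)
    if score >= AUTO_REMOVE:
        return "removed"            # obvious violations: AI acts alone
    if score >= HUMAN_REVIEW:
        review_queue.append(post)   # humans add the context AI lacks
        return "queued"
    return "published"

queue: list[Post] = []
print(triage(Post("p1", "harmless vacation photo caption"), queue))  # published
print(triage(Post("p2", "graphic gore footage"), queue))             # queued
```

The point of the middle band is that the content the model is least sure about is exactly the content that reaches a human.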

A rather disturbing consequence of human moderation is how it affects the moderators themselves. Exposed to violence, hate, and graphic content, some of them are reportedly developing PTSD-like symptoms.

How AI can step in

Right now, AI is powerful enough to analyze images and videos at tremendous speed, and it is quite accurate at detecting violence and graphic content in a picture. This takes a lot of the load off human moderators.

However, AI can also help preserve the well-being of the moderators themselves. The UK's communications regulator, Ofcom, suggests using AI to automatically blur parts of a piece of content, which a moderator can opt to view unblurred if needed. It also suggests using AI to question moderators in order to determine which content will be difficult for them, given their personal experiences.
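As an illustration of that blur-by-default idea, here is a minimal sketch using the Pillow imaging library. The detect_graphic_regions function is a hypothetical stand-in for a trained graphic-content detector; the fixed center box it returns is purely for demonstration.

```python
from PIL import Image, ImageFilter

def detect_graphic_regions(image: Image.Image) -> list[tuple[int, int, int, int]]:
    """Hypothetical detector: returns (left, top, right, bottom) boxes
    flagged as graphic. Here it just flags the center of the frame."""
    w, h = image.size
    return [(w // 4, h // 4, 3 * w // 4, 3 * h // 4)]

def blur_flagged(image: Image.Image, radius: int = 25) -> Image.Image:
    """Return a copy with flagged regions blurred, so a moderator sees
    the graphic parts only if they deliberately choose to."""
    out = image.copy()
    for box in detect_graphic_regions(image):
        region = out.crop(box).filter(ImageFilter.GaussianBlur(radius))
        out.paste(region, box)
    return out

blurred = blur_flagged(Image.open("queued_post.jpg"))
blurred.save("queued_post_blurred.jpg")
```

In a real tool, the moderator interface, not this function, would decide when to reveal the unblurred original.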

As with its other applications, AI still needs some level of human supervision to reach its full potential. That supervision does not have to be painful for the humans involved, however; we can also take steps to protect their well-being.

