Content moderation helps monitor spammers and supervise user-generated content in order to limit their impact. It means applying pre-determined guidelines and a code of behaviour to content in order to verify whether a comment or piece of feedback may be shown publicly. There are several content moderation techniques that website managers can use on their pages or social media accounts.
Pre-moderation

Simply put, pre-moderation means that a moderator checks and verifies content before it is made available to the public. The method can also be used to protect a social media community's dynamics, and it is best deployed where content is neither controversial nor tied to a particular moment, so a delay before publication is acceptable. A major benefit of the technique is the high degree of control it gives over the comments, photos, and other content that will be published.
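As a rough sketch, a pre-moderation workflow can be modelled as a holding queue in which nothing goes live until a reviewer approves it. The class and field names below are illustrative assumptions, not a real moderation API:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Post:
    author: str
    text: str
    status: Status = Status.PENDING


class PreModerationQueue:
    """Holds every submission until a moderator decides on it."""

    def __init__(self):
        self._queue = []      # posts awaiting review
        self._published = []  # posts a moderator has approved

    def submit(self, post):
        # Nothing is published on submission; the post waits for review.
        self._queue.append(post)

    def review(self, approve):
        # `approve` stands in for the moderator: a callable returning
        # True (publish) or False (reject) for each pending post.
        for post in self._queue:
            if approve(post):
                post.status = Status.APPROVED
                self._published.append(post)
            else:
                post.status = Status.REJECTED
        self._queue.clear()

    @property
    def published(self):
        return list(self._published)
```

For example, `queue.review(lambda p: "spam" not in p.text)` would publish only the submissions a simple spam check lets through.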
Post-moderation

Post-moderation deals with monitoring and supervising comments that users have already posted. Unwanted comments are immediately passed to a team of moderators, who verify whether a post can stay published. A post-moderated conversation usually takes place in real time: a moderator gauges users' intentions and responds when comments are not appropriate. Post-moderation can also be automated, with a system flagging unwanted content for review.

Reactive moderation

This technique relies on users themselves to report content they consider inappropriate for the community. With a reporting button, they let a moderator know which pieces of content should be verified.
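A minimal sketch of the reactive approach: each report button press is recorded per user, and once a post collects enough distinct reports it is hidden pending moderator review. The threshold value and names here are assumptions for illustration:

```python
from collections import defaultdict

# Assumed policy: how many distinct users must report a post
# before it is pulled for review.
REPORT_THRESHOLD = 3


class ReactiveModerator:
    def __init__(self, threshold=REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # post_id -> set of reporting users
        self.hidden = set()              # posts hidden pending review

    def report(self, post_id, user_id):
        # Each user counts once per post, so a single account
        # cannot mass-report content off the site.
        self.reports[post_id].add(user_id)
        if len(self.reports[post_id]) >= self.threshold:
            self.hidden.add(post_id)

    def is_visible(self, post_id):
        return post_id not in self.hidden
```

Counting distinct reporters rather than raw clicks is one simple way to blunt abuse of the report button itself.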
Automated moderation

Automated moderation deploys various technical tools to process user-generated content against pre-defined categories. On that basis, the software can accept or reject a post and filter out inappropriate or offensive words in various languages without human intervention.
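The simplest form of such filtering is a per-language blocklist check, sketched below. The blocklist contents and function name are made up for the example; production systems use far richer models and word lists:

```python
import re

# Assumed example blocklists keyed by language code;
# real systems load maintained per-language lists.
BLOCKLIST = {
    "en": {"spam", "scam"},
    "de": {"betrug"},
}


def auto_moderate(text, lang="en"):
    """Return True if the post is acceptable, False if it is rejected.

    Splits the text into lowercase words and rejects the post
    if any word appears on the blocklist for the given language.
    """
    words = set(re.findall(r"\w+", text.lower()))
    return not (words & BLOCKLIST.get(lang, set()))
```

Matching whole words (rather than substrings) avoids rejecting innocent text that merely contains a blocked word inside a longer one.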