I just watched Jack Dorsey, CEO of Twitter, reveal he’s beginning to recognize social media has a censorship problem.

Senator Cruz: How would you like to replace political censoring with highly objective content tagging and user-specified filtering?

Here’s HOW, but first, let’s ensure we all realize just how many major players there are in this game.

In decreasing order of monthly active users (MAUs), social media websites include:

  • Facebook (2.23 billion MAUs)
  • YouTube
  • WhatsApp
  • Messenger
  • WeChat
  • Instagram
  • QQ
  • Tumblr
  • QZone
  • TikTok
  • Twitter
  • Reddit
  • Baidu Tieba
  • LinkedIn
  • Viber
  • Snapchat
  • Pinterest
  • Line
  • Telegram
  • Medium (60 million MAUs)

THE PROBLEM: In an attempt to limit objectionable content, social media platforms have resorted to censorship. The trouble with censorship as currently practiced, which relies heavily on users reporting objectionable content, is that the reporting users are often objectionable themselves, not to mention highly biased in what they consider objectionable, and platform “moderators” (censors) aren’t much better. When users aren’t S.W.A.T.-ing other users, moderators are S.W.A.T.-ing users. This usually occurs when a user or moderator believes content is wrong, never suspecting that it merely appears wrong because they themselves have been led astray by agenda-driven propaganda.

THE SOLUTION: The only way to stop S.W.A.T.-ing users is to eliminate it completely.

“Then how are we supposed to censor…”

You DON’T.

“But we can’t just let people run wild…”

You DON’T do that, EITHER.

“What then?”

Aha! At last, an intelligent question.

Answer: Require all users to properly tag their own content for any potentially objectionable material via one of the following major categories:

  • sales
  • politics
  • religion
  • viral (overdone)
  • health
  • vice
    • alcohol
    • tobacco
    • marijuana
    • drugs
    • illicit
  • defamatory
    • demeaning
    • insulting
  • editorial/personal opinion
  • racism
  • violence
  • death
  • gore
  • sexual
    • suggestive
    • explicit
  • hashtag-stuffed
  • terrorism

That’s quite a list! Users can be fairly general, or they can become rather granular. Obviously, various social media platforms are free to set their own major categories and sub-category tags, but I do recommend they use something close to what’s listed above. Either way, users would be required to fully categorize every post by checking one or more of the aforementioned categories.
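To make the idea concrete, here is a minimal sketch of how a platform might represent the taxonomy above and validate a user-supplied tag. The structure and function names are purely illustrative assumptions, not any real platform’s API:

```python
# Hypothetical tag taxonomy mirroring the category list above.
# Major categories map to their (possibly empty) sets of sub-categories.
TAXONOMY = {
    "sales": set(),
    "politics": set(),
    "religion": set(),
    "viral": set(),
    "health": set(),
    "vice": {"alcohol", "tobacco", "marijuana", "drugs", "illicit"},
    "defamatory": {"demeaning", "insulting"},
    "editorial": set(),
    "racism": set(),
    "violence": set(),
    "death": set(),
    "gore": set(),
    "sexual": {"suggestive", "explicit"},
    "hashtag-stuffed": set(),
    "terrorism": set(),
}

def is_valid_tag(tag):
    """A tag is valid if it names a major category or any sub-category."""
    if tag in TAXONOMY:
        return True
    return any(tag in subs for subs in TAXONOMY.values())
```

Because sub-categories roll up under a major category, a user who tags only “vice” is being general, while one who tags “tobacco” is being granular; both are valid under this scheme.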

Here’s how this works in practice:

  1. Require all users to tag their own content using the aforementioned list.
  2. Allow all users to set filters on the content they see, using the same list. If, for example, they check “violence, death and gore,” then they will never see content whose AUTHOR has tagged his/her own content as “violence, death and gore.”
  3. Allow other users to tag any author’s content for potentially objectionable material (same list).
  4. The system constantly compares the author’s own tags with how others tag the same content. If the author is really good about tagging his or her own content the same way others tag it, that post scores high in consistency. Cumulative consistency, i.e. “reputation,” is scored on a percentage scale: high consistency raises reputation (into the high 90s), and low consistency lowers it (toward zero). Changes in consistency can raise or lower one’s reputation over a rolling window, call it three months, and users should be told this time limit, though moving one’s reputation that quickly would require at least 100 total posts.
  5. Users can always see their own reputation score, but can either show it to or hide it from others. Remember, this score only represents how well an author tags their own content as compared to how others tag their content, subject to the following non-biased review:
  6. If the mis-tag rate per post crosses a high enough threshold, samples are presented to professional reviewers whose SOLE PURPOSE is to tag the material as accurately as possible, without seeing either the author’s or other users’ tags. If an author’s tags consistently match the reviewers’ tags, the author’s reputation increases AND the complainants’ reputations DECREASE; this is the disincentive for filing false complaints and/or falsely selecting tags. If an author’s tags consistently diverge from the reviewers’ tags, the author’s reputation decreases while the complainants’ and taggers’ reputations increase.
  7. Consistently poor performers are culled! Seriously: the bottom 3% to 30% of consistently low-reputation users are warned once, twice, thrice, with tips, opportunities for change, and even a rebuttal to be considered before a final review board, but if they can’t bring themselves to grow up, they’re gone.
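The steps above can be sketched in a few lines. Here is one hedged interpretation, assuming a Jaccard-style overlap measure for per-post consistency and a simple average over recent posts for reputation; the function names, the similarity metric, and the clean-slate starting value are my assumptions, not part of the proposal itself:

```python
def post_consistency(author_tags, crowd_tags):
    """Step 4: overlap between the author's tags and the crowd's tags
    on one post (Jaccard similarity). 1.0 = perfect agreement, 0.0 = none."""
    if not author_tags and not crowd_tags:
        return 1.0  # nothing tagged by anyone: nothing to dispute
    union = author_tags | crowd_tags
    return len(author_tags & crowd_tags) / len(union)

def reputation(consistencies):
    """Cumulative consistency as a percentage, averaged over the author's
    recent posts (the proposal suggests a rolling window of roughly three
    months and at least 100 posts)."""
    if not consistencies:
        return 100.0  # assumption: new authors start with a clean slate
    return 100.0 * sum(consistencies) / len(consistencies)

def visible(post_author_tags, viewer_filters):
    """Step 2: hide a post if any author-applied tag matches the
    viewer's chosen filters."""
    return not (post_author_tags & viewer_filters)
```

For example, an author who tags a post “violence, gore” while the crowd tags it “violence, gore, death” earns a consistency of 2/3 on that post; average such scores over the window and you have the reputation percentage.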

The ENTIRE POINT of this approach is that it eliminates subjective, maximally-biased determination of acceptable content, i.e. POLITICAL CENSORSHIP, and replaces it with objective, minimally-biased review of content tagging, i.e. CONTENT CATEGORIZATION and USER-SPECIFIED FILTERING.

Naturally, we’ll need one more input from Congress to make this work: An absolute prohibition against platforms selecting or limiting the display of posts by content through any means other than the aforementioned author-tagging and user-filtering systems. If Facebook cannot handle showing the posts of all 1,400 of my friends, then it can RANDOMLY select a certain percentage of their posts and show me that. Users should, however, be able to visit another user’s page and see every post they have the appropriate share/read permissions for.
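The “random selection, never ranking” rule above is the simplest part to implement. A minimal sketch, assuming a feed with a fixed capacity and an optional seed for reproducibility (both my assumptions):

```python
import random

def feed_sample(posts, capacity, seed=None):
    """If there are more posts than the feed can show, pick a uniformly
    random subset; no content-based ranking or selection is permitted."""
    if len(posts) <= capacity:
        return list(posts)
    rng = random.Random(seed)
    return rng.sample(posts, capacity)
```

Uniform sampling is the point: every friend’s post has an equal chance of appearing, so the platform has no editorial lever to favor or suppress content.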

Updated: January 13, 2021 — 8:56 pm