Using AI to Solve Online Issues?

The war between censorship and free speech online is still going on. Leftist corporations push for censorship at any cost to silence speech that does not fit their agendas. A World Economic Forum (WEF) paper urged the use of AI to combat online issues, citing false information, white racial bigotry, and online abuse as real problems faced by internet users.

Who is spreading false information and mistreating others online? Who is abusing whom? Is it social media and the corporations that lean toward the left, or the right, nowadays? The ongoing political issues have divided the U.S. and its citizens. People seem to distance themselves from other ethnic groups. The decline in Christianity continues. Hating, blaming, and stereotyping certain ethnic groups are out of control. Hate crimes are on the rise. Trust is hard to come by these days. Talking about white racism is confusing nowadays when two plus two does not add up: Minneapolis public schools plan to fire white teachers based on skin color, while black kids continue to beat up elderly people regardless of their skin color. As a result, it is hard not to wonder what the real motive or purpose of using AI is in this case. Is it really to censor misleading information, whiteness, and cyber abuse, or is it to censor free speech? Is it intended to be used as a weapon against those who disagree with the woke agendas or reports? Is it meant to keep the world’s population from learning the truth? If so, then how?

Tech leftists advocate implementing the strategy employed by lefties in Silicon Valley to train AI for censorship. The strategy was to provide multi-language support and human knowledge to augment automatic detection, analyzing individual cases and spotting false positives and negatives in the training set. They are also imposing their biases and ideologies on AI instead of reducing its bias, which will create more unfair outcomes in the future. It will not allow others to express values and principles that are contrary to Marxism.
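The audit loop described above, in which human reviewers check the automatic detector's calls, can be sketched roughly as follows. Everything here is invented for illustration: the labels, the decisions, and the 0/1 encoding are assumptions, not details from the WEF paper.

```python
# Minimal sketch (hypothetical data) of human review auditing an automated
# moderation system by counting false positives (benign posts wrongly
# flagged) and false negatives (violations the detector missed).

# 1 = flagged as violating, 0 = allowed
human_labels = [1, 0, 0, 1, 0, 1, 0, 0]   # ground truth from human reviewers
ai_decisions = [1, 1, 0, 0, 0, 1, 0, 1]   # automatic detector's output

false_positives = sum(1 for h, a in zip(human_labels, ai_decisions)
                      if h == 0 and a == 1)
false_negatives = sum(1 for h, a in zip(human_labels, ai_decisions)
                      if h == 1 and a == 0)

print(false_positives)  # benign posts the detector flagged
print(false_negatives)  # violations the detector missed
```

The point of the technique is that each disagreement between the human label and the machine decision becomes a correction fed back into the training set, so the detector's future behavior depends entirely on whose labels count as "ground truth."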

In their guidelines, they pointed out that sophisticated AI will enable nearly flawless detection at scale as AI becomes more advanced with each moderation choice. Hmm… To me, it is like saying that there will be nearly no flaws in detecting those who oppose woke ideology, corruption, and lawlessness. Given the woke ideology and data in the training set, is it likely to produce a different outcome? Will the highly developed AI simply favor the communist ideology over the others and enable nearly defect-free detection at scale? Can humans trust AI over human moderators, or vice versa? Will AI actually help address the issue, or would examining society and people’s moral principles and integrity do more?

Author: maureen l