Should Government or Big Tech Regulate AI?


Artificial intelligence is in demand across many industries. It eases labor shortages, keeps business workflows running, lightens employees’ workloads, and much more. Yet AI still suffers from unresolved problems, ranging from misinformation to bias and hallucinations, that big tech has not fixed. With the credibility of its AI principles and practices at stake, the regulation of AI is an ongoing debate. The central question is who can be trusted to regulate AI: the government or big tech?

Big tech has not yet delivered AI that is both safe and protective of data privacy. Microsoft’s chatbot now gives more conservative answers after its machine-learning-backed algorithm was adjusted, but the problem is not fully solved. Google Docs has a political-correctness issue. Chatbots pass certain exams with flying colors yet are still not in their right minds. ChaosGPT sees no good side of humanity and plans to destroy it. Google launched Bard regardless of the fact that it lies and tends to lean toward left-wing ideologies. The list goes on and on, and it creates distrust in both the products and the corporations behind them.

A recent poll shows that a majority of Americans neither trust big tech to regulate AI’s safety nor believe AI should go unregulated. Fifty-four percent of American registered voters in the poll supported Congress taking action to regulate AI. Forty-one percent of voters preferred that the government take the lead in governing AI, while only 20% of respondents thought tech companies should, which suggests voters expect the government to do a better job than big tech. But how do people know that government intervention in AI technology would promote safety, privacy, and fairness? Do they trust the government?

Another new poll shows that more registered voters have little to no confidence in the government’s ability to regulate AI properly than have confidence in it (59% vs. 39%). Some say the government’s rules are outdated, or that it cannot manage AI when even the people who develop it cannot fully handle it. One wonders what the point is of putting restrictions on AI when no one knows what to regulate, which recalls the case of Google Bard: it exhibited the ability to translate Bengali even though no one taught it the language. The CEO of Google admitted he does not fully understand how the AI functions, comparing it to a human mind. The “black box” he mentioned gives no insight into how the AI reaches its conclusions, unlike algorithmic transparency, in which the algorithm that produces an output is visible: a user can see all the inputs and the series of steps that affect the output. The problem with algorithmic transparency is that it exposes the system’s workings to hackers.
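As a minimal illustration of that visibility (a hypothetical sketch; the rules, weights, and thresholds below are invented for this example, not drawn from any real system), a transparent algorithm can record every step that shapes its output:

```python
# Hypothetical example of algorithmic transparency: a simple screening
# rule set that records every rule influencing its decision, so a user
# can audit exactly how the output was produced.

def transparent_decision(income, debt, on_time_payments):
    steps = []  # audit trail: every rule applied and its effect
    score = 0

    if income > 50_000:
        score += 2
        steps.append("income > 50,000: +2")
    if debt / max(income, 1) < 0.3:
        score += 2
        steps.append("debt-to-income < 0.3: +2")
    if on_time_payments >= 12:
        score += 1
        steps.append("12+ on-time payments: +1")

    decision = "approve" if score >= 3 else "deny"
    steps.append(f"total score {score} -> {decision}")
    return decision, steps

decision, trail = transparent_decision(60_000, 12_000, 18)
for step in trail:
    print(step)
# A black-box model offers no comparable trail: its conclusion cannot
# be read off its internal computation this way.
```

The trade-off described above is visible here too: anyone who can read the trail, including an attacker, learns exactly which inputs to manipulate.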

Given the question of whether tech corporations or the government should regulate AI, one may wonder who is behind the regulation and what their motives are. Is it to control AI so that misinformation or corruption does not leak out? Is it to use AI to oppress people, or is it genuinely to promote safety and privacy and to reduce or eliminate bias in AI?

If the government or tech companies value corruption and censorship more than free speech, they will limit AI research, improvement, and any scientific breakthrough. Even though both entities encourage innovation and creativity, it is only allowed when it aligns with their agendas. Examples include the Cyberspace Administration of China’s (CAC) new regulation and China’s reported mind-control weapons. The regulation demands that chatbots reflect core socialist values so they cannot be used to overthrow the system or express political dissent. Chinese and international observers believe it will disrupt AI research and innovation. One may wonder whether a chatbot that violated the CAC’s rules would suffer the same fate as the chatbot Baby Q. The mind-control weapons, meanwhile, are about power and control: they do not kill; they paralyze people and take control of brain function.

If the government or tech corporations value free speech, they will open the door to reasoning, knowledge, ideas, and innovation. Discussion and research aimed at improving technology, healthcare, and other fields will not be limited. Opinions, whether they fit the prevailing narrative or not, will not be silenced. People will be able to speak the truth freely, not merely speak freely when it suits someone’s agenda.

Confidence in the government’s ability to manage AI remains in doubt. It is uncertain what an administration would do once it gained control over AI, and unclear whether its rules would have a positive or negative impact on consumers, technology, businesses, and the world. Big tech is not the best choice to govern AI either. However, private tech firms can create and evaluate their own rules and procedures for using AI carefully, rather than abusing it, in order to produce better goods and services. Research also needs to expand so that we better understand how these algorithms produce their outputs and can develop AI more responsibly. That does not mean tech companies may do whatever they like with AI without considering its harmful effects on society and individuals. They should evaluate their AI assessments and ethical guidelines or principles to ensure that AI is used not for malicious purposes but to improve human lives and society.

Sources:

Half of Americans say Congress should take ‘swift action’ to regulate AI: poll

Americans split on keeping government’s hands off AI…

China Orders A.I. Chatbots to ‘Reflect the Core Values of Socialism’

Inside China’s terrifying ‘brain control weapons’ capable of ‘paralyzing enemies’

Google CEO admits he doesn’t ‘fully understand’ how AI chat bot works after it cites fake books, learns new language unprompted

Author: maureen l