Global Regulation of AI is Puzzling


The problems with AI have gained worldwide attention, and governments around the world have been trying to find ways to regulate it. OpenAI’s CEO, however, has a different take on the issue. His call for international AI regulation and for collaboration with China to reduce AI risks is puzzling.

Can countries agree on how to govern AI so that it is used ethically? Legislative bodies around the world do not agree with one another on much, let alone on how AI should be used for various purposes. The European Parliament’s proposed AI rules offer a glimpse of the struggle to write laws that balance the benefits and risks of artificial intelligence. The debates include how to define AI, what should be regulated, and which applications of AI are so risky that they should be prohibited. China’s new regulation requires AI chatbots to reflect socialist values. Japan’s Minister of Education, Culture, Sports, Science and Technology declared that AI systems may use any data in the country; copyright law offers no protection for intellectual property used this way.

This brings to mind that not all AI experts agree that AI is an existential threat. The current state of AI is narrow, meaning it can only perform specific tasks. Studies show that AI is nowhere near human-level intelligence, and that artificial general intelligence may not even be possible. The question, then, is not whether AI should be regulated but how organizations and governments use it, for good or for bad. China uses AI to monitor its citizens. The NYPD has enlisted robots on patrol to safeguard the city: Digidog, a robotic dog designed to assist in risky circumstances or in areas where people may be at risk or that need inspection, has received backlash. According to a group that opposes local and national surveillance, it violates people’s privacy and squanders public resources while failing to adequately protect New York. In addition, malfunctioning robotic dogs, or cybercriminals seizing control of their systems, could cause havoc.

That being the case, why not fix AI’s current flaws first? Why keep releasing new versions of ChatGPT? GPT-4 has the same problems as earlier GPT models, yet it was still released this year. Its technical report only presents test results and comparisons with previous GPTs; it does not disclose the training dataset, the methods, or the architecture of this version, which is disappointing.

AI researchers and professors have criticized the report and the company on issues ranging from safety to ethics. The lack of information in the report makes it hard to build confidence in the product, and it makes it hard for the public to look into problems and fix them if the product goes wrong. The company has also shifted its business model and strategies. One professor pointed out that OpenAI proclaims it is working for the benefit of humanity yet ignores strategies for reducing the risks faced by businesses.

OpenAI was a non-profit from 2015 to 2018. Its goal was to advance artificial intelligence to benefit humanity, and its research was free from financial obligations so it could focus on positive impact for humans. Thanks to its open-source research, the public could access information and exchange ideas with others to fix or advance AI technology. The company then took a turn in 2019: it changed from a non-profit to a for-profit and received an investment from Microsoft.

It is no surprise, then, that the technical report for GPT-4 is closed source. OpenAI’s chief scientist commented that the company wants to keep the details hidden from its competitors and that open-sourcing turned out to be a mistake. He also pointed out that some academic and research institutions have been given access to the data, which is good; at least some researchers will be able to study and fix GPT-4’s problems and address its security vulnerabilities. However, the choice of institutions raises concerns about whether access might be granted on an ideological basis to advance political power. It is also uncertain whether their studies will be open source, given that OpenAI is now closed source and for-profit.

In conclusion, why are tech companies such as OpenAI able to develop and release new versions of GPT but unable to fix their problems? Why call for global regulation of AI when the current level of AI does not pose an existential risk? Every legislative body has a different point of view, let alone different values, beliefs, and culture. Even if they come together on global regulation of AI, will it give more power and control to governments and tech companies than to citizens? Will it give rise to digital authoritarianism? Under such regulation, will AI be used in a safe and ethical manner without violating human rights, free speech, and religious freedom?

Sources:

Sam Altman Is ‘Optimistic’ He Can Get the AI Laws He Wants 

The Global Battle to Regulate AI Is Just Beginning

OpenAI’s GPT-4 Is Closed Source and Shrouded in Secrecy

Japan: AI Systems Can Use Any Data, from Any Source – Even Illegal Ones

The NYPD is bringing back its robot dog

Alternative View: AI Does Not Pose an Existential Risk

OpenAI CEO Calls for Collaboration With China to Counter AI Risks

Author: maureen l