Clearview AI Fined For Violating Privacy Rights

Some technology companies develop crime-fighting technology without considering the ethical and legal issues it raises. Such issues include how the algorithm functions, how governments and companies use the technology, whether it violates rights in certain countries, and whether it may cause more harm than benefit. Clearview AI's case is an example.

The Information Commissioner's Office (ICO) of the United Kingdom fined Clearview AI $10 million for violating people's privacy, according to the Wall Street Journal. The company was also ordered to delete all personal information of U.K. residents from its database. France, Italy, and Australia were among the other nations that had taken similar regulatory action against Clearview AI. The company had been collecting and using facial images and personal data of U.K. citizens without their consent. The same happened to residents of Illinois, whose biometric data the company collected without their consent; the American Civil Liberties Union (ACLU) filed a complaint against Clearview AI in 2020 for disregarding the state's Biometric Information Privacy Act. Critics faulted the company for putting the power of facial recognition in the hands of police forces, claiming the algorithm might produce unjustified and arbitrary outcomes that favor certain groups over others.

The critics are not wrong about the errors facial recognition algorithms can make. Research and news reports document the technology's trouble recognizing faces across demographic groups. According to the Daily Mail, in 2020 police arrested the wrong man because of a facial recognition match. The man, Nijeer Parks, was in the system because of a prior drug-related conviction, and his face was matched to the suspect in a candy-shoplifting case. Although he did not commit the crime, he spent ten days in jail. The accusation was later withdrawn during a pretrial hearing because the judge required more evidence than merely a facial match.

William Crumpler, a research assistant at the Center for Strategic and International Studies, has noted algorithmic bias in facial recognition. According to a 2003 study by the National Institute of Standards and Technology (NIST), algorithms had trouble distinguishing between young and senior subjects, and likewise between female and male subjects. A 2019 NIST study revealed that "false positive" and "false negative" rates vary across demographic groups. Crumpler further notes that these error rates stem from the selection of the training data used to develop the system's step-by-step computational procedure: an algorithm trained on a dataset containing only a tiny sample of a population will make more mistakes detecting members of that population in real-world circumstances. That is why the EU proposed requiring larger datasets that cover diverse demographic groups to improve accuracy.
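To make Crumpler's point concrete, here is a toy Python sketch. The data is entirely synthetic and the model is a plain classifier, not any vendor's real pipeline and certainly not Clearview AI's system; it only illustrates the mechanism: when one group makes up a small fraction of the training data, the model learns the majority group's patterns and makes more errors on the under-sampled group.

```python
# Toy illustration of training-data imbalance (synthetic data only):
# a classifier fit on data that is 98% group A performs worse on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
DIM = 16

# Each group gets a slightly different "true" decision rule, standing in
# for systematic differences in how faces from each group map to features.
w_a = rng.normal(size=DIM)
w_b = w_a + 0.8 * rng.normal(size=DIM)

def sample_group(n, w_true):
    """Synthetic feature vectors and match/non-match labels for one group."""
    X = rng.normal(size=(n, DIM))
    y = (X @ w_true > 0).astype(int)
    return X, y

# Skewed training set: group B is only 2% of the samples.
Xa, ya = sample_group(4900, w_a)
Xb, yb = sample_group(100, w_b)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap in error rates between groups.
for name, w_true in [("group A", w_a), ("group B", w_b)]:
    Xt, yt = sample_group(2000, w_true)
    err = 1.0 - model.score(Xt, yt)
    print(f"{name} error rate: {err:.1%}")
```

Run as-is, the under-represented group's error rate comes out several times higher on a balanced test set, the same pattern of unequal error rates NIST's demographic evaluations describe; rebalancing the training data shrinks the gap.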

Hoan Ton-That, CEO of Clearview AI, stated that a line must be drawn on the use of publicly available data to improve AI's accuracy at recognizing faces, even as U.K. officials restricted police forces' use of advanced technology to solve crimes. He maintained that he followed all privacy and legal criteria, since the facial images and personal data he obtained were already on the internet. However, a facial recognition system raises many risks and concerns that need to be addressed. How can the technology be improved to reduce or avoid algorithmic bias? What are the regulations for this technology? What is it used for, and how will it affect society and people? How much surveillance is applied to individuals, and why? Will this technology really keep society safe and sound, or will it be used to keep tabs on everyone regardless? Is it meant to build trust or fear? China's use of the technology illustrates these concerns: according to the Daily Mail, the country used facial recognition cameras to detect jaywalkers, and once people were detected, their pictures, names, and IDs were displayed on public screens.

Even though personal data may be publicly available on the internet, that does not mean a firm can use it without the individual's consent. A consent form is needed as an agreement between the corporation and the people. It gives the company permission to use an individual's personal data for specific purposes, and it gives individuals the right to know how and why the company collects their personal data. It is comparable to completing a medical consent form before receiving treatment for a health problem.

Some experts said the U.K.'s regulatory action against Clearview AI simply encouraged the company to innovate. Others agreed, adding that the company and the country's authorities needed to come to an agreement on what data collection should be restricted.

By and large, a consent form is needed to inform individuals of the company's purpose in collecting their personal data on the internet. It preserves an individual's right to refuse or accept the company's offer, and it also reduces lawsuits. As the technology advances, datasets need to cover broader demographic groups to avoid the bias, risks, and side effects of face recognition programs. Nevertheless, can companies, and especially law enforcement, really depend on these programs to catch criminals and terrorists? Should facial recognition technology be used outside of policing? What specific laws should be written so that they neither hinder the advancement of this technology nor permit the violation of privacy rights?

Author: maureen l