Misuse is another way deepfake technology can turn into a nightmare. In the wrong hands it becomes a threat to society, businesses, and individuals: a tool for bullying or harming people for personal advantage or profit, for deceiving others about what is true, and for damaging reputations and public perceptions. One case in particular demonstrated these dangers.
Stanford researcher Renée DiResta did not realize she had received a message from a human-like bot until she examined the sender’s profile. The sender, Keenan Ramsey, messaged her claiming they belonged to the same LinkedIn community for entrepreneurs. At first DiResta replied, assuming the message came from a human. A follow-up question from Ramsey prompted her to look up the sender’s profile, which listed an occupation, workplace, contact information, and education. The profile photo showed a smiling face with ordinary human features, one that would be hard to distinguish from a real photograph without close inspection.
A study examined whether participants could tell AI-generated faces from real human faces, and which faces were perceived as more trustworthy. In the first two experiments, participants classified 128 faces, drawn from a pool of 800, as real or synthetic. Participants in the first experiment received no training, whereas participants in the second were trained and given advice on how to recognize synthetic faces. In the third experiment, participants rated the trustworthiness of 128 faces from the same pool of 800 on a scale of 1 (very untrustworthy) to 7 (very trustworthy).
In the first experiment, participants identified AI-generated faces from real ones with only 48.2% accuracy, essentially chance level. In the second, accuracy improved only slightly, to 59%. The third experiment showed that participants rated StyleGAN2-generated faces 7.7% more trustworthy than genuine ones. The researchers attributed this to the synthesis technique’s tendency to produce average-looking faces; average faces evoke trust because they feel familiar, resembling faces people already know.
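As a rough illustration of how such figures are computed, here is a minimal Python sketch. The responses and ratings below are made up for the example, not the study’s actual data.

```python
import statistics

# Hypothetical per-face responses: (ground truth, participant's guess).
responses = [
    ("real", "real"), ("synthetic", "real"), ("synthetic", "synthetic"),
    ("real", "synthetic"), ("synthetic", "real"), ("real", "real"),
]

# Experiments 1-2 metric: fraction of faces classified correctly
# (chance level is 50%).
accuracy = sum(truth == guess for truth, guess in responses) / len(responses)
print(f"classification accuracy: {accuracy:.1%}")

# Experiment 3 metric: mean 1-7 trust ratings for real vs. synthetic faces,
# reported as the relative difference between the two means.
real_ratings = [4, 5, 4, 5, 4]        # made-up ratings of real faces
synthetic_ratings = [5, 5, 4, 5, 5]   # made-up ratings of synthetic faces

trust_gap = (statistics.mean(synthetic_ratings)
             - statistics.mean(real_ratings)) / statistics.mean(real_ratings)
print(f"synthetic faces rated {trust_gap:.1%} more trustworthy")
```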
Returning to DiResta’s case: her suspicion grew after Ramsey’s questions, which were followed by messages from two more supposed employees of Ramsey’s company, RingCentral. The profile photo also raised red flags, from the position of the facial features to missing accessories and a blurred background. At that point she concluded that Ramsey’s face had been created with deepfake technology.
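One reason the position of facial features is a tell: StyleGAN-style generators are trained on aligned photos, so generated faces tend to have eyes landing at nearly the same image coordinates. The sketch below, using OpenCV’s stock Haar cascades, shows how one might screen a profile photo for that pattern; the EXPECTED_EYE_Y coordinate and tolerance are illustrative guesses for the example, not a validated detector.

```python
import cv2

# OpenCV's bundled Haar cascade for eye detection.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_centers(image_path: str):
    """Return detected eye centers as fractions of image width/height."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    return [((x + ew / 2) / w, (y + eh / 2) / h)
            for (x, y, ew, eh) in eye_cascade.detectMultiScale(gray, 1.1, 5)]

# Aligned training data puts generated eyes at nearly fixed coordinates;
# many profile photos sharing the exact same eye position is suspicious.
EXPECTED_EYE_Y = 0.45  # illustrative value, not a validated constant
for cx, cy in eye_centers("profile.jpg"):
    if abs(cy - EXPECTED_EYE_Y) < 0.02:
        print(f"eye at ({cx:.2f}, {cy:.2f}) matches typical aligned-face position")
```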
DiResta verified Ramsey’s profile by contacting her supposed workplace and college. There was no evidence that Ramsey worked at RingCentral or had received an undergraduate degree from NYU. DiResta and her colleague Josh Goldstein also discovered numerous LinkedIn profiles similar to Ramsey’s, which failed the same checks.
The Stanford researchers brought the situation to LinkedIn’s attention. LinkedIn said it would conduct its own investigation, without clarifying how. In 2021, LinkedIn removed fake accounts detected by its automated systems or during enrollment, and it continues to update its technology to prevent the situation from recurring. Other companies responded as well: Renova Digital, for example, stopped selling packages that included bots and avatars.
When deepfake technology falls into the wrong hands, it can harm individuals and society at large. A criminal can commit a crime wearing someone else’s face and frame an innocent person. A bad actor can use the technology to confuse people about what is true and what is not; the deepfake video of Ukrainian President Zelensky asking Ukrainians to lay down their arms is one such case. To keep deepfakes from being used for malicious purposes, train employees to spot fake images, videos, news, and audio. Keep detection software up to date so it can flag fake footage and other manipulated media, as other businesses already do (a minimal sketch of such screening follows below). Apply critical thinking and consult multiple sources to guard against false information. Finally, stay informed about deepfake-related news in order to develop new strategies, methods, policies, and regulations.
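As one concrete illustration of automated screening, the sketch below runs an image through a binary real/fake classifier in PyTorch. The fine-tuned weights file (detector.pt), the two-class setup, and the 0.9 flagging threshold are assumptions made for the example, not a reference to any real product.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from PIL import Image

# Standard ImageNet-style preprocessing for the classifier input.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = resnet18(num_classes=2)                    # two classes: real vs. fake
model.load_state_dict(torch.load("detector.pt"))   # hypothetical fine-tuned weights
model.eval()

def fake_probability(image_path: str) -> float:
    """Return the model's estimated probability that the image is synthetic."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)            # add batch dimension
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

# Flag suspicious frames for human review rather than auto-deciding.
if fake_probability("frame.jpg") > 0.9:
    print("frame flagged for human review")
```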
Sources:
That smiling LinkedIn profile face might be a computer-generated fake (NPR)
AI-synthesized faces are indistinguishable from real faces and more trustworthy (PNAS)