Safety and Trust Issues of AI in Healthcare

AI has become popular in the healthcare industry. It plays a role in transforming health workflows, from reducing nurses' and doctors' workloads to assisting patients, and its scores on medical exams have improved over time. Results have shown that BioGPT-Large and Med-PaLM 2 scored higher than 60% accuracy on the exams they were given. In one study, evaluators even preferred ChatGPT's responses to patient questions over doctors' answers in terms of quality and empathy. Still, there are concerns about AI in healthcare, chiefly safety and trust.

According to one survey of 11,004 American adults, more respondents believe AI would make the security of patients' medical records worse than believe it would improve it (37% vs. 22%). This distrust is understandable: chatbots have already breached data privacy. Italy temporarily banned ChatGPT after it leaked users' private information, lifting the ban only once OpenAI addressed the problem. Regulators were also concerned that OpenAI was using personal information for chatbot training without age verification. A separate study demonstrated that hacked, falsified medical images could have life-and-death consequences: attackers can add or remove evidence of malignancy in mammograms to target a specific person or to hold a hospital for ransom. The resulting false positives and false negatives misrepresent patients' medical conditions, and treatments chosen on that basis may be ineffective.

Another concern is how using AI for diagnosis and treatment recommendations will affect the patient-provider relationship. Whether the impact is positive or negative depends on trust. On the positive side, AI can free up time for providers and patients to build confidence in each other: physicians can spend more time helping patients understand their treatment options and health benefits and finding ways to improve patients' health and well-being, while patients can gauge whether they can rely on their doctors. On the negative side, AI can just as easily create distance. Some patients already distrust doctors for various reasons, including lack of empathy, spreading health misinformation, and choosing profit over care. Such patients may instead come to trust AI to diagnose illnesses and prescribe remedies.

More Americans believe using AI for these tasks would improve health outcomes for patients than believe it would worsen them (38% vs. 33%, with 27% not sure). Is that optimism justified? A study examining trust in AI versus human doctors for diagnosing patients found otherwise across three experiments. Participants trusted human doctors more than AI for diagnosing both low- and high-severity diseases, and they were less likely to have confidence in AI recommendations. On the other hand, when participants were given a choice between an AI model and a human doctor, their degree of trust in the AI grew; the same was true of their confidence in AI diagnoses when choosing a doctor remained an option.

This suggests that, despite AI's widespread use, individuals still choose human medical practitioners over AI when seeking medical advice. And AI still produces clinical misinformation. BioGPT gives inaccurate medical answers and makes up absurd claims and conspiracy theories, such as ghosts haunting American hospitals or childhood vaccines causing autism. It can also fabricate a citation to fit a claim, or modify a real one. ChatGPT does not reflect the latest research, and comparing its responses with human doctors' replies shows that chatbots make quite a lot of errors.

Given the potential harm, the speed at which AI is being integrated into healthcare poses a challenge. Far more Americans worry that health professionals are adopting AI too fast than think adoption is too slow (75% vs. 23%); they are concerned that physicians may use AI before patient safety is assured. Only a small percentage believe AI won't help patients' health at all, but its value depends on the quality of its inputs and outputs, and so far that quality has been hard to depend on. One study showed how AI-based support systems' suggestions can degrade mammogram readings: radiologists' accuracy dropped when they adopted the AI-recommended BI-RADS (Breast Imaging Reporting and Data System) score rather than their own.

It is important to implement ethical AI carefully, combining a moral compass, safety, and privacy without government interference. It is also crucial to use artificial intelligence cautiously and to fully understand and evaluate its performance. Taking advice from chatbots is risky: they provide misinformation and biased responses, and their misdiagnoses can lead to insufficient treatment and inaccurate medical information. They may also favor left-wing ideology; medical chatbots have not shown signs of this, unlike Google Bard and the newer ChatGPT, but the possibility cannot be ruled out. Some medical schools already make critical race theory mandatory in training or curriculum, which raises the concern that medical AI will move toward this theory as well.

Sources:

60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care

Trust in artificial intelligence for medical diagnoses

We Fact-Checked ChatGPT’s Medical Advice

AI bias may impair radiologist accuracy on mammogram

How Reliable Is Microsoft’s Medical AI? New BioGPT Seems Pretty Impressive But Might Be Inaccurate

ChatGPT Answers Patients’ Online Questions Better Than Real Doctors, Study Finds

ChatGPT data leak has Italian lawmakers scrambling to regulate data collection

Critical race theory programs are mandatory in 58 of top 100 medical schools: Report

Author: maureen l