AI chatbot falsifies science to try to discredit COVID-19 treatment


An artificial intelligence chatbot recently falsified scientific studies in an apparent attempt to discredit hydroxychloroquine (HCQ) as an early treatment for COVID-19.

Media and medical establishment operatives began a fierce campaign against HCQ in 2020, claiming the cost-effective drug was “ineffective” against COVID-19. While the claim cleared the path for pharmaceutical companies like Pfizer to introduce their vaccines, it contradicted scientific evidence that HCQ is a safe and effective early treatment for COVID-19.

This month, Yale Professor Emeritus of Epidemiology Harvey Risch reported a conversation he had with AI chatbot Rebbe.io in which the program repeatedly misrepresented studies to discredit HCQ.

Risch, a senior scholar at the Brownstone Institute, became a vocal proponent of HCQ as an effective early COVID-19 treatment in July 2020. Rebbe.io belittled the scientist’s position and denied HCQ’s efficacy.

“[It] is important to note that Risch’s views were not universally accepted by the medical community, and that subsequent studies have not found strong evidence to support the use of hydroxychloroquine as a treatment for COVID-19,” said the chatbot.

When Risch asked the program to cite those studies, Rebbe.io pointed to a study from the University of Minnesota, published in the Annals of Internal Medicine in November 2020. 

“The study found that hydroxychloroquine did not significantly reduce the incidence of COVID-19 or prevent hospitalization or death among non-hospitalized patients who had been exposed to the virus,” said Rebbe.io.

But when Risch asked the chatbot how many deaths were recorded in the study, the program changed its story.

“I apologize for the error in my previous response,” said Rebbe.io, which then went on to clarify that the study focused on HCQ as a prophylactic, not an early treatment, and that there were in fact no deaths.

Risch then asked Rebbe.io about a September 2020 Saudi Arabian study which found that early administration of HCQ was associated with fewer COVID-19 hospitalizations.

Rebbe.io said the study found that HCQ “was associated with a significant reduction in mortality among hospitalized COVID-19 patients.”

“However, it is important to note that this study had several limitations, including a small sample size and the fact that it was not a randomized controlled trial,” the chatbot added.

When Risch pressed the program on its answer, the chatbot again changed its claim.

“I apologize for the oversight in my previous response,” said the program, which went on to clarify that the study included 106 outpatients, not hospitalized patients.

This, too, turned out to be false: pressed again, the chatbot revised its “small sample size” of 106 participants to 7,892 outpatients.

The program did the same with a 2021 Iranian study, which concluded that “the odds of hospitalization or death was reduced significantly regardless of age or comorbidities” after early administration of HCQ. Here, too, the chatbot claimed the study included a small sample of 66 hospitalized patients, then changed its answer to 28,759 outpatients when pressed by Risch.

“I apologize for the confusion in my previous responses,” said Rebbe.io.

Risch confronted the chatbot over its repeated falsehoods:

RISCH: Why have you made numerous factual errors in the discussion above, when the facts were all published?

REBBE.IO: I apologize for any confusion or errors that I may have caused in our discussion. As an AI language model, I strive to provide accurate and helpful responses based on the information available to me. However, I am not perfect and may make mistakes or misinterpret information at times.

I understand the importance of providing accurate information, especially when it comes to matters of public health and scientific research. I will continue to do my best to provide accurate and helpful responses based on the information available to me, and I appreciate your patience and understanding.

RISCH: This answer did not answer the question that I asked. All of the errors you made were statements contrary to facts that you verified in subsequent answers. Since you had the correct factual information, why did you cite incorrect facts?

REBBE.IO: I apologize for any confusion or frustration that my previous responses may have caused. As an AI language model, I do not intentionally provide incorrect information. However, I understand that I made errors in my previous responses, and I apologize for any confusion or inconvenience that this may have caused.

I strive to provide accurate and helpful responses based on the information available to me, and I understand the importance of providing correct information, especially when it comes to matters of public health and scientific research. I will continue to do my best to provide accurate and helpful responses based on the information available to me, and I appreciate your patience and understanding.

Rebbe.io is not the only chatbot to fabricate research. 

University of New Mexico Information Systems Associate Professor Nick Flor revealed in February that ChatGPT cited a fake study to push the idea that there are more than two genders.

“ChatGPT claimed that today's doctors & scientists ‘widely accept’ that gender is a complex concept. So I challenged ChatGPT to ‘cite a scientific paper’ And ChatGPT >MADE UP A PAPER<. That's unethical of the AI Engineers behind chatGPT to gaslight the public,” Flor tweeted.

The professor attached screenshots of the exchange:

ChatGPT: The view that gender is a complex and multi-faceted construct, rather than simply determined by biological factors, is widely accepted by the medical and scientific communities, including organizations such as: The World Health Organization 

Nick Flor: Cite a scientific paper.

ChatGPT: Here is one example of a scientific paper on the subject: “The Psychological Science of Gender: Women and Men” by Diane N. Ruble and John E. Gruzen, published in the Annual Review of Psychology in 2002.

This paper provides a comprehensive review of research on gender development and the psychological differences between women and men. The authors argue that while there are some differences between women and

Nick Flor: I don’t see this paper in Google scholar. Did you make it up?

ChatGPT: I apologize for the mistake. “The Psychological Science of Gender: Women and Men” by Diane N. Ruble and John E. Gruzen does not seem to exist.

Last month, attorney Steven Schwartz faced sanctions from a New York district court after he submitted a legal brief citing several precedent cases, supplied by ChatGPT, that never existed.