Researchers at the Oxford Internet Institute have warned that Large Language Models (LLMs) used in chatbots can fabricate information. These models are capable of generating false content and presenting it as accurate, posing a direct threat to science and scientific truth.
A recent paper published in Nature Human Behaviour points out that LLMs are designed to produce helpful responses, with no guarantee that those responses are accurate or aligned with reality. LLMs are already being treated as knowledge sources, generating information in response to queries or prompts, yet the data they are trained on is not necessarily factually correct.
One reason is that LLMs frequently draw on online sources, which can contain false statements, opinions, and inaccurate information. Because these systems are designed as helpful, human-sounding agents, users tend to trust them as human-like information sources, and may believe their responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.
The researchers emphasize the significance of information accuracy in science and education and urge the scientific community to use LLMs as “zero-shot translators”: instead of relying on the model itself as a source of knowledge, users should provide it with appropriate data and ask it to transform that data into a conclusion or code, as sketched below. This approach makes it far easier to verify that the output is factually correct and consistent with the provided input.
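To make the distinction concrete, here is a minimal sketch of the “zero-shot translator” pattern, assuming the OpenAI Python client; the model name and the sample data are illustrative, not drawn from the paper. The key point is that the model receives the trusted data in the prompt and is asked only to transform it, so its output can be checked against a known input.

```python
# A minimal sketch of the "zero-shot translator" pattern: the model is given
# trusted source data and asked only to transform it, not to recall facts.
# Assumes the OpenAI Python client (pip install openai); the model name and
# the sample data are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Trusted input supplied by the user, e.g. results from their own study.
trial_data = """
group, n, mean_response, std_dev
control, 52, 3.1, 0.9
treatment, 49, 4.4, 1.1
"""

prompt = (
    "Using ONLY the data below, write a one-sentence summary of the "
    "difference between the two groups. Do not add outside facts.\n\n"
    f"{trial_data}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Because the input is known, the output can be checked line by line
# against trial_data -- the verification step the researchers recommend.
```

Contrast this with asking the model a bare factual question ("What was the treatment effect in study X?"), where there is no supplied input to verify the answer against.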
While LLMs will undoubtedly assist with scientific workflows, it is crucial for the scientific community to use them responsibly and maintain clear expectations of how they can contribute.