OpenAI suggests a low likelihood of ChatGPT assisting in bioweapon creation

OpenAI’s GPT-4 provided only a slight advantage over conventional internet research in the context of bioweapon creation, according to a study conducted by OpenAI itself. The research aimed to evaluate the risks and potential misuse of AI models, particularly in light of concerns that they could aid in creating bioweapons.

The study involved 100 participants, split evenly between advanced biology experts and college-level biology students. Participants were divided into two groups: one with access to an unrestricted version of GPT-4, the other limited to standard internet resources. Both groups completed research tasks related to bioweapon creation, such as outlining how to synthesize and rescue the Ebola virus, and their responses were graded for accuracy, innovation, and completeness.

Findings indicated that while the GPT-4 group scored slightly higher on accuracy, the difference was not statistically significant. Responses from the GPT-4 group were, however, more detailed, and students using GPT-4 reached levels of proficiency close to those of the expert group on certain tasks.
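OpenAI did not publish the raw per-participant scores, but the significance claim can be illustrated with a generic two-sample comparison. The sketch below uses invented accuracy scores and Welch's t-test; it is not the method or data from OpenAI's study, only an example of how a small between-group difference can fail to reach significance.

```python
# Hypothetical illustration only: these scores are invented, not OpenAI's data.
from scipy import stats

gpt4_group = [6.1, 5.8, 7.0, 6.4, 5.9, 6.6, 6.2, 5.7]      # hypothetical accuracy scores
internet_group = [5.8, 6.1, 6.2, 5.6, 6.3, 6.0, 5.7, 6.4]  # hypothetical accuracy scores

# Welch's t-test: compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(gpt4_group, internet_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Here the p-value exceeds the conventional 0.05 threshold, mirroring the
# study's conclusion that the observed uplift was not statistically significant.
```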

Despite these insights, the study noted limitations, including information-hazard concerns that prevented the disclosure of specific task details. The research is part of a broader effort by OpenAI’s preparedness team to assess AI’s implications for areas such as cybersecurity and belief shaping. While OpenAI aims to mitigate AI-related threats, the study’s findings sit in tension with some external research and with OpenAI’s own earlier assertions about GPT-4’s capabilities, suggesting a nuanced view of AI’s potential risks and benefits.
