OpenAI Taps Lab Where the Atomic Bomb Was Built for Bioscience Study

OpenAI is partnering with Los Alamos National Laboratory, the historic New Mexico research facility, to study the use of artificial intelligence in bioscience research, the company announced on Wednesday.

According to OpenAI, the White House’s Executive Order on AI safety tasks the U.S. Department of Energy’s labs, which include Los Alamos, with evaluating the capabilities of “frontier” AI models across disciplines like science and biology. This initiative will explore how GPT-4o can help with physical lab tasks using vision and voice models.

“People don’t realize the long history of bioscience research here at Los Alamos,” Nicholas Generous, deputy group leader for information systems and modeling at the facility, told Decrypt. “It all originally started back in the aftermath of the Manhattan Project, trying to understand the health effects of radiation. 

“From there, over the years, [the research] evolved and they learned that a lot of that was centered in DNA, and then from there, Los Alamos was involved with the Human Genome Project,” he continued.

The proliferation of AI tools since the launch of ChatGPT in 2022 has been compared to a nuclear arms race, with companies including Apple, Microsoft, Google, and Amazon pouring billions into generative AI technology. Perhaps it’s fitting, then, that the leading AI developer would partner with the laboratory established in 1943 under American theoretical physicist J. Robert Oppenheimer, known as “the father of the atomic bomb.”

A fictionalized version of Oppenheimer’s life was portrayed in director Christopher Nolan’s award-winning 2023 film “Oppenheimer.” Last summer, during the press tour for the movie, Nolan expressed his concerns about artificial intelligence as the technology surged into the mainstream.

“I don’t think of AI as a threat or even sort of weapon… [it’s] just like any other technology tool or approach, and the threat is really when it gets misused,” Generous said. “So I would say that trying to understand both the benefits and the context of that misuse is what’s important.”

While the laboratory’s origins lie with atomic energy, Generous said, Los Alamos has been working on AI model evaluation since March 2023, spurred by rapid advancements in the field.

“Maybe I’m a techno-optimist, but I generally believe that technological advancement usually creates more good than bad,” he said.

The evaluation with Los Alamos will assess how researchers, both experts and novices, perform and troubleshoot standard tasks after artificial intelligence has been introduced, including genetic transformation, cell culture, and cell separation.

“This AI test effort, this larger one, is being spearheaded by Los Alamos’s AI Risk and Technical Assessment Group,” Generous added. “We see this as being kind of a broader initiative, thinking about AI, both benefits and then understanding the risks, as part of that.”

As Generous explained, the hope is that the test with OpenAI will lead to a broader experiment with other large language models.

“Our vision, at least for Los Alamos, and I think more broadly with the Department of Energy, is to establish an AI testbed where anybody can bring a model—whether it’s open source or a public-private partnership, like the one we have with OpenAI—and we can be able to evaluate it to understand both the benefits and what are the potential risks and use that information to help people use AI more responsibly, or build better safeguards,” Generous said.

OpenAI did not respond to requests for comment from Decrypt.

Edited by Ryan Ozawa.
