News
OpenAI says it could adjust its AI safety policy if a rival ships a high-risk model without safeguards. Critics say it's ...
OpenAI has yet to release a safety report for GPT-4.1, suggesting the company's model releases are running ahead of its ...
A group of ex-OpenAI employees has filed a proposed amicus brief opposing the company's transition to a for-profit entity.
If a competitor releases an AI model that has a high level of risk, OpenAI says it will consider adjusting its safety ...
ChatGPT maker OpenAI has updated its Preparedness Framework to adjust requirements in response to high-risk AI models from ...
OpenAI’s updated AI safety framework drops key pre-release testing requirements—including for persuasive or manipulative ...
The brief, filed by Harvard law professor and Creative Commons founder Lawrence Lessig, names 12 former OpenAI employees: Steven Adler, Rosemary Campbell, Neil Chowdhury, Jacob Hilton, Daniel ...
"OpenAI is quietly reducing its safety commitments," Steven Adler, a former OpenAI safety researcher, posted on X on Wednesday in response to the updated framework.
Last month, OpenAI launched its deep research model weeks before publishing that model's system card. Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety ...
Musk’s lawsuit against OpenAI accuses the company of forsaking its nonprofit objective, which was to ensure that AI ...