OpenAI is widely regarded as a global leader in the race to create strong artificial intelligence, systems as capable as humans. At the same time, company employees regularly appear in the press and on podcasts, voicing serious concerns about the safety of the systems the company develops.
Recently, OpenAI released a product whose safety testing was reportedly rushed, even as the company celebrated the launch on a grand scale. “They planned the launch party before they knew if the product was safe. We basically failed the process,” an anonymous source told the Washington Post. This is not the first report of its kind. Former and current OpenAI employees previously signed an open letter calling for improvements to the company’s safety and transparency practices, shortly after the departure of Jan Leike and Ilya Sutskever, who had led the company’s work on these issues.
OpenAI’s Public Statements and Internal Actions
On paper, however, everything looks completely different. One provision of OpenAI’s charter states that if another player achieves strong AI first, the company will assist in ensuring its safety rather than compete with that rival. The closed nature of OpenAI’s AI models is likewise attributed to safety concerns. “We are proud of our track record of delivering the most effective and secure AI systems and believe in our scientific approach to solving threats. A robust debate is critical given the importance of this technology, and we will continue to engage with governments, civil society, and other communities around the world on behalf of our mission,” OpenAI spokesperson Taya Christianson told The Verge.
The company “didn’t cut any corners” on safety when launching its cutting-edge GPT-4o model, Lindsey Held, another OpenAI spokesperson, assured the Washington Post. However, an anonymous source said the testing period for the product was compressed to just one week. Earlier, the company had announced a joint project with Los Alamos National Laboratory to identify the potential risks of using AI in scientific work. It also emerged that the company maintains an internal system for assessing progress toward advanced AI, adds NIX Solutions.
Public Concerns and Future Updates
The public and regulators are concerned that the development of AI technologies is controlled by a small handful of companies over which the average person has no leverage. If the claims made by numerous anonymous sources about OpenAI’s safety practices are true, then such a company’s control over life-changing technology is indeed cause for concern. We’ll keep you updated on any further developments in this ongoing debate.
Overall, while OpenAI positions itself as a leader in safe AI development, internal voices and public concerns suggest a need for greater transparency and more robust safety practices.