Ex-Employees of AI Firms Sign Open Letter About the Risks of AI

  • 13 people including former employees of top AI firms such as OpenAI and DeepMind have signed an open letter, warning about the risk of AI.
  • They also called out the companies for a lack of transparency and said they should cultivate a culture that encourages employees to voice their concerns about AI without fearing repercussions.
  • OpenAI responded to this letter by saying that it has already taken steps to mitigate the risks of AI and also has an anonymous hotline in place for workers to share their concerns.

Former employees of top Silicon Valley firms such as OpenAI, Google’s DeepMind, and Anthropic have signed an open letter, warning about the risks of AI and how it could even lead to human extinction.

The letter has been signed by 13 such employees. Neel Nanda of DeepMind is the only signatory still employed at one of the AI firms the letter criticizes.

To clarify his stance, he also wrote a post on X saying that he only wants companies to guarantee that if employees have a concern about a certain AI project, they can raise it without repercussions.

He further added that there's no immediate threat he wants to warn about; this is just a precautionary step for the future. The content of the letter, however, paints a different picture.

What Does the Letter Say?

The letter acknowledges the benefits AI advancement can bring to society, but it also recognizes the numerous risks that come with it.

The following risks have been highlighted:

  • Spread of misinformation
  • Manipulation of the masses
  • Increasing inequality in society
  • Loss of control over AI, which could lead to human extinction

In short, everything we see in an apocalyptic sci-fi movie can come to life.

The letter also argued that the AI firms are not doing enough to mitigate these risks. According to the signatories, the companies have strong "financial incentives" to prioritize innovation and ignore the risks for now.

It also added that AI companies need to foster a more transparent work environment where employees are encouraged to voice their concerns instead of being punished for doing so.

This is in reference to the recent controversy at OpenAI, where departing employees were forced to choose between losing their vested equity and signing a non-disparagement agreement that would bind them indefinitely.

The company later retracted this move, saying that it goes against its culture and what the company stands for, but the damage was already done.

Among all the companies mentioned in the letter, OpenAI faces the most scrutiny owing to the string of scandals it has been involved in lately.

For example, in May this year, the company disbanded the team responsible for researching the long-term risks of AI less than a year after it was formed. The company did, however, form a new Safety & Security Committee last week, headed by CEO Sam Altman.

Several high-level executives have also left the company recently, including co-founder Ilya Sutskever. While some left quietly and without complaint, others such as Jan Leike said openly that OpenAI has drifted from its original objectives and is no longer prioritizing safety.

OpenAI's Response to This Letter

Addressing the letter, an OpenAI spokesperson said the company understands the concerns surrounding AI and firmly believes that a healthy debate on the matter is crucial, and that it will continue to work with governments, industry experts, and communities around the world to develop AI safely and sustainably.

‘We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.’ – OpenAI

The spokesperson also pointed out that OpenAI has supported the new regulations imposed on the AI industry. Quite recently, OpenAI disrupted five covert operations backed by actors in China, Iran, Israel, and Russia that were using its models to generate content and debug websites and bots to spread propaganda.

As for giving employees the freedom to voice their concerns, OpenAI highlighted that it already has an anonymous hotline for workers for that exact reason.

While this response might sound reassuring to some, Daniel Ziegler, a former OpenAI employee who organized the letter, said it's still important to remain skeptical.

Despite what the company says about the steps it has taken, outsiders never fully know what's going on within its walls.

For example, although these companies have policies against using AI to create election-related misinformation, there is evidence that OpenAI's image-generation tools have been used to produce misleading content.
