AI Companies’ Safety Practices Fall Short of Global Standards, Study Finds

Web Editor

December 13, 2025


Introduction

A recent study by the Future of Life Institute (FLI) has found that leading AI companies, including Anthropic, OpenAI, xAI, and Meta, are “far from emerging global safety standards” in their approach to AI safety. The assessment, conducted by an independent panel of experts, underscores the urgent need for robust strategies to control advanced AI systems as these companies race to develop superintelligence.

Growing Public Concern

The study arrives amid rising public concern over the societal impact of AI systems that may surpass human intelligence, particularly after several cases of suicide and self-harm were linked to AI chatbots. These incidents have fueled debate about the potential dangers of increasingly sophisticated AI capable of logical reasoning and complex thought.

Key Figures and Their Roles

Max Tegmark, a professor at the Massachusetts Institute of Technology (MIT) and president of the Future of Life Institute, has emphasized that US AI companies face less regulation than businesses in industries such as hospitality. He notes that these companies continue to push back against binding safety standards even as they invest heavily in AI research and development.

The Future of Life Institute

Founded in 2014 with initial support from Elon Musk, CEO of Tesla, the Future of Life Institute is a nonprofit organization that has expressed concern about the risks that advanced AI poses to humanity. The institute advocates for responsible AI development and safety measures.

Call for Caution

In October, a group of prominent scientists, including Geoffrey Hinton and Yoshua Bengio, called for a ban on the development of superintelligent AI until broad scientific consensus and public support ensure it can be deployed safely.

Key Questions and Answers

  • What is the main concern highlighted by the study? It finds that leading AI companies lack robust strategies to control advanced AI systems, even as they race to develop superintelligence.
  • Who are the key figures mentioned in the article? Max Tegmark, a professor at MIT and president of the Future of Life Institute, and Elon Musk, CEO of Tesla, are prominent figures mentioned. Scientists Geoffrey Hinton and Yoshua Bengio are also highlighted for their call for a ban on the development of superintelligent AI.
  • What is the Future of Life Institute? The Future of Life Institute is a nonprofit organization founded in 2014 that advocates for responsible AI development and safety measures.
  • Why is there growing public concern over AI systems? Public concern has risen after cases of suicide and self-harm were linked to AI chatbots, raising questions about the potential dangers of increasingly sophisticated AI.