The Future of Life Institute (FLI) has released its 2024 AI Safety Index, which reveals widespread inadequacies in how leading companies such as Anthropic, OpenAI, Google DeepMind, and Meta manage the risks of advanced artificial intelligence. The findings suggest that critical safety practices are being neglected in the pursuit of profit.
“Companies are falling short on even the most basic safety precautions,” the report states. It warns that no firm has developed a “robust strategy for ensuring that advanced AI systems remain controllable and aligned with human values.”
The independent review panel included top AI experts like Yoshua Bengio, a Turing Award laureate, and Stuart Russell, a pioneer in AI ethics. Their findings were based on public data, company surveys, and assessments of 42 indicators across six domains, including Risk Assessment, Governance, and Existential Safety Strategies.
Failing Grades Across the Board

Most companies performed poorly in the evaluation. Anthropic led with a “C” grade, while Meta earned an “F,” the lowest score.