Vancouver (Dec. 13, 2024) — The co-founder and former chief scientist of OpenAI suggested during a keynote address at NeurIPS 2024 that advanced AI may one day co-exist with humans and hold some level of rights. However, he also emphasized that AI systems with increasingly sophisticated reasoning will be unpredictable.

Speaking to researchers and industry leaders at the Vancouver event, Ilya Sutskever described a future where AI systems could become autonomous and self-aware. He suggested that these advanced systems might one day seek to coexist with humans and even claim rights of their own.

“There is a chance…that indeed we will have, in some sense, a good end result if you have AIs, and all they want is to coexist with us and also just to have rights. Maybe that will be fine,” Sutskever said during a Q&A session.

While accepting the Test of Time award for his 2014 paper, co-authored with Google’s Oriol Vinyals and Quoc Le, Sutskever said a major change in the way AI is trained is inevitable. The lack of new data, he noted, is the main reason.

Sutskever’s keynote, “Sequence to Sequence Learning with Neural Networks: What a Decade,” reviewed AI’s progress since 2014 and speculated on its future. He discussed the inevitability of superintelligence, describing it as qualitatively different from current AI systems. Future AI, he said, will combine reasoning, self-awareness, and greater unpredictability, posing challenges humanity has yet to fully comprehend.

“These systems will be agentic in real ways,” he said. “When all those things come together with self-awareness—because why not, self-awareness is useful—we will have systems of radically different qualities and properties than [what] exist today.”

As AI systems become more autonomous, the ethical dilemmas surrounding their development and treatment will intensify. Sutskever suggested that humanity must create the right incentive structures to ensure coexistence with AI while respecting those systems’ capabilities and freedoms.

“How do you create the right incentive mechanisms for humanity to actually create it in a way that gives it the freedoms that we have as Homo sapiens?” he asked. “I hesitate to comment, but I encourage the speculation.”

The Limits of Pre-training

During his address, Sutskever, one of the most prominent figures in the field of AI, also highlighted the limitations of current AI paradigms, particularly pre-training. He argued that reliance on existing internet data—“we have but one internet”—will eventually reach its limits, necessitating new strategies like synthetic data and agentic systems.

“Data is the fossil fuel of AI,” he said. “We’ve achieved peak data, and there’ll be no more. We have to deal with the data that we have now.”

Sutskever’s reflections come as the field of AI continues to accelerate at an unprecedented pace. His keynote underscored the duality of AI’s promise and peril, urging researchers to balance innovation with responsibility. While he did not offer definitive answers, his willingness to address AI rights signals a growing recognition of the profound changes that advanced AI will bring to society.

“Things are so incredibly unpredictable,” Sutskever concluded. “But the possibilities are endless.”

Sutskever left OpenAI months after the brief ouster of Sam Altman as CEO, a move that created acrimony and divisions within the company.