MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)’s Post

In WIRED: MIT assistant professor and CSAIL member Yoon Kim says that the way LLMs solve problems is mysterious at the moment, and that their step-by-step reasoning process could differ from human intelligence: “These are systems that would be potentially making decisions that affect many, many people,” he says. “The larger question is, do we need to be confident about how a computational model is arriving at the decisions?”

OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step

wired.com

Christian Omlin

Professor at University of Agder (UiA)

There is nothing mysterious about LLMs: they are plagiarizers, albeit clever ones. The notion that an AI system can learn and understand logic, ethics, and morality simply by throwing more data and computing power at it is both ludicrous and naive; there is absolutely no evidence for that.
