In WIRED: MIT assistant professor and CSAIL member Yoon Kim says that the way LLMs solve problems is mysterious at the moment, and that their step-by-step reasoning process could differ from human intelligence: “These are systems that would be potentially making decisions that affect many, many people,” he says. “The larger question is, do we need to be confident about how a computational model is arriving at the decisions?”
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)’s Post
-
This Machine Learning Paper from Stanford and the University of Toronto Proposes Observational Scaling Laws: Highlighting the Surprising Predictability of Complex Scaling Phenomena https://buff.ly/4aCdqcW
-
Staff AI / Machine Learning Software Engineer | machine learning / AI, model pipelines and agents, cross-functional collaboration | I help architect, build, and deploy data and model pipelines.
Great paper discussing questionable practices in ML research. However, I think it's important to note that this is true of #ML in industry and products as well, which is why good experiment tracking and monitoring practices are important. Thanks to @Daniel A. for his post (https://lnkd.in/eQXDsCu8). Paper: https://lnkd.in/eNpVEtzk
arXiv: 2407.12220 (arxiv.org)
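The post's point about experiment tracking can be made concrete. A minimal sketch, assuming nothing beyond the Python standard library (the `log_run` helper and the `runs/` directory are hypothetical names, not from any specific tracking tool):

```python
import json
import time
from pathlib import Path

def log_run(config: dict, metrics: dict, log_dir: str = "runs") -> Path:
    """Append one experiment record (config + metrics + timestamp) to a JSONL file,
    so every run leaves an auditable trace of what was tried and how it scored."""
    Path(log_dir).mkdir(exist_ok=True)
    record = {"timestamp": time.time(), "config": config, "metrics": metrics}
    path = Path(log_dir) / "experiments.jsonl"
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return path

# Usage: call once per training run.
log_run({"lr": 3e-4, "seed": 42}, {"val_accuracy": 0.91})
```

Dedicated tools (MLflow, Weights & Biases, etc.) add UI and search on top, but the core discipline is the same: record the configuration alongside the result, every time.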
-
I’m happy to share that I’ve obtained a new certification: Artificial Intelligence and Machine Learning: Business Applications from the University of Texas at Austin! Focusing on building and deploying machine learning models (EDA, regression models, classification models, decision trees, NLP, computer vision, and many other topics).
-
Our 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝗻 𝗮 𝗠𝗶𝗻𝘂𝘁𝗲 series spotlights some of the most active and accomplished researchers across various disciplines at the UofM. Dr. Vasile Rus’ research focuses on transforming the learning ecosystem through the use of artificial intelligence, machine learning and data science to better understand how students learn. The overall goal is to improve learning environments that use technologies such as intelligent learning systems. 📰 | lnk.bio/uofmemphis
-
Just finished the "Introduction to Artificial Intelligence" course.
-
Just finished the course “Introduction to Artificial Intelligence”! Check it out: https://lnkd.in/dVFkPaSk This course was much more intense than I thought it would be, but I learned a lot!
-
Contingent and SOW Management Leader | Procurement Strategy | Program Strategy | Workforce Optimization #futureofwork #humancapital #MSP #VMS #TotalTalentManagement
As I have a bit of time on my hands, it's the perfect time to continue to learn! I just finished the course “Introduction to Artificial Intelligence”! Check it out: https://lnkd.in/ekYh_S6m
-
🔬✨ Incorporating symmetry in machine learning processes offers a revolutionary approach to reducing complexity and improving accuracy. In a groundbreaking study, an MIT PhD student and her advisor explore the concept of using symmetry to enhance machine learning. Learn more about their findings and the transformative potential of symmetry in machine learning at Wolf Consultings: https://lnkd.in/ghKmnAiz #MachineLearning #SymmetryRevolution Read the full article here: https://lnkd.in/gzjRfBWw Original Source: MIT News, “How symmetry can come to the aid of machine learning” (https://lnkd.in/gzPCaZ7C) #GeometricDeepLearning #InnovativeResearch 📚🔍
How symmetry can come to the aid of machine learning
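One classic way symmetry reduces a model's burden, illustrative of the general idea rather than the MIT study's specific method: if the task is invariant under a finite symmetry group, any raw predictor can be made exactly invariant by averaging its outputs over the group. A minimal sketch (the `symmetrize` helper and the toy "model" are hypothetical, for illustration only):

```python
import numpy as np

def symmetrize(f, group):
    """Make f exactly invariant under a finite symmetry group by averaging
    its outputs over all group transformations of the input."""
    def f_inv(x):
        return np.mean([f(g(x)) for g in group], axis=0)
    return f_inv

# Symmetry group: the four 90-degree rotations of a square image.
group = [lambda x, k=k: np.rot90(x, k) for k in range(4)]

# A raw "model": sum of the top-left quadrant (NOT rotation-invariant on its own).
raw = lambda x: x[: x.shape[0] // 2, : x.shape[1] // 2].sum()

model = symmetrize(raw, group)
img = np.arange(16, dtype=float).reshape(4, 4)
# model now returns the same value for img and for any rotation of img,
# because rotating the input only permutes the terms being averaged.
```

The payoff is that the model no longer has to spend capacity or data learning the symmetry; it gets it for free by construction.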
-
Food for thought from Mel Andrews: 'Philosophers of science have argued that the widespread adoption of the methods of machine learning (ML) will entail radical changes to the variety of epistemic outputs science is capable of producing. Call this the disruption claim. This, in turn, rests on a distinctness claim, which holds ML to exist on novel epistemic footing relative to classical modelling approaches in virtue of its atheoreticity. We describe the operation of ML systems in scientific practice and reveal it to be a necessarily theory-laden exercise. This undercuts claims of epistemic distinctness and, therefore, at least one path to claims of disruption.'
The Devil in the Data: Machine Learning & the Theory-Free Ideal
philsci-archive.pitt.edu
-
🚀 Research Paper Highlights: Let's explore a self-correct reasoning framework that operates independently of human feedback, external tools, or handcrafted prompts! 🌟 In 'Learning From Correctness Without Prompting Makes LLM Efficient Reasoner' by Yuxuan Yao et al.
🔍 Self-Correct Reasoning Framework: The LECO framework enhances LLMs' reasoning abilities by building on correct reasoning steps and measuring confidence intrinsically, through generation logits.
💡 Experimental Effectiveness: LECO significantly improves performance on reasoning tasks and reduces token consumption, while addressing issues like hallucination, unfaithful reasoning, and toxic content.
Key Contributions:
1️⃣ A novel multi-step reasoning paradigm, Learning from Correctness (LECO), that builds on correct steps to reach final answers.
2️⃣ Challenges the traditional view that high-quality feedback must come from external sources, presenting an intrinsic method for measuring confidence in each reasoning step.
3️⃣ Enhances both off-the-shelf and open-source models across various multi-step reasoning tasks while reducing token consumption. Impressively, LECO eliminates the need for prompt engineering.
This approach shows promise for optimizing complex reasoning structures and advancing LLM capabilities. 🌐✨
🔗 Check out the paper for more insights! 📄✨ https://lnkd.in/dUddG3C9 #LECO #LLMs #research #innovation #generativeai
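The "measure confidence through generation logits" idea can be sketched in a few lines. This is an illustrative stand-in under my own assumptions (mean token log-probability as the step score, a fixed threshold), not the paper's exact formula; `step_confidence` and `keep_correct_prefix` are hypothetical names:

```python
def step_confidence(token_logprobs: list[float]) -> float:
    """Intrinsic confidence for one reasoning step: the mean log-probability
    of its generated tokens (closer to 0 = more confident)."""
    return sum(token_logprobs) / len(token_logprobs)

def keep_correct_prefix(steps, threshold: float = -0.5) -> list[str]:
    """Keep the longest prefix of reasoning steps whose confidence stays above
    a threshold; regeneration would restart from the first weak step, reusing
    the trusted prefix instead of re-decoding the whole chain."""
    kept = []
    for text, logprobs in steps:
        if step_confidence(logprobs) < threshold:
            break
        kept.append(text)
    return kept

# Each step pairs its text with per-token log-probs from the model.
steps = [
    ("Step 1: 12 * 3 = 36", [-0.05, -0.10, -0.02]),
    ("Step 2: 36 + 7 = 43", [-0.08, -0.12, -0.04]),
    ("Step 3: so the answer is 50", [-1.9, -2.3, -1.1]),  # low confidence
]
# keep_correct_prefix(steps) keeps steps 1 and 2 and drops the weak step 3.
```

Reusing the high-confidence prefix is what saves tokens: only the suspect tail is regenerated.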
Professor at University of Agder (UiA)
1mo · There is nothing mysterious about LLMs: they are plagiarizers, albeit clever ones. The notion that an AI system is able to learn and understand logic, ethics, and morality by simply throwing more data and computing power at it is both ludicrous and naive; there is absolutely no evidence for that.