A groundbreaking study from the Wharton School at the University of Pennsylvania, published on February 24, 2026, reveals that immediate, unrestricted access to AI assistance significantly hinders long-term learning and skill acquisition, even when learners are fully aware that such reliance is detrimental to their development. The comprehensive three-month investigation, detailed in the paper "Self-Regulated AI Use Hinders Long-Term Learning," challenges conventional assumptions about AI’s role in education and underscores the critical need for thoughtful system design to foster genuine learning.
Professor Hamsa Bastani of Wharton likens the allure of on-demand AI to a readily available jar of cookies. "You tell yourself that you’re just going to eat one, but it’s a slippery slope," she explains. "Self-regulation is hard, even when you know something isn’t good for you." This human tendency toward immediate gratification, even at the expense of future benefit, forms the core paradox explored by Bastani and her co-authors, Stefanos Poulidis, a doctoral student at INSEAD, and Osbert Bastani, a computer and information science professor at Penn. Their findings suggest that the pervasive integration of AI in educational tools, while promising efficiency, may inadvertently be cultivating a generation of learners who struggle to develop deep, foundational skills.
The Expanding Role of AI in Education: Promise and Peril
The landscape of education has been rapidly transformed by artificial intelligence, with tools ranging from intelligent tutoring systems to generative AI assistants becoming increasingly common in classrooms and remote learning environments. Proponents argue that AI can personalize learning experiences, provide immediate feedback, and democratize access to educational resources, thereby bridging knowledge gaps and accelerating skill acquisition. The market for AI in education is projected to grow substantially, driven by innovations aiming to make learning more efficient and engaging. However, the Wharton study introduces a crucial counter-narrative, highlighting that the manner in which AI assistance is delivered is paramount to its effectiveness. Without careful design, the very tools meant to empower learners may instead disempower them by removing the essential cognitive friction necessary for true mastery.
Historically, educational theory has emphasized the importance of active learning and problem-solving. Concepts like "productive struggle" – the effortful process of grappling with challenging tasks – are central to cognitive development. The emergence of AI, capable of providing instant solutions or hints, threatens to bypass this crucial phase, offering a shortcut that, while appealing in the short term, undermines the neural pathways required for long-term retention and application. This research provides robust, empirical evidence for what many educators have intuitively feared: ease of access does not equate to depth of learning.
Methodology: A Three-Month Deep Dive into Chess Learning
To rigorously test their hypothesis, the research team designed an experiment involving over 200 students from various chess clubs. Chess was chosen as the domain for several strategic reasons. Firstly, it is a sequential decision-making environment, allowing for precise measurement of both immediate tactical choices and long-term strategic skill development. Secondly, unlike rapidly evolving generative AI models such as ChatGPT, the principles of chess remain stable, providing a consistent learning environment for a longitudinal study. This stability was crucial for observing long-term effects without confounding variables introduced by shifting AI capabilities.
The participants were randomly assigned to one of two conditions for the three-month training period:
- System-Regulated Group: These students received automatic, context-sensitive tips from the AI tutor at strategic moments during their games. The assistance was deployed by the system based on predefined learning algorithms, without direct student initiation.
- Self-Regulated Group: This group received the same automatic tips but, critically, could also request additional help at any time. This included "move reveal" tips, which directly showed the optimal next move in any given position.
The study carefully tracked not only the performance outcomes of both groups but also the patterns of AI usage within the self-regulated cohort.
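The structural difference between the two conditions can be summarized as two assistance policies. The Python sketch below is purely illustrative: the class and attribute names (SystemRegulatedTutor, SelfRegulatedTutor, GameState, is_strategic_moment) are hypothetical and are not drawn from the study's actual software; it only shows that the second policy contains the first, plus a learner-initiated "move reveal" path.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameState:
    move_number: int
    is_strategic_moment: bool  # flagged by the tutor's own heuristics (assumed)

class SystemRegulatedTutor:
    """Tips are pushed only at moments the system itself selects."""
    def assist(self, state: GameState) -> Optional[str]:
        if state.is_strategic_moment:
            return "context-sensitive tip"
        return None  # otherwise the student works unaided

class SelfRegulatedTutor:
    """Same automatic tips, plus on-demand help the student can trigger at will."""
    def __init__(self) -> None:
        self.automatic = SystemRegulatedTutor()

    def assist(self, state: GameState, student_requested: bool = False) -> Optional[str]:
        if student_requested:
            # A "move reveal" hands the student the best move directly,
            # bypassing the analysis they would otherwise have to do.
            return "move reveal: optimal next move"
        return self.automatic.assist(state)
```

The study's finding, in these terms, is that when the student_requested path exists, learners drift toward using it more and more often.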
Stark Findings: Performance Gaps and the Erosion of Self-Regulation
The results were unequivocal and concerning. At the conclusion of the three-month training, the system-regulated group achieved performance gains of 64%, demonstrating significant improvement in their chess skills. In stark contrast, the self-regulated group, despite having ostensibly more resources at their disposal, realized less than half of those gains, improving by only 30%. This substantial disparity persisted in follow-up testing conducted weeks after the training period concluded, indicating that the learning deficit was not merely temporary dependence but a lasting impairment in skill acquisition.
The research meticulously documented the mechanism behind this performance gap: the breakdown of self-regulation. Initially, students in the self-regulated group exercised restraint, using on-demand help sparingly. However, as the study progressed, a gradual but consistent increase in AI reliance was observed. By the end of the three months, students in this group were requesting "move reveal" tips every three to four moves. This pattern effectively transformed the AI from a supportive tutor into an outsourced decision-maker, with students increasingly relinquishing independent analysis.
"They knew it wasn’t good for them," Professor Bastani emphasized, recounting follow-up interviews. Students explicitly acknowledged that overusing AI assistance would harm their long-term learning. One student candidly stated, "Using the option won’t win me games against humans later on." Yet, in the immediate pressure of a difficult chess position, the urge for an instant solution proved too strong, leading them to "click anyway." This illustrates the profound challenge of self-control in the face of readily available cognitive offloading, mirroring human behavior in other areas where convenience often trumps long-term well-being.
The Crucial Role of Productive Struggle and the Zone of Proximal Development
The study definitively identifies the reduction of "productive struggle" as the primary mechanism through which on-demand AI assistance impedes learning. Productive struggle refers to the cognitively demanding process of grappling with challenging problems, analyzing alternatives, making mistakes, and learning from them. This is the intellectual heavy lifting that builds expertise and deep understanding.
When students could access immediate solutions, they increasingly bypassed this essential struggle. Instead of investing mental effort in analyzing complex positions or developing strategic thinking, they opted for the shortcut, allowing the AI to solve the problem for them. Each "move reveal" click represented a missed opportunity for the deep cognitive processing necessary for skill development.

Professor Bastani clarified, "Productive struggle is about working at the edge of your ability – tasks that are challenging but achievable. AI assistance that makes tasks too easy pushes you out of that learning zone. You’re no longer practicing at the level where skill development happens." This aligns directly with Vygotsky’s concept of the Zone of Proximal Development (ZPD), which describes the optimal learning space where learners are challenged just beyond their current capabilities but with appropriate support. The study demonstrates that unrestricted AI assistance, rather than providing "appropriate support," often pushes learners outside their ZPD, making tasks too simple to stimulate growth.
Beyond objective performance, the study also captured the subjective experience of learners. Students in the self-regulated condition reported lower engagement and less enjoyment, with one student expressing, "I want to think for myself, not use the button." This indicates that on-demand AI not only undermines cognitive development but can also erode intrinsic motivation, transforming a potentially enriching learning experience into a passive, unfulfilling one.
Universal Vulnerability: Skill and Motivation Are Not Sufficient Safeguards
One of the study’s most striking revelations challenges a common assumption in educational psychology: that high-performing students, armed with effective learning strategies, will inherently self-regulate successfully when given access to advanced tools. The Wharton research found that over-reliance on AI was not confined to struggling students; even high-skilled chess players, who ostensibly possessed stronger cognitive control and strategic foresight, succumbed to the temptation of unrestricted help.
"There’s a common belief that if you just teach students effective learning strategies, the high performers will self-regulate successfully," Hamsa Bastani noted. "But skill alone doesn’t ensure good self-regulation. Even students who were performing well fell into the pattern of over-requesting help." This finding has profound implications for how AI is integrated into advanced educational settings and professional development, suggesting that the problem of over-reliance is a fundamental human cognitive bias, not merely a deficiency in certain learners.
While skill did not provide immunity, the study did identify intrinsic motivation as a mitigating factor. Students who were genuinely passionate about learning chess, driven by internal enjoyment rather than external rewards, exhibited somewhat better self-regulation. However, even these highly motivated individuals still showed greater reliance on AI assistance compared to their peers in the system-regulated group. This underscores that while individual characteristics play a role, they are insufficient to completely counteract the pervasive influence of readily available AI shortcuts. The implication is clear: systemic design solutions are necessary for all learners, regardless of their innate ability or motivation.
Designing for Deeper Learning: A Call for System-Level Constraints
The research moves beyond merely identifying a problem; it offers concrete, evidence-based principles for designing AI-assisted learning platforms that genuinely support long-term skill development. Rather than simply advocating for vague "guardrails," the study provides specific approaches derived from understanding the mechanisms of harm.
- Rate-Limiting and Delays: Implementing constraints such as requiring students to wait a specified period (e.g., 30 seconds) before receiving a hint, or limiting the number of help requests per session, can effectively preserve productive struggle. These friction points encourage students to attempt problems independently first, thereby engaging in the necessary cognitive effort (a minimal sketch of this idea follows the list).
- Adaptive and Personalized Assistance: AI systems should be sophisticated enough to adapt not only to a student’s current skill level but also to their motivational state. A highly intrinsically motivated learner might be afforded slightly more autonomy, while others might benefit from tighter, more structured constraints. The crucial insight is that even skilled and motivated learners require some degree of system regulation.
- ZPD-Calibrated Support: AI assistance should be delivered precisely within each student’s Zone of Proximal Development. Providing help for tasks that are too easy (below the ZPD) offers no learning benefit, while assistance for impossibly difficult tasks (above the ZPD) can lead to frustration and disengagement. The challenge for AI developers lies in dynamically calibrating this zone for each individual by continuously monitoring their performance, engagement, and learning progress.
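As a concrete illustration of the first principle, a rate-limited hint gate might look like the following minimal sketch. The 30-second delay comes from the example above; the per-session cap of five hints and all names (HintGate, start_problem, request_hint) are invented for illustration and are not the study's implementation.

```python
import time
from typing import Optional

class HintGate:
    """Gate on-demand hints behind a delay and a per-session cap,
    preserving an initial window of independent effort (productive struggle)."""

    def __init__(self, delay_seconds: float = 30.0, max_hints_per_session: int = 5):
        self.delay_seconds = delay_seconds        # wait before a hint unlocks
        self.max_hints = max_hints_per_session    # illustrative cap, not from the study
        self.hints_used = 0
        self.problem_started_at: Optional[float] = None

    def start_problem(self) -> None:
        """Call when the student begins a new position or problem."""
        self.problem_started_at = time.monotonic()

    def hint_allowed(self) -> bool:
        """A hint is granted only after the delay and within the session cap."""
        if self.problem_started_at is None or self.hints_used >= self.max_hints:
            return False
        return time.monotonic() - self.problem_started_at >= self.delay_seconds

    def request_hint(self) -> str:
        if not self.hint_allowed():
            return "Keep working on the position; a hint unlocks after the delay."
        self.hints_used += 1
        return "hint: placeholder hint text"
```

In a fuller tutor, hint_allowed could also consult an estimate of the student's current ability so that help is offered only for positions near the edge of it, in the spirit of the adaptive and ZPD-calibrated principles above.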
Professor Bastani articulates a vital shift in perspective for the education technology community: "We need to move beyond just making powerful AI tutors. We need to build them in ways that preserve the struggle necessary for learning. That means system-level constraints." This necessitates a fundamental re-evaluation of current design philosophies, moving away from maximizing convenience and immediate problem-solving toward optimizing for deep, enduring skill acquisition.
Broader Implications for Education and the Future Workforce
The longitudinal nature of this study, tracking a significant cohort of students over three months and measuring persistent effects weeks later, lends unusual robustness to its findings in a field often characterized by shorter-term experiments. By dissecting not just whether AI access matters, but how and for whom, the research provides actionable insights for a wide array of stakeholders, including educators, EdTech developers, policymakers, and parents.
The implications extend far beyond chess clubs. In an era where critical thinking, problem-solving, and adaptability are paramount for the future workforce, the erosion of productive struggle through unchecked AI reliance could have profound societal consequences. Educational institutions must consider how AI tools are integrated into curricula, emphasizing pedagogical strategies that prioritize active learning and cognitive effort. EdTech companies bear a significant responsibility to design tools that genuinely support human thriving rather than inadvertently fostering dependency.
As AI continues to advance and permeate every facet of life, including learning, the findings from Wharton serve as a crucial early warning. The immediate gratification offered by on-demand AI assistance, while superficially appealing, carries a hidden cost in the form of diminished long-term learning and skill development. The research provides a clear path forward: intelligent design, rooted in a deep understanding of human psychology and learning principles, is essential to harness the power of AI for educational good. "What worries me is that these models are already undermining human learning," Professor Bastani concludes. "We have a responsibility to build these AI tools in a way that supports, rather than undermines, humans thriving. This research shows us a path to achieving that goal."
