Building AI That Makes Us Better Thinkers
There was a time when I could land in an unfamiliar city, unfold a paper map, and navigate confidently to a hotel or landmark. That cognitive compass is now outsourced—gladly, if I’m honest—to the sliding blue dot of Google Maps. Navigation is hardly unique; most of us have surrendered hard-earned skill sets to technology with little regret.
Large language models promise an even bigger bargain: they don’t just remember facts or chart routes; they generate, analyze, and explain. This presents an enormous opportunity: these tools can make us more productive and help us improve everything we create. But intellectual tasks, formerly the exclusive province of humans, are being taken over by machines. That presents an equally large challenge: to design AI systems that strengthen our cognitive muscles even as they extend, and potentially replace, them.
Every tool from the abacus to the spreadsheet has liberated us from drudgery while nudging us toward a sort of mental atrophy in the offloaded domain. LLMs, though, threaten to offload reasoning itself. We have always built tools that raised the ceiling of human ability: microscopes for biology, CAD for architecture, IDEs for programming. But as the cognitive reach of our tools expands, the design of the interface matters more than ever. Imagine a tool that partners with us in creation, relentless in demanding preparation, thought, and clarity, yet generous with guidance. There will certainly be different designs and different products for each need and context. But in every arena, the tool’s structure, not just its raw horsepower, determines whether cognition is sharpened or dulled. To make this concrete, I’ll examine one arena already struggling with how to manage and incorporate AI: education.
Surveys already show college freshmen reading fewer long-form works and leaning on ChatGPT for assignments they once completed unaided [Horowitch 2024; Prothero 2024]. One college student recently told me she “can’t remember how I did school before ChatGPT.” These tools present an enormous challenge for teachers: students can use AI to write essays, “humanize” the output, and run it through plagiarism checkers, all without doing any thinking on the topic at hand.
Now let’s imagine a sort of LLM-TA that works alongside a teacher to teach, say, writing. Writing is thinking made visible, which makes it the clearest laboratory for these principles. The TA begins by asking: What claim are you making? Why does it matter? Only after the writer drafts a thesis does the assistant suggest structure or style. Each revision loop is captured, giving educators a rich picture of intellectual effort, not merely the final polish. In this way the teacher forgoes the battle against AI and instead enlists it in her quest to teach.
What sort of features might we imagine for such a system? Here are just a few thoughts.
Effort before assistance. The system refuses to advance until the student articulates a hypothesis, sketch, or plan in their own words.
Socratic scaffolding. Following the teacher’s directions, it probes gaps, offers counter‑arguments, and demands evidence, acting less like an oracle and more like a relentless tutor.
Transparent provenance. Every exchange is logged so learners can replay the chain of thought.
Adjustable challenge. Teachers can dial the strictness on a per-student basis, from gentle nudges to hard refusals, keeping the tool just beyond the learner’s current ability (the “zone of proximal development”).
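To show how simple the core mechanics could be, here is a minimal sketch of the first, third, and fourth features: an effort gate that withholds help until the student states a claim, a provenance log of every exchange, and a per-student strictness dial. All names here are hypothetical illustrations, not a real product design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TutorSession:
    """Sketch of an 'effort before assistance' tutoring gate.

    strictness: 0.0 = gentle nudges, 1.0 = hard refusal until the
    student has articulated a thesis in their own words.
    """
    strictness: float = 1.0
    thesis: Optional[str] = None
    log: list = field(default_factory=list)  # transparent provenance

    def _record(self, role: str, text: str) -> None:
        # Every exchange is timestamped so the chain of thought can be replayed.
        self.log.append((datetime.now().isoformat(), role, text))

    def submit_thesis(self, text: str) -> None:
        self._record("student", text)
        self.thesis = text.strip() or None

    def request_help(self, question: str) -> str:
        self._record("student", question)
        if self.thesis is None and self.strictness >= 0.5:
            # Effort gate: no assistance until a claim is on the table.
            reply = "First, tell me: what claim are you making, and why does it matter?"
        else:
            reply = (f"Given your thesis ({self.thesis or 'not yet stated'}), "
                     "let's outline the supporting evidence.")
        self._record("tutor", reply)
        return reply

session = TutorSession(strictness=1.0)
print(session.request_help("Write my introduction for me."))  # gate refuses: no thesis yet
session.submit_thesis("School phone bans improve attention but carry equity costs.")
print(session.request_help("How should I structure the essay?"))  # now guidance flows
```

A real system would put an LLM behind `request_help` and let the teacher set `strictness` per student; the point of the sketch is only that the gating logic and the audit trail are cheap to build around whatever model sits underneath.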
Systems like this are not purely theoretical. Early pilots such as Khan Academy’s Khanmigo and research projects like SocraticLM hint at the power of this approach [Singer 2023], and today’s advances in AI make such a system buildable and, I hope, inevitable. For students who have begun to depend on LLMs to do their thinking for them, it will be a wake-up call that learning still requires them to reflect, to analyze, to work. For teachers, it is a way out of the arms race with those determined to game the system.
I believe that this idea will work equally well in other domains. AIs are already used broadly throughout the business world. Extending the idea of the LLM-TA to an LLM-Assistant would find applications everywhere LLMs are now used to help us create and refine our thinking. For example:
Research & Analysis. A scientist brainstorms experimental designs with a lab‑trained LLM that flags confounders before spelling out protocols.
Product Strategy. Startup teams conduct strategic debates with an AI moderator that forces explicit assumptions and counter‑scenarios.
Policy Simulation. Analysts must declare underlying models and value judgments before the system runs geopolitical forecasts.
Software Design. Pair‑programming copilots withhold code snippets until the developer writes a failing test or pseudocode outline.
Personal Decisions. A “life‑coach LLM” requires the user to articulate goals and constraints before offering plans, turning impulsive queries into reflective exercises.
The ultimate beauty of this vision of a future human/AI partnership is that the marginal cost of distributing software is a tiny fraction of the cost of providing a human coach to everyone in every context. This could create the most level playing field we have ever known. An enormous amount of effort today goes into using AI to make people and businesses more productive. Here I’ve argued that we also need startups, researchers, and educators focused on developing AI that augments human cognition even as it expands our capabilities. I readily grant that many in business and elsewhere will take the path of least resistance and perhaps, at first, shun such tools. But I remain persuaded that over time society’s values will evolve toward treasuring the skills these tools preserve and enhance.
GPS liberated me from folding maps but atrophied a skill I once prized. With LLMs, the stakes are higher. If we get the interface wrong, we may raise a generation fluent in prompt‑craft yet rusty in reasoning. If we get it right, we will enter an era where machines not only extend our reach but deepen our thought. They can be our partners in becoming better thinkers and better versions of ourselves.
Selected References
Khan, Salman. Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). Viking, 2024.
Walsh, James. “Everyone Is Cheating Their Way Through College.” New York Magazine, 2025.
Prothero, Arianna. “New Data Reveal How Many Students Are Using AI to Cheat.” Education Week, April 25, 2024.
Horowitch, Rose. “The Elite College Students Who Can’t Read Books.” The Atlantic, 2024.
Scarlatos, A., et al. “Training LLM-Based Tutors to Improve Student Learning Outcomes in Dialogues.” arXiv preprint, March 2025.
Sparrow, B., Liu, J., and Wegner, D. M. “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science 333 (6043), 2011: 776–778.
Singer, Natasha. “New A.I. Chatbot Tutors Could Upend Student Learning.” The New York Times, June 2023.