Is AI Making Us Dumber and Underconfident?

May 13, 2025 - 14:08

It’s the year 2035. Office life is governed not by humans but by prompts. Every slideshow, memo, and brainstorming session is created not by collaborative minds but by generative tools. The Top 40? Dominated by AI-generated music. Films are churned out in days. Students no longer cram for tests; they learn to engineer better prompts. In this world, knowledge isn’t earned, it’s generated. And yet, beneath the surface-level convenience lies a pressing concern: Is the overuse of AI technology gradually dulling our intellect and eroding our confidence?

Convenience Comes at a Cost

AI has brought undeniable benefits. In medicine, it's accelerating drug discovery. In climate science, it models complex environmental systems. But in day-to-day life, AI is also replacing low-effort mental tasks, and that's where things get murky. A 2020 study revealed that frequent GPS users had diminished spatial memory, even though they believed their sense of direction remained unaffected. The overuse of even non-AI tools, like GPS and spell check, has already shown how passive tool reliance can reduce specific cognitive capabilities. Now imagine the implications when entire thoughts, decisions, and communications are generated by AI.

The Rise of Cognitive Offloading

"Cognitive offloading" is the psychological term for what happens when we outsource mental work to a tool. That sounds harmless—until you consider the long-term effects. A recent multi-demographic study of over 600 participants found that heavy users of AI tools like chatbots and autocomplete engines were significantly less likely to engage in complex problem solving. They showed a reduced ability to critically evaluate information and often relied on AI suggestions without question. Like muscle groups that atrophy with disuse, the brain adapts to its environment, and ours is now padded with predictive algorithms.

Algorithmic Complacency and the Death of Curiosity

Alec Watson calls it "algorithmic complacency": the idea that algorithms are quietly taking control over what we consume and what we do. This isn’t just about doomscrolling through Instagram. It’s about trusting YouTube recommendations more than our own interests. It’s about letting Netflix decide our weekend plans. Over time, we stop making decisions and start receiving them.

Social platforms now function as AI-driven echo chambers. AI-generated Google overviews have repeatedly been shown to misrepresent facts—yet more than 70% of users report trusting these summaries. A BBC investigation revealed that summaries from tools like Gemini, ChatGPT, and Copilot often contain distortions, hallucinations, or significant factual errors. If we don't question these outputs, we begin to accept fiction as fact.

Education: From Learning to Prompt Engineering

In higher education, the shift is dramatic. Professors like David Rafo saw a sharp improvement in student writing post-pandemic, driven not by better comprehension but by better AI use. As one professor put it: “It was the tools that improved their writing, not their skills.” This isn’t limited to essays; it extends to coding, presentations, even ideation.

The implications are troubling. Skills that are only developed through practice—argumentation, synthesis, originality—are being bypassed. The next generation of professionals could be fluent in prompting but ill-equipped in decision-making, problem-solving, or independent thought.

Corporate Dependency: Productivity or Passivity?

In the workplace, AI tools have become omnipresent. Over 90% of Gen Z and Millennial employees now use at least two AI tools weekly. For many, these tools feel like lifelines in a demanding corporate world. AI can draft emails, analyze tone, summarize meetings, and even generate marketing copy. While this boosts efficiency, it risks fostering dependency.

A growing number of companies now find their younger workforce unable to articulate ideas without AI assistance. Team synergy briefs, project updates, and ideation meetings increasingly rely on generative input. This doesn’t just dull communication—it affects confidence. Employees second-guess themselves, worrying that their own wording won't measure up to AI-polished prose.

The Knowledge Age and Its Cracks

We’ve shifted from the Information Age to the Knowledge Age, where AI doesn’t just retrieve information; it synthesizes it. But when new models are trained on flawed, AI-generated foundations, the cracks widen with each iteration, a degenerative feedback loop researchers call "model collapse."

Researchers at Oxford University showed that when AI reads and rewrites AI-generated content repeatedly, the quality nosedives. By the ninth cycle, the text often becomes incoherent. As of 2025, an estimated 60% of internet content has been generated or translated by AI. If this trend continues unchecked, we risk an internet flooded with self-referential, factually weak content—a digital ouroboros devouring its own tail.
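The intuition behind model collapse can be sketched in a few lines of code. The toy below (my own illustration, not the Oxford team's actual experiment) stands in for a generative model with a Gaussian: each "generation" fits itself to the previous generation's output, but, like a real model that under-represents rare events, it drops the tails before refitting. The `clip` threshold and all function names are hypothetical choices for the sketch.

```python
import random
import statistics

def generate(mean, std, n, rng):
    """Sample n points from the current 'model': a Gaussian."""
    return [rng.gauss(mean, std) for _ in range(n)]

def fit_and_truncate(data, clip=1.5):
    """Refit the model to its own output, but under-represent the
    tails: keep only points within `clip` standard deviations,
    mimicking a generative model's bias toward typical samples."""
    mean = statistics.fmean(data)
    std = statistics.stdev(data)
    kept = [x for x in data if abs(x - mean) <= clip * std]
    return statistics.fmean(kept), statistics.stdev(kept)

rng = random.Random(0)
mean, std = 0.0, 1.0            # generation 0: "human" data
history = [std]
for generation in range(9):      # nine train-on-your-own-output cycles
    data = generate(mean, std, 10_000, rng)
    mean, std = fit_and_truncate(data)
    history.append(std)

print([round(s, 3) for s in history])  # diversity shrinks every cycle
```

Run it and the standard deviation, a crude proxy for the diversity of the "content", falls with every cycle; by the ninth generation only a small fraction of the original variety survives. The mechanism, losing the tails and then retraining on what remains, is the same one the researchers describe, just stripped to arithmetic.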

When Mistakes Become Normalized

This isn't a hypothetical issue. In 2023, the Detroit Police Department wrongly arrested a pregnant woman named Porcha Woodruff based on an AI facial recognition match. Despite her obvious innocence, officers trusted the AI analysis over their own judgment. This incident, and others like it, illustrate the real-world consequences of cognitive offloading in high-stakes environments. When people delegate judgment to machines, errors multiply—and so do injustices.

AI and Mental Strength: A Neurocognitive Perspective

Neurologists warn that high cognitive activity is essential for long-term mental health. Dr. Anne McKee, a leading Alzheimer’s researcher, stated that people who stay mentally active build a stronger "cognitive reserve," making them more resilient against age-related brain diseases. If AI tools reduce the amount of critical thinking we do daily, they may not just be making us lazier thinkers—they might be paving the way for early cognitive decline.

So, Are We Getting Dumber and Underconfident?

Not in the traditional sense. We’re not losing IQ points. But we are losing the ability to solve problems independently, to trust our reasoning, and to tolerate intellectual discomfort. The risk is not stupidity but stagnation. AI won’t destroy intelligence, but it can weaken our mental stamina, like using a wheelchair when you don’t need one.

How to Stay Mentally Sharp in the AI Era

  • Delay the Prompt: Try answering a question or solving a task yourself before consulting AI.
  • Train Your Brain: Read long-form articles like this one, engage in debates, and write without help.
  • Cross-check Everything: AI makes mistakes. Always verify with trusted sources.
  • Reignite Creativity: Draw, journal, ideate. Even badly. It keeps your thinking muscles limber.
  • Embrace Human Limitations: You don’t have to be perfect. You just have to be thinking.

We Think, Therefore We Are

AI is not inherently dangerous. Like a calculator, it can enhance what we do. But if it replaces our thinking altogether, we lose the very essence of what makes us human. Descartes' iconic phrase, "I think, therefore I am," reminds us that thought is not optional. It is fundamental.

Let’s not outsource the one thing machines still can't replicate: authentic, original human thought.