
Is AI Making Us Dumber?

A GenAI engineer’s perspective on the MIT study claiming heavy LLM users show reduced brain connectivity, and where I actually land on the question.

The Irony Hits Different When You’re Writing About It

You know that moment when you read a headline that feels like it’s calling you out personally? That was me scrolling through my feed and seeing: “Heavy LLM users saw a 47% reduction in brain connectivity 😳”

Here I am, a full-stack GenAI engineer, spending literally half my workday crafting prompts and the other half critiquing AI output, reading about how my brain might be turning to mush. The kicker? I’m using ChatGPT to help me write this very response about the dangers of using ChatGPT.

Chef’s kiss to the irony gods.

But before you close this tab to go do brain push-ups or whatever, let me share why this MIT study—while raising important questions—might be missing the forest for the trees.

Let’s Start With What the Study Actually Found

The MIT Media Lab study, led by Nataliya Kosmyna, tracked brain activity across 32 regions and found that participants who relied on ChatGPT to write essays showed:

  • The lowest brain engagement
  • Underperformance at neural, linguistic, and behavioral levels
  • Increasingly passive behavior, “often just copy-pasting AI output by the end”

Teachers described their essays as “soulless,” lacking “originality, curiosity, and critical thinking.”

Scary stuff, right? But here’s where my Spidey senses started tingling.

Enter Yann LeCun With the Technical Reality Check

Before we all throw our laptops in the ocean, let’s ground ourselves in what these tools actually are. AI researcher Yann LeCun puts it perfectly:

“An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning…”

Translation: LLMs are pattern-matching machines, not thinking machines. They excel at quickly generating plausible text based on patterns they’ve seen, but they’re not doing the deep, reflective thinking we associate with human cognition.
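
To make LeCun’s point concrete, here’s a minimal sketch of what “one token after another” means. Everything below is a toy, and `next_token_logits` is a hypothetical stand-in for a single forward pass of a real model. What matters is the shape of the loop: a fixed amount of computation per token, chosen reactively, with no planning or backtracking step anywhere.

```python
import random

def next_token_logits(context: list[str]) -> dict[str, float]:
    """Hypothetical stand-in for one fixed-cost forward pass of an LLM.

    A real model spends the same computation here whether the next
    token is trivial or hard: that is the System 1 reactivity.
    """
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {tok: random.uniform(-1.0, 1.0) for tok in vocab}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    """Greedy autoregressive decoding: one token after another."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        logits = next_token_logits(tokens)
        # Take the highest-scoring token; no deliberation, no revision.
        tokens.append(max(logits, key=logits.get))
        if tokens[-1] == ".":
            break
    return tokens

print(" ".join(generate(["the"])))
```

Real decoders sample instead of always taking the max, but the loop has the same shape: react, emit, repeat.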

Apple’s research backs this up, noting that while LLMs can handle some logical reasoning, they “fail to develop generalizable reasoning capabilities beyond certain complexity thresholds.”

In other words: they’re amazing at certain tasks, terrible at others. The question isn’t whether to use them—it’s how.

My Day as a “Heavy LLM User” (Spoiler: It’s Not What You Think)

Let me paint you a picture of what being a “heavy LLM user” actually looks like in practice:

The 50% Prompting: This isn’t mindless typing. It’s understanding problems deeply enough to articulate them clearly. It’s strategic thinking about how to break down complex tasks. It’s iterative refinement based on outputs.

The Other 50%: Reading generated code and content. Making sense of it. Criticizing it. Optimizing it. And yes—sometimes getting blown away by solutions I wouldn’t have thought of myself.

Here’s the crucial bit: You never use AI-generated content directly.

At least, you shouldn’t. Every piece of output gets filtered through human judgment. Sometimes I discover amazing approaches. Sometimes I laugh at the nonsense. Often, it’s somewhere in between.
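
If that sounds abstract, here’s roughly the shape of my daily loop as a hedged sketch, not production code: `ask_llm` and `meets_my_standards` are hypothetical stand-ins for an API call and for human judgment, and the names are mine, not any real library’s.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API you actually use."""
    raise NotImplementedError("plug in your provider's client here")

def meets_my_standards(draft: str) -> bool:
    """Stand-in for the human step: read it, question it, criticize it."""
    raise NotImplementedError("this part stays human")

def draft_with_review(task: str, max_rounds: int = 3) -> str:
    """Prompt, review, refine; never ship the first output blind."""
    prompt = task
    for _ in range(max_rounds):
        draft = ask_llm(prompt)
        if meets_my_standards(draft):
            return draft
        # Feed specific criticism back instead of accepting what came out.
        prompt = f"{task}\n\nPrevious draft:\n{draft}\n\nFix the weak parts."
    # Still below the bar after a few rounds? Then I write it myself.
    raise RuntimeError("AI drafts rejected; taking over manually")
```

The structure is the point: the model proposes, the human disposes.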

Ancient Wisdom for Modern Problems

There’s a Chinese saying that’s been rattling around my brain since this whole debate started:

学而不思则罔,思而不学则殆

“Study without thinking and you become lost; think without learning and you fall behind.”

(Yes, I’m keeping my own translation here. As I told ChatGPT when it tried to “improve” it: I understand Confucius’s mindset and cultural context better than an AI does. Point made.)

This isn’t a new dilemma. Humans have always struggled with the balance between absorbing information and developing wisdom. AI has simply turbocharged the “learning” side of the equation.

Need to know literally anything humans have documented? Ask away. But the wisdom questions—what makes elegant code, how to live meaningfully, what constitutes good writing—those still require human judgment.

The Calculator Defense (And Why It Actually Matters)

Here’s where I might lose some of you, but stick with me.

There’s no point knowing how to do complex math in your head when you have a calculator. You don’t memorize phone numbers anymore. Digital calendars handle your schedule better than your brain ever could.

We’ve been “cognitively outsourcing” for decades. Our brains have limited capacity—we’re constantly trimming obsolete skills to make room for what matters now.

Don’t believe me? Try this thought experiment:

Drop yourself naked in the wilderness. Can you:

  • Build shelter from materials you can identify?
  • Create medicine from plants you recognize?
  • Make clothes from… anything?

Your ancestors, who migrated across continents during actual ice ages, would laugh at your “incompetence.” Yet here you are, contributing meaningfully to society, making the world better in ways they couldn’t imagine.

The measure isn’t what our individual brains can do in isolation—it’s whether our cognitive abilities fit the world we’re collectively building.

“AI Dependency” Isn’t the Boogeyman You Think It Is

Let’s address the elephant in the room: being “AI-dependent” might not be the catastrophe we imagine.

Nobody calls you “clothes-dependent” even though humans survived naked for millennia. We’re all “phone-dependent” now, despite having memorized phone numbers for decades. These dependencies became normal because they genuinely improved our lives.

Think about it—would you tell someone they’re “too dependent” on:

  • Eyeglasses for seeing?
  • Cars for transportation?
  • Antibiotics for not dying from infections?

AI dependency is just the next step in our tool-using evolution. Darwin didn’t say “survival of the strongest” or “survival of the smartest”—he said “survival of the fittest.” And fitness means adapting to your environment.

The Real Story About “Soulless” AI Writing

Before we accept that AI produces inherently “soulless” writing, let’s talk about an uncomfortable truth: current AI detection tools are laughably unreliable. They routinely flag human writing as AI-generated, especially punishing non-native English speakers.

So when teachers claim they can spot “soulless” AI writing, what are they really detecting?

The issue isn’t that AI lacks a soul (whatever that means). It’s that AI doesn’t understand you—your voice, your values, your standards. That’s what creates the generic feeling.

But here’s the key distinction:

  • AI-generated content: Copy-paste without review = soulless
  • AI-assisted writing: Using AI to accelerate while maintaining your standards = can absolutely have soul

The difference? One word: ownership.

AI Is Your Employee, Not Your Replacement

This reframe changed everything for me: treat AI like an employee you’re managing, not a magic oracle.

When you delegate tasks to an employee, you don’t just accept whatever they produce. You:

  • Set clear expectations
  • Review their work critically
  • Send it back if it’s not up to standard
  • Take responsibility for the final result

The people who become passive copy-pasters aren’t suffering from “AI brain drain”—they’re demonstrating poor leadership skills. It’s not an AI problem; it’s a quality control problem.

If you wouldn’t put your name on work from a human employee, why would you accept it from AI?

The Age Factor (It’s Not What You Think)

You might assume younger “digital natives” adapt better to AI tools. The reality is more interesting.

It’s not about age—it’s about environment and incentives. College students experiment with AI because their environment encourages constant learning. Everything is new, exciting, expected.

But if you’re 35 with established skills that already help you thrive, why learn AI? Your environment whispers: maintain the status quo, focus on family, save for retirement.

The pattern isn’t age-dependent; it’s context-dependent. Put a 50-year-old in a situation where AI skills mean survival, and watch how fast they learn. Keep a 20-year-old in an environment that doesn’t value AI, and they’ll ignore it too.

What Being a Smart AI User Actually Looks Like

After years of daily AI collaboration, here’s what I’ve learned works:

Think Like a Leader, Not a Secretary

  • Set quality standards before you start
  • Review everything with a critical eye
  • Be willing to scrap AI work that doesn’t meet your bar
  • Own the final result completely

Maintain Your Learning Loop

  • Always understand why AI produces certain outputs
  • Test edge cases AI might miss (see the sketch after this list)
  • Build intuition for AI’s blind spots
  • Keep asking “but why?” until it makes sense
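
Here’s the edge-case habit in practice, with a deliberately hypothetical example: `normalize_scores` is the kind of plausible helper an assistant hands you, and the checks below are the boring-but-nasty inputs I throw at it before trusting it.

```python
def normalize_scores(scores: list[float]) -> list[float]:
    """Hypothetical AI-generated helper: scale scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# The happy path works, which is usually all a quick glance verifies.
assert normalize_scores([0.0, 5.0, 10.0]) == [0.0, 0.5, 1.0]

# Edge cases an LLM (and a passive reviewer) can quietly miss:
try:
    normalize_scores([])          # empty input: ValueError from min()
except ValueError:
    print("fails on an empty list")

try:
    normalize_scores([3.0, 3.0])  # identical values: division by zero
except ZeroDivisionError:
    print("fails when all scores are equal")
```

Two one-line checks, two failure modes the happy path never shows. That’s the learning loop doing its job.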

Preserve Your Core Judgment

  • Regularly do key tasks manually to maintain baseline skills
  • Trust your instincts when something feels off
  • Stay curious about AI’s reasoning patterns
  • Know which cognitive tasks to keep in-house

The Plot Twist Nobody Talks About

Here’s what might blow your mind: I believe heavy AI users are becoming sharper, not duller.

Remember learning to code before AI? Hours of searching for basic concepts, piecing together Stack Overflow answers, hoping you understood correctly. Now? Immediate feedback, clear explanations, multiple approaches to compare.

This accelerated feedback loop doesn’t make you dumber—it makes you learn faster. But (and this is crucial) only if you maintain quality control.

The trap is using AI code without understanding it. Sure, it works today. But when you need to debug, extend, or modify? Your lack of understanding becomes a ticking time bomb.

Your Brain on AI: The Real Story

The MIT study shows what happens when people become passive consumers of AI output. But that’s a choice, not destiny.

The most effective AI users I know treat these tools as sophisticated thinking partners, not replacement brains. They’re having rich dialogues, not receiving stone tablets.

Next time you use AI, pay attention: Are you passively consuming or actively collaborating? Are you outsourcing your thinking or augmenting it?

Because here’s the thing—we’re not just adapting to AI. We’re co-evolving with it. And the humans who thrive won’t be those who avoid AI or surrender to it, but those who learn to dance with it.

The Question That Changes Everything

Instead of asking “Is AI making us dumber?” try asking: “Am I using AI to become more capable?”

The difference between those questions might just determine your cognitive future.

So here’s my challenge: For the next week, track not just what you delegate to AI, but how you manage that delegation. Are you being a passive recipient or an active director?

Share your experience in the comments. I’m genuinely curious—have you noticed changes in how you think since becoming an AI user? What strategies help you maintain that crucial balance?

One last thought to flip your perspective: We’re not losing our minds to AI. We’re discovering new ways to use them. The question isn’t whether to embrace this evolution, but how to shape it consciously.

After all, as Darwin might say if he were around today: it’s not about being the smartest human in the room. It’s about being the human who best fits the room we’re building together.

What kind of room are you building?