
How your brain changes when you outsource it to AI

Mar 10, 2025

Creative engineering, vintage illustration of the head of a man with an electronic circuit board for a brain, 1949. Screen print.

My job, like many of yours, demands more from my brain than it is biologically capable of delivering.

For all its complexity, the human brain is frustratingly slow, running at about 10 bits per second — less bandwidth than a 1960s dial-up modem. That’s not enough to keep up with the constant firehose of information we’re exposed to every day. “Raw-dogging” cognition while competing in today’s economy is like bodybuilding without steroids: a noble pursuit, but not a way to win.

Inside this story:

  • The philosophical argument that phones, the internet, and AI tools are extensions of our minds
  • Why humans love to outsource thinking 
  • How relying on devices changes our brains
  • What happens when technological tools both enhance and undermine our ability to think for ourselves

Humans have never relied on sheer brainpower alone, of course. We are tool-using creatures with a long history of offloading mental labor. Cave paintings, for example, allowed our prehistoric relatives to share and preserve stories that would otherwise be trapped in their heads. But paleolithic humans didn’t carry tiny, all-knowing supercomputers in their loincloths.

Using tools — from hand-written texts to sophisticated navigation apps — allows humans to punch above our biological weight. Even basic applications like spellcheck and autofill help me write with a speed and accuracy my monastic ancestors could only dream of.

Today’s generative AI models were trained on a volume of text at least five times greater than the sum of all books that existed on Earth 500 years ago. A recent paper by researchers at Microsoft and Carnegie Mellon University found that higher dependence on AI tools at work was linked to reduced critical thinking skills. In their words, outsourcing thoughts to AI leaves people’s minds “atrophied and unprepared,” which can “result in the deterioration of cognitive faculties that ought to be preserved.”

The mind is so deeply attached to the self that it can be unsettling to consider how much thinking we don’t do ourselves. Reports like this may trigger a sense of human defensiveness, a fear that the human brain — you, really — is becoming obsolete. It makes me want to practice mental math, read a book, and throw my phone into the ocean.

But the question isn’t whether we should avoid outsourcing cognition altogether — we can’t, nor should we. Rather, we need to decide what cognitive skills are too precious to give up.

The extended mind, explained

In 1998, philosophers Andy Clark and David Chalmers published their theory of the extended mind, positing that the mind extends beyond the “boundaries of skin and skull,” such that the biological brain couples with the technology, spaces, and people it interacts with. Following this logic, when I outsource my cognitive faculties to my phone, the phone becomes part of my mind.

I call my friends without knowing their phone numbers, write articles without memorizing source texts, and set calendar reminders to juggle more tasks than I could remember myself. The intimate coupling between my brain and my devices is both self-evident and extremely normal.

In fact, Clark and Chalmers point out that the brain develops with the assumption that we will use tools and interact with our surroundings. Written language is a prime example. Reading isn’t hard-coded into our genome, like the capacity for speech is, and until recently, only a small minority of humans were literate. But as children learn to read and write, neural pathways that process visual information from the eyes reorganize themselves, creating a specialized visual word form area — which responds to written words more than other images — about an inch above the left ear. The process physically reshapes the brain.

And as the tools we use evolve, for better or for worse, the mind appears to follow. Over the last 40 years, the percentage of 13-year-olds who reported reading for fun almost every day dropped from 35 percent to 14 percent. At the same time, they are doing worse on tests measuring critical thinking skills and the ability to recognize reliable sources. Some cognitive neuroscience research even suggests that shifting from deep reading to shallower forms of media consumption, like short-form videos, can disrupt the development of reading-related brain circuits. While evidence is still limited, several studies have found that short-form video consumption negatively impacts attention, an effect sometimes called “TikTok Brain.” 

Ned Block, Chalmers’ colleague at New York University, says that the extended mind thesis was false when it was introduced in the ’90s, but has since become true. For the brain to be truly coupled with an outside resource, the authors argue, the device needs to be as reliably accessible as the brain itself. To critics, the examples Clark and Chalmers came up with at the time (e.g., a Filofax filled with notes and reminders) felt like a bit of a stretch.

But today, my phone is the first thing I touch when I wake up, and the last thing I touch before going to bed. It’s rarely out of arm’s reach, whether I’m at work, a bar, or the beach. A year after the first iPhone was released, a study coined the term “nomophobia,” short for “no-mobile-phone-phobia”: the powerful anxiety people feel when separated from their devices.

In their paper, Clark and Chalmers introduced a thought experiment: Imagine two people, Inga and Otto, both traveling to the same familiar place. While Inga relies on her memory, Otto — who has Alzheimer’s disease — consults his notebook, which he carries everywhere. (Today, we could imagine Otto consulting his smartphone.) “In the really deep, essential respects, Otto’s case is just like Inga’s,” they write. “The information is reliably there, easily and automatically accessible, and it plays a central role in guiding Otto’s thought and action.” That, they argue, is enough.

Of course, if it doesn’t matter whether my cognitive faculties live in my skull or my smartphone, why bother using my brain at all? I could simply outsource the work, keep up appearances in society, and let my brain rot in peace.

The potential side effects of the extended mind are difficult to study. Our reliance on digital tools is relatively new, and the tools neuroscientists have to observe human brain activity are imprecise and confined to labs. But emerging research points to a reality as uncomfortable as it is self-evident: Allowing digital prosthetics to think for us may compromise our ability to think on our own.

Why we outsource our mental labor

Humans generally don’t like thinking too hard. One recent analysis of over 170 studies spanning 29 countries and 358 different tasks — from learning how to use new technology to practicing golf swings — found that in all cases, people felt greater frustration and stress when they had to use more brainpower. When given the option, lab rats and humans alike usually choose the path of least resistance. Human study participants have even opted to squeeze a ball really hard or get poked by a burning hot stick to avoid mental labor.

Still, people choose to do challenging things — for fun, even! — all the time. Working harder tends to lead to better outcomes, like earning a promotion or resolving a time-sensitive problem. And when cognitive effort is rewarded, people learn to value mental labor itself, even in the absence of an obvious short-term payoff.

But the world gives us plenty of reasons to work smarter, not harder. When external pressures, like tight deadlines or intense competition, raise the stakes, we’re forced to triage our cognitive resources. The demands of always-on capitalism compel the mind to rely on cloud storage, calendar reminders, and chatbots.

Julia Soares, an assistant professor of cognitive science at Mississippi State University, said this tendency aligns with the decades-old social science concept of the cognitive miser. “People get a little bit cheap with their cognitive resources,” she said, especially “when they get stuck on using digital devices.” That’s why rather than constantly juggle an overwhelming to-do list in my mind, for example, I choose to set reminders, alerts, and events for everything short of brushing my teeth. 

There’s a word for this habit: “intention offloading,” or the act of using external tools to help us remember to do things in the future. These tools can be low-tech, like leaving a package by the door so you remember to return it. They can also be digital and relatively hands-off, like recurring Google Calendar events or Slack reminders at work. Either way, “we can notice the information disappearing from people’s brains after they know that it’s also stored outside,” said Sam Gilbert, a cognitive neuroscientist at University College London.

A decade ago, his research group ran an experiment where people had to remember a to-do item while lying in an fMRI scanner. In different conditions, they either tried to remember it on their own or were instructed to set an external reminder. Gilbert observed that brain activity in the part of the prefrontal cortex that normally reflects future plans was strongly reduced when an external reminder was used.

“That sounds scary, but I think that’s exactly as it should be,” Gilbert told me. “Once you know that information is duplicated outside the brain, you can use your brain for something else.”

Something similar seems to happen when people follow turn-by-turn directions instead of navigating on their own. Networks of cells in the posterior hippocampus — part of a seahorse-shaped brain region best known for its role in memory and navigation — form our mental map of the world. This map literally grows with practice. London taxi drivers, who have to memorize all possible routes across tens of thousands of roads to earn their license, have a larger and more developed posterior hippocampus relative to London bus drivers, who simply follow pre-set routes. 


Growing up zillennial, I remember watching my parents print out and memorize MapQuest directions before heading off on a long drive. By the time I could get behind the wheel, I had a smartphone equipped with GPS. As a new driver, I let my phone handle directions while I handled singing along to Arctic Monkeys songs. It may be no coincidence that my sense of direction today is awful. I can’t remember a parking spot to save my life, and one tiny detour or a dead phone can have me accidentally taking the road less traveled.

My posterior hippocampus probably isn’t withering away — the “navigation” it’s involved in extends to more abstract scenarios, like navigating social media networks — so the cognitive liberation provided by GPS feels worth the cost. Navigation doesn’t feel central to my identity. I’m willing to outsource it. 

But as newer technologies take over more of our intimate thought processes, it’s worth carefully weighing the consequences of relinquishing control, lest we lose things we truly value.

Do our devices actually make us less smart? 

Back in 2004, Google co-founder Sergey Brin told Newsweek, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.”

We essentially live in that world today, but it’s not clear that we’re better off. As I write this, I have the power to answer nearly any question imaginable using one of the two incredibly powerful computers in front of me. The internet provides instant access to a sea of information, and AI search can save me the trouble of having to wade through it. All of the knowledge we need lives in data centers, which increasingly makes storing any of it in my brain feel like an unnecessary luxury.

As Nicholas Carr wrote for The Atlantic nearly 17 years ago, when early Google was our main cognitive partner: “My mind expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”

A couple of years later, the leading academic journal Science published a study declaring that Google does indeed make us less intelligent. Researchers found that when people expect to have future access to information — as one does when the entire internet lives in their pocket — their memory retention and independent problem-solving skills decline. 

This sparked broader conversations about what some experts call “digital dementia,” essentially the academic term for “brain rot”: the theory that overusing digital devices breaks down cognitive abilities. One group of Canadian researchers even published a paper predicting that excessive screen time will cause rates of Alzheimer’s disease and related dementias to skyrocket by 2060.

However, long-term studies tracking older adults show that seniors who use their phones to help them remember things are actually less likely to develop dementia. Technology that automates recurring, mundane tasks — the stuff our brains struggle with anyway — isn’t the problem. What should concern us is surrendering our intellectual autonomy by letting devices think for us, rather than with us. And that’s precisely what appears to be happening with AI.

Ten years ago, a series of experiments led by Matthew Fisher, now a marketing professor at Southern Methodist University’s Cox School of Business, found that people who searched the internet for information felt smarter than they actually were. Fisher suspects that this is because old-school internet searching, following hyperlinks and stumbling across information, feels like following your own native train of thought. But it’s important to know what you don’t know.

The conversational nature of AI chatbots draws a clear psychological boundary that traditional web searches don’t. While the internet feels like an extension of the mind, “When I’m talking to ChatGPT, it doesn’t feel like it’s a part of me. If anything, I feel kind of dumb talking to it,” Fisher told me. “It highlights my own ignorance.”

Recognizing AI as separate from ourselves could theoretically inspire us to question its responses. But interacting with AI as if it were an oracle — as many do — risks blindly accepting its outputs. As soon as ChatGPT was released, students began submitting AI-written essays filled with hallucinated references. AI-powered hiring tools regularly review AI-generated job applications, and some doctors use ChatGPT in their practice, despite its unreliable ability to cite its sources.

This tension between preserving our cognitive integrity and embracing technological assistance permeates the workplace, where today, the brain alone is rarely enough.

What’s the real trade-off?

There’s something in economics called the Jevons paradox: the idea that increased efficiency leads to increased consumption. Applied to AI, it suggests that as digital tools make workers more efficient, demand for their labor increases. Given the opportunity to expand our minds with automated workflows and generative AI, we’ll take it. And as technology advances, expectations expand to match, leaving us with higher baseline demands.

To keep up with the requirements of a knowledge sector job in 2025, you need more than your own mind. The standard for productivity has shifted dramatically in recent years. Under-resourced newsrooms, for example, require journalists to not only report and write, but also fact-check, monitor trends, and maintain a personal brand across multiple platforms. Software engineers face ever-tightening sprint deadlines while creating the very tools upending their jobs. Across fields, the processing limits of the human brain can’t compete with expectations of constant availability, instant information recall, and perpetual content creation.

How to fight brain rot:

  • Immerse yourself in reading. If not a book, try sitting with a magazine feature. It’s one of the best ways to improve focus, imagination, and overall brain health. 
  • Multitask less. Our brains are horrible at it. Ron Swanson was right when he said, “Never half-ass two things. Whole-ass one thing.” 
  • Think before handing AI the wheel. Ask yourself: “Would solving this problem myself be a total waste of time? Or will it help me understand something more deeply?” If the answer leans more towards the latter, take a stab at it yourself before passing the question off to a chatbot. 
  • Try a digital detox (yes, really). Taking intentional time away from social media, your phone, or screens altogether can help reset your relationship with your devices.

Insisting on avoiding the tools in front of you can mean failing to meet increasingly high expectations. “If I’m going to see my doctor,” said Fisher, “I don’t want them to only give me information they’ve memorized. I want them to have as many resources at their disposal as possible to find the correct answer.” 

In high-stakes situations, prioritizing accuracy over cognitive self-reliance seems obvious. The challenge becomes knowing where to draw the line.

Some tasks, like memorizing phone numbers and drafting insurance appeal letters, we’ve happily surrendered without much consideration. The patience and focus required to solve hard problems, however, seem worth holding onto. As a kid, I could sit and read a book for hours without even thinking about getting up. Now, I can barely read a single 800-word news article without feeling a physical compulsion to check Instagram.

It’s increasingly difficult to convince myself that a hard problem is actually worth solving when easier alternatives are just a click away. Why bother taking the time to write a LinkedIn post promoting my work, when AI can do it faster (and likely better)? Everyone else is doing it.

But taking the time to wrestle with challenging ideas on your own “can give you surprising insights or perspectives that wouldn’t have been otherwise available to you,” said Fisher. For example, Soares told me that putting pen to paper, while a kind of analog offloading itself, “exponentially increases my ability to think by creating a change in the world — the writing on the page in front of me.”

The connections we make between seemingly unrelated concepts often come when we’re showering or taking a walk, alone with our wandering thoughts. This can’t happen when the information lives elsewhere. Soares cautioned that we should be mindful about allowing tech to “steal something away from us that we would not have otherwise” — like mind wandering.

When you use AI with intention and discernment, you can reap its benefits without compromising your cognitive integrity. It’s similar to today’s food environment: in theory, we have unprecedented access to healthy options, but only if we’re informed, deliberate, and, in many cases, wealthy. But the food environment, like the digital tool environment, is built to push us toward options that are highly palatable and cheap to produce — often, not what’s actually good for us.

The ability to use AI selectively, without losing your mind, might be an elite privilege. While wealthier households generally have more digital devices, poor teens spend more time on their devices than those from rich families. It seems that as people’s access to technology increases, so does their ability to restrict that access. The same may be true for AI: while people with higher incomes and education levels are more aware of examples of AI use in daily life, a study published earlier this year found that less educated people are more likely to blindly trust AI. The more people trust AI, the more likely they are to hand over their mental workload without bothering to evaluate the outcome. The study’s authors write, “This trust creates a dependence on AI for routine cognitive tasks, thus reducing the necessity for individuals to engage deeply with the information they process.”

We won’t know for many years exactly what our devices are doing to our brains; we don’t have the neurological tools, and there hasn’t been enough time for longitudinal studies to track the full impact. But we have an intuitive sense of what our devices are doing to our psyche, and it’s not great. The scattered attention, the weakened ability to focus, the constant urge to check for updates — these are tangible changes to how we experience the world.

Even more worrying than brain rot is the fact that a handful of very rich people are developing AI at breakneck speed, without asking society for permission. As my colleague Sigal Samuel has written, Sam Altman, CEO of OpenAI, literally said his company’s goal is to create “magic intelligence in the sky” — without attempting to seek buy-in from the public. The question isn’t just how these tools reshape our individual cognition, but how they will irrevocably change society.

“Plagiarism, misinformation, and power imbalances worry me 100 times more than I worry that we might be losing our cognitive abilities by overusing technology,” Gilbert said. The real risk may not be that we outsource too much thinking, but that we surrender our agency to decide which thoughts are worth thinking at all.
