I once started a lecture to a group of business students by asking them this question: “Why do we have five-fingered gloves?” I waited to see if anyone would take a stab at the answer. When all I got were a few laughs and confused looks, I answered my own question: “Humans make five-fingered gloves because humans have five-fingered hands!” This prompted more laughter and, I’m sure, a little more confusion. After all, they were there to listen to a neuroscientist talk about the business world, their business world in a few short years. What would gloves and fingers have to do with the workplace or their brains or both?
“Well, duh,” I intoned. “Your brain has the cognitive equivalent of five fingers. The organ is shaped to respond to certain environments with great productivity and to others with no productivity at all.” I reasoned that ergonomics applies to the mind just as much as to the hand. “If you’re designing a workplace and want to optimize output, you’d better keep the cognitive shape of the brain in mind,” I warned.
I went on to explain that the typical place of business is not designed with the cognitive equivalent of a five-fingered glove. And so I invited my temporary class to a thought experiment: What if the workplace were tailored to the brain, the way a glove is tailored to a hand? What would organizations look like if the business of profit-making took the business of brain function seriously? How would we design management structures? What would physical workspaces look like? What environment would best aid creativity, productivity, and the simple ability to just get things done?
The goal of this book is to answer questions just like these. We’re going to explore how the application of the behavioral and cognitive neurosciences can improve your productivity in the workplace. The information is relevant whether you’re working from the corner office at headquarters or from your closet office at home. Call it an exercise in cognitive ergonomics.
This won’t be your typical book about work, however. Almost every concept here was braided by the deft hands of Charles Darwin. We’re going to use his evolutionary ideas to outline the book’s central challenge: how to work with a brain that operates in the twenty-first century but still thinks it lives in the ancient Serengeti. We’re going to explore how this jiggly three pounds of incredible problem-solving genius, finely tuned to spear mastodons and pick berries, learns instead to run staff meetings and read spreadsheets.
Sometimes the organ conforms only reluctantly. After all, our brains haven’t been exercising long enough in the civilization gym to completely shake off the shackles of the Pleistocene, the prehistoric era in which the organ evolved inside the skulls of the first modern humans. Sometimes the brain conforms willingly to modern life, especially if we understand enough about its inner mechanisms to work with rather than against its natural tendencies. In a nutshell, we’re going to explore how the science side of behavior informs the business side of behaving.
This effort is divided into ten brain rules. These rules are things we know about the brain based on peer-reviewed science. You can apply each rule to the corresponding area of your work life. Some rules deal with specialized areas of business, such as hiring practices and presentations. Others deal with more general areas of interest, from workspace design to getting along with others. We’ll find out why you’re so tired after Zoom meetings. We’ll examine what you can do to your office, whether at home or at your work building, to make you more productive (hint: add plants). We’ll learn why people become more interested in sex after they’ve been promoted. We’ll explore the cognitive neuroscience behind creativity and teamwork, and discover the most effective ways to kill your PowerPoints. We’ll end by explaining why good, old-fashioned change is so hard for good, old-fashioned people. With this knowledge, we will discover how to work smarter— designing our five-fingered glove, stitch by stitch.
The brain is amazing
Let’s begin with some background information, starting with a few words about me, your glove outfitter.
I’m what’s called a developmental molecular biologist, with special research interests in the genetics of psychiatric disorders. Those interests have manifested in two ways in my career— on the scientific side, as an affiliate professor at the University of Washington (department of bioengineering), and on the business side, as an analytical consultant, mainly advising companies in the private sector. The latter side of that experience is why I was asked to lecture to that group of business majors I mentioned.
I have been interested in taking cues from brain science and applying those cues to aspects of our lives for my entire career. In fact, I’ve written three books doing just that: Brain Rules, Brain Rules for Baby, and Brain Rules for Aging Well. I have never ceased to be amazed by what the brain can teach us. To illustrate this fascination, I invariably start with a case study, whether giving a lecture or writing a book. These pages are no exception.
Let’s talk about an unremarkable fellow with a truly remarkable concussion. Jason Padgett was a below-average student and college dropout, interested primarily in his biceps and his mullet. He hated math, loved girls—his word—and pretty much lived to party. At one of those parties, Jason was brutally assaulted and knocked unconscious. He woke up in the ER with a severe concussion. The doctors injected him with a massive painkiller, then sent him home. He would never be the same again.
When Jason awoke, he started seeing outlines of people and then, weirdly, over the course of days, began drawing extraordinarily detailed mathematical shapes. One day, during his convalescence, he was sketching these figures at a mall. A man approached him, looked at his work, and struck up a conversation. “Hello, I’m a physicist,” the man said. “What are you working on there?” The man then said something that changed Jason’s life: “Looks like you’re trying to talk about space-time and the discrete structure of the universe.”
Jason was stunned. The stranger grinned. “Have you ever thought about taking a math class?” he asked.
Jason eventually took the physicist up on his suggestion and discovered something both amazing and funny: Jason the party animal had become Jason the mathematical genius. His quantitative superpower was the ability to draw mathematical fractals, which quickly developed into a wide variety of math skills. Researchers from Finland studied Jason’s brain and discovered that his injury had given him an all-access pass to specific regions that previously wouldn’t let him get past pre-algebra. It was a mixed blessing, however. He also acquired obsessive-compulsive disorder and, for a few years, became a hermit.
Jason is a rare individual, diagnosed with Acquired Savant Syndrome, one of about forty such individuals described in the research literature. Math proficiency isn’t the only acquired talent detailed in the literature. Other subjects with this syndrome have shown sudden changes in painting ability, writing proficiency, and mechanical aptitude. We have no idea how this shift happens. Padgett believes we all have certain hidden cognitive superpowers, if only we could find access to them.
That may be a bit of a stretch, but the possibility is intriguing and just one of the many reasons I’m so gobsmacked by the brain that I haven’t had a boring day in years.
(By the way, don’t try Jason’s model at home. Most people with injuries as severe as his don’t wake up as Albert Einstein. Sometimes they don’t wake up at all.)
Energy hog
Understanding how researchers look at people like Jason requires some rudimentary understanding of how the brain works. Whether geniuses or not, we are all saddled with an unexpected tendency that borders on being annoying: Our brains are really into energy conservation. They function like a parent continuously nagging us to turn off the lights when we leave a room. The organ monitors how much energy your body is consuming, how much it’s expending, and what needs to happen to fill up the tank. This accounting preoccupies so much of the brain’s working life that some scientists are convinced energy conservation is its main function. Here’s how researcher Lisa Feldman Barrett puts it:
Every action you take (or don’t take) is an economic choice—your brain is guessing when to spend resources and when to save them.
The brain has profound reasons to be concerned about resources. It’s an energy hog, acting like a three-pound SUV. It accounts for only 2% of your body weight but sucks up 20% of the available fuel.
That might sound like a lot, but 20% is barely enough to keep it functioning. The brain has too much to do (something to keep in mind about your audience if you’re working on a presentation). It tries to solve this job overload by continuously scanning for shortcuts. For example, it reduces what it pays attention to, something best seen in visual processing. The eye initially presents to the brain the torrential equivalent of 10 billion bits of information per second, but then the brain’s energy editors go to work. By the time the information reaches the back of the brain (the areas where you’ll actually begin seeing something), that rate has been whittled to a paltry 10,000 bits per second.
The brain is so concerned with energy resources, it continuously livestreams to itself forecasts of how much energy will be needed for survival at any given moment. But it doesn’t just forecast gas-tank information. Its predictive ability bleeds into many other areas, from predicting people’s intentions to figuring out the best way to lead them—something that might be useful to know for those interested in becoming business managers or executives.
Sweet wattage
Exactly what type of energy resource does the brain consume? And what does it use that energy for?
The answer to the first question is familiar to anybody with a sweet tooth. The brain mostly consumes sugar (glucose), more than a quarter pound daily. The answer to the second question involves one word: electricity. The brain converts the sugar into electrical energy to perform most of its tasks, including communicating information from one brain region to another.
You can listen in on this electrical chatter by simply taping a few electrodes to your scalp. There’s quite a bit to listen to, even when you think your brain’s resting. It must keep many vital things going, after all: your heartbeat, for example, and your breathing—both of which require energy.
How much energy does it need? Stanford scientists estimate that a robot capable of accomplishing all the tasks a typical brain performs at rest would require 10 megawatts of power—the typical output of a small dam. When the brain executes those tasks, it uses only 12 watts to do it—about enough for a small light bulb. No wonder the organ is so preoccupied with its energy supply!
How did our brains become both fuel-hogs and fuel-efficient? The answer comes from understanding a bit about our evolutionary past, a history we’ll be revisiting in nearly every chapter of this book. We’ll discover we didn’t start out with a powerful 12-watt brain. We started out with a much smaller version, one virtually indistinguishable from the brains of ancient primates, whose descendants we can still see in the jungles of Central Africa today.
We’ll also discover that, for reasons lost to antiquity, we began diverging developmentally from our simian sisters and brothers about 6–9 million years ago. We discarded our habit of walking on all fours, selecting instead the far more perilous bipedal design, which required us to continually shift our body weight back and forth onto constantly moving feet—a potentially hazardous development: we became top-heavy. Our incredibly important and incredibly fragile brains, encased in skulls atop heads that account for a whopping 8% of our body weight, were now the body parts farthest away from the ground. Maintaining balance became a critical survival issue. Some researchers believe this shift produced a whole suite of demands on brain function, pushing us along the road to becoming earth’s cognitive valedictorian. Our brains got bigger, more complicated, and more in need of fuel.
There is much that is controversial about this origin story and its timeline, as with just about everything else in hominid paleontology. In fact, the only thing scientists can agree on is that, for a while, standing didn’t matter much. By the time we were 3 million years old, we’d only learned how to bang on things with partially chipped rocks. Yet things were about to change.
The first co-ops
A confluence of geological events more than 2 million years ago caused the earth’s climate to transform dramatically, resulting in an overall cooling, and much of the humid African jungle home of hominids began drying out. Our once-steady climate became remarkably unstable. The aridification of Africa, resulting in an expanding Sahara, had begun, a process that continues to this day.
This aridification was potentially catastrophic for us. We’d spent most of our history enjoying a climate that was wet, humid, and relatively easy to survive in. But now our situation had become more difficult. We could no longer just pluck food from trees, then wash everything down with a gulp from a nearby stream. We were forced to change from being creatures of forests to being creatures of grasslands. Our ancestors who survived the change from wet-wash to dry-cycle did so by becoming wandering hunter-gatherers, traipsing around a drier world—the African savannah. The requirements for that lifestyle changed nearly everything about us.
With the closing of our rainforest of a grocery store, we were compelled to walk longer and longer distances to find food and water. Such changes put new pressures on our developing, energy-guzzling brains. We really needed to (a) remember where we were, (b) decide where we were going, and (c) figure out how to get from where we were to where we were going. It’s no accident that the same brain region involved in memory formation (the hippocampus) is also involved in helping us navigate physical space.
Climatic change required us to learn how to navigate not only our physical surroundings but also our social relationships. The need to cooperate very quickly became a survival issue in the savannah. Why a survival issue? Compared with just about every other predator our size, we were (and are) a physically very weak species. Our canine teeth are so small and blunt that even chewing an overdone steak is challenging. Our fingernails (claws) don’t do very well even against plastic packaging.
These deficits presented us with an evolutionary choice: we could get physically bigger, following, say, the elephants’ size-upgrade plan. This would mean evolving to have a gigantic, dominating body column, which would have taken a gazillion years. Alternatively, we could get smarter, shifting a few neural networks around, boosting something we were already starting to get good at: socially relating to each other. Such a shift would not take as much time as trying to become the size of an elephant but would have the same effect. It would create for us the concept of ally, effectively doubling our biomass without actually doubling our biomass.
Given that the estimated height of an average Pliocene hominid was 160 centimeters (about 63 inches), you can guess which path we chose.
Mammoth-sized cooperation
Cooperation turned out to be a useful design. It helped us get otherwise impossible projects done, just as it does today. There are terrific examples of what groups of 63-inch-tall people can do if they learn to cooperate well. They became proficient at making death pits, for example.
A few miles north of Mexico City, a couple of these grim holes were found by a group of construction workers who were getting ready to dig out a landfill. The workers also found hundreds of mammoth bones, all concentrated into two pits, none showing signs of having died naturally. There were 14 individual mammoths altogether, along with ancient remains of camels and horses. These were hardly the only prehistoric killing pits ever discovered, but this was a particularly odd one: The animals had been slaughtered, butchered, flayed, and ritually arranged. One animal’s bones were arranged in what the researchers called a “symbolic formation.” The left shoulder of every mammoth was missing, leaving only right shoulders for investigators to ponder. All the mammoth heads had been turned upside down.
The researchers speculated that ancient hunters dug these exotic pits—which may have been filled with mud—then drove the animals into them, where they could be speared to death. At 6 feet deep and 82 feet long, the pits were certainly large enough for that. There’s also evidence for a larger chain of pits besides these two, suggesting a vast killing field on an industrial scale.
The point? An adult mammoth stands 11 feet at the shoulder and weighs around 8 tons. No way a 5-foot, 3-inch human could take down even one of these animals solo, and remember there were 14 carcasses. For ancient hunter-gatherers to create a franchise of mammoth slaughterhouses, they would have needed to coordinate their behavior. Indeed, cooperativity was evident in virtually every physical feature of the Mexico City find, from digging the holes to carving up the meals to creating the rituals.
Aspects of this story are currently controversial and, clearly, will be the subject of more research. But one thing not in dispute is evolution’s ability to transform a creature standing a mere 160 centimeters off the ground into the most colossal predator of the Stone Age.
Connections
Fast forward a few million years. We know today that the brain is one of the most powerful problem-solving tools evolution has ever fashioned. But how does it work? What are its quirks? Where is all the fuel going? And when we look into these incredible brains of ours, what do we find? Let’s consider some basic brain biology.
It took many centuries to discover that the brain did anything important. It just sits there, after all, unlike hearts (which beat) or lungs (which bellow). As a result, most of the early research was simply a boring cartographic exercise. Early neuroanatomists cracked open the skull and named what they saw.
Many of the brain’s structures were labeled after familiar objects from the non-brain world. For example, cortex means bark, probably because the brain’s thin “skin” reminded some neuroanatomist of tree parts. Thalamus means bed chamber, possibly because somebody thought it looked like one (it doesn’t, actually). Amygdala is the Greek word for almond, its shape reminiscent of the hard-shelled drupe. There’s even a small pair of rounded structures called the mammillary body—so labeled, rumor has it, because they reminded the neuro-cartographer of his wife’s breasts.
Early researchers believed that these regions were highly specialized, each with its own dedicated suite of jobs to do. They were partly right, but a modern understanding of how the brain works reveals a more nuanced and dynamic picture of brain structure and function. We now know the brain is not so much a collection of awkwardly labeled, unitasking regions, but rather hundreds of vast, dynamic, interconnected networks—the most intricate road map you’ll ever see. There are clusters of nerve cells, many still corresponding to the old labels, which you can think of as being like cities. These cities are interlinked with miles of neural “roads.” You have about 500,000 miles of these neural roads stuffed into your cranium, an object not much bigger than a cantaloupe. That’s more than three times the total mileage of the US National Highway System.
These networks aren’t made of hardened asphalt, of course. They’re made of squishy cells. Many different cell types exist in the brain, the most famous of which are called neurons. A typical neuron looks like a scared mop—an extended, furry head plopped onto the end of a long stick. You have about eighty-six billion of these oddly shaped cells stuffed into your head.
To form individual cables within the brain’s networks, these mops are situated end-to-end, separated by tiny spaces called synapses. A typical neuron has several thousand of these synapses. These neural roads link together in astoundingly intricate formations. A handful of brain looks like the root ball of a rhododendron.
Wiring
Mapping such root balls is challenging, not that a lot of smart people aren’t trying. Despite their efforts, which often burn through federal deficit-sized budgets, we don’t yet have a complete, authoritative structural map of the human brain’s circuitry. We call such maps structural connectomes. And the structures are not even the hardest part to chart. Even more difficult are their functions, or the way specific circuits work together to provide some service. We call these functional connectomes. One reason these maps have proven so tough to make is a certain annoying generosity the brain possesses. That is, it offers many neural “employment opportunities” for the circuits lying in its interior.
Some circuits have job descriptions that are quite stable. They’re hardwired into the brain and function similarly in anyone who’s human. Consider, for example, two specific areas on the left side of your brain named Broca’s and Wernicke’s areas. These lefties are responsible for human speech. Damage Broca’s area in any human, and that person will lose the ability to produce speech (Broca’s aphasia) but generally will still be capable of understanding spoken and written words. An injury in Wernicke’s area does just the opposite: the person loses the ability to understand spoken and written language (Wernicke’s aphasia) but, astonishingly, retains the ability to produce speech.
Such hardwired circuits are ridiculously specific, and not just for speech. Consider a man the research world calls RFS. Due to disease, RFS lost the ability to consciously comprehend numbers, and in a really weird way. If his brain detected a number, the image of the number got visually perturbed, flipping and then deteriorating into a messy visual blob. The deterioration never happened when his brain viewed letters, however. He could perceive, read, and write the alphabet just fine. His speech was great, too. The bottom line is that he suffered damage to a neural circuit that was specifically dedicated to processing numbers, separate from other visual inputs.
This is hyper-dedication on steroids. Yet such hardwiring is characteristic of only some brain circuits. Many are not laid down according to some universal human template. Some circuits display configuration patterns that are as specific to you as your fingerprint, which means every person’s brain is wired differently from every other person’s brain. So mapping the brain’s structures and matching each to a function is achingly slow. Teasing out which circuits are common to everybody versus which circuits are common to no one has frustrated neuroscientists for decades.
Plasticity
Mapping is made all the more challenging when one considers that the brain is capable of rewiring itself on the fly. This might sound strange but is actually quite common. In fact, it’s happening right now as you’re reading this sentence. Whenever you learn something, the brain rewires itself. Whenever you process a new piece of information, physical connections between neurons change, sometimes by growing new connections, sometimes by changing preexisting electrical relationships. We call such rewiring neural plasticity. Eric Kandel won a Nobel Prize in part for discovering that, most of the time, the brain is hardwired to avoid being hardwired.
Know what this means? What you choose to expose yourself to profoundly affects how your brain will function. This can profoundly influence your relationship with stress and influence how much creativity you allow into your life—things we’ll discuss in a bit.
The brain’s ability to reorganize itself can be taken to ridiculous dimensions. Consider the case of a six-year-old boy who suffered from a severe form of epilepsy. To save his life, the surgeons had to remove half his brain (hemispherectomy). In this case, they removed the left side, the one carrying both Broca’s and Wernicke’s speech centers. You’d think with such catastrophic removal of hyper-dedicated neural tissue, the boy would not be able to talk or understand speech for the rest of his life.
That’s exactly what did not happen. Within two years, the remaining right side of his brain had taken over many of the functions of the left side, including the ability to generate and understand human speech. The then-eight-year-old’s verbal capacity had somehow been “miraculously” restored!
Does that mean the brain is so plastic, it can detect deficits, turn itself into a temporary neural workshop, then physically reconstruct itself? In the case of this little boy, yes. And he’s not alone. There are many such published restoration accounts in the research literature, all of them baffling. Said Johns Hopkins neurologist John Freeman, who does this line of work:
The younger a person is when they undergo hemispherectomy, the less disability you have in talking. Where on the right side of the brain speech is transferred to and what it displaces is something nobody has really worked out.
These are only some of the challenges researchers face as they attempt to create a comprehensive connectome map, a goal we are possibly still years away from achieving. Yet we are hardly clueless about how the brain works. Researchers in my field have chosen to specialize, deploying the scientific equivalent of a divide-and-conquer strategy. Exactly how that works—and how things are changing—is what we’ll examine next.
Historically, we separated our investigative efforts into three distinct domains: The first domain was populated by researchers studying brains at the molecular level, researching how tiny snippets of DNA contribute to brain function. The second domain was populated by researchers studying brain function at the level of the cell— those tiny, frightened mops we discussed a few pages back. These cells could be examined at the individual-mop level or as groups of mops, meaning networks. The third domain was populated by people studying brain function at the behavioral level. This domain is the realm of experimental and social psychology. We’ll talk about their efforts in virtually every chapter of this book.
The partitions between these molecular, cellular, and behavioral domains have blurred as the years have passed, thankfully, with many researchers actively pursuing questions in multiple domains. We even have an umbrella term for this blending, one we’ll use throughout the book: cognitive neurosciences. This field of study is populated by scientists interested in linking biological processes to behavior. The messiest research by far is the behavioral work, which deserves some special mention.
Skepticism and the grump factor
My scientific career often involves consulting with business professionals on issues related to human behavior. We usually end up discussing how to look at brain research with a healthy dose of skepticism. I’m a nice guy, but as a molecular biologist interested in psychiatric disorders, I can be pretty grumpy about what research says (and doesn’t say) about the complexities of human conduct. There’s lots of high-fructose nonsense out there, especially in the realm of self-help advice. One client called this skepticism the MGF, the Medina Grump Factor. It simply means that the facts I share are evidence-based, supported by peer-reviewed studies, and often replicated many times. Just like most other scientists.
The same grumpy filter applies to the information in this book, but to keep it reader-friendly, I’ve chosen not to embed the references directly in the text. You can certainly see them for yourself: I encourage you to look up any of the studies mentioned in this book on the reference page at brainrules.net/references.
So what do I teach my business clients about applying brain research to their worlds? Realizing you can’t build a career based solely on raising your middle finger at popular mythologies, I tell them to remember the following four issues.
Issue #1: The field is still immature.
We’re still in the beginning stages of understanding even basic brain functions. We still don’t know, after all these years, how your brain knows how to sign your name or how it remembers to pick up the kids at 3:00 p.m. It will be a long time before brain science can tell us what makes a great leader and what makes a great parking attendant.
Issue #2: Many results are hard to reproduce.
Human behavior is messy, and sometimes researching how it all works is just as untidy. Consider this ugly finding that rocked the behavioral research world a few years back: we’re not always able to replicate certain important results in experimental psychology. University of Virginia researcher Brian Nosek formed something called the Reproducibility Project, an effort to reproduce specific famous behavioral findings. He and his colleagues discovered that only 50% of published experimental psychological results could be successfully (and independently) replicated.
This auditing is a good thing, of course, though it sent shock waves through the discipline. Many scientists painstakingly revisited old research findings to find the owies, then amended conclusions where warranted. This exercise was admittedly frustrating. We already knew precious little about brain function. Yet even some of the findings we thought stable—even canonical—had to be revisited.
Issue #3: The origins of behavior are complex.
You may have heard about an old debate concerning nature versus nurture. For years, there was a partisan conflict between these two camps, one side thinking behavioral origins were primarily genetic (nature), the other side thinking the origins were primarily nongenetic (nurture).
Researchers have signed a truce these days, surrendering to the fact that nearly every human behavior has both nature and nurture components. Scientists laboring away in their prized molecular, cellular, and behavioral fiefdoms did more insightful research once they had this realization, opening their borders and often participating in multidisciplinary projects. I, too, tell clients that almost every behavior they can think of has both nature and nurture components. The mystery lies in understanding the percentage contributions.
Issue #4: There’s an inherent problem with crystal balls.
One last concern involves an issue I’ve added only recently to my conversations with clients. The main text of this book was written in 2020–2021, spanning nearly the entire arc of the COVID-19 infection cycle. Watching the global business world stagger as if punched in the gut by this invisible adversary was heart-wrenching. Researchers from many fields are still doing damage assessment— and probably will be for years—examining the long-term effects of viral-mediated social and economic disruption. Because of the pandemic’s recency, rigorous and solid evidence about these effects is currently exceedingly rare. I’ve thus warned clients about relying too heavily on people looking at behavioral crystal balls to predict the future of work beyond COVID-19.
If past is prologue, most people will get it wrong anyway. Perhaps no greater example of the perils of prognostication exists than trying to understand the so-called work-life balance, an issue we’ll take up in chapter five. (Spoiler alert: some people think the virus has changed things forever, but I’m not so sure.) Sociologists will understand the impact eventually. So will you and I, but those details must be left to chapters printed in publications far younger than this book. If it helps, we’re not going to predict the future in these pages anyway. We’re going to reimagine it.
Taken together—even with a heaping helping of Medina Grump Factor—I absolutely believe cognitive neuroscience has much to say to the business world. The evidence-based suggestions in this book are well worth examining, even trying out. The practice will go a long way toward illustrating what business would look like if someone gave it a cognitive five-fingered glove.