What Is Bounded Rationality And Why It Matters

Bounded rationality is a concept attributed to Herbert Simon, an economist and political scientist interested in how we make decisions in the real world. He believed that rather than optimizing (the mainstream view of past decades), humans follow what he called satisficing.

A quick intro to bounded rationality

Many models, especially in economic theory and the social sciences, still rely on unbounded rationality to make predictions about human behavior. These models have proved largely ineffective because they do not reflect the real world.

In the last decade, cognitive theories that view humans as flawed beings who, because of their biological limitations, commit a series of errors (the so-called biases) have taken over.

I have supported this theory on this blog. However, what might seem like biases are, on closer inspection, forms of unconscious rationality (what we call gut feelings) that help us survive in the real (uncertain) world.

Bounded rationality is a framework that proves, I argue, far more robust than any other. That is why it is worth examining what it really means.

Bounded rationality – more than a theory, it is a warning to economists and social scientists – can be summarised as the study of how people make decisions in an uncertain world. As pointed out by Gerd Gigerenzer, there are at least three meanings attributed to bounded rationality:

  • optimization under constraints: there are constraints in the outside world that prevent us from getting all the available data
  • biases and errors: there are limits to our memory and cognition that constrain our decision-making ability
  • bounded rationality proper: how do people make decisions when optimization is out of reach?

The first two don't admit the existence of an uncertain world. Why? When you study decision-making under risk, the assumption is that we live in a certain world where, given all the available data, we can compute that risk.

This is what economists like to call optimization under constraints. It is true only in a small world, where everything can be calculated.

The second assumes that due to our limited cognitive abilities we deviate from solving problems accurately, thus we fall into biases and cognitive errors.

While the first emphasizes rationality, the second focuses on irrationality.

The third concept, which is what bounded rationality is really about, was elaborated by Herbert Simon.

He asked the question, “how do people make decisions when optimization is out of reach?” In short, how do people make decisions in an uncertain world?

There are a few things to take into account when thinking about bounded rationality:

We don’t live in a small world

In a small world, given enough data, we can compute the consequence of many actions and behaviors.

In the real world, risk cannot be known or modeled

In many disciplines, especially economics and finance at the academic level, the assessment of risk is central.

However, what we call risk implies something that can be computed. In fact, the financial toolbox contains many measures of risk.

However, those are often worthless, since they start from the assumption that given enough data you can put a precise number on the risk you’re undertaking.

However, that is not the case. In the real world, there are hidden variables that can never be taken into account, even if you have zillions of data points.

Optimization is not bounded rationality

Many confuse optimization for bounded rationality. They are opposite concepts. Optimization starts from the assumption that we live in a small world where you can compute risk.

Bounded rationality starts from the assumption that we live in an uncertain world where we can't fully assess risk. That is why we rely on a toolset of heuristics that, in the real world, work more accurately than complicated models.

Biases are not errors but heuristics that work in most cases to make us avoid screw-ups

In short, heuristics are not shortcuts that trade accuracy for speed. They are quick, effective, and in most cases more accurate than other forms of decision-making in the real world.

Satisficing: Look at the one good reason

In an uncertain world, it often pays to ignore most of the information and look for the one good reason that makes the decision work best.
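Satisficing can be sketched in a few lines of code: instead of scoring every option to find the optimum, accept the first option that clears an aspiration level. The candidates, scores, and threshold below are hypothetical, purely for illustration.

```python
# Satisficing, as a minimal sketch: accept the first option that meets
# an aspiration level, rather than evaluating every option to optimize.
# All option values and the threshold are hypothetical.

def satisfice(options, is_good_enough):
    """Return the first option meeting the aspiration level, else None."""
    for option in options:
        if is_good_enough(option):
            return option  # stop searching: no need to examine the rest
    return None

# Example: hiring, where we accept the first candidate scoring at least 7/10.
candidates = [("Ann", 5), ("Bo", 8), ("Cy", 9)]
choice = satisfice(candidates, lambda c: c[1] >= 7)
print(choice)  # ('Bo', 8): a better candidate later is never examined
```

Note the trade-off the sketch makes visible: Cy scores higher than Bo, but the search stops at the first good-enough option, which is exactly the point of satisficing in a world where search itself is costly.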

Survival is rationality in the real world

Put in this form, rationality is not a matter of beautiful mathematical models; it is about survival. What survives might then be called rational.

Kahneman’s error

The whole behavioral school of thought today is mostly based on Kahneman’s and Tversky’s work on heuristics and biases. 

Kahneman and Tversky are two pillars of modern behavioral economics, and indeed those that most of all have influenced policies in the field. 

There is a core issue underlying the Kahneman and Tversky definition of bias and heuristics. 

In Herbert Simon's world, heuristics are seen as very effective shortcuts (actually working much better than more complex models of the real world) that help humans deal successfully with the context they are in.

In Kahneman’s view, heuristics mostly lead to biases or errors of understanding of the real world. 

This negative view of human psychology led Kahneman to catalog a whole list of biases or errors that humans supposedly make. Still, as it turned out, rather than the behaviors being errors, it was these academics' definition of the real world that was wrong.

In other words, most experiments produced a wide list of psychological errors, almost as if a human were a collection of misconceptions about the real world. It turned out those experiments were manufacturing a fake context, one that does not exist in the real world.

For instance, if you take a bias like loss aversion, used as one of the many examples of human biases, you realize that it has been tested as if humans had an unlimited ability to take losses.

Instead, more contextual models of the world, like ergodicity, show us that humans are highly contextual creatures (this is what Herbert Simon meant by bounded rationality), acting on the fact that we do not have unlimited lives.

This simple fact was missed by most behavioral psychology research of the last two decades, and it led to a whole series of mistakes.

Source: Nassim Nicholas Taleb at The Logic of Risk Taking

As you can see above, we live in a world where each of us is constrained by time probability: if you take too many risks, you go broke, and that affects your whole life.

Instead, behavioral psychologists, when testing some human biases, tested them as if each of us had ensemble probability (in short, no time dependence), as if we lived in a simulated world with many lives.

That turned into a major crisis in behavioral economics, in which foundations have been shaken by the fact that most of these experiments could not be replicated. 

And the whole school of thought treating heuristics as the primary avenue to biases, along with the nudging school (the idea that you can influence people's behavior by leveraging those biases), has been shaken to its foundations.

Bounded rationality and Artificial Intelligence

We’re making the same mistake now, with the development of new technologies, like artificial intelligence. 

Also, here, many academics and practitioners in the field act as if a human is just a set of tasks, not considering that there are many more facets of being a human that science doesn’t grasp yet (or perhaps might never grasp).

This leads to a dystopian view of the world, in which AI might take over from humans any time soon and in which Artificial General Intelligence is possible.

Instead, it’s critical to recognize the huge limitations that AI has, as of now, the fact that it’s not conscious at all. And AI works in a completely different way than humans. 

Whereas humans can adapt to many contexts that are ambiguous and noisy, with extremely conflicting information about what the problem at hand is, AI thrives in a narrow, highly controlled context where we give it a clear definition of the problem.

If we realize that, we can move to a human-in-the-loop AI approach, where humans can focus on designing the proper context for AI to thrive. 

But it’s the human that defines what problems are worth solving, what context it makes sense to have the AI operate within, and sets the boundaries and guardrails for that. 

That’s a critical point to take into account for the future development of AI, as otherwise, the risk, is putting too much confidence into machines which will leave us awry. 


Books to read to enhance your bounded rationality

With technological advancements, there is more and more information available at a cheaper cost (nowadays information is practically free). Technology also gives us the impression that we live in a world we can control.

All it takes is enough information and we’ll be able to be successful in business. That is why you need to have the latest news, the newest gadget, and follow the latest trend.

This kind of approach can lead you astray! As you get access to more and more information, the noise also increases exponentially.

Thus, rather than getting better at making decisions, you become much worse. With an even worse consequence: you're not aware of it. The fact that you have a lot of data makes you believe that you know better.

Therefore, I believe there are three aspects to take into account in the modern, seemingly fast-changing world:

  • have at your disposal a simple yet effective toolset for decision-making in the real world
  • develop the ability to ignore information that isn’t needed
  • know when to trust your gut feelings rather than relying on complex models

In this respect, three books can help you with that. Two are from Gerd Gigerenzer, a German psychologist who has studied bounded rationality and heuristics in decision-making. The third is from Nassim Nicholas Taleb, author of The Black Swan and the Incerto book series.

Risk Savvy: How to Make Good Decisions

In the past century, the leap forward for humanity was to teach most of us how to read and write. That was enough in a world where information was still scarce.

Nowadays with the advent of social media and the increasing speed of the internet, there is another tool that anyone has to master to survive: statistical thinking.

Risk Savvy helps you build the toolbox to become a better statistical thinker, and to ask better questions that let you navigate through the noise of the modern world.


Gut Feelings: The Intelligence of the Unconscious

This book is an excellent introduction to the concept of bounded rationality and heuristics. It is also a fresh perspective on decision-making. Where current prevailing cognitive psychological theories focus on our biases and cognitive errors, this book focuses on why instead those heuristics make a lot of sense.

In fact, gut feelings are viewed quite skeptically in the world of academia and corporations, where big words are looked at with more respect. This book shows you why gut feelings matter in business as in life.


Skin in the Game: Hidden Asymmetries in Daily Life

Skin in the Game is the bible for understanding how to get along in a world that is full of hidden asymmetries.


Bounded Rationality Examples in Business

Jeff Bezos is one of the business people who, throughout his career as an entrepreneur building Amazon from scratch, has leveraged various mental frameworks very close to the concept of bounded rationality.

Indeed, he understood the difference between linear and non-linear thinking and how intuition, driven by bounded rationality, could be used to create breakthroughs for Amazon.

Let’s explore some of these examples.

Regret Minimization Framework

A regret minimization framework is a business heuristic that enables you to make a decision by projecting yourself into the future, at an old age, and visualizing whether the regret of having missed an opportunity would haunt you, versus having taken the opportunity and failed. In short, if taking action and failing feels much better, in the long run, than regretting it, that is when you're ready to go!

As the story goes, when Jeff Bezos had to decide whether to leave his well-paid job and consolidated position on Wall Street to start a venture on the nascent Internet, he didn’t use spreadsheets or complicated mental equations.

Quite the opposite, he cut through the noise by using a mental model which is called regret minimization.

In short, he imagined himself as an old man at the end of his career, looking back at his life in the grand scheme of things.

And with that visualization, he imagined he would have regretted not having tried to start what would later become Amazon.

The regret minimization framework is extremely powerful because it is a via negativa framework. In other words, it tries to avoid having major regrets by taking a long-term vision.

Indeed, balancing long-term with short-term decision-making is probably one of the most complex human endeavors.

Day One Mindset

In a letter to shareholders in 2016, Jeff Bezos addressed a topic he had been thinking about quite profoundly in the last decades as he led Amazon: Day 1. As Jeff Bezos put it, “Day 2 is stasis. Followed by irrelevance. Followed by an excruciating, painful decline. Followed by death. And that is why it is always Day 1.”

Another mental framework, in the bounded rationality domain, was the use of the “day one” mindset within Amazon.

This is another bounded rationality approach because it helps cut through the decision-making process in very uncertain times.

When Amazon finds itself at an important turn of events, this mindset determines how the company will look in the long term.

Day One helped the company stay on track toward its long-term vision. The Day One mindset is about keeping a startup mentality as the company grows.

Customer Obsession

In the Amazon Shareholders' Letter for 2018, Jeff Bezos analyzed the Amazon business model and also focused on a few key lessons that Amazon as a company has learned over the years. These lessons are fundamental for any entrepreneur, of a small or large organization, to understand the pitfalls to avoid in running a successful company!

Customer obsession has been the driving principle of Amazon since the onset. In a tech company like Amazon, which leveraged data to improve its operations, customer obsession helped the company keep its feet on the ground, always returning to a bottom-up innovation approach, where you have to focus on customers to build a successful business.

This bounded rationality mental model critically helped Amazon maintain its focus while scaling up.

Working Backwards

The Amazon Working Backwards Method is a product development methodology that advocates building a product based on customer needs. The Amazon Working Backwards Method gained traction after notable Amazon employee Ian McAllister shared the company’s product development approach on Quora. McAllister noted that the method seeks “to work backwards from the customer, rather than starting with an idea for a product and trying to bolt customers onto it.”

The Working Backwards Method has been critical at Amazon as a product development methodology where you focus on the customer needs.

In a tech-driven world, it’s very easy to fall into the “innovator’s bias” or the trap of considering the technical solution as the priority over solving a concrete business need.

A backward working framework does exactly that. It helps simplify the development process of a product with the customer in mind.

The Flywheel

The Amazon Flywheel or Amazon Virtuous Cycle is a strategy that leverages customer experience to drive traffic to the platform and to third-party sellers. That improves the selection of goods, and Amazon further improves its cost structure so it can decrease prices, which spins the flywheel further.

In a digital world driven by network effects, moving from sales funnels to flywheels has been another critical shift in mindset.

Amazon has led the way there.

The Amazon Flywheel was a mental model where Amazon could build momentum into its business by enabling the compounding of growth over time as it kept building demand for its products.

This is one of the most powerful mental models of the digital business world.

Connected Thinking Frameworks

Convergent vs. Divergent Thinking

Convergent thinking occurs when the solution to a problem can be found by applying established rules and logical reasoning. Divergent thinking, by contrast, is an unstructured problem-solving method in which participants are encouraged to develop many innovative ideas or solutions to a given problem. While convergent thinking might work for larger, mature organizations, divergent thinking is more suited to startups and innovative companies.

Critical Thinking

Critical thinking involves analyzing observations, facts, evidence, and arguments to form a judgment about what someone reads, hears, says, or writes.

Systems Thinking

Systems thinking is a holistic means of investigating the factors and interactions that could contribute to a potential outcome. It is about thinking non-linearly, and understanding the second-order consequences of actions and input into the system.

Vertical Thinking

Vertical thinking is a problem-solving approach that favors a selective, analytical, structured, and sequential mindset. The focus of vertical thinking is to arrive at a reasoned, defined solution.

Maslow’s Hammer

Maslow’s Hammer, otherwise known as the law of the instrument or the Einstellung effect, is a cognitive bias causing an over-reliance on a familiar tool. This can be expressed as the tendency to overuse a known tool (perhaps a hammer) to solve issues that might require a different tool. This problem is persistent in the business world where perhaps known tools or frameworks might be used in the wrong context (like business plans used as planning tools instead of only investors’ pitches).

Peter Principle

The Peter Principle was first described by Canadian educator Laurence J. Peter in his 1969 book The Peter Principle. It states that people are continually promoted within an organization until they reach their level of incompetence.

Straw Man Fallacy

The straw man fallacy describes an argument that misrepresents an opponent's stance to make rebuttal more convenient. It is a type of informal logical fallacy: a flaw in the content of an argument, rather than its formal structure, that renders it unsound.

Streisand Effect

The Streisand Effect is a paradoxical phenomenon where the act of suppressing information to reduce visibility causes it to become more visible. In 2003, Streisand attempted to suppress aerial photographs of her Californian home by suing photographer Kenneth Adelman for an invasion of privacy. Adelman, who Streisand assumed was paparazzi, was instead taking photographs to document and study coastal erosion. In her quest for more privacy, Streisand’s efforts had the opposite effect.


As highlighted by German psychologist Gerd Gigerenzer in the paper “Heuristic Decision Making,” the term heuristic is of Greek origin, meaning “serving to find out or discover.” More precisely, a heuristic is a fast and accurate way to make decisions in the real world, which is driven by uncertainty.

Recognition Heuristic

The recognition heuristic is a psychological model of judgment and decision making. It is part of a suite of simple and economical heuristics proposed by psychologists Daniel Goldstein and Gerd Gigerenzer. The recognition heuristic argues that inferences are made about an object based on whether it is recognized or not.
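The rule is simple enough to state in code. The sketch below uses the classic city-size task from Goldstein and Gigerenzer's work; the recognition set and city names are hypothetical stand-ins.

```python
# Recognition heuristic, as a minimal sketch: if one object is recognized
# and the other is not, infer that the recognized one scores higher on the
# criterion (here, city population). The recognition set is hypothetical.

recognized = {"Munich", "Berlin", "Hamburg"}

def which_is_larger(city_a: str, city_b: str):
    """Infer which city is larger from recognition alone.
    Returns None when recognition does not discriminate."""
    a_known = city_a in recognized
    b_known = city_b in recognized
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return None  # both or neither recognized: the heuristic is silent

print(which_is_larger("Munich", "Herne"))    # Munich
print(which_is_larger("Berlin", "Hamburg"))  # None: need another cue
```

When both or neither object is recognized, the heuristic declines to answer, which is where a follow-up cue-based strategy such as take-the-best can step in.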

Representativeness Heuristic

The representativeness heuristic was first described by psychologists Daniel Kahneman and Amos Tversky. It judges the probability of an event according to the degree to which that event resembles a broader class, for instance, judging a profession likely because a person's description matches the stereotype we hold for it.

Take-The-Best Heuristic

The take-the-best heuristic is a decision-making shortcut that helps an individual choose between several alternatives. The take-the-best (TTB) heuristic decides between two or more alternatives based on a single good attribute, otherwise known as a cue. In the process, less desirable attributes are ignored.
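The take-the-best process described above can be sketched directly: walk through the cues in order of validity and decide on the first one that discriminates. The cue names, their ordering, and the cue values below are hypothetical.

```python
# Take-the-best, as a minimal sketch: compare two options cue by cue,
# in descending order of cue validity, and decide on the first cue that
# discriminates. All cues and values (1 = positive, 0 = negative) are
# hypothetical.

CUES = ["has_airport", "is_capital", "has_university"]  # most valid first

city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 0}

def take_the_best(a: dict, b: dict, cues: list):
    """Return 'a' or 'b' based on the first discriminating cue, else None."""
    for cue in cues:
        if a[cue] != b[cue]:
            # Decide here and ignore all remaining cues.
            return "a" if a[cue] > b[cue] else "b"
    return None  # no cue discriminates

print(take_the_best(city_a, city_b, CUES))  # 'b': is_capital decides
```

Note that `has_university`, where city_a actually beats city_b, is never consulted: once `is_capital` discriminates, the remaining (less valid) cues are ignored, which is precisely what makes the heuristic fast and frugal.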


The concept of cognitive biases was introduced and popularized by the work of Amos Tversky and Daniel Kahneman in 1972. Biases are seen as systematic errors and flaws that make humans deviate from the standards of rationality, thus making us inept at making good decisions under uncertainty.

Bundling Bias

The bundling bias is a cognitive bias in e-commerce where a consumer tends not to use all of the products bought as a group, or bundle. Bundling occurs when individual products or services are sold together as a bundle. Common examples are tickets and experiences. The bundling bias dictates that consumers are less likely to use each item in the bundle. This means that the value of the bundle and indeed the value of each item in the bundle is decreased.

Barnum Effect

The Barnum Effect is a cognitive bias where individuals believe that generic information – which applies to most people – is specifically tailored for themselves.

First-Principles Thinking

First-principles thinking – sometimes called reasoning from first principles – is used to reverse-engineer complex problems and encourage creativity. It involves breaking down problems into basic elements and reassembling them from the ground up. Elon Musk is among the strongest proponents of this way of thinking.

Ladder Of Inference

The ladder of inference is a conscious or subconscious thinking process where an individual moves from a fact to a decision or action. The ladder of inference was created by academic Chris Argyris to illustrate how people form and then use mental models to make decisions.

Six Thinking Hats Model

The Six Thinking Hats model was created by psychologist Edward de Bono in 1986, who noted that personality type was a key driver of how people approached problem-solving. For example, optimists view situations differently from pessimists. Analytical individuals may generate ideas that a more emotional person would not, and vice versa.

Second-Order Thinking

Second-order thinking is a means of assessing the implications of our decisions by considering future consequences. It is a mental model that considers all future possibilities, encouraging individuals to think outside of the box so they can prepare for every eventuality. It also discourages the tendency to default to the most obvious choice.

Lateral Thinking

Lateral thinking is a business strategy that involves approaching a problem from a different direction. The strategy attempts to remove traditionally formulaic and routine approaches to problem-solving by advocating creative thinking, therefore finding unconventional ways to solve a known problem. This sort of non-linear approach to problem-solving can, at times, create a big impact.

Bounded Rationality

Bounded rationality is a concept attributed to Herbert Simon, an economist and political scientist interested in decision-making and how we make decisions in the real world. In fact, he believed that rather than optimizing (which was the mainstream view in the past decades) humans follow what he called satisficing.

Dunning-Kruger Effect

The Dunning-Kruger effect describes a cognitive bias where people with low ability in a task overestimate their ability to perform that task well. Consumers or businesses that do not possess the requisite knowledge make bad decisions. What’s more, knowledge gaps prevent the person or business from seeing their mistakes.

Occam’s Razor

Occam’s Razor states that one should not increase (beyond reason) the number of entities required to explain anything. All things being equal, the simplest solution is often the best one. The principle is attributed to 14th-century English theologian William of Ockham.

Mandela Effect

The Mandela effect is a phenomenon where a large group of people remembers an event differently from how it occurred. The Mandela effect was first described in relation to Fiona Broome, who believed that former South African President Nelson Mandela died in prison during the 1980s. While Mandela was released from prison in 1990 and died 23 years later, Broome remembered news coverage of his death in prison and even a speech from his widow. Of course, neither event occurred in reality. But Broome was later to discover that she was not the only one with the same recollection of events.

Crowding-Out Effect

The crowding-out effect occurs when public sector spending reduces spending in the private sector.

Bandwagon Effect

The bandwagon effect tells us that the more an idea or belief has been adopted by people within a group, the more likely individual adoption of that idea within the same group becomes. This is the psychological effect that leads to herd mentality; in marketing, it can be associated with social proof.
