by Donella H. Meadows
Thinking in Systems is a fantastic exploration of systems thinking and decision-making, and remains the gold standard introduction to the field. The insights in this book will blow your mind!
It’s probably been made clear to you over the years that your body is a system. Your heart pumps blood through your veins, your kidneys remove waste from your blood, your lungs let you breathe, and so on. In other words, the organs that make up your body are held together by the relationships they have with each other, all of them serving the purpose of keeping you going.

But what about a soccer team? A company? Are they systems too? Of course they are! Systems are everywhere – though some are more obvious than others.

This book summary will take you on a journey into the world of these systems. It will explain what systems are, where to find them, and how they work and sustain themselves. In this summary of Thinking in Systems by Donella H. Meadows, you’ll learn
- why systems thrive on feedback;
- how a system can be corrupt; and
- why it would be wrong to expect that putting 20 pounds of fertilizer on your field will yield four bushels of wheat just because ten pounds of fertilizer yielded two bushels.
Thinking in Systems Key Idea #1: What is a System?
A system is defined as a group of connected elements that share a purpose.
Have you ever stopped to really take in the different systems that surround you? If you did, you’d quickly notice that they’re just about everywhere – from your body, to your favorite football team, to the company you work for, to the city you live in. That’s because a system is simply a group of elements connected by their relationships to each other and united by a shared purpose.

These elements can be physical, but they can also be intangible. For example, you can physically see the elements of the system that keeps a tree alive – its roots, branches, and leaves – but something like academic prowess at a university is much more abstract. Whether or not its elements are physical, every system is glued together by relationships. In the tree, the relationships between the elements are the metabolic processes and chemical reactions that keep it alive. In a university, they might be things like standards for admission, examinations, and grades.

And what is the purpose of a system? A system’s purpose is defined by its behavior rather than its stated goals. For example, a government might claim that environmental protection is one of its goals, but actually spend its money on other things. In that case, environmental protection is not the government’s purpose, since it isn’t reflected in its actions.

The most important thing to remember is that a system’s relationships and purpose are what define it, even if its elements change. A football team might gain an entirely new roster of players, but the relationships between the positions and the purpose of winning games remain the same.

Finally, how a system behaves is broken down into stocks and flows, which change over time – and each works in a particular way.
Stocks are the elements of a system that can be counted or measured at any point in time – things like water in a bathtub, books in a store, or money in a bank. Flows, on the other hand, are the changes in a stock over time: inflows add to the stock, and outflows subtract from it. These include things like births and deaths, or purchases and sales.
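To make the idea concrete, here’s a minimal sketch of a stock with one inflow and one outflow, using the bathtub example from above. The function name and the specific rates are illustrative choices, not figures from the book:

```python
def simulate_stock(initial, inflow, outflow, steps):
    """Track a stock over time: each step, the stock changes by inflow - outflow."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        # A physical stock like water in a tub can't go below zero
        stock = max(0, stock + inflow - outflow)
        history.append(stock)
    return history

# 50 liters in the tub, the faucet adds 5 L/min, the drain removes 3 L/min:
# the stock rises by 2 liters every minute.
print(simulate_stock(50, 5, 3, 10)[-1])  # → 70
```

The key point the sketch captures: you can only ever change a stock through its flows, never directly.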
Thinking in Systems Key Idea #2: Feedback!
Every sustainable system is reliant on feedback to stabilize it.
Now that you understand what the stocks and flows of a system are, it’s important to know why they’re constantly changing. When changes in a stock affect the flows of that same stock, the system contains a feedback loop – and there are different types.

If a loop works to close the gap between the actual and desired levels of a stock, it’s known as a balancing feedback. This kind of loop is a chain of rules or physical laws that monitor the level of a stock and can change it. Think of a thermostat balancing a room’s temperature. Here, the room temperature is the stock, heat from a radiator is the inflow, and heat escaping through the windows is the outflow. When the temperature falls, the thermostat registers the gap between the actual temperature and the desired one and tells the heater to turn on.

Another type is reinforcing feedback, which perpetually generates more – or less – of what already exists. For example, the more money you have in savings, the more interest you accrue, and the more interest you accrue, the more money you have in savings. A reinforcing loop can produce constant, even exponential, growth – but it can just as easily drive constant decline.

Feedback is crucial because one of the most common and important system structures consists of a stock paired with one balancing and one reinforcing feedback loop. A positive birthrate, for instance, is a reinforcing feedback for a population: more people means more babies, and more babies means more future adults who will eventually have children of their own – exponential growth.
However, every population also comes with a balancing feedback: death. As a population becomes unsustainably large, this balancing loop steps up, and people die of things like disease and insufficient resources.
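The population structure just described – one stock, a reinforcing loop (births) and a balancing loop (deaths) – can be sketched in a few lines. The rates below are invented for illustration; they are not figures from the book:

```python
def population_step(pop, birth_rate, death_rate):
    """One year of a population stock. Both flows depend on the stock
    itself, which is exactly what makes them feedback loops:
    births reinforce the stock, deaths balance it."""
    births = birth_rate * pop   # reinforcing: more people -> more births
    deaths = death_rate * pop   # balancing: more people -> more deaths
    return pop + births - deaths

pop = 1000.0
for _ in range(5):
    pop = population_step(pop, birth_rate=0.03, death_rate=0.01)

# While births outpace deaths, the stock compounds at ~2% per year.
print(round(pop))  # → 1104
```

Raise the death rate to match the birth rate and the two loops cancel out, leaving the stock flat – the balancing loop “stepping up,” as described above.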
Thinking in Systems Key Idea #3: The characteristics of a well-functioning system
Well-functioning systems are resilient, self-organizing, and hierarchical.
Has it ever crossed your mind why some systems – well-running machines, or the world’s natural ecosystems – function so seamlessly? This is partly due to resilience, a major factor in a system’s ability to adapt to changing conditions. Resilience is a system’s elasticity: how well it recovers after a shock. It comes from the system’s structure paired with its feedback loops, each working in different ways and directions, and even on different time scales. Take the human body: it can protect itself from invaders, tolerate extreme temperatures, adapt to changes in food supply, reallocate blood, and even repair broken bones.

The importance of resilience is often underestimated, though. People sacrifice it for goals like productivity or comfort, sometimes to the point where the whole system collapses. An industry might exploit natural resources for profit without noticing that, as a result, species are dying off, its chemicals are altering the soil, and airborne toxins are becoming more concentrated – bringing an environmental catastrophe ever closer.

Resilience isn’t the only defense systems have; some can also self-organize. This means they’re able to learn, allowing them to diversify, evolve, and build on their own structure. A single fertilized ovum, for example, can grow into a fully formed human adult.

As systems grow and build new, increasingly complex structures, they also naturally organize themselves into hierarchies. In fact, everything on earth is divided into subsystems, which form larger subsystems, and so on. A single cell in your liver is a subsystem of the organ, which is a subsystem of you; you’re a subsystem of a family, which is a subsystem of a nation, and so on. But why hierarchies?
Hierarchies reduce the amount of information any one part of the system needs to handle. For example, since the job of a liver cell is to decompose toxins, lung cells don’t need to do it.
Thinking in Systems Key Idea #4: First Seek to Understand
Being able to understand some common mistakes will help you more productively investigate systems.
It’s easy to see the systems we know well as transparent, but it’s also easy to misread them if we focus on their outputs rather than their behavior – the way they function over time. This is a problem because a system’s outputs are usually its most visible part, so we tend to oversimplify systems into nothing more than a series of events. It’s much easier to pay attention to whether a team wins or loses a game, or to the percentage of the Amazon affected by deforestation. Imagine you’re watching a football game between two evenly matched teams, one of which is playing exceptionally well that day. When that team wins, you’ll be less surprised than someone who sees nothing but the final score – the output.

This isn’t the only common mistake. We also tend to expect relationships within a system to be linear, despite the non-linear nature of the world. If adding ten pounds of fertilizer to a field one year produced two bushels of wheat, you might expect that adding 20 pounds would produce four. But the real world doesn’t work that way: with 20 pounds, your yield might not rise at all, because the excess nutrients may damage the soil and reduce its fertility.

Finally, people often forget that systems are rarely separate from other systems. Our minds can only process so much, so to simplify matters we mentally isolate each system. It’s easy to forget that the boundaries between systems are artificial; we become so accustomed to the separation that it starts to feel natural. As a result, we often think in terms that are either too broad or too narrow.
A good example: imagine you’re brainstorming ways to reduce CO2 emissions. Creating a detailed model of the planet’s entire climate would make the problem harder to grasp, but focusing solely on the auto industry while ignoring other sources would prove equally fruitless.
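The fertilizer example shows why linear extrapolation fails. A simple saturating curve – my illustration, not a formula from the book – captures the diminishing returns:

```python
def wheat_yield(fertilizer_lbs, max_bushels=3.0, half_sat=10.0):
    """Illustrative diminishing-returns curve: yield rises quickly at
    first, then flattens as the soil approaches saturation.
    (A Michaelis-Menten-style form, chosen purely for illustration.)"""
    return max_bushels * fertilizer_lbs / (half_sat + fertilizer_lbs)

print(wheat_yield(10))  # → 1.5 bushels
print(wheat_yield(20))  # → 2.0 bushels -- double the fertilizer, not double the wheat
```

A linear mental model predicts 3.0 bushels at 20 pounds; the saturating curve never even reaches that, no matter how much fertilizer you add.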
Thinking in Systems Key Idea #5: Disproportionate power
When one part of a system gains disproportionate power, or feedback is missing, the system becomes corrupt and shared resources get overused.
All systems share common features, but some produce extremely unnatural and even problematic behavior. This often happens when the subsystems within a system each pursue their own, separate goals, causing what’s known as policy resistance. Here’s how it works: if one factor within the system gets the upper hand and uses it to shift the system’s direction, all the others have to work twice as hard to pull it back in line. The result is a system that appears stuck, reproducing the same problems over and over again. For instance, drug traffickers and addicts both want the supply of drugs to be high, while law enforcement wants the opposite. When the law prevents drugs from crossing a border, street prices rise; addicts then commit more crime to pay the higher prices, and suppliers invest in planes and boats that can evade the authorities. Correcting a system like this requires letting go – redirecting the energy and resources spent on resistance so that the subsystems can come together and find an arrangement that works for everyone.

There can be other problems, too. A system that relies on a shared resource and uses it unsustainably will inevitably collapse. If a pasture is shared by several shepherds who keep adding animals to their herds, the land will eventually stop supporting them: the grass won’t have time to regrow, its roots will lose their grip on the soil, and the rain will wash it away. Why does this happen? Because feedback between the resource and the parties using it either doesn’t exist or is delayed. To avoid this collapse, the system’s users need to be educated, so that they understand how their actions affect the resource they depend on – and how they can restore it by regulating their use.
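The shepherds’ dilemma is easy to simulate. In this sketch (all numbers invented for illustration) the grass regrows logistically, but the herd keeps growing with no feedback from the pasture’s condition – so the stock eventually collapses:

```python
def graze(grass, sheep, regrowth=0.1, eat_per_sheep=2.0, capacity=1000.0):
    """One season: logistic regrowth of the grass stock, minus grazing."""
    regrown = regrowth * grass * (1 - grass / capacity)
    eaten = eat_per_sheep * sheep
    return max(0.0, grass + regrown - eaten)

grass, sheep = 500.0, 5
for season in range(40):
    grass = graze(grass, sheep)
    sheep += 1  # every shepherd keeps adding animals -- no balancing feedback

print(grass)  # → 0.0: grazing outpaced regrowth and the pasture collapsed
```

Hold the herd fixed at a modest size instead, and the same pasture sustains itself indefinitely – the missing feedback from resource to users is exactly what the commons problem is about.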
Thinking in Systems Key Idea #6: Observe and Adjust
You can physically adjust a system to improve its efficiency.
Throughout all of this, one question has probably been on your mind: why don’t we just make systems produce more good and less bad? Well, you’re in luck – by changing buffers, system design, and delays, we can produce more effective systems. How?

System buffers – like time, inventory, and storage space – must be the right size to function properly. Increasing a buffer’s capacity can stabilize a system, but increasing it too much makes the system inflexible. This is why businesses keep minimal inventory: the occasional product shortage is cheaper than paying to store goods the company might never sell.

System design is another important factor. A well-designed system allows for maximum efficiency, is less prone to malfunctions, and better understands its own limits and bottlenecks. In the past, for example, there was only one road between east and west Hungary, which ran through the capital city. The congestion it produced couldn’t be fixed by simply adding traffic lights; the system required a total redesign.

Finally, delays – the time it takes a system or its actors to notice and respond to change – are another point of leverage. Every system has delays, but when they grow too long, the system struggles to respond to new, short-term changes. Delays should therefore be proportional to a system’s rate of change. Take global economics: the world keeps pushing for more rapid economic growth, but the physical reality of factories, technologies, prices, and ideas doesn’t change at the same rate. In other words, there’s a delay. Slowing growth down, and giving technology and prices the time they need to catch up, would make the system more efficient.
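The cost of delays can be seen in a toy inventory model (every number here is invented for illustration): orders arrive several steps after they’re placed, and a naive ordering rule keeps reacting to the current gap, so the stock overshoots its target. Shorten the delay and the overshoot disappears:

```python
from collections import deque

def run_inventory(target=100.0, delay=4, demand=10.0, steps=40):
    """Each step: a delayed delivery arrives, customers buy `demand`,
    and we order enough to replace demand plus close the current gap."""
    stock = 50.0
    pipeline = deque([demand] * delay)  # orders already in transit
    history = []
    for _ in range(steps):
        stock += pipeline.popleft() - demand       # delivery in, sales out
        order = max(0.0, demand + target - stock)  # naive rule: ignores the pipeline
        pipeline.append(order)
        history.append(stock)
    return history

# With a 4-step delay the stock overshoots far past the target of 100;
# with a 1-step delay it settles right at the target.
print(max(run_inventory(delay=4)), max(run_inventory(delay=1)))  # → 250.0 100.0
```

The overshoot happens because the rule keeps re-ordering against a gap that earlier, still-in-transit orders have already covered – the response lags behind the system’s rate of change, just as the section describes.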
Thinking in Systems Key Idea #7: Ease of Efficiency
By simply adjusting its internal mechanisms and rules, a system can be made more efficient.
Changing the physical elements of a system can improve it, but there are other ways to fix a system’s problems. One is to focus on its flow of information, its rules, and its capacity to self-organize.

Systems often lack sufficient information flows, so adding a missing flow of information can bring significant improvements. In some Dutch suburbs, for example, installing electrical meters in hallways instead of basements reduced energy consumption by one-third – simply because residents could now see how much power they were using and adjust accordingly.

Rules matter too: if the people who benefit from a system are also the ones who set its rules and exercise control over it, the system won’t function well. If the world trade system is ruled and run by corporations, and primarily benefits them, it will inevitably collapse.

On top of this, when systems can self-organize, they can evolve and learn on their own – a fascinating characteristic, but one that often frightens humans, since it means losing control. The result is man-made limits imposed on systems, which can actually create greater problems; letting a system organize itself is usually better.

Systems also run into trouble when they hold the wrong goals or paradigms. If a system is organized around the wrong goal, changing that goal can lead the entire system to adapt. Some countries discovered, for instance, that centralized economic planning simply doesn’t work – and when they shifted their goals, every subsystem in the economy adjusted to the new model. And paradigms?
Well, paradigms are the deepest-held beliefs on which a system is built – like “growth is good” or “one can own land.” If a system’s paradigms are wrong, they need to be changed. Ecologists, for example, have begun shifting the paradigms of environmental protection, and the results are visible across many systems, as industries, peoples, cities, and entire countries change the way they manage waste.
Thinking in Systems Key Idea #8: Pay Attention
You’ll be able to better understand the world if you pay attention to the inner workings of the systems around you.
By now, it’s probably clear that systems can’t be fully controlled and are only comprehensible in the most general sense. The good news: there are some simple steps that will help you better navigate and understand the world of systems – and make them run more efficiently.

First, observe how a system behaves by learning its history and collecting data on it. Our world is full of misconceptions, and the more data we have, the better judgments we can make. While you might think prices are going up, for example, they could just as well be going down.

Once you’ve collected your information, write down how the system functions, paying special attention to its structure and arrangements. This ensures that your models are complete, add up, and are consistent through and through.

From there, distribute the information within the system. Generally speaking, a system needs well-distributed information to work properly: the more timely, accurate, and complete the information, the better the system will run.

Along the way, pay attention to what actually matters, in terms of both measurable and immeasurable factors. Humans tend to value quantity over quality, because quantity is easier to measure and relate to. But things like justice, democracy, security, and freedom are essential too, even though they can’t be assessed quantitatively.

Finally, notice how a system produces its own behavior. Keep these questions in mind: Which external and internal influences produce certain behaviors? Are these factors controllable?
Once you answer these questions, you’ll see where responsibility lies within a system, how its actions are produced, and what consequences they may have. The next time you’re upset about a delayed flight, asking yourself these questions will make you much less likely to take your frustration out on an innocent flight attendant.
In Review: Thinking in Systems
The key message in this book:
Everything we see, do, and experience in this world is made of systems. While we can’t fully understand them, predict how they’ll behave, or exercise complete control over them, we can study their behavior and the patterns they show. Doing so helps us make them function better and more efficiently – and lets us recognize when a broken system needs changes and repairs.
Actionable advice: Always expect a positive outcome, rather than a negative one.
It’s easy to see the world as worse than it actually is and to assume the worst will happen. If a salesman’s numbers drop one month, he’s likely to assume they’ll drop again – and he won’t be surprised when they do. In the long term, he loses the motivation to improve things now, a pattern known as drift to low performance. So don’t let your expectations slide; keep your eyes fixed on the best possible outcome.
If you liked this, you will love our review of Thinking, Fast and Slow. Check it out here.
We highly recommend Thinking in Systems – click here to find your copy from Amazon