For a while I’ve been curious about Systems Thinking. It sounds both fascinating and very relevant. Bonus points for not being exactly a fad. I like that, apparently, a lot of the great ideas in ST can be found in old books that you can get second-hand and that have managed to stay relevant. Very Lindy.
If there is such a thing as a cognitive archetype, then for the past ten-ish years I’ve seen myself as a bit of an engineer. Not that I can actually do any math or physics, but I’m attracted to the organized, procedural way engineers look at problems.
Not being able to do any maths (in my mind, the toolkit of an engineer) made me pay closer attention to the thinking modes and methods that engineers tend to use. In a very dilettante way, nothing truly structured. This is why I got very excited when I discovered the world of Startups, Steve Blank, Eric Ries and so on. It gave me a sense of order with which to look for the truth of a business or product.
The Engineering Way is closely related to the Science Way, in the sense that both work by decomposing a problem into smaller chunks, solving them and then bringing the parts together again. This is why I found comfort in some of my High Modernist tendencies ( which go back further than 10 years, more like 15 ): they suggested there is a truth, and that by the application of rigorous thinking, that truth could be grasped.
In the past couple of years, the mix between Systems Thinking and the reductionist thinking from Engineering has revealed some inconsistencies in my world view: on one hand, reductionist thinking looks for the truth of the thing; on the other, Systems Thinking looks for the truth of the relationships of the thing with other things.
Although these concepts are not mutually exclusive, I was unsatisfied with them not clicking together in a more seamless way.
I was looking at it in the wrong way
The world we live in, with its cliché of ultra-connectivity and speed, is the result of the application of Science and Engineering. We live in a context made possible by the reductionist approach. Most purely technical problems that we can trace a line around are relatively easy to solve: apply more of the same thinking that got us here.
At the same time, the Faustian deal we’ve made with reality was that, by focusing on improving parts and components, we’ve paid no attention to the broader context. This explains the sorry, terrifying, pre-cataclysmic state the environment is currently in. We were not looking at things from that bigger perspective. Besides a few luminaries who could see this coming centuries or decades ago, for most people the changes came too slowly to notice and happened too far away to care about.
And it’s no wonder: the tools we needed to fully understand things at a planetary scale had not been invented yet. When we see a video of an iceberg crumbling into the sea, we should remember that all the technology and reach we developed to capture and broadcast that picture is the result of precisely the actions that led to the iceberg’s disappearance.
The path we carved out of reductionism cleared the way to make connections more visible. It also created more connections. A good example is the global economy, more intertwined than ever.
What brought us here won’t take us there
So, by the application of hyper-focused efforts on discrete challenges, we’ve created a network that finally allows us to see the bigger picture. And when we look at this bigger picture, perhaps we want to resort to the tools we understand well, the same that took us this far.
But, I feel, at some point we went through a phase-change: what has been seen cannot be unseen. In a connected world, we need to look at the problems as they happen: in a web. A web of people, resources and so on.
I’m a fan of the spectacular RibbonFarm blog. It blew my mind when I found it, and it keeps doing so, even if I read it less frequently now. When I first came across that blog, I loved what seemed to me a very sophisticated perspective that leaned heavily on a vocabulary of engineering. After all, the founder of that blog holds a PhD in Systems and Control.
As time went by, I started to realize that RibbonFarm had a stronger focus on the narrative aspects. I couldn’t ( and still can’t fully ) pin it down. It became weirder and more intriguing. Why would an engineering perspective shift into one more about narratives? It finally clicked for me.
As I slowly became aware of the Engineering Way, the Science Way and the Systems Thinking Way, I struggled to organize them into a toolbox. Part of me was trying to either:
- Decide which one was best or;
- Decide which one made sense for the current era or;
- Decide which one was the one for me.
These options were wrong, I now see. It was never about their place on a continuum of right versus wrong.
It is about the scale at which one wants to operate.
The reductionist approach gave us literal tools to perceive the world at a global scale. These tools made connections more apparent and enabled a discussion about systems in a more tangible way. When we think about systems, at some point we end up having to think about people.
Systems of people have weird connections that are not easily explained by pure science or economics. A lot of these connections are sustained by meaning, another fuzzy idea that both the reductionist and the systemic approaches are ill-equipped to deal with.
The central thesis of Sapiens ( at least as I remember it ) is that myth is what enabled mankind to advance so much in such a short amount of time. Via a different path ( but one that I can now almost intuitively trace ), I think I’ve reached the same conclusion.
Narrative Thinking (?) is just another step up in terms of operating scale. To solve specific problems one resorts to science and engineering, to tackle connected problems one needs an ecological perspective and to address people-at-scale problems one has to wield the narrative.
There are a couple of notable conclusions from this:
- Narrative is a tool that can only be used by a very small subset of mankind, at the risk of too much fragmentation ( which sounds incredibly familiar, doesn’t it? );
- When we read about “the Great ones”, they often seem to operate at this level;
- The current attack on the Humanities (at least here in Brazil) seems either smarter or dumber, depending on your POV. But definitely not irrelevant.
A preview of perhaps another idea.
A final idea that I still can’t fully understand and/or package is how we, as a mass of connected beings, are now both more capable and less powerful.
Organizations create products that reflect their structure. It’s no wonder that most of the big corps can’t make decent software when their internal incentives are based on obsolete premises. At the same time, societies create culture that reflects their structure. As an example, higher social mobility probably means a different type of art than that of a place that enforces one’s position in the ladder.
Today’s western digitalized culture places a strong premium on “user experience”. In effect, this mostly means creating products and services that do exactly what the user wants, even if they can’t articulate it. We are less tolerant of sub-par performance and expect our apps to treat us as complete overlords.
These apps, even if they allow some customization, mostly seek to serve our more common instincts. The ones we all share and that truly happen under the hood. These are, of course, our basest and most limbic ones.
So we end up with a virtually all-encompassing network of empowered individuals ( empowered to do more things, faster and with better results ), who are nonetheless treated on the basis of their most primitive, pre-cognitive instincts. We don’t have a network of geniuses, we have a network of slightly advanced primates.
The narrative, as the central unifying notion linking large groups of these advanced primates, is ever more relevant. This is why it is so interesting, for instance, to try to understand what countries and cultures say about themselves. Like air, narrative is everywhere but invisible to most.
A final thought: if at some point we have a few artificial intelligences, what sort of narrative do we feed them? Would that be as relevant as the training data sets for, say, an autonomous car?