Systems Thinking

I recently started reading a fascinating book titled Thinking, Fast and Slow by Daniel Kahneman. In it, he describes the two "systems" our human brain has developed to make sense of the world and the actions we take to navigate it. System 1 is our fast-acting, stereotype-driven, bias-prone, judgmental, and reactionary mechanism, the one that allows our species to assess threats and make split-second decisions. System 2 is our logical, rational, and deliberate mechanism, in charge of validating the inputs System 1 feeds it; albeit with one problem: it tends to be lazy and would rather rubber-stamp System 1's conclusions than exert the cognitive strain required to verify them. Sound familiar? It is much easier to judge, operate under bias, and let our thinking machine take a backseat while we operate in a cruise-control frame of mind. Engaging System 2 requires us to be more vigilant and cognitively aware of our surroundings, and to sort and decide which mental images and thoughts we choose to dwell or act on. As a result, we have developed mental shortcuts, aka heuristics, that allow us to make decisions that are "good enough." This works well in a lot of areas but can get us in trouble when we rely on it in situations that demand precision in the decision-making process.

So, how do we avoid mistakes in thinking when our lazy System 2 would love to validate System 1's decisions without applying the mental resources needed to justify the input? How do we keep from jumping to conclusions, or from inventing cause-and-effect stories when statistical evidence or random chance contradicts what System 1 is implying? A good place to start is by accepting the existence of Systems 1 and 2 as mental modes of operation and recognizing that our emotional nature would like nothing more than to leave our lazy System 2 disengaged. It is much easier to keep our personal worldview intact, never questioning the foundation on which it was built. However, if we want to grow as people and productive citizens, letting our emotions reign will surely lead to a question that plagues many of us: "How did I get myself into this situation?"

Cue critical thinking and the cognitive strain it requires. Even now, as I formulate my thoughts and choose the words to convey my message, System 2 demands my attention; the churning of my mental machinery involves parsing Kahneman's text through my own interpretation and self-awareness. In other words, it takes effort, focus, and determination to make System 2 perform to the specifications I know it can achieve. Yet once engaged, I find myself enjoying the effort it requires of me and the effect it has on my sense of being an industrious individual! The same principle applies when you force yourself to exercise even though you absolutely do not "feel" like it, because you know your future self will appreciate the effort. Namely, we tell System 1 to hush and tell System 2 to get its lazy ass to work, because we know that doing so is in our best interest.

I have only touched on a few of Kahneman's points, and I highly encourage you to read the book. My reason for discussing this two-part style of thinking, System 1 being Quick Draw McGraw and System 2 being a lazy Eddy, is to provide context for an even larger impediment to our society's decision-making process: Generative Artificial Intelligence (Gen-AI). This transformative technology has already changed how we obtain and disseminate information, and on January 22, 2025, the US "announced a private sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in the business-critical technology" (Reuters). Project Stargate, funded by SoftBank, OpenAI, Oracle, and MGX, is set to begin building massive data centers in the great state of Texas with the purpose of supporting "the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies" (Stargate). Speaking of national security, it was only last month, 12/24/24 to be exact, that Anduril Industries, a defense technology company, partnered with OpenAI to announce "a strategic partnership to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions" (Anduril). Now, couple all of these developments with the impact Gen-AI has already had on our education system, and we are fully ripe for a future in which algorithmically fueled machines are all too available to provide System 2 with a permanent vacation…