Paradigm Shift

To start, let this be the first in a series of blog posts in which our human curiosity meets the future of technology. Along the way, we can reflect on our co-dependent relationship with our technologies and how, through simple allegories, we can better understand the nature of the beast we created. That beast is Artificial Intelligence (AI). And while it currently remains shackled, we are fast approaching a time when humanity will have to answer its most difficult question: Do we unleash it? You might think this incredible technology has already been released, and to an extent it has, but we are only beginning to scratch the indelible itch of more… So, the next obvious question is: what does more look like?

Artificial General Intelligence (AGI) has been the goal of many of the leading AI companies since the realization that deep learning does indeed improve a model’s output. AGI, as Geoffrey Hinton defines it, means “AI that is at least as good as humans at nearly all of the cognitive things that humans do” (AP News). Now, you might be thinking (as I do about my own intellect), “I’m really not that smart, so obtaining AGI might not be that difficult; frankly, GPT-4o and o1-preview already seem to know more than I do.” But do they really? Or is that the stochastic parrot nature of the beast, spouting words with no understanding of their meaning? This brings us to the heart of the matter: Can a machine ever truly understand in the way humans do? While AI can process and generate language that appears coherent and contextually relevant, it is operating on patterns and probabilities derived from vast datasets. Essentially, it’s like a highly sophisticated autocomplete function that predicts the next word based on statistical likelihood rather than genuine comprehension.
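If you want to see that “autocomplete” idea in the flesh, here is a minimal sketch (and only a sketch) of what next-word prediction looks like. It uses the small, freely available GPT-2 model through the Hugging Face transformers library purely as a stand-in for far larger systems; I’m assuming you have transformers and torch installed, and I’m not claiming this is how ChatGPT itself is built or served. The point is simply that the model’s “answer” is a ranked list of probabilities, not a thought.

```python
# A toy look at "sophisticated autocomplete": ask a small language model
# which tokens it considers most likely to come next in a sentence.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most difficult decision humanity faces is whether to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model returns a score (logit) for every vocabulary token
    # at every position in the prompt.
    logits = model(**inputs).logits

# Keep only the scores for the *next* token and turn them into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five most likely continuations and their probabilities.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>12} {prob.item():.3f}")
```

Run it and you will get perfectly plausible continuations, ranked purely by statistical likelihood; that is the stochastic parrot behavior in question.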

So, does the ability to mimic human language equate to possessing consciousness or awareness? Or are we projecting our own experiences onto a faceless algorithm and mistaking imitation for understanding?

This brings us to the concept of paradigms, i.e., a lens through which we can examine how revolutionary ideas, like AI and (eventually) AGI, disrupt established norms. In Kuhn’s The Structure of Scientific Revolutions, the notion of a paradigm is introduced. The example discussed in that book is Newtonian mechanics, which stood unchallenged for hundreds of years until Einstein’s theory of relativity. Thinking about this reminded me of two prior readings, Meno and How We Think. In Meno, Socrates and Meno are trying to settle on a satisfactory definition of virtue. Throughout the dialogue, many avenues are explored as to its potential meaning, yet a clear-cut answer is never given. We are left with Socrates saying that “virtue appears to be present in those of us who may possess it as a gift from the gods” (Meno, 2002, p. 35). His conversational style with Meno, and the rationalist method with which he approached his reasoning, represented a major school of thought around 350 BC. According to Edgar (2012), “Recitation literacy was prevalent because it was a common belief that the mind was a gift from God and not to be questioned. Although scientific understandings of the mind have been postulated for centuries, it was not until the 19th century that scientific understanding of the mind started forming” (p. 1). Humans learned to read, to write, and to memorize facts (mental discipline in its simplest form). Cue John Dewey and How We Think. I admit, it was not easy reading for me. In truth, I listened to much of it (thanks to technology). Still, I was able to appreciate his work and picked up a few nuggets of gold along the way; namely, reflective thought and how each idea builds on the last to form a belief. Simple enough. We do this daily, yet Dewey laid it out on paper for all to see.

We each have our own experiences, and our own realities, that shape the way we think, right? So, my coloring of an event or new idea might not take the same hue as your coloring. Or my thoughts and beliefs might not be bound by the same glue as yours. And that’s okay. In fact, it’s as it should be. Prior to this shift in theory, Edgar (2012, p. 2) states:

Schools in the 19th century were for preparing students for entrance into college. Those individuals who were not college bound mostly entered the workforce prior to completion of high school. Families needed children to work and to support the family unit, and education beyond “necessary” skills such as being able to read and write was viewed by the common person as a frivolous novelty for the rich.

So, just as Socrates grappled with the definition of virtue in Meno, we grapple with defining true “understanding” in machines. Dewey’s insights on reflective thought further illuminate how beliefs are formed (a process that AI attempts to mimic but may not fully replicate… yet).

Then, the 20th century stepped in with its bipolar nature, and off we went on an even more technologically advanced journey. Wars and depressions worked jointly with civil rights movements and technological advancements (and we adapted accordingly). New demands in the form of military aircraft and space shuttles created a need for more complex forms of learning. Think you can beat us to the moon, Russia? Get bent. We will put a man on the moon. In fact, we will make a computer small enough to carry while creating a platform on which to connect it to the world. How’s that for complex thought? The needs of the time called our brains to action, and we responded accordingly. And so behaviorism, with its forms of conditioning, gave way to cognitive theories, which led us to social constructivism and to where we are now in our current era of information overload.

So, what now? Where are we in terms of education (i.e., thinking), and how do we receive and dispense it? It seems we are at a crossroads in our relationship with technology, and in just how far we are willing to take it before the master becomes the servant (or are we already there?).

In speaking on education, thinking, and the environment in which we live today, the teacher is a central figure whose role has the potential to steer students down many a career path. Speaking from my own experience, my kindergarten, first grade, and third grade teachers each made an indelible mark on my life. All three played the role of a second mother while guiding my mind toward a love of learning (this being in the early 90s). Each would teach, write on a chalkboard, engage our minds in various hands-on activities, and move about the class to see whether we were progressing in our skills. Sound familiar? Then, through our human ingenuity, computers became portable, phones became mobile, and we each caught a wave while surfing the web. Instant gratification became the name of the game as we hooked our brains to a technological nirvana. As a result, the onus seemed to shift: the instructor was no longer strictly a dispenser of information. In truth, we divorced tradition, married technology, and allowed the instructor to assume the role of a guiding facilitator and mediator. The days of lecturing the student and being the sole source of information were replaced with newer and younger models (and isn’t it something, wow!).

As we stand on the cusp of this new paradigm, the question isn’t just about unleashing AI, but also about redefining our roles in an increasingly automated world. Are we prepared for the consequences of this shift, or will we find ourselves chasing the very technology we’ve created?

My advice? Buckle up, baby, because this paradigm has already shifted.
