Alignment is All You Need

In a 2016 interview, Charles Fadel, founder of the Center for Curriculum Redesign, stated, “from the times of Socrates and Confucius, it has been obvious that what makes people successful in life is not only what they know and how skillfully they use their knowledge, but also how they behave and engage in the world. In other words, their character” (Rubin, 2017, p. 18). In my previous work as an Educational Specialist, I had the privilege of collaborating with a group of 100 middle-school students from low-income and first-generation backgrounds. These individuals had been identified as having the potential to excel in college yet required additional support to fully realize their capabilities. Through this experience, I gained a deep appreciation for the unique challenges these students face and the importance of providing targeted resources and guidance to help them succeed. In this article, I would like to share a learning activity that was highly popular among the students: one that provided them with an evidence-based roadmap by pairing their self-identified personality traits and values with the results of a career interest survey.

The learning activity has its roots in several different areas I had previously explored while working with middle and high school students. The title of the activity, The Road to Character, shares its name with the book by David Brooks. In it, Brooks describes the two Adams, Adam I and Adam II, and how each represents a side of human nature: Adam I is career-driven, while Adam II seeks inner peace. Brooks provides biographical sketches of important figures throughout human history and shows how each overcame personal failures and weaknesses to attain greatness. In essence, each was humbled by life before continuing along the path to good character and transformation. After my initial reading of the book several years ago, I immediately went out and purchased a hardback first edition. I loved the style in which it was written and felt its message was highly important for young people to grasp at an early age.

Cue the next phase and how I translated these ideas for middle school students. For 7th graders, I had them complete a Personal Values Identification form in which they picked their five most important values and explained why they chose them. From there, they would complete a basic Holland Code test, which divides potential job seekers into six personality types: Realistic, Investigative, Artistic, Social, Enterprising, and Conventional. I would then take their three highest scores to determine their Holland code (SIR, AIC, ESI, etc.) and match that code with career fields that align with it. My “food for thought” question was: Are your values and personality aligned with your career interests? Only you can know and decide.
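
For readers who like to tinker, below is a minimal sketch of how that top-three scoring could be automated. This is my own illustration, not an official Holland Code instrument: the scores and the career-field mappings are hypothetical placeholders.

```python
# Minimal sketch: derive a three-letter Holland code from six RIASEC
# scores and look up matching career fields. All scores and career
# mappings are illustrative placeholders, not official instrument data.

RIASEC = {"R": "Realistic", "I": "Investigative", "A": "Artistic",
          "S": "Social", "E": "Enterprising", "C": "Conventional"}

# Hypothetical career fields keyed by three-letter code.
CAREER_FIELDS = {
    "SIR": ["Nursing", "Physical Therapy"],
    "AIC": ["Graphic Design", "Technical Writing"],
    "ESI": ["Sales Management", "School Administration"],
}

def holland_code(scores: dict) -> str:
    """Return the code formed by the three highest-scoring types."""
    return "".join(sorted(scores, key=scores.get, reverse=True)[:3])

student = {"R": 4, "I": 8, "A": 3, "S": 9, "E": 5, "C": 2}
code = holland_code(student)                # -> "SIR"
print(code, [RIASEC[letter] for letter in code])
print(CAREER_FIELDS.get(code, ["no match on file"]))
```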

For 8th graders, we went a step further and built upon the results from the prior year. The students would complete a Career Clusters Interest Survey that asked them to choose: 1) the activities they liked doing, 2) personal qualities that described them, and 3) school subjects they liked. The survey was arranged in 16 boxes, each corresponding to a different Career Cluster. So, for example, if a student circled 10 activities, qualities, and school subjects within a particular box (and that box held their highest score), I would align it with a career field (Business Admin & Management, Human Services, IT, etc.). At this point, I would access the Occupational Outlook Handbook (OOH) and compile a portfolio for the student detailing the job summary, work environment, educational requirements, future job outlook, similar occupations, etc. My goal for these activities was to put tangible data into their hands at a young age so they could have a specific reason to focus on their education.
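
As a rough illustration of that workflow, the sketch below tallies a student’s circled items per box, picks the highest-scoring cluster, and stubs out a portfolio entry mirroring the OOH sections listed above. The cluster names and every field value here are placeholders, not actual survey or OOH data.

```python
# Minimal sketch: tally circled items per Career Cluster box, pick the
# highest-scoring cluster, and stub out an OOH-style portfolio entry.
# All names and values are illustrative placeholders.

survey_tallies = {  # circled activities + qualities + subjects per box
    "Business Management & Administration": 10,
    "Human Services": 6,
    "Information Technology": 8,
    # ...the remaining boxes of the 16-cluster survey
}

top_cluster = max(survey_tallies, key=survey_tallies.get)

# Portfolio entry mirroring the OOH sections described above.
portfolio = {
    "cluster": top_cluster,
    "job_summary": "...",
    "work_environment": "...",
    "educational_requirements": "...",
    "job_outlook": "...",
    "similar_occupations": ["..."],
}
print(f"Highest-scoring cluster: {top_cluster}")
```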

The 8th graders who completed the assessment graduated in May 2021. Had I stayed in that career, some follow-up questions I would have asked them would be:

  • Are your career interests still the same as when the assessment was completed?
  • What solidified your career choice or caused you to change it?
  • What career resources would you have added to your high school career?
  • Were you able to do any job shadowing?
  • Do you feel like you have a solid starting point for your post-secondary education?

My thinking was that the information garnered from these questions would allow educators to focus on what works and what needs improvement. What better way to gain this information than from those who recently went through the process? Also, with these students largely coming from low-income, first-generation (LIFG) backgrounds, we have the potential for generational change by impacting educational and career decisions.

By aligning personal values and character traits with potential career paths, students can gain a deeper understanding of themselves and their goals, potentially leading to a greater sense of fulfillment and happiness in life. Remembering my own days in middle school, I felt at times that I was simply going through the motions of attending school without fully realizing what I was working towards. If we as educators can provide tangible and relevant data in answer to the often-heard question of “Why do we have to learn this?”, then it is my opinion that students will begin to shift their mindset to that of a determined and self-motivated individual working towards a defined goal.

In providing this assignment, my hope was that LIFG students would see a blueprint for how to attain their desired career and potentially impact their family dynamics by being the first to attend and graduate college. I hope you find this activity as rewarding as I did. In closing, let’s remember our true purpose as educators: we are helping mold the next generation by promoting independent thinking and inspiring a love for learning. While challenging at times, the reward is a citizen capable of contribution, self-preservation, and independent thought.

Learning to Be

In looking at our current educational institutions, the majority are structured to teach to specific outcomes (i.e., teaching to the test), a methodology inherited from the Industrial Age. Basic literacy and numeracy were taught as essential requirements for factory work and military service. As nations modernized, public education systems still taught rote knowledge while reserving higher education for those who wanted a specific skillset on which to base a career. As we questioned our human ignorance and admitted that our prior information systems (religion, state, economy) were not sufficient to account for the wonders of modern medicine, space exploration, and computing, we sought to provide worldwide access to information highways via the Internet and to proliferate globalization through technology and trade. Microprocessors, mobile phones, automation, and social networking condensed, connected, and concentrated our man-made systems into a technological hub that unlocked the power of the algorithm. No longer were decision-making capabilities relegated to the realm of flesh and blood; nay, we successfully offloaded this powerful mechanism to machine learning that produced incredible results built on probabilities and statistics. Why make an educated guess when mathematics can provide pattern-based predictions? Why depend on heuristics when specificity is but a computer calculation away?

As computing systems scaled up via massive amounts of data and energy, we learned that neural networks, loosely modeled on the human brain, could be applied to artificial intelligence (AI) systems, resulting in their current generative abilities. Prior to this breakthrough in deep learning, our sense of AI capabilities was confined to narrow systems that excelled in a specific domain (e.g., Deep Blue, AlexNet, AlphaGo). As witnessed by the generative pre-trained transformer (GPT) models that launched for public use in 2022, the versatility of large language models (LLMs) created renewed interest and speculation concerning AI and how best to apply it to our vast range of systems. In true “cart before the horse” fashion, this advanced technology was deployed at scale before we fully understood its potential risks or prepared world governance structures with the ethical frameworks needed to ensure Gen-AI’s creators (humans) were not displaced in a changing society. And what of the data used to train these advanced models? Or, better yet, who owns AI-generated content, and who is responsible if these models produce harmful content that negatively impacts a human? We are not even certain how they work! While not a complete black box, at best it’s opaque; such is the nature of the deep learning and reinforcement learning techniques used to train models with billions of parameters. It’s akin to discovering fire the second time around: life-changing, yet destructive if uncontrolled and misunderstood.

If we look at our current educational landscape, which is this think piece’s particular concern, we see a patchwork of responses from various institutions as it relates to implementing Gen-AI. This result is exactly what we would expect when presented with a novel technology that potentially disrupts the learning process. We didn’t know how best to implement it within our established systems, so many teachers were left reacting as students proved all too eager to apply its generative abilities to coursework. Currently, we see newer and more advanced AI models being developed, deployed, and open-sourced for all to use. What began as an innovative technology for those in the IT sphere has now garnered the attention of nations, as evidenced by the US announcing a “private sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in the business-critical technology” (Reuters). This investment signaled to the rest of the world that an AI race is in progress and that those who first wield its true potential (Artificial General Intelligence; AGI) are likely to reap unprecedented benefits, if it is aligned with our human values.

Geoffrey Hinton uses the term AGI to mean “AI that is at least as good as humans at nearly all of the cognitive things that humans do” (AP News). This ideal has long been the goal of computer scientists, and it now seems a potential near-term reality. Leopold Aschenbrenner, a former OpenAI employee and author of Situational Awareness: The Decade Ahead, stresses the importance of aligning AI systems before an AGI system automates the research process and creates superintelligent systems that dwarf our human ingenuity. While his work is beyond the scope of this article, the process of automating research is already underway: a paper on compositional generalization produced by The AI Scientist-v2 was submitted to a workshop at the Thirteenth International Conference on Learning Representations (ICLR 2025) and became the first fully AI-generated paper to pass the same peer-review procedure a human author’s work would. In short:

The AI Scientist-v2 came up with the scientific hypothesis, proposed the experiments to test the hypothesis, wrote and refined the code to conduct those experiments, ran the experiments, analyzed the data, visualized the data in figures, and wrote every word of the entire scientific manuscript, from the title to the final reference, including placing figures and all formatting (Sakana.ai).

This is an incredible feat, and one that raises the following question: If AI systems can eventually learn and create at or beyond our current human understanding, then what role does formal schooling have in our society? While education is the subject matter of this thought bubble, the same question can be framed for any institutional system if (when?) AGI is reached. Rather than “fight the machine,” let it have linear, algorithmic, and computational thinking. We can offload these tasks, peer back into history, and ponder the questions that brought us to this moment in time. Namely, what does it mean to be human when all our technical problems have been solved?

Lateral thinking, a term coined by Edward de Bono (1967), involves approaching problems provocatively so that our creative abilities can be “unburdened by what has been” (pun intended). Realistically, de Bono’s idea harkens back to a time when Western thought was being influenced by Socrates, Plato, and Aristotle. Thinkers in the truest sense, these individuals laid a foundation for our freedom of thought to flourish via probing questions into human behavior, ethics, epistemology, metaphysics, and scientific inquiry. But let’s look even further back in time and focus on what allowed these great minds to develop. What was it that allowed our species to influence one another to such an extent that we are now on the precipice of creating an artificial intelligence with the potential to surpass our level of understanding? Put simply, our communication abilities, our cooperation agilities, and our organizational willingness to work as collective sapiens. These traits allowed us to shape our future and take control of our purpose in life.

Now, as we are primed to enter a new era of potential uncertainty, I want to remind our educators of their true mission in life, i.e., the reason you chose to enter this sacred profession. You wanted to make a difference in the lives of people by helping shape their thinking. You knew that, regardless of the agenda, curriculum, or mandate being implemented, what you were truly doing was guiding a mind. It is this principle, human thinking, that needs to be developed and curated within our educational systems. When a student asks, “Why do I have to write when AI can do it for me?”, your answer can be “Because writing involves thinking, and that is what we are trying to develop.” When a student asks, “Can’t I just use AI to provide the answer?”, you can respond with “It is the questioning process that matters when searching for an answer.” When a history student asks, “Why do I need to know what happened in the past?”, you can answer with confidence that “Understanding how we arrived at this moment in our advanced society allows us to reflect and grow together as human beings.”

In closing, it is our thinking abilities that separate us from other animals. I would encourage my fellow lifelong learners to start constructing a future that emphasizes human creativity in meaning-making. Doing so will require us to understand the value of our emotional intelligence and learn to live with mechanistic beings that perform skills at a level unattainable by humans. If we accept these conditions, then we have a chance of obtaining a level of comfort and tranquility that our forefathers would call ripe conditions for fostering human relationships. For isn’t this the key to meaning in our potential new world, i.e., learning to love one another just as we are? If so, then our level of happiness should rise, and in doing so, we will finally find the connection that links us all.

Systems Thinking, pt. III

To conclude the Systems Thinking series, I wanted to draw attention to another book I finished reading, A Random Walk Down Wall Street by Burton Malkiel. In it, Malkiel suggests that stocks are unpredictable in the short term and that passive investing, i.e., index funds, provides more stability by diversifying risk across many different sectors. If, as Malkiel suggests, the market efficiently accounts for all known information in stock prices, then attempting to pick stocks or “beat the market” is a futile effort for most investors (this being especially true in our digitally connected world). His book provides several historical examples illustrating how financial bubbles, driven largely by speculation rather than value, caused great angst in society as reality sent many “castles in the sky” crashing down to earth (Tulip Mania, the South Sea Company, Dot-Com, etc.). “How could we have been so foolish?” cried many an investor. “It seemed like a can’t-miss investment!” exclaimed another. Well, as with many shiny objects and bright lights, the euphoria fades once the curtain is drawn back and we see a company’s facade for what it is. Bernie Madoff, a classic example discussed by Malkiel, convinced investors that his returns held steady during turbulent times, even when the market was turning downwards (cue the 2008 financial crisis, and we all saw his scheme for what it was).
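
To make the random-walk idea concrete, here is a toy simulation in which each day’s return is drawn purely at random, so yesterday’s move tells you nothing about tomorrow’s. The drift and volatility numbers are arbitrary assumptions chosen for illustration, not estimates from Malkiel or from market data.

```python
# Toy random walk: each day's return is random noise around a small
# drift, so short-term prices are unpredictable by construction.
# Drift and volatility values are arbitrary, for illustration only.
import random

def simulate_price(start=100.0, days=252, daily_drift=0.0003, daily_vol=0.01):
    price = start
    for _ in range(days):
        price *= 1 + random.gauss(daily_drift, daily_vol)
    return price

random.seed(42)
paths = [simulate_price() for _ in range(1000)]
print(f"After one simulated year: min ${min(paths):.2f}, "
      f"max ${max(paths):.2f}, mean ${sum(paths) / len(paths):.2f}")
```

Run it a few times; the wide spread of outcomes from identical starting conditions is exactly the short-term unpredictability Malkiel describes.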

My purpose in writing this article stems from the vast amount of attention AI is garnering in many sectors as a transformative technology, and in the financial markets, where analysts and gurus attempt to monetize its present and future value. Generative AI is capturing our attention as more businesses advertise its capabilities and as technology companies promote its benefits in the form of free services for consumers. We recently saw another massive investment in the US ($500 billion to be exact) as Apple seeks to “produce servers to power Apple Intelligence, its suite of AI features” (AP News). Similarly, Taiwan Semiconductor Manufacturing Company (TSMC) announced “it will build three new chip fabrication plants, two advanced packaging facilities, and a research and development center at its complex in Arizona, growing the company’s total investment at the site to $165 billion” (Investopedia). It seems that everywhere we look, news of AI is being reported as companies position themselves for what appears to be an AI race between nations. The US was thought to have a clear advantage with this technology, only to be surprised by China’s DeepSeek V3 and R1 models (see Systems Thinking, pt. II for my thoughts on them). In short, AI has shifted from a purely business opportunity to a geopolitical priority, with governments aggressively securing their national interests. And, as history has taught us, various companies looking to ride the AI train will undoubtedly begin to market their goods and services as “AI powered” or some similar variant. In this scenario, there will undoubtedly be losers whose businesses fail. So take Malkiel’s advice: stay diversified, avoid speculation, and think long-term. AI will surely be a rising tide that lifts all boats, so rather than trying to pick the single best option, invest in broad index funds to ensure you rise with the tide. In doing so, you limit your risk of choosing incorrectly while maintaining your peace of mind.

Lastly, harkening back to Kahneman’s thoughts on System 1 and System 2, if ever there were a time to invoke our System 2 process, that time is now. As nations, businesses, and consumers try to get a leg up on their perceived competition, bubbles will form as companies with no background in technology begin advertising how they are “harnessing AI” to its full potential… Just play it smart, don’t give in to FOMO, and continue investing a set dollar amount (bi-monthly or monthly, preferably) into your broad market index fund, then watch as your investment grows via compounding.
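
For the arithmetic behind that advice, here is a back-of-the-envelope sketch of a fixed monthly contribution compounding over time. The 7% annual return and $500 contribution are assumptions for illustration only, not predictions or recommendations.

```python
# Future value of a fixed monthly contribution compounding at a given
# annual rate (ordinary annuity formula). The 7% rate and $500 amount
# are illustrative assumptions, not predictions.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of contributions
    return monthly * (((1 + r) ** n - 1) / r)

for years in (10, 20, 30):
    print(f"{years} years of $500/month at 7%: "
          f"${future_value(500, 0.07, years):,.0f}")
```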

AI is a technology that will surely change how we do business. However, as it embeds itself within our markets, bubbles will form and pop.

Systems Thinking, pt. II

Disclaimer: I am not a financial guru, nor does anyone fully understand how the stock market operates. This article offers my thoughts on human behavior and how Kahneman’s ideas pertaining to loss aversion and Prospect Theory contributed to the Nasdaq dropping 3% and the S&P 500 dropping 1.5%.

Yesterday, 1/27/2025, we witnessed a massive tech-sector sell-off in the stock market due to the news that DeepSeek’s AI model was “on par with similar models from U.S. companies such as ChatGPT maker OpenAI, and was more cost-effective in its use of expensive Nvidia chips to train the system on troves of data” (AP News). We saw Nvidia, a giant in terms of creating graphics processing units (GPUs), lose roughly $600 billion in market value and suffer a 17% drop in stock price (Yahoo Finance). Various other technology companies were also affected (TSM, Broadcom, Micron Technology, etc.), as well as nuclear power companies that would (and still will) power the next generation of AI products in terms of compute and electricity. In short, a lot of money was lost as investors panicked and reacted to fear caused by the potential of disrupted business models, as the sitting kings of AI (OpenAI, Google, Meta, etc.) were being challenged for the throne. Essentially, a new kid on the block showed up, talked smack, and instead of acting calmly and logically, our collective System 1 took control as we feared a potential loss in territory. Investors, notorious in their herd mentality, started selling without taking a moment to think through the viability of a product such as DeepSeek, the country in which it was created, or how DeepSeek was built (on Meta’s open-source model) using reinforcement learning (RL) methods that potentially exploit the reward function, aka reward hacking. The purpose of this brief article is not to expound on the model’s components, but to draw attention to a great example of Kahneman’s Prospect Theory, loss aversion, and the certainty effect he so aptly describes in his book, Thinking, Fast and Slow.

Prospect Theory posits that we tend to weigh potential losses more heavily than comparable gains. This asymmetric response is due to the individual’s reference point in terms of how they perceive their utility, aka value, from a gain or loss. As such, the decision or scenario posed by a choice is relative to the individual’s perception (in contrast to the expected utility hypothesis proposed by Daniel Bernoulli). Investors, pressured to appease stakeholders, reacted to the uncertainty surrounding the impact of DeepSeek’s innovation on our aforementioned tech companies. In short, investors sold shares not just because of real, tangible risks but because of their aversion to potential future losses. Kahneman’s idea of the “certainty effect” then took hold as investors chose to cash out now to avoid uncertain future failures. Then, as is common, we witnessed a classic example of herd behavior: as prices began to drop, investors followed the crowd and further amplified the sell-off. System 1’s instinctive and emotional thinking began to further affect the market as news about DeepSeek and its potential to disrupt the AI landscape loomed disproportionately large in the minds of investors; the availability heuristic at its finest. So, System 1 likely drove the initial panic we witnessed, while our System 2 was slower to catch up (as evidenced by today’s Nasdaq rebound).
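
For the mathematically inclined, Tversky and Kahneman (1992) captured this asymmetry in a value function. The sketch below uses the median parameter estimates reported in their paper, under which a loss is felt roughly 2.25 times as strongly as an equivalent gain.

```python
# Prospect-theory value function with Tversky & Kahneman's (1992)
# median parameter estimates: gains and losses are measured relative
# to a reference point, and losses loom larger than gains.
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient

def subjective_value(x: float) -> float:
    """Perceived value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(subjective_value(100))   # ~57.5: a $100 gain feels good...
print(subjective_value(-100))  # ~-129.5: ...but a $100 loss hurts far more
```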

In closing, let’s not forget that it was American technology that built the foundation for DeepSeek’s model to thrive. Sure, they used some interesting techniques that can be learned from, but the overreaction of our stock market yesterday could have been avoided had more thought been given to China’s likely goal of disrupting our economy. And, like Lucy pulling the football from Charlie Brown, we did exactly as they most likely predicted. The timing of the prior US AI investment announcement and DeepSeek’s launch on inauguration day is not a coincidence. It was a timed response that signified to both nations the growing AI race and, more importantly, the need for the US to be the first to obtain Artificial General Intelligence (AGI). The world is taking notice of an AI-driven future, and it is imperative that we do not yield to System 1’s impulsive nature. Thoughtful decision-making is needed, as yesterday was the official start of a free-world race.

Systems Thinking

I recently started reading a fascinating book titled Thinking, Fast and Slow by Daniel Kahneman. In it, he describes the two “systems” our human brain has developed to make sense of our world and the actions we take to navigate it. System 1 is our fast-acting, stereotypical, bias-prone, judgmental, and reactionary mechanism that allows our species to assess threats and make split-second decisions. System 2 is our logical, rational, and deliberate mechanism, in charge of validating the inputs derived from System 1; albeit with one problem: it tends to be lazy and would rather wave System 1’s output through than exert the cognitive strain required to verify it. Sound familiar? It is much easier to judge, operate under bias, and allow our thinking machine to take a backseat while we operate in a cruise-control frame of mind. Engaging System 2 requires us to be more vigilant and cognitively aware of our surroundings, while also sorting and deciding which mental image or thought to dwell or act on. As a result, we have developed mental shortcuts, aka heuristics, that allow us to make decisions that are “good enough.” This works well in a lot of areas but can get us in trouble when we come to rely on it in situations that require specificity in the decision-making process.

So, how do we avoid mistakes in thinking when our lazy System 2 would love to validate System 1’s decisions without applying mental resources to justify the input? How do we keep from jumping to conclusions or seeking cause-and-effect stories when statistical evidence or random chance counters what our System 1 is implying? A good place to start is by accepting the existence of Systems 1 and 2 as mental modes of operation and recognizing that our emotional nature would like nothing more than to leave our lazy System 2 disengaged. It is much easier to keep our personal worldview intact without ever questioning the foundation on which it was built. However, if we want to grow as people and productive citizens, allowing our emotions to reign will surely lead to a question that plagues many of us: “How did I get myself into this situation?”

Cue critical thinking and the cognitive strain it requires. Even now, as I formulate my thoughts and decide which words to use in portraying my message, System 2 demands my attention; the churning of my mental machinery involves parsing Kahneman’s text through my own interpretation and self-awareness. Meaning, it requires effort, focus, and determination to make System 2 perform to the specifications I know it can achieve. Yet, once engaged, I find myself enjoying the effort it requires of me and the effect it has on my sense of being an industrious individual! The same principle applies as when you force yourself to exercise when you absolutely do not “feel” like doing so but know your future self will appreciate the effort. Namely, we tell System 1 to hush and tell System 2 to get its lazy ass to work because we know that doing so is in our best interest.

I have only touched on a few of Kahneman’s points, and I highly encourage you to read his book. My reason for discussing this two-part style of thinking, One being Quick-Draw McGraw and Two being a Lazy Eddy, is to provide context for an even larger impediment to our society’s decision-making process: Generative Artificial Intelligence (Gen-AI). This transformative technology has already made an impact on how we obtain and dispense information, and on January 22, 2025, the US “announced a private sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in the business-critical technology” (Reuters). Project Stargate, funded by SoftBank, OpenAI, Oracle, and MGX, is set to begin building massive data centers in the great state of Texas with the purpose of supporting “the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies” (Stargate). Speaking of national security, it was only last month, 12/24/24 to be exact, that Anduril Industries, a defense technology company, partnered with OpenAI to announce “a strategic partnership to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions” (Anduril). Now, couple all of these developments with the impact Gen-AI has had on our education industry, and we are fully ripe for a future in which algorithmically fueled machines are all too available to provide System 2 with a permanent vacation…

Paradigm Shift

To start, let this be the first blog post, in a series of posts, in which our human curiosity meets the future of technology. In doing so, we can reflect on our co-dependent relationships with our technologies and how, through simple allegories, we can better understand the nature of the beast we created. This beast is Artificial Intelligence (AI). And while it currently remains shackled, we are fast approaching a time when the most difficult decision we face as humanity will need to be answered: Do we unleash it? While you might think this incredible technology has already been released, and to an extent it has, we are only beginning to scratch the insatiable itch of more… So, the next obvious question is, what does more look like?

Artificial General Intelligence (AGI) has been the goal of many leading AI companies since the realization that deep learning does indeed improve a model’s output. Geoffrey Hinton uses the term AGI to mean “AI that is at least as good as humans at nearly all of the cognitive things that humans do” (AP News). Now, you might be thinking (as I do about my own intellect), “I’m really not that smart, so obtaining AGI might not be that difficult; frankly, ChatGPT 4o and o1-preview already seem to know more than I do.” But do these models really know anything? Or is that the stochastic parrot nature of the beast, spouting words with no understanding of their meaning? This brings us to the heart of the matter: Can a machine ever truly understand in the way humans do? While AI can process and generate language that appears coherent and contextually relevant, it operates on patterns and probabilities derived from vast datasets. Essentially, it’s like a highly sophisticated autocomplete function that predicts the next word based on statistical likelihood rather than genuine comprehension.
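
To see how far “statistical likelihood” alone can carry you, here is a toy bigram autocomplete. The tiny corpus is made up, and real LLMs are vastly more sophisticated, but the principle of predicting the next word from observed patterns, with zero grasp of meaning, is the same.

```python
# Toy "autocomplete": predict the next word purely from how often it
# followed the current word in a (made-up) corpus. No meaning involved.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most frequent follower, nothing more
print(predict_next("sat"))  # 'on'
```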

So, does the ability to mimic human language equate to possessing consciousness or awareness? Or are we projecting our own experiences onto a faceless algorithm and mistaking imitation for understanding?

This brings us to the concept of paradigms, i.e., a lens through which we can examine how revolutionary ideas, like AI and (eventually) AGI, disrupt established norms. In Kuhn’s The Structure of Scientific Revolutions, the notion of a paradigm is introduced; the example discussed in that book is Newtonian mechanics, which stood unchallenged for hundreds of years until Einstein’s theory of relativity. Thinking about this reminded me of two prior readings, Meno and How We Think. In Meno, Socrates and Meno are trying to settle on a satisfactory definition of virtue. Throughout the dialogue, many avenues are explored as to its potential meaning, yet a clear-cut answer is never given. We are left with Socrates saying that “virtue appears to be present in those of us who may possess it as a gift from the gods” (Meno, 2002, p. 35). His conversational style with Meno, and the rationalist method with which he approached his reasoning, reflected a major school of thought in the fourth century BC. According to Edgar (2012), “Recitation literacy was prevalent because it was a common belief that the mind was a gift from God and not to be questioned. Although scientific understandings of the mind have been postulated for centuries, it was not until the 19th century that scientific understanding of the mind started forming” (p. 1). Humans learned to read, to write, and to memorize facts (mental discipline in its simplest form). Cue John Dewey and How We Think. I admit, it was not easy reading for me. In truth, I listened to much of it (thanks to technology). Still, I was able to appreciate his work and picked up a few nuggets of gold along the way; namely, reflective thought and how each idea builds on the next to form a belief. Simple enough. We do this daily, yet Dewey laid it out on paper for all to see.

We each have our experiences and our realities for thinking the way we do, right? So, my coloring of an event or new idea might not take the same hue as your coloring. Or my thoughts and beliefs might not be grounded with the same glue as yours. And that’s okay. In fact, it’s as it should be. Prior to this shift in theory, Edgar (2012, p. 2) states:

Schools in the 19th century were for preparing students for entrance into college. Those individuals who were not college bound mostly entered the workforce prior to completion of high school. Families needed children to work and to support the family unit, and education beyond “necessary” skills such as being able to read and write was viewed by the common person as a frivolous novelty for the rich.

So, just as Socrates grappled with the definition of virtue in Meno, we grapple with defining true “understanding” in machines. Dewey’s insights on reflective thought further illuminate how beliefs are formed (a process that AI attempts to mimic but may not fully replicate; yet…).

Then, the 20th century stepped in with its bipolar nature, and off we went on an even more technologically advanced journey. Wars and depressions worked jointly with civil rights movements and technological advancements (and we reacted accordingly). New demands in the form of military aircraft and space shuttles created a need for more complex forms of learning. Think you can beat us to the moon, Russia? Get bent. We will put a man on the moon. In fact, we will make a computer small enough to carry while creating a platform on which to connect it to the world. How’s that for complex thought? The needs of the time called our brains to action, and we responded accordingly. And so behaviorism and its forms of conditioning gave way to cognitive theories, which led us to social constructivism and where we are now in our current era of information overload.

So, what now? Where are we in terms of education (i.e., thinking), and how do we receive and dispense it? It seems we are at a crossroads in terms of our relationship with technology and just how far we are willing to take it before the master becomes the servant (or are we already there?).

In speaking on education, thinking, and the environment in which we live today, the teacher is a central figure whose role has the potential to steer students down many a career path. Speaking from my own experience, my kindergarten, first grade, and third grade teachers each made an indelible mark on my life. All three played the role of a second mother while guiding my mind towards a love for learning (this being in the early 90’s). Each would teach, write on a chalkboard, engage our minds in various hands-on activities, and move about the class to see if we were progressing in our skills. Sound familiar? Then, through our human ingenuity, computers became portable, phones became mobile, and we each caught a wave while surfing the web. Instant gratification became the name of the game as we hooked our brains to a technological nirvana. As a result, the onus shifted: the instructor was no longer strictly a dispenser of information. In truth, we divorced tradition, married technology, and allowed the instructor to assume the role of a guiding facilitator and mediator. The days of lecturing the student and being the sole source of information were replaced with newer and younger models (and isn’t it something, wow!).

As we stand on the cusp of this new paradigm, the question isn’t just about unleashing AI, but also about redefining our roles in an increasingly automated world. Are we prepared for the consequences of this shift, or will we find ourselves chasing the very technology we’ve created?

My advice? Buckle up, baby, because this paradigm has already shifted.