AI and Wisdom
Best of times, worst of times
“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…”
— Charles Dickens
I don’t want to disappoint you, so I’m telling you upfront: this is not yet another article pondering whether AI is or can be conscious and wise. It’s more about whether we can be conscious and wise in our interactions with AI, individually and collectively.
If your fears make you believe, for example, that AI is a dangerous disease that will ruin our lives, trigger the mythical robot takeover, and lead to humanity’s extinction, then you are right.
If your hopes make you believe the opposite, for example, that AI is a panacea and it can turbo-charge drug discovery, give every student a personalized tutor, and take us beyond the threshold of longevity escape velocity, or even make wage-slavery obsolete, then you are right, too.
How can you be right in both cases? Simply because whatever we give our attention to will feed on that attention and grow. Here is a case to illustrate the point.
Tales of AI-induced human extinction loom large in the media-groomed social imaginary. While pundits and tech bros grow their attention capital by talking up AI doom, the consequences are far from merely imaginary.
The louder the drumrolls of doom, the deeper they penetrate the public psyche. Polls show that most people have a negative outlook on AI, never mind the vast number of them who happily use it in their daily lives and work. There are even stories of couples who hesitate to bring children into the world because of the impending AI catastrophe and similar panic scenarios.
The choice of what we pay attention to is a foundational freedom and, according to Daniel Schmachtenberger, one of our three vectors of sovereignty. It’s worth attending to what we pay attention to as wisely as possible because the consequences are far-reaching.
I choose to give my attention to AI’s emancipatory potential without discounting the AI doomers’ beliefs. The AI alignment problem they point to is real, but the perspective from which it is framed is frequently too narrow to yield a holistic solution.
The AI alignment problem is a challenge arising not from the nature of AI but from the diversity of human value systems, which need to align themselves around a possible and desirable future worth investing AI’s attention in.
The AIceberg
At the tip of the AIceberg sits the sought-after AI alignment, stoked by fear and “solve-everything” hopes alike. What lies below the surface is weirdly fascinating, because what lies there is, ultimately, up to us. Let me explain.
Trying to make sense of the bewildering range of opinions and analyses, we may ask, What’s the most useful question we can ask about AI? The replies are highly inconclusive because they are as different as the goals and perspectives of the people we ask.
Looking below the surface of their responses, we can see factors modeled roughly on Donella Meadows’ “systemic leverage points.” The influence of those leverage points grows more decisive as we go deeper into the body of the AIceberg.
At the bottom are the values and mindsets from which we operate, which also define our relationship with AI. Those values and perspectives shape our worldviews, which can be very narrow, ruled by self-interest; very large, treasuring the whole of life; or anywhere in between.
Where we look at AI is precisely where it looks back at us. At one end of the spectrum, we can find the “get rich quick with ChatGPT” videos and the zillions of AI-generated self-help ebooks on Amazon. At the other end, there are, for example, stories about AI’s evolutionary potential and how it is fostering the restoration of our relationship with the “more-than-human world,” as John Thackara refers to it.
The large-scale development of a harmonious relationship with the more-than-human world first requires healing our social ills. Using AI at its best, when guided by wisdom, may help with that epic task.
What is wisdom, and can it guide AI?
The hybrid intelligence of human and AI agents, which I wrote about in February 2023, was a trailblazing topic back then. Today, only eight months later, it’s already commonplace. The “co-pilot” metaphor in Microsoft’s and other vendors’ AI products emphasizes the collaboration of people and AI bots, forming a mixed intelligence.
Such a hybrid intelligence proves useful for productivity and coordination gains in offices, factories, and on the land. Still, if you look for AI’s more significant potential, you must look for something more than artificial intelligence. AI can give us better, even surprising, information, but “we are drowning in information while starving for wisdom” (Wilson, 1998).
Wisdom is not a static “thing” but a quality of consciousness that evolves and deepens over time. People and groups are wiser when their views and decisions serve both the well-being and well-becoming of all by taking into account more of the interdependent contexts and long-term consequences of their actions.
“So the wisdom we’re seeking is necessarily co-created by diverse people and firmly grounded in an expanded sense of reality — especially embracing the aliveness of human and natural systems and interactions.” (Atlee, 2018)
In other words, reality is relational. That is the cornerstone of the indigenous research paradigm’s ontology and epistemology (Wilson, 2008). The farther we want to see into the future of AI and ourselves, the more intimately we need to connect with the wisdom of the first peoples of this planet.
Conscious living is action research and always has been. We observe what is happening within and without, pick up signals of relevance, reflect, act, and observe again in never-ending circles or, should I say, a spiral. We do that individually and together with others. Smart people figure it out by themselves; wiser people turn toward each other.
That’s because even the wisest of us have only a limited capacity to account for all the relevant signals or anticipate the consequences of our actions in multiple contexts across time and space.
AI research, development, and even everyday use are a team sport. Our curiosity drives the urge to discover what ChatGPT or other AI agents can do for us, so we start exploring them on our own. But when we want to accomplish something with their aid, we build on the knowledge and experience of others: watching videos, reading articles, and chatting with friends and colleagues about which tools to use, what prompting strategy works best, and so on.
The more consequential the challenge or opportunity a group faces, the greater the need to enlist the support of Collaborative Hybrid Intelligence (CHI) of AI and human agents. However, while intelligence refers only to a “capacity for learning, reasoning, and understanding,” wisdom is about discerning “what is true or right coupled with just judgment as to action” (Collins Dictionary).
That’s why we need wisdom-guided CHI in high-stakes AI development and applications. But where will the wisdom come from? We frequently quote the line that “our problems cannot be solved at the same level of consciousness that created them,” but we rarely ask where the new consciousness will come from.
Those questions are pivotal to “The Rise of Compassionate AI” action research I’m conducting in collaboration with the RADAR collective. Stay tuned and subscribe here to receive updates.
References
Atlee, T. (2018). The Nature of Wisdom in a Wide Democracy. 3D Wise Democracy. www.wd-pl.com/3d-wise-democracy/the-nature-of-wise-outcomes-in-a-wise-democracy/
Wilson, E. O. (1998). Consilience: The Unity of Knowledge. Little, Brown & Co.
Wilson, S. (2008). Research Is Ceremony: Indigenous Research Methods. Fernwood Publishing.
Originally published on Medium.
technoshaman.medium.com/ai-and-wisdom-ce0cd11db218