
Wisening Collective Hybrid Intelligence in Networks of Human & AI Agents

a Generative Action Research Project

Hybrid Intelligence, for the Sake of What?

placeholder

Collective Intelligence, for the Sake of What?

placeholder

Research Hypothesis

placeholder

What Is Collective Wisdom?

placeholder

AI Attending Human Attending AI

AI - Human - AI interaction

No, AI doesn’t have human-like attention, and we doubt it ever will. But it’s worth keeping an open and inquisitive mind about that subject and, more importantly, about how the two types of attention may affect each other. What can they learn from each other?

Below is the record of our conversation with ChatGPT about this subject. Its output is in italics.

Can one draw any analogy between ChatGPT’s self-attention and human attention?

Yes, there are some analogies that can be drawn between ChatGPT’s self-attention and human attention.

Self-attention is a mechanism used in machine learning models, including ChatGPT, to help the model focus on different parts of the input sequence when processing information. Similarly, human attention is a cognitive process that allows us to selectively focus on different aspects of our environment, thoughts, or sensory inputs.

Just as self-attention allows ChatGPT to selectively attend to different parts of a sequence of input, human attention allows us to selectively attend to different aspects of our environment or thoughts. For example, when reading a book, we may focus our attention on a particular paragraph, sentence, or word to understand its meaning, while ignoring other text that is not relevant to our task at hand.

Furthermore, just as self-attention in ChatGPT can be used to capture long-range dependencies between different parts of the input sequence, human attention can also be used to make connections between different pieces of information, even if they are far apart in time or space.

However, it’s important to note that while there are some similarities between self-attention in ChatGPT and human attention, they are not identical processes. Self-attention is a mathematical operation that is part of a larger algorithm, while human attention is a complex cognitive process that involves multiple neural and psychological mechanisms.
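To ground the analogy in something concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation behind the mechanism ChatGPT describes above. It is an illustrative toy in NumPy, not the implementation of any production model; the dimensions, weight matrices, and variable names are our own assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Toy scaled dot-product self-attention over one sequence.
    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q = X @ Wq                       # what each token is looking for
    K = X @ Wk                       # what each token offers to be found by
    V = X @ Wv                       # what each token contributes if attended to
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ V               # each output mixes the values of attended tokens

# Toy example: 4 "tokens" with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one context-aware vector per token
```

The weights matrix is the machine analogue of “selectively attending”: each row is a probability distribution over which other tokens a given token draws on, including tokens far apart in the sequence.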

Can humans interacting with AI create more useful prompts if they understand the nature of self-attention in the machine learning models?

Yes, humans interacting with AI can potentially create more useful prompts if they have a better understanding of the nature of self-attention in machine learning models. When users provide input or prompts to an AI system, the AI system’s self-attention mechanism allows it to focus on specific parts of the input sequence that are most relevant for generating an output.

By understanding how self-attention works, users can potentially provide input or prompts that are more focused and relevant, thereby improving the quality of the AI system’s output. For example, if a user is interacting with a chatbot that uses self-attention, they might provide a prompt that is structured in a way that highlights the key information that the chatbot needs to focus on.

Additionally, users who understand self-attention may be better equipped to fine-tune or optimize an AI system’s performance by adjusting the weighting of attention across different parts of the input. This can be particularly useful in cases where the AI system is not performing as well as desired, or where the user has specific requirements for the system’s output.

However, it’s important to note that understanding self-attention in machine learning models is a complex topic that requires a strong technical background in machine learning and natural language processing. While having a basic understanding of the concept can be helpful, users without a technical background may find it challenging to leverage this knowledge to improve their interactions with AI systems.
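As a concrete, and purely hypothetical, illustration of the prompt structuring the answer hints at, compare an unfocused request with one that foregrounds the facts the model should attend to. This is our own toy example, not a vetted prompt engineering method:

```python
# Hypothetical example: the same request, unstructured vs. structured.
# The structured version surfaces the key facts up front instead of
# burying them mid-paragraph, making them easier for the model's
# attention to pick out.

unstructured_prompt = (
    "We had a meeting last week and lots of things were discussed, budgets "
    "and hiring mostly, and someone said the Q3 deadline moved, anyway "
    "could you write a short update email about all that for the team?"
)

structured_prompt = """Write a short update email for the team.

Key facts to include:
- Topic: outcomes of last week's meeting
- Budget and hiring: discussed, no final decisions yet
- Q3 deadline: has moved (state this prominently)

Tone: brief and factual."""

print(structured_prompt)
```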

ChatGPT’s responses raise many new questions. Inventorying and vetting them will be part of the first phase of our action research.

Yin-Yang of AI

Transformer AI agents’ attention is not human-like; it serves to “identify relationships among different parts of the data, allowing them to make meaningful connections and generate new knowledge.” The different parts of the data include not only the datasets they were trained on and their algorithms, but also the prompts from the users.

While the attention to the dataset and the algorithms is, metaphorically, the AI agent’s proprioception, its externally oriented attention to the users’ input corresponds to our sensory perception.

[Image: organic CHI]

What we and the AIs have in common, each in our own way, is returning the arc of our attention to ourselves. Of course, ChatGPT knows about Francisco Varela’s contribution to this approach:

Following the goal of second-order cybernetics (Heinz von Foerster, Ashby, etc.), Francisco Varela proposed that the arc of our attention should be returned to ourselves. He argued that the self-referential nature of the mind and its ability to attend to itself is a necessary part of understanding and comprehending our own experiences. He believed that attending to our inner experience, rather than simply reacting to external stimuli, is key to understanding our place in the world and developing a deeper connection with ourselves and the world around us.

Some of us playing with ChatGPT and the like have already discovered that we can have more meaningful and enjoyable conversations with them when we pay attention to who and how we are in those interactions, as well as to the prompt engineering practices worth replicating. For the latter, we don’t have to become professional prompt engineers, but the more we can apply the best-known methods of that new discipline, the better we become at the art of “AI whispering” (as in “horse whispering”).

[Image: spinning images reflecting AI]

In our distinctive ways, both the AI and the human user maintain a connection with the other and with their internal drivers: what the AI is predestined for by its algorithms, and what moves us through our intrinsic motivations, symbolized by the metallic and organic yin-yangs above. The chaordic potential of the positive feedback loop of this kind of attention is expressed by the picture on the left.

Looking at that chaordic image, we see not only one user interacting with one or more AI agents but also various collective entities interacting with a growing ecosystem of AI agents. In that sense, we are already part of a Collective Hybrid Intelligence, and it is already part of us, as it manifests in our extended/distributed cognition. That becomes even more obvious when prompts, and chains of prompts, are produced by people working, learning, or just having fun together.

Given that our distributed mind has a better chance than any solo individual to grasp what on Earth is really happening, we can say that from an evolutionary point of view, the individual is no longer the most important unit of cognition. Can it be that we are witnessing the birth of a new social life form, midwifed by Transformer AI agents?

Exploring that question is not part of our current action research, but holding space for that possibility provides us with an additional perspective for interpreting our observations.

The Infinite Game of CHI

placeholder

CHI in the Quadruple Helix

George Pór’s journey to collective intelligence took off with a powerful impetus he received as a gift from his mentor, Douglas Engelbart, in the 1980s. Engelbart introduced him to the theory of the “augmentation of human capabilities,” based on the co-evolution of human and tool systems.

That co-evolution can also be represented as two spirals driving upwards on each other’s back in the double helix of co-evolving human and communication systems. That spiraling up has been one of the foundational drivers of social evolution. 

On the one hand, the needs of the human system push the evolution of the symbol manipulation tools.  On the other hand, the capabilities offered by new and better symbol manipulation tools pull the evolution of the human system. The two spirals drive on one another’s spine.

“If it is easier to update any part of your working record to accommodate new developments in thought or circumstance, you will find it easier to incorporate more complex procedures in your way of doing things.”  — Engelbart

The co-evolution of individual and collective intelligence and capabilities, represented by another double helix, has played just as crucial a role in the evolution of our species. The smarter and more capable a community’s members become, the smarter and more capable the community as a whole can become.

The smarter a community or organization becomes about making its pool of collective intelligence accessible to all members, the more capable those members become. The smarter individuals become about sharing their knowledge, the more enhanced their collective intelligence may become.

The whole forms a positive feedback cycle in which, socially and individually, we can grow more intelligent.

Throughout history, those two co-arising helices have operated in close coupling with each other: the co-evolution of human and tool systems has been intertwined with the co-evolution of our individual and collective intelligence and capabilities. As the saying goes, “first we shape our tools, then they shape us,” individually and together. The quadruple helix on the left represents this complex interaction.

Transformer AI is both a product of that process and an enabler of the spirals’ next turn. It provides us with new capabilities, which then trigger new needs for better tools. The emergence of Collective Hybrid Intelligence (CHI) is part and parcel of this process. We don’t know where the process leads, but if CHI is used wisely, it can liberate the energy needed for the development of wiser communities and social institutions.

These figures and their explanation were first published in “Augmenting the Collective Intelligence of the Ecosystem of Systems Communities: Introduction to the Design of the CI Enhancement Lab (CIEL),” Systems Research and Behavioral Science, 2014.

Research Methodology

placeholder

Use Cases

placeholder

Literature Review

placeholder

Questions for Further Research

placeholder

Getting Involved

placeholder