Foundations for a Mathematical Model of the Global Brain: architecture, components, and specifications

Abstract

The global brain can be defined as the distributed intelligence emerging from the network of all people and machines on this planet, as connected via the Internet. The present paper proposes the foundations for a mathematical model of the self-organization of such a network towards increasing intelligence. The assumption is that the network becomes more efficient at routing the right information to the right people, so that problems and opportunities can be addressed more efficiently by coordinating the actions of many people. The network develops through the creation and strengthening of useful connections, and the weakening and eventual elimination of counterproductive ones. People are modeled as agents that try to maximize their benefit by processing the challenges (problems and/or opportunities) they encounter. Challenges propagate from agent to agent across a weighted, directed network, which represents the social connections between these agents. A challenge is defined as the difference between the actual situation encountered by the agent and that agent’s need (i.e. the agent’s desired situation). Its state is represented as a sparse vector, i.e. a list of real numbers most of which are 0. Negative numbers represent problems or deficiencies; positive numbers, opportunities or resources. Agents deal with challenges by multiplying the corresponding vector by their processing matrix, thus producing a new situation vector. An agent’s processing matrix represents its skill in relaxing the different components of the challenge towards 0 (i.e. reducing the difference between situation and need). The degree of relaxation defines the amount of benefit extracted by the agent. Agents receive challenges either randomly, from the challenge generator, or selectively, from other agents that have already extracted some benefit from them.
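The processing step described above can be sketched as follows. This is a minimal illustration: the use of the Euclidean norm to measure challenge intensity, and the exact way the relaxed challenge yields the new situation, are our assumptions, not part of the model specification.

```python
import numpy as np

def process_challenge(situation, need, processing_matrix):
    """One processing step: the agent multiplies the challenge vector
    (situation minus need) by its processing matrix, relaxing its
    components towards 0 and thereby producing a new situation."""
    challenge = situation - need
    new_challenge = processing_matrix @ challenge
    new_situation = need + new_challenge
    # Benefit = degree of relaxation, measured here as the drop in
    # intensity (Euclidean norm) of the challenge -- an illustrative choice.
    benefit = np.linalg.norm(challenge) - np.linalg.norm(new_challenge)
    return new_situation, benefit

# A skilled agent: its processing matrix halves every challenge component.
M = 0.5 * np.eye(3)
situation = np.array([2.0, -4.0, 0.0])  # negative = problem, positive = resource
need = np.zeros(3)
new_situation, benefit = process_challenge(situation, need, M)
```

With this contracting matrix the challenge intensity is halved, so the extracted benefit is positive; a matrix that amplified the challenge would yield negative benefit.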
Challenges are transmitted along the links between agents, with a probability that depends on the strength of the link. This strength evolves through a reinforcement-learning rule: the more benefit the sending and receiving agents extract, the stronger their link becomes. In this way, the network self-organizes in a manner similar to a neural network, while increasing the total amount of benefit extracted by all agents collectively. The aim of the simulation is to explore the space of parameters and propagation mechanisms in order to find the configurations that maximize this collective benefit extraction, which we define as the distributed intelligence of the network. Some of the parameters and mechanisms to be explored are the following: the capacity of the challenge queue (buffer memory) from which an agent selects the most promising challenges for processing; the criteria (challenge intensity, assumed processing skill, trust in the sender...) that an agent uses to select the most promising challenge; the relative proportion of rival and non-rival components of the challenge vector, where a rival component represents a material resource whose value is consumed by the processing, while a non-rival one represents an informational resource that maintains its value; the agent “IQ”, defined as the ability of the agent’s processing matrix to reduce the challenge intensity; the agent “mood”, determined by the sequence of its most recent successes and failures in benefit extraction, which affects its willingness to take risks; and the agent’s “preference vector”, defined by its track record in accepting or rejecting challenges. In a later stage, we also envisage including market mechanisms, in which agents “pay” others with raw benefit in order to receive promising challenges, and reputation mechanisms, in which the reliability of an agent with which the present agent has no direct experience is estimated on the basis of that agent’s relations with other agents.
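The reinforcement of links and the strength-dependent transmission can be sketched as follows. The multiplicative form of the update, the specific learning and decay rates, and proportional forwarding probabilities are all assumptions made for illustration.

```python
def update_link(weight, benefit_sender, benefit_receiver,
                learning_rate=0.1, decay=0.01):
    """Strengthen a link in proportion to the benefit extracted by both
    the sending and the receiving agent; let links decay slowly so that
    counterproductive connections weaken and are eventually eliminated."""
    weight += learning_rate * benefit_sender * benefit_receiver
    weight *= (1.0 - decay)  # slow decay of links that yield no benefit
    return max(weight, 0.0)

def transmission_probabilities(out_weights):
    """Probability of forwarding a challenge along each outgoing link,
    taken here to be proportional to link strength."""
    total = sum(out_weights)
    if total == 0:
        return [0.0] * len(out_weights)
    return [w / total for w in out_weights]

# A link whose endpoints both extracted benefit gets stronger.
w = update_link(1.0, 2.0, 3.0)
# A stronger link is proportionally more likely to carry the next challenge.
ps = transmission_probabilities([1.0, 3.0])
```

Repeatedly applying `update_link` with zero benefits drives a weight towards 0, which is one way the "eventual elimination of counterproductive connections" can be realized.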
GBI Working Paper 2012-05, Version 10/3/14

The basic model has been tested through a prototype implementation in Matlab. This confirmed that distributed intelligence effectively increases under the given assumptions. We are now developing a more detailed and scalable simulation model, potentially including millions of agents and links, using distributed graph databases and graph traversal algorithms. This will allow us to explore the effects of a large number of variations of the parameter values and mechanisms. Because of the modular architecture of the implementation, the effect of each parameter or mechanism can be studied in isolation or in various combinations. We hope that this will allow us to elucidate the influence of variables such as network topology, individual intelligence and type of social interaction on the emergence of distributed intelligence on the Internet. We plan to eventually compare the results of our simulations with empirical data, such as the propagation of Twitter, email or Facebook messages across an existing social network. For this we can use Latent Semantic Analysis (LSA) techniques to convert the text of the messages into the kind of vectors used in the model to represent challenges. If the model is successful, it should predict such propagation much better than chance.
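As a sketch of the LSA step, a truncated singular value decomposition of a term-by-message matrix yields one low-dimensional vector per message. The toy counts and the number of latent dimensions below are purely illustrative; real use would start from the actual message texts.

```python
import numpy as np

# Toy term-by-message count matrix (rows = terms, columns = messages);
# in practice this would be built from the texts of the messages.
A = np.array([[1, 0, 1],
              [0, 2, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)

# Classic LSA: truncated singular value decomposition.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # number of latent semantic dimensions (illustrative choice)
message_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per message
```

Each row of `message_vectors` could then play the role of a challenge vector (or a component of one) when comparing simulated propagation with the observed spread of real messages.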

Cite this paper

@inproceedings{Heylighen2012FoundationsFA,
  title={Foundations for a Mathematical Model of the Global Brain: architecture, components, and specifications},
  author={Francis Heylighen and Evo Busseniers and Viktoras Veitas and Clement Vidal and David R. Weinbaum},
  year={2012}
}