Singleton (global governance)


In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain and of permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

An artificial general intelligence that has undergone an intelligence explosion could form a singleton, as could a world government armed with mind-control and social-surveillance technologies. A singleton need not directly micromanage everything in its domain; it could allow diverse forms of organization within itself, albeit ones guaranteed to function within strict parameters. A singleton need not support a civilization, and in fact could obliterate one upon coming to power.

A singleton has both potential risks and potential benefits. Notably, a suitable singleton could solve world coordination problems that would not otherwise be solvable, opening up developmental trajectories for civilization that would otherwise be unavailable. For example, Ben Goertzel, an AGI researcher, suggests that humans may decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers" to protect the human race from existential risks like nanotechnology, and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved. Furthermore, Bostrom suggests that a singleton could hold Darwinian evolutionary pressures in check, preventing agents interested only in reproduction from coming to dominate.

Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk. The very stability of a singleton makes the installation of a bad singleton especially catastrophic, since the consequences can never be undone. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".
