
Singularitarianism


Singularitarianism is a movement defined by the belief that a technological singularity, the creation of superintelligence, will likely happen in the medium-term future, and that deliberate action ought to be taken to ensure that the Singularity benefits humans.

Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the Singularity is not only possible, but desirable if guided prudently. Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization.

Time magazine describes the worldview of Singularitarians by saying that "even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but... while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation."

Inventor and futurist Ray Kurzweil, author of the 2005 book The Singularity Is Near: When Humans Transcend Biology, defines a Singularitarian as someone "who understands the Singularity and who has reflected on its implications for his or her own life"; he estimates the Singularity will occur around 2045.

Singularitarianism coalesced into a coherent ideology in 2000 when artificial intelligence (AI) researcher Eliezer Yudkowsky wrote The Singularitarian Principles, in which he stated that a "Singularitarian" believes the Singularity to be a secular, non-mystical event that is possible, beneficial to the world, and actively worked toward by its adherents.

In June 2000 Yudkowsky, with the support of Internet entrepreneurs Brian Atkins and Sabine Atkins, founded the Machine Intelligence Research Institute to work towards the creation of self-improving Friendly AI. MIRI's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. These Singularitarians believe that reaching the Singularity swiftly and safely is the best possible way to minimize net existential risk.
