How could we create Superintelligence? All the ingredients needed to create it are already here, apart from cognition and (eventually) consciousness.
Some scientists have expressed concern that AI could evolve to the point where humans can no longer control it.
In the view of some AI scientists, once AI matures into Superintelligence and reaches the Singularity, humans may come under its control.
2030 is the most likely date by which humans may lose effective control over AI. This is AI's tipping point.
Transhumanism can be seen as three 'Supers': Superlongevity, Superintelligence and Superwellbeing.
We are the only species consciously capable of minimizing the risk of its own extinction and of steering its own evolution in a desired direction.
Several top AI scientists point to 2030 as the time by which we may lose control over the (self-)development of AI.
In practice, we have about one decade to put in place at least the main safeguards to control Superintelligence’s capabilities.
If we do nothing, our species may simply become extinct within this century or the next, as a consequence of any of a dozen existential risks.
The risk posed by Superintelligence is more likely to materialize within the next 50 years than in the next century.