We are the only species consciously capable of minimizing the risk of its own extinction and of controlling its evolution in a desired direction.
Several top AI scientists point to 2030 as the time by which we may lose control over the (self-)development of AI.
In practice, we have about a decade to put in place at least the main safeguards to control the capabilities of Superintelligence.
If we do nothing, our species may simply become extinct within this century or the next, as a consequence of one of a dozen existential risks.
The risk posed by Superintelligence is more likely to materialize within the next 50 years than in the next century.
Experts such as Nick Bostrom, one of the foremost authorities on Superintelligence, argue that we need to invent control methods to minimize the risk posed by AGI.