
Tony Czarnecki


Will Superintelligence cause our extinction or pave the way for a human species evolution?

Several top AI scientists point to 2030 as the time by which we may lose control over the (self-)development of AI.

Will there be a human level Artificial Intelligence by 2030?

2030 is the most likely date by which humans may lose effective control over AI. This is AI's tipping point.

What is Superintelligence?

In the view of some AI scientists, once AI matures into Superintelligence and reaches the Singularity, humans may fall under its control.

What is the risk of Superintelligence?

The risk posed by Superintelligence is more likely to materialize in the next 50 years than in the next century.

How to create Superintelligence?

How could we create Superintelligence? All the ingredients needed to create it are already here, apart from cognition and (eventually) consciousness.

Should We Be Worried About the Existential Risk of Artificial Intelligence?

Some scientists have expressed concerns about the possibility that AI could evolve to the point that humans could no longer control it.

What really is Transhumanism? Are you already partly Transhuman?

Transhumanism can be seen as three 'Super…': Superlongevity, Superintelligence and Superwellbeing.

Human extinction or Posthuman evolution?

We are the only species consciously capable of minimizing the risk of its own extinction and of steering its evolution in a desired direction.

Managing Humanity's Evolution

In practice, we have about one decade to put in place at least the main safeguards to control Superintelligence’s capabilities.

Will humans gradually become part of Superintelligence?

If we do nothing, our species may simply become extinct within this century, or the next, as a consequence of any one of a dozen existential risks.