Forget dystopian scenarios – AI is pervasive today, and the risks are often hidden
Developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities.
Let’s base AI debates on reality, not extreme fears about the future
Longtermism is an ethical stance that prioritizes humanity's long-term future; many of its proponents argue that AI poses existential risks by becoming an out-of-control superintelligence.
AI: the world is finally starting to regulate artificial intelligence
AI regulation is a comprehensive set of rules prescribing how this technology should be developed and used, with the aim of addressing its potential harms.
How AI could take over elections – and undermine democracy
AI could tune its persuasion efforts to millions of people individually – a capability that could be a nightmare for democracy.
AI-developed drug breakthrough. With Alex Zhavoronkov
A number of companies are now using AI to develop drugs faster, cheaper, and with fewer failures along the way.
Let’s focus on AI’s tangible risks rather than speculating about its potential to pose an existential threat
Over the past few months, AI has entered the global conversation thanks to the widespread adoption of generative AI tools.
Top 9 ethical issues in artificial intelligence
Optimizing logistics, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives.
Superintelligence: The Final Frontier of Artificial Intelligence
As AI capabilities continue to advance, one concept fascinates researchers, ethicists, and futurists alike: superintelligence.
Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead
Even if researchers could achieve bias-free generative AI, that would be just one step toward the broader goal of fairness.
What are the Types of Artificial Intelligence?
Once we achieve AGI, AI systems could rapidly improve their own capabilities and advance into realms we might not even have dreamed of.