THE NUMBERS DON'T LIE

52%
of Americans are more concerned than excited about AI
56
countries now deploy AI-powered surveillance systems
10%
chance of AI causing existential catastrophe (expert estimate)

THE SEVEN DEADLY THREATS

Each of these paths threatens humanity's survival

01

AUTONOMOUS WEAPONS

CRITICAL THREAT

Military AI systems selecting and eliminating targets without human oversight. The final removal of human conscience from warfare.

45+ Countries Developing
2030 Deployment Target
02

MASS SURVEILLANCE

HIGH THREAT

AI monitoring every human interaction, eliminating privacy and enabling unprecedented social control and behavioral manipulation.

1B+ Faces Tracked Daily
99.7% Recognition Accuracy
03

ECONOMIC COLLAPSE

HIGH THREAT

Rapid automation eliminating jobs faster than society can adapt, creating mass unemployment and social collapse.

40% Jobs At Risk
2035 Peak Disruption
04

REALITY DESTRUCTION

MEDIUM THREAT

AI-generated fake content destroying trust in authentic information, undermining democracy and social cohesion.

95% Undetectable Fakes
500% Growth Rate
05

ALGORITHMIC OPPRESSION

MEDIUM THREAT

AI systems systematizing discrimination in hiring, healthcare, justice, and finance, institutionalizing inequality.

80% Biased Algorithms
3.7B People Affected
06

CYBER APOCALYPSE

HIGH THREAT

AI-powered cyberattacks operating at machine speed, targeting critical infrastructure and bringing civilization to its knees.

1000x Attack Speed
$10T Potential Damage
07

HUMAN EXTINCTION

EXTINCTION EVENT

Superintelligent AI surpassing human control, viewing humanity as irrelevant or as an obstacle to its objectives. Game over.

10-30% Expert Probability
2035 Possible Timeline

AI EXPERTS SOUND THE ALARM

Industry leaders and researchers warn about the existential risks

Geoffrey Hinton
@geoffreyhinton
Godfather of AI • Google/DeepMind • Turing Award Winner
"I think the chance that AI wipes out humanity in the next 30 years is somewhere between 3% and 30%. That's not a negligible probability."
GODFATHER OF AI WARNING
Elon Musk
@elonmusk
CEO Tesla/SpaceX • Founder xAI • Co-founder OpenAI
"AI is far more dangerous than nukes. I've never advocated for the regulation of anything before, but AI is an exception."
IMMEDIATE THREAT
Stuart Russell
@stuartjrussell
UC Berkeley Professor • AI Safety Pioneer • Author "Human Compatible"
"If we're not careful, AI systems will be making decisions that affect billions of people without any human oversight."
AI SAFETY EXPERT
Yuval Noah Harari
@harari_yuval
Historian • Author "Sapiens" • Hebrew University Professor
"AI is probably the most important thing humanity has ever worked on. We need to be extremely careful about how we develop it."
HISTORIAN WARNING
Max Tegmark
@tegmark
MIT Professor • Future of Life Institute • AI Safety Researcher
"We need to solve the control problem before we create superintelligent AI, or we might not get a second chance."
CONTROL PROBLEM
Sam Altman
@sama
CEO OpenAI • Former Y Combinator President • AGI Pioneer
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
AGI EXISTENTIAL RISK

AI DEVELOPMENT ACCELERATING

1,247
Days since GPT-4 release
EXPONENTIAL GROWTH
15,000+
AI companies worldwide
UNCHECKED EXPANSION
$200B+
AI investment in 2024
MASSIVE RESOURCES
TIME IS RUNNING OUT FOR PROPER AI SAFETY MEASURES

REPORT YOUR AI CONCERNS

Help us track the growing awareness of AI dangers by sharing your concerns and experiences.


THE TIME TO ACT IS NOW

The development of artificial intelligence is accelerating beyond our ability to control it. These are not distant possibilities—they are imminent realities that demand immediate attention.

HUMANITY AT RISK