After 50 years of visions, it’s time for a realistic reassessment
For decades, extreme visions – both utopian and dystopian – have dominated the debate on AI and robotics. Yet it is now becoming clear that the experts of the last 50 years were wrong – and missed the core of the actual social transformation brought about by AI.
The 1980s: Techno-utopias and first warnings
In the 1980s, AI and robotics were still in their infancy. But visionaries were already sketching out bold visions of the future. Jaron Lanier, VR pioneer and founder of VPL Research (1984), developed the first commercial VR products and coined the term ‘virtual reality’ in 1987 – a symbol of the techno-utopianism of the time. (Source: Wharton)
The possibilities of computer technology seemed limitless. Some believed that machines would soon take over evolution. Hans Moravec, for example, predicted in Mind Children (1988) that self-learning superintelligences would surpass humans and ultimately render them superfluous. (Source: Wikiversity)
Automation also entered the public consciousness for the first time: in the USA, media outlets warned of mass unemployment with headlines such as A Robot is After Your Job. Thought leaders such as Joseph Weizenbaum (Computer Power and Human Reason, 1976; published in German as Die Macht der Computer und die Ohnmacht der Vernunft) warned early on against blind faith in technology.
Overall, however, optimistic visions of the future dominated the 1980s: technology gurus sketched out visionary information societies – without knowing exactly how these would unfold in concrete terms. Many believed that a breakthrough was imminent.
The 1990s: Digital revolution and new fears
The 1990s brought the breakthrough of the personal computer and the first Internet age – accompanied by euphoria and growing skepticism. In 1995, Jeremy Rifkin warned in The End of Work of massive job losses due to AI, robotics and telecommunications. He predicted a two-tier society: a small, networked elite and millions of ‘displaced workers’ with no prospects. (Source: FOET)
Rifkin thus posed a question that remains central to this day: How do we redefine human value and employment when AI is increasingly replacing work?
Nicholas Negroponte (Being Digital, 1995) and Alvin Toffler (The Third Wave, 1980) also described an emerging information society. The chess victory of IBM’s Deep Blue over Garry Kasparov in 1997 further reinforced concerns that AI could surpass humans cognitively.
However, many of the predictions made at that time proved to be exaggerated. Rifkin’s ‘end of work’ did not come to pass – on the contrary, by the late 1990s, US unemployment had fallen to its lowest level in three decades. Nevertheless, debates about job losses, basic income and digital participation continue to reverberate today.
The 2000s: Dotcom boom, singularity and first dissenting voices
The early 2000s brought a new wave of technological euphoria with the Internet boom. In The Singularity is Near (2005), Ray Kurzweil popularized the idea that machines would soon surpass humans – even to the point of merging humans and machines. His vision of techno-utopian immortality made headlines.
At the same time, critical voices grew louder. In his essay Why the Future Doesn’t Need Us (2000), Bill Joy issued an urgent warning about a future in which technology would render humanity superfluous. (Source: Wired).
Jaron Lanier, once an optimist, also warned in One Half of a Manifesto (2000) against a religious belief in technology that mythically elevates machines – and in doing so ignores the human aspect. (Source: Edge)
Technologically, there was certainly progress: self-driving vehicles at the DARPA Grand Challenge (2004/2005) or IBM’s Watson, which went on to excel on Jeopardy! in 2011. But many of the grandiose visions remained theoretical. Machine learning was still in its infancy.
Society was primarily concerned with the immediate effects of digitalization: online shopping, social networks, stock market hype – and the disillusionment caused by the dot-com crash. The profound upheavals promised by Kurzweil & Co. failed to materialize for the time being.
The 2010s: The breakthrough of AI and new ethical debates
In the 2010s, AI and robotics achieved a real breakthrough with machine learning and deep learning. From 2012 onwards, machines were able to recognize images, understand language, and win complex games. A milestone was AlphaGo’s victory in 2016 against Lee Sedol, one of the world’s best Go players. AI found its way into everyday life – via voice assistants such as Siri and Alexa, translation services, and semi-autonomous driving. Robots also left the factory halls: as service, delivery, or social robots – but initially only in niche areas.
These developments sparked intense debate about opportunities and risks. In Superintelligence (2014), Nick Bostrom warned of potentially uncontrollable AI – supported by voices such as Stephen Hawking and Elon Musk. In Homo Deus (2015), Yuval Noah Harari outlined possible scenarios in which AI could create a new ‘useless class’ of humans – somewhere between a god-like utopia and an existential crisis for billions.
At the same time, specific ethical issues came to the fore: algorithmic bias that reinforces discrimination, the loss of privacy, the question of responsibility for autonomous vehicles (the ‘trolley problem’), and the concentration of power among tech companies. Shoshana Zuboff coined the term ‘surveillance capitalism’ in 2018, while Jaron Lanier warned as early as 2013 in Who Owns the Future? of a digital power elite that controls AI – and with it, the future.
The consequences were also visible in the media: deepfakes, AI-generated journalism, and algorithmic filter bubbles made the topic tangible. In Europe, technological change was incorporated into key political initiatives such as Industry 4.0 (from 2011). At the EU level, initial attempts were made to define ethical guidelines for AI. For the first time, it became clear that technological design is always also a debate about values.
Since 2020: AI in everyday life – a new diversity of voices
Since around 2020, the AI discourse has broadened significantly. With systems such as ChatGPT, millions of people have been able to experience first-hand what machine intelligence can achieve today. Robotics is becoming more visible – whether in the form of autonomous delivery robots or humanoid machines from Boston Dynamics and Neura Robotics. Technical understanding is growing. Terms such as neural networks, prompt engineering, and multimodal sensor technology are no longer the exclusive domain of experts – entrepreneurs, politicians, and laypeople alike are joining the discussion.
The rapid pace of progress suddenly makes many of the old questions acute. Even optimists admit that the pace exceeds all expectations. Society faces concrete challenges: How do we regulate systems that could affect millions of jobs? How do we ensure human control in an environment of learning machines? And what does all this mean for our self-image?
Today, the debate focuses on four areas:
- Tech visionaries such as Elon Musk and Sam Altman warn against powerful AI – while at the same time emphasizing its potential for innovation and prosperity.
- Business and politics are discussing the impact on the labor market and education. Which jobs will disappear, and which new ones will be created? Is a basic income necessary to maintain social stability?
- Ethicists and sociologists are questioning how dignity and autonomy can be preserved when machines increasingly make decisions. What happens to our self-esteem when AI takes over?
- The general public is actively participating: media reports about AI texts or seemingly emotional robots generate fascination – and unease. Both drive the social debate forward.
Today, the discussion is more nuanced – and at the same time more concrete. Technological details such as language models, image recognition, and autonomous systems are being examined more closely. And it is becoming clear that AI is not a monolithic phenomenon. Its effects depend on the context – and on how we shape it.
Demographics beats technology – or: why the smart experts have been debating the wrong things
What many experts have overlooked in their debates on AI and robotics is the real driving force behind the coming transformation: the demographic reality. Germany, Europe – and even China – are aging faster and more dramatically than expected. In Germany alone, there will be a shortage of around 5 million workers by 2030. For Europe as a whole, forecasts predict a shortage of up to 20 million workers. In China, the working-age population could shrink by more than 35 million people by 2030.
This is not abstract number crunching, but a structural upheaval of our globalized economic model: industries such as manufacturing, logistics, and healthcare – today the backbone of value creation – will soon suffer from massive staff shortages.
This affects not only simple tasks, but increasingly also highly skilled jobs in software, AI, robotics, and system integration.
The future of work will therefore not be primarily determined by technology, but by the need to close an emerging demographic gap.
Automation is becoming a requirement – not an option
Therein lies the fundamental shift in perspective: the decisive question is no longer whether AI will replace jobs, but whether it will be available in time and on a sufficient scale to do the work for which there will soon simply be no more people.
Automation is becoming a demographic necessity. It is no longer a tool for increasing efficiency, but a systemic response to a structural crisis. And that is precisely what is forcing us to rethink the logic of work, value creation, and society.
However, neither politics nor science has seriously addressed this perspective yet. The relevant questions are still not being asked.
The new systemic perspective: Why we need to rethink AI
After four decades of predominantly technical consideration, we are at a turning point. AI is not just a tool or an efficiency machine – it is changing the structures of our society and raising fundamental questions about our systems:
- Can government services – from building permits to tax assessments – be fully automated to radically reduce bureaucracy?
- Are we prepared to have AI systems handle minor legal disputes and administrative acts in order to relieve the burden on public authorities?
- What does it mean for democracy and personal responsibility when the state delegates large parts of its tasks to machines?
Examples from Estonia, Latvia, and Singapore show that fully automated government functions are possible – and do not have to be authoritarian. In Estonia, administrative procedures, tax returns, and elections have long been digitized. Latvia has automated its tax system. Singapore combines the digitalization of government services with individual freedom.
Automation ≠ loss of control – on the contrary
These models show that automation can also mean more freedom. Instead of continuing to rely on an increasingly detailed, overburdened welfare state model, we could use AI to create leaner, more transparent, and more citizen-friendly structures.
Instead of constantly producing new rules, we could build systems that strengthen personal responsibility, initiative, and entrepreneurship. AI can make government tasks more efficient – without dehumanizing them. Provided, that is, that we actively shape this process with a clear set of values.
From technical discourse to social reality
The past decades of AI debate have been marked by extremes – from fears of machine domination to hopes for techno-utopian liberation. Neither has come to pass. Instead, we are seeing a gradual but profound change that is affecting all areas of life.
AI is no longer a vision of the future. It is reality – and therefore a social mandate for action.
New questions for a new era
It is no longer enough to ask which jobs could be automated. We need to ask bigger questions:
- How do we organize a society in which humans and machines coexist in partnership?
- What new business models are emerging beyond the mere digitalization of existing processes?
- What is the value of human labor if every activity can in principle be automated?
What matters now: entrepreneurial and political action
Entrepreneurs and decision-makers must …
- … not just digitalize business models, but rethink them and
- … establish value creation systems that work for a demographically aged and technologically advanced society.
Politics and society must …
- … define which tasks we want to leave to AI – and which ones should remain the responsibility of humans and
- … fundamentally realign education, social systems, and labor market policy – not as a reaction, but as a proactive measure.
Conclusion: From debate to action
Social change cannot be planned on the drawing board. Instead of trying to regulate everything down to the smallest detail – as the EU is attempting to do in its approach to AI – we need spaces for practical experimentation.
Cities, regions, and organizations must become real-world laboratories where we can boldly test, learn, and adapt. It is not egalitarianism, but diversity, creativity, and a willingness to take responsibility that will secure our future viability.
After 40 years of debate, one thing is clear: those who continue to talk only about risks and regulation risk precisely what they want to prevent – the loss of freedom, innovation, and society’s ability to act.
Now is the time for action.