The illusion of free decision-making
As a restaurateur, hotelier, or actor, you know this very well: No matter how hard you work to achieve your best performance, the words of a critic carry the most weight. You work to the point of exhaustion. You toil away day after day. And in the end it’s the opinion of someone who contributes nothing and never had to take a risk that counts. A critic has the power to make or break a restaurant, a hotel, or a production – without lifting a finger himself.
Theodore Roosevelt, the former US president, complained about this in a speech in 1910 and sided with the courageous doers and creators: “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood.”
Whoever creates or directs information has power over the physical and social world as well as over our local and international financial systems. That power extends to the people whose existence and circumstances depend on this information. Whereas a few years ago these gatekeepers were critics, journalists, or media stars, today it is effectively the algorithms of big tech giants like Facebook, Google and TikTok that amplify and steer content.
Much of our attention is no longer focused on the physical reality of our surroundings, such as family, work, clubs, volunteering, or friends, but on several hours of social media consumption per day. This particularly affects the under-30s – the parents, the creators and drivers, the leadership generation of the future. In Germany, 95% of them use TikTok, Instagram or similar platforms for an average of 3.5 hours a day. If you want to reach these people, social media channels make it easy. Traditional media such as TV, films, series, newspapers, and magazines are just as overshadowed as the real world. The virtual world, dominated by social media, has the younger generations in particular practically under its control and claims almost every minute of the day that is not filled with work, eating or sleeping.
How freely do we make our decisions?
Decisions are not made on the internet. They are still made in people’s heads. Not even social media can change that. The question, however, is to what extent these digital media have the power to influence our decisions. Has the decision we make in front of a screen really been made freely? Or was our reality shaped on the way to that decision in such a way that the decision itself must, at the very least, be counted as influenced? Especially when behavioral mechanisms such as nudging are used, we can in fact speak of manipulation. This involves reducing the diversity of information to a few search results, which are then mixed with contextual advertising.
All of this shows how social media has hacked our human brains: We have allowed monsters to form that organize communities, sustained partly by content creators and partly by information consumers. Regardless of whether we are senders or recipients, we submit to the dictates of the system, i.e. the platform algorithm. Some receive social recognition for what they create, while others are “allowed” to participate. We use this fundamentally artificial process to construct reality, because we humans cannot distinguish between the physical world and this artificial reality.
How important is the seemingly clear-cut price in our decisions?
Another mechanism for the apparent simplification and preparation of decisions is also widespread: We give options a price, and this creates simple decision hierarchies. In detail, this price is highly complex, because it acts on different areas of the brain in different ways: To gain social status, we want the most expensive handbag possible, ideally a genuine one rather than a counterfeit. For most other things, we want the lowest price. That is fine for a pen or a standardized item like a screw, but not for a complex, networked system. Cars sit right in the middle: They represent mobility but also social status. Apple CarPlay and Android Auto are the best examples of this human madness: We want to operate every car in the same convenient way, but we want it to look special from the outside. It should demonstrate that we could afford the better model.
Decisions in a networked world of the future with AI
We measure the air quality in every room, monitor whether a driver keeps their eyes on the road, and give every light bulb an IP address so that it can be switched on and off from anywhere on the planet. We do all this under the pretext of improving quality, increasing efficiency, cutting costs, and saving time. In addition, all this is supposed to reduce suffering, conserve resources, cure diseases or increase our satisfaction. The entire physical world will be networked – but what will we do with the billions of new data records that we already collect every day and that can only be processed and evaluated automatically using AI?
In an increasingly connected world with AI support, we will make decisions in the future based on better integration into our smartphone, smart home and smartwatch world. We will receive recommendations on the effects on our health, our accident risk and certainly also on our social status and incorporate these aspects into our decisions. Perhaps we will also receive information on the impact of our decisions on our planet, our ecosystem, and our fellow human beings.
AI narrows our view of reality
Such AI support is already taking place today in the form of auto-completion and ever-improving suggestions in dialog systems. What we fail to see is that we already determine part of the result by selecting which criteria we allow for evaluating the information. Regardless of whether we choose price or another reference system as the criterion, AI helps us make more decisions per unit of time in seemingly ever more complex contexts.
This may feel good and self-determined at first – but in fact, we no longer know anything about reality in such AI-supported environments. The data has been processed so comprehensively that our rationality no longer has any factual basis. This alienation of our decisions from direct sensory perception is reinforced by the direct coupling of the processed information to social status (likes) and prices.
Whom can we still trust?
Whereas in the past we were guided and directed through the not-yet-digital world by leading media with editors and politicians made of flesh and blood, today it is mainly algorithms that determine what news we see and when.
The time scale for adapting narratives and memes is thus shrinking from months and years to days and even hours.
However, since this year, large language models have presented us with a new challenge that affects us all: We no longer know whether a text, or the entire story behind it, was actually created by a human or by a machine.
With the large generative language models, we have developed a technology that generates seemingly human content for films, books, newspapers, media, television, and social media – content that most of us can no longer distinguish from genuinely human content. Objectively speaking, it may even be better than some human-generated content. And now our feelings and emotions are being stirred by content whose human origin we can no longer verify. Into what fundamental crisis will this drive our species? What will happen when we no longer know what is humanly real and what is technically real or virtual? Or are we already so overwhelmed that we, as humanity, can no longer even grasp this point?
In any case, we also need technology to put all this disorder back in order. We use AI to create technical content that is indistinguishable from human content and then we need AI to sort out the chaos that we have created with all the new impressions and information.
Are we still free if we can or have to categorize so many parameters? Or should we simply be happy in future if we have the illusion of a decision at all?