Informing, planning, deciding – autonomy and being human in the AI age

14 February 2024

Why we need a new enlightenment

Today, purchasing decisions can not only be made but also carried out anywhere. Smartphones let us buy goods and execute securities transactions almost anywhere and at any time. But how do we make these decisions? Are we really better informed thanks to our smartphones, constant access to knowledge and, above all, the knowledge pre-sorted for us by algorithms? Do filtered information streams make our decisions better informed? Does the quality of these influenced decisions improve? Or are we simply being pushed in a certain direction by information that is processed not on our terms, but on those of our digital twins?

Is it good that we are creating a credit score that determines whether a person may carry out a certain transaction? Does the answer change if a human still makes the decision, but on the basis of information processed by machines?

Decisions – or what amounts to being human?

Self-determination and autonomy are essential cornerstones that we humans achieved through the Enlightenment in order to free ourselves from the power of a few rulers and build democratic societies. This concept of humanity is built on a community of individuals and enables us to lead self-determined lives in a humane society.

In recent years, more and more machines have been pushing their way into this society. These include virtual machines such as social media platforms and apps, as well as physical machines such as cobots and, perhaps soon, autonomous cars. So far, all of this has entered our world and caught us more or less off guard. We haven't practiced dealing with these machines and don't really know what consequences these developments have for us as individuals or as a society as a whole. We should be open and try things out; purely theoretical warnings or discussions before acting are of little use, even with small children and a hotplate. Ignoring and doubting does not protect us from making mistakes; it only widens the gap between those who understand the technology as a tool and can use it for themselves and those who reject AI technologies, or at least try to. The clear recommendation is to embrace these developments and set new rules, for each individual and for society. Before AI and digital innovation overwhelm us, and before the digitalization we long for in so many areas becomes too much of a challenge, we must deal with it in very concrete terms. Otherwise, we will not be able to retain our human autonomy and dignity without running away from innovation.

The ability to make decisions as the ability to differentiate

Using decision-making ability as the criterion that differentiates humans from mere machines will no longer work in the future. Cobots, drones, and apps are already a reality today and will make increasingly autonomous decisions for or against us. Our influence is dwindling without us really noticing. We lack widespread awareness and know-how about the right way to deal with AI and about our role alongside it.

The future of decision-making

We will have to develop a new self-image, a new enlightenment, whose task this time is not to establish equality among humans and create systems such as democracy, but to clarify the boundaries between humans and machines and to define rules. Otherwise there will be no room left for us humans. That sounds harsh, but we must not be intimidated by it. We should not conduct a fear-driven debate about language models or machine learning, but rather create rules for the interfaces between humans and machines. Above all, this includes the question: which decisions will machines be allowed to make in the future, and which decisions do we reserve for humans?

Autonomy – or what has the Enlightenment brought us?

The principle of democracy is the possibility of choice, the decision of the majority. The core is the equal voice of every human being. This was the central answer to the great injustice of the time before the Enlightenment.

Until then, the absolute ruler had the final say on everything that mattered to him. From the Enlightenment we derived the equality of all people: everyone should have the same rights, the same opportunities and the same voice. That this process still has its flaws today and is far from truly inclusive is another discussion. What remains is that we as a society have decided to hold elections in which everyone casts an equal vote on decisions for the general public. The constitution (a man-made construct) determines what we as individuals may decide through our individual rights and which issues we decide through an election – in which each of us then has the same vote. This leeway between making our own decisions and taking part in an electoral decision is our human autonomy – our significant creative leeway!

Scope for shaping the cobot and AI world?

Are we as individuals allowed to transfer all our decision-making rights – in other words, our entire creative freedom – to a machine or an algorithm? What then remains of us as individuals? What if someone else provides us with an algorithm that then makes decisions for us? Shouldn’t we at least be able and allowed to determine which decisions are influenced by the algorithm?

Our own rights to make decisions and autonomy end where we touch, restrict, or change the sphere of other people. What about cobots and algorithms in the future? Do we also have to be mindful of algorithms? Can they enter our sphere just like other people and restrict or change our decisions?

Robot rules

Isaac Asimov proposed a visionary concept for dealing with robots back in 1942. This concept and its rules consisted of three core aspects:

  • Robots must not harm humans,
  • Robots must obey the commands of humans, unless this conflicts with the first rule,
  • Robots must protect themselves, as long as this does not conflict with the first two rules.

The American writer and biochemist established these rules for how robots should interact with humans, without yet considering how interaction with machines influences our own humanity. It is precisely this step that we must now initiate and take. We need to expand these rules and establish new framework conditions.

Our cooperative world with physical AI

We will have to rethink things in terms of people and their needs. It is no longer about equality between people as it was in the Enlightenment, but about creating a framework of autonomy for people in relation to non-deterministic machines.

As humans, we are losing this creative space. We should not wait until it is gone, but think right now about how to preserve it for ourselves.

We need a new enlightenment that (re)places the (fellow) human being at the center of our thoughts and actions, in contrast to the machine. For this, we need co-thinkers who consider what such an anthropocracy can and should look like, without thinking first about banning technology.

Image generated with ChatGPT