On March 22nd, 2019, I gave a TEDx talk in Apeldoorn. It is now online.
‘Algorithms in, Humans out? How to stay human in the era of artificial intelligence’.

Summary: Artificially intelligent systems are getting smarter by the day. They now even have human-like skills, such as seeing, listening and talking. They will give us superhuman cognitive abilities. As humans and as a society, we are going to reap the enormous benefits of this, but this powerful technology also comes with disadvantages.
In insurance, mortgages and healthcare, for example, smart computer systems are increasingly making important but opaque decisions that affect our lives.
And now that AI systems also seem to be able to detect our emotions, we have to be critical.
And perhaps most importantly: if AI systems are constantly focused on taking away our daily inconvenience, is that a blessing or a curse? What will this do to us as human beings? And what solutions are there? How do we remain human in a world that is increasingly dominated by artificially intelligent systems?

Watch the video

You can watch the video here. Unable to watch it? The transcript is below the video.

Transcription

Which of you has ever been to Mumbai, India? I haven’t, but I have. Let me explain.

I still vividly remember: it was 1997, and I was drinking instant coffee with my twin brother in the computer room of his school. I remember that I could log in to a website where you could watch a webcam facing a square in Mumbai, India. Gradually you got to see what was happening there in Mumbai in real time. I was captivated. It was magical.

I had a similar experience when I watched low-quality footage of a technology conference on YouTube in April 2014. A demonstration was given of DeepMind’s self-learning computer model. The computer system played a game called Breakout. It had learned this all by itself, without having exact rules pre-programmed in advance. Within a few hours the system reached the high score. Once again, I thought it was magical.

It might not surprise you, but the very day it was possible, I bought an Apple Watch, an Amazon Alexa speaker and a Google Home assistant. I have apps that complete my sentences, convert my speech to written text, suggest books to me and send me a notification when they think I’m in the middle of a workout.

As we all know, there has been a qualitative growth spurt in the field of artificial intelligence. You no longer have to tell a computer exactly what the world looks like; it can discover it for itself, as long as you have enough digital examples. Machine learning and deep learning are responsible for this qualitative growth spurt.

Benefits

AI has enormous benefits. It gives us superhuman cognitive capabilities.

We obtain insights from large amounts of data; we can make predictions about the world around us. Smart computer systems now have human-like skills, such as seeing, listening and talking. We obtain answers to questions we didn’t even know we had.

For example, an artificially intelligent system can detect false police statements. People who are actually robbed mainly talk about the experience. People who give false statements appear to talk more than average about the things they have lost. This is what an AI system has detected.

So artificially intelligent systems give us superhuman cognitive capabilities, and at the same time they take over routine and computer-like tasks from us. And that’s fine: I found them boring anyway. The rise of artificially intelligent systems will bring us much well-being, prosperity and convenience. As you can hear, I am optimistic about the role of this technology in our development.

Disadvantages

But we should not be naive; with the qualitative rise of this technology, not everyone will be better off. There are also disadvantages.

Sometimes algorithms made with good intentions lead to unintended and unforeseen harmful effects.

Looking at the disadvantages, I have chosen to highlight one specific development here: algorithmic decision-making. Algorithms are increasingly making decisions for us, about us and on our behalf.

First of all, companies and governments are increasingly outsourcing decisions to smart software that uses large amounts of data. But this data is often messy, biased and wrongly classified. Unfortunately, this data is used to train algorithmic models, which as a result make biased decisions.

Algorithmic Decision Making

Discrimination and exclusion can be expected when algorithmic models choose the winners and losers.
And the past shows that we tend to accept algorithmic decision-making without criticism and sometimes prefer to let these systems make the difficult decisions.

The consequences of this can have a severe impact on me, you, my brother, my wife and my children. You do or do not get a mortgage, insurance, leave, a contract with an employer, or an invitation to a job interview. Certainly when these systems are not transparent, we must avoid constantly dealing with the Sorting Hat from the Harry Potter films: nobody knows exactly what is happening, yet everyone is confronted with an irreversible outcome.

And I know that more and more software is being produced whose developers claim it can detect our emotions, based on our facial expressions, voice, non-verbal movements and the words we literally say.

In my opinion, as consumers and citizens, we should be critical of this. ‘Facial expression’, for example, is not as one-dimensional as we think. Life is not a Walt Disney film, where the outside of your face is a one-dimensional mirror of the soul, right? We humans are more complex than computers can ever understand.

What if ‘emotionally intelligent’ software is used in a job application, performance interview, police questioning, or in interaction with a lawyer?

Digital Butler

When we look at algorithms as decision makers, we also have to take a look at the digital butlers surrounding us.

Because more and more smart software and smart devices are going to make decisions for us, about us and on our behalf.
Smart software is getting better and better at understanding who we are, what we do and why we do it. In the future, smart software will answer emails on your behalf, change your calendar based on a recently received voicemail, and notify others. It will automatically renew or change your phone subscription based on price and data usage. It’s going to help you before you even know you need help, answering questions we didn’t even know we had.

But we must prevent our enormous appetite for personalized services and comfort from having a negative effect on us as humans in the long term. Let me explain.

Disadvantages

First of all, I think this digital butler will cause us to reflect less on our own consumption. We literally don’t have to think anymore. Smart assistants and smart software help us to buy, replace, reserve or renew, thoughtlessly. More stuff and services, but the question is whether it makes us happier. Don’t forget: many tech companies are driven purely by commercial motives and lack a moral compass.

The second danger of the digital butler is that it deprives you of the opportunity to autonomously unravel your own wishes or ideas, because it’s so easy to follow the suggestions of the software without thinking.

Because 86% of the people with your profile like pizza with pineapple, you’ll probably love it too.

Our taste, and that of our children, can be manipulated towards the commercially most appealing outcome.

Amazon chooses a birthday present for my brother, Google Maps determines how I travel, Tinder chose my friend’s life partner, Facebook chooses what information we consume and thus determines our world view.

We must prevent our lives from taking place on autoplay.

Inconvenience

The third danger of the digital butler is that it will eventually make us less tolerant of inner inconvenience because the software deprives us of inconvenience. This may seem like a strange thought, but it is not. We have already seen a similar process with the rise of the smartphone. This has replaced uncomfortable boredom. Seriously, when was the last time you were bored or saw someone who was bored?

I’m also worried that all this smart software, these on-demand services and all this “extreme customer centricity” mistakenly gives us the idea that the whole world revolves around us. Does the digital butler strengthen individualism? What do you think?

Solutions

But are there solutions? Of course there are.

First of all: Governments and companies must be very critical of the software they are going to use or design in-house. They must use or create trustworthy AI that is focused on human well-being. Companies must look beyond their shareholder value and dividends.

We can also make a difference. You can educate yourselves about the impact of these technologies. So congratulations: one of the better decisions you have made today was to attend my talk.
And I made a decision: my Amazon Alexa and Google Home smart speaker have been in the closet for a while now, collecting dust. And to my children’s great annoyance, I often try to make them understand the drawbacks of smart devices and smartphones.

But part of the solution also lies within ourselves. To deal with the consequences of invisible algorithmic decision-making, we have to look inside: at our light side, but also at our dark side.

Light Side

To distinguish ourselves from smart computers we need to explore our light side and empower our human capabilities. Think of creativity and imagination, but especially empathy, warmth, attachment and compassion. These are the qualities we have to embrace, expand and strengthen. By doing so we can maintain the human touch in the decision-making processes of the future.

And you have to investigate for yourself, at an individual level, what your preferences are, what your passion is and where your purpose lies. No autoplay, but purpose.

This process of self-development makes you agile and will give you fulfilment. And the latter is very important. Let me explain.

Dark Side

As I indicated earlier in my story, I am afraid that the digital butler will eventually make us less tolerant of inner inconvenience because the software deprives us of inconvenience. But we have to embrace discomfort. You might wonder why. Let me tell you a short story.

As a teenager, I sometimes found it difficult to face some of my darker feelings: things like anger and my fears, and discomfort from the outside world related to these unpleasant feelings inside. Now that I’m older, I know for sure: you and I need friction, frustration and discomfort. They provide reflection. Reflection on our emotional state. They are the gateway to exploring our wishes and darker emotions.

One day I decided to face my uncomfortable feelings. I started to improve myself and take new steps. From this process, I grew as a human being. Discomfort is the engine for inner growth and taking new steps.

More Human

So: by exploring our light and dark sides we become more and more human! This is especially important now that artificially intelligent systems equal or surpass us.

Because of this process, we will always distinguish ourselves from smart computers, we will maintain the human touch in the decision-making processes of the future, and we will ultimately be less sensitive to the suggestions, manipulation and distractions of smart software.

We’ll simply be more fulfilled. Establishing a connection with others helps us, but establishing a personal connection with ourselves does so in particular.
