
My book ‘De digitale butler – Kansen en bedreigingen van kunstmatige intelligentie’ [English: ‘The Digital Butler – Opportunities and Threats of Artificial Intelligence’] has been in stores since October 2017.

Jarno Duursma – Author on digital technology – TEDx speaker – Futurist.

Artificial intelligence will undoubtedly bear sweet fruit for us as a society, but there are considerable concerns as well. This blog aims to offer a comprehensive overview of the risks of artificial intelligence.

12 risks of artificial intelligence

Artificial intelligence is a subject that captures the imagination of many people, largely as a result of the many Hollywood films about it. These films often show science fiction-like doom scenarios that are almost always exaggerated. Yet an increasing number of alarming reports on artificial intelligence have appeared, fuelled by the technology's recent leap in quality (especially the improvement of machine learning and deep learning).
Science fiction is becoming reality. Smart computer systems are becoming increasingly adept at imitating what we as people are capable of, including skills such as looking, listening and speaking. And they learn to discover patterns and rules in huge amounts of data. In some areas, these systems easily have the upper hand. This has quite a few consequences.

Disruption

In my view, Artificial Intelligence (AI) will be the most disruptive technology of the next decade. The quality of this technology has improved considerably in a number of areas in recent years, with all the consequences this entails. Smart software systems gain an ever better understanding of who we are, what we do, what we want and why we want it. A world full of opportunities opens up. Chatbots, smart virtual assistants and autonomous intelligent software agents will increasingly come to our aid. AI systems will bring us prosperity, time savings, convenience, insights and comfort. We will become used to a personal assistant that is available 24 hours a day and knows what we need before we know it ourselves. Just as it is difficult to imagine life without the Internet today, a decade from now the same will be true of your personal assistant.

Smart AI systems will also provide us with insights we believed would never be possible, and answers to questions we did not even know we had. AI systems are faster, never tire, learn from examples and from each other, and are considerably smarter than humans in specific domains. This is not a futuristic idea but present-day reality.

A few concrete examples: smart computer systems are better at recognising art forgery than human experts. Another system is able to recognise dementia before a medical specialist even considers the option. An artificial intelligence system recognises skin cancer sooner than a medical professional, while another system does something similar with nail fungus. Researchers from Stanford can predict voting behaviour in elections based on Google Street View images, while an algorithm fed data from the Apple Watch can predict diabetes. Facebook knows when you are dating someone before you have manually indicated it on the platform. Amazon holds a patent on ‘anticipatory shipping’, which would allow it to send you a package before you know you want it. The predictive power of AI will be far-reaching.

However, we cannot close our eyes to the potentially negative scenarios. President Putin of Russia recently said that the frontrunner in the field of artificial intelligence is likely to become the leader of the world. And what should we make of the AI system that claims to be able to infer someone's sexual orientation with facial recognition technology? How should we deal with this kind of new technology?

It is therefore sensible to have a close look at a potentially powerful technology such as artificial intelligence: this should include both the positive and the less positive sides. Here we go.

12 risks of artificial intelligence

1. A lack of transparency

Many AI systems are built with so-called neural networks as their engine; these are complex systems of interconnected nodes. Such systems are barely capable of indicating the ‘motivation’ behind their decisions: you only see the input and the output, and the system in between is far too complex to follow. Yet where military or medical decisions are involved, it is important to be able to trace which specific data led to which specific decision. What underlying reasoning produced the output? What data was used to train the model? How does the model ‘think’? At present, we are generally in the dark about this.
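To make the problem concrete, here is a minimal sketch; the use of Python with scikit-learn and the synthetic data are my own illustrative choices, not tools mentioned above. A small neural network is trained and makes a prediction, but the thousands of learned weights in between offer no human-readable explanation for that individual decision.

```python
# A minimal black-box illustration (assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; any real dataset shows the same opacity.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X, y)

sample = X[:1]
print("input :", sample[0][:5], "...")   # what went in (first 5 features)
print("output:", model.predict(sample))  # what came out
# The 'reasoning' in between is thousands of opaque parameters:
print("learned weights:", sum(w.size for w in model.coefs_))
```

Research into ‘explainable AI’ tries to reconstruct explanations after the fact, but for now the point above stands: input and output are visible, the motivation is not.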

2. Biased algorithms

When we feed our algorithms data sets that contain biased data, the system will logically confirm our biases. There are already many examples of systems that disadvantage ethnic minorities more than the white population. After all, when a system is fed discriminatory data, it will produce discriminatory output. Garbage in, garbage out. And because the output comes from a computer, it tends to be assumed to be true. (This is the so-called automation bias: the human tendency to take suggestions from automated decision-making systems more seriously and to ignore contradictory information from people, even when that information is correct.) And when discriminatory systems are fed new discriminatory data (because that is what the computer says), it turns into a self-fulfilling prophecy. And remember: biases are often a blind spot. A simple illustration follows below.
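In this minimal sketch of ‘garbage in, garbage out’, the data, the group labels and the 0.8 bias penalty are entirely hypothetical: a model trained on historically biased decisions reproduces the bias for equally qualified people.

```python
# Hypothetical example: historical approval decisions were biased
# against group 1; a model trained on them inherits that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # 0 = majority, 1 = minority (hypothetical)
skill = rng.normal(0, 1, n)      # actual qualification
# Biased historical labels: group 1 was approved less often at equal skill.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# Two applicants with identical skill, different group membership:
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
# The minority applicant gets a clearly lower approval probability --
# the model has faithfully learned the bias hidden in its training data.
```

Nothing in the code ‘intends’ to discriminate; the bias enters silently through the labels, which is exactly why it is so often a blind spot.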

Companies still have too little expertise at their disposal to properly assess these data sets and filter out assumptions and biased data. The most vulnerable groups are disadvantaged by these systems even more than usual. Inequality will increase. In the worst-case scenario, algorithms will choose the winners and the losers. It is similar to the talking Sorting Hat from the Harry Potter films: nobody knows exactly what happens inside, but you simply have to accept its verdict.

Nonsense? Many convicts have been sentenced with the help of a non-transparent and technically flawed system, while predictive policing disadvantages the most vulnerable in society.

And how can we ascertain that our data sets (on which we rely to an ever greater extent) are not contaminated on purpose by hostile governments or other parties with malicious intent?

In short, we will have to avoid ending up in a ‘computer says no’ society, where people rely too heavily on the output of smart systems without knowing how the algorithms and the data arrived at their result.

3. Liability for actions

A great deal is still unclear about the legal aspects of systems that become increasingly smart. What happens to liability when an AI system makes an error? Do we judge this as we would judge a human? Who is responsible in a scenario in which systems become increasingly self-learning and autonomous? Can a company still be held accountable for an algorithm that has taught itself, subsequently determines its own course and, based on massive amounts of data, has drawn its own conclusions in order to reach specific decisions? Do we accept a margin of error for AI machines, even if it sometimes has fatal consequences?

4. Too big a mandate

The more smart systems we use, the more we will run into the issue of scope. How broad is the mandate we give our smart virtual assistants? What are and aren't they allowed to decide for us? Do we stretch the autonomy of smart systems ever further, or should we stay in control at all costs, as the European Union prefers? What do and don't we allow smart systems to determine and implement without human intervention? And should a preview function perhaps be installed in smart AI systems as standard? The risk exists that we transfer too much autonomy before the technology and its preconditions are fully developed, and without remaining aware over time of which tasks we have outsourced and why. Indeed, there is a risk that we increasingly end up in a world we no longer understand. And we must not lose sight of our interpersonal empathy and solidarity: there is a real risk that we leave difficult decisions (e.g. dismissing an employee) to ‘smart’ machines too easily because we find them too difficult ourselves.

5. Too little privacy

We create 2.5 quintillion bytes of data each day (that is 2.5 million terabytes, where 1 terabyte is 1,000 gigabytes). Ninety per cent of all digital data in the world was created in the last two years. A company needs substantial amounts of clean data for its smart systems to function properly: apart from high-quality algorithms, the strength of an AI system lies in having high-quality data sets at its disposal. Companies involved in artificial intelligence are becoming increasingly greedy when it comes to our data: it is never enough, and anything is justified to achieve even better results. The risk, for example, is that companies build an ever more precise profile of us, and that these resources are also used for political purposes.

The result is that our privacy is being eroded. And even when we do protect our personal privacy, these companies will simply use lookalike audiences: people who resemble us very closely. Our data is resold en masse, with ever less awareness of who receives it or for what purposes it is used. Data is the lubricating oil of AI systems, and our privacy is at stake in any event.

And not unimportantly: technology is getting eyes to see. Cameras can easily be fitted with facial recognition software, and our gender, age, ethnicity and state of mind can be measured with smart software. This is not the future; this type of software already exists and is readily available, often as open source. A dynamic advertising billboard in the Dutch city of Utrecht was switched off because the spy software installed on it had caused public outrage. Face, voice, behaviour and gesture analysis results in ever sharper profiles, and smart cameras allow for real-time profiling. Smart systems can determine our state of mind better than our partner or family members can. The government is happy, businesses are happy. Bye-bye privacy.

A number of these options have already been introduced in China. Some police officers wear glasses with facial recognition technology, linked to a database with facial pictures of thousands of ‘suspects’. Bear in mind that in China you are easily labelled a suspect when you make certain political statements in public. A so-called social credit system also exists there: a rating system in which you are judged on the basis of your behaviour, and people with higher scores receive privileges. In addition, the country has a very extensive network of surveillance cameras with image and facial recognition software.


6. Major tech companies exert a great deal of influence

The issue above ties in with the power of the major tech companies: Facebook, Microsoft, Google, Apple, Alibaba, Tencent, Baidu and Amazon. These eight companies have the financial capacity, the data and the intellectual talent to raise the quality of artificial intelligence enormously. The risk therefore exists that very powerful technology ends up in the hands of a relatively small group of commercial (!) companies. And the better the technology, the more people use it, the more effective it becomes, et cetera. This gives the big players an ever greater advantage. The winner-takes-all mechanism of the Internet era also applies to data (data monopolies) and algorithms.

Also, so-called ‘transfer learning’, re-using what a model has already learned for a new task, is becoming more and more effective, so that increasingly little data is required for a good result. A system from Google, for example, had been offering quality translations from English to Spanish and from English to Portuguese for some time. With the help of new transfer learning techniques, this system is now able to translate between Spanish and Portuguese with very limited additional input. The major tech companies own both the data and these transfer learning models.
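As an illustration, here is a minimal transfer-learning sketch; it assumes PyTorch with torchvision 0.13+ and an image task, my own stand-in, since Google's translation system itself is not public. A network pre-trained on one task is re-used for a new one, so only the small final layer needs new data.

```python
# Minimal transfer-learning sketch (assumes torchvision 0.13+).
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...freeze its learned knowledge...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final layer for a new 2-class task.
model.fc = nn.Linear(model.fc.in_features, 2)
# Training now touches just model.fc, so a small dataset suffices --
# which is why owning large pre-trained models is such an advantage.
```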

Experience shows that the previously mentioned commercial objective will always dominate, and it remains to be seen how these companies will use the technology in the future.

 

7. Artificial superintelligence

I personally believe the debate about the drawbacks of artificial intelligence is dominated a little too often by the discussion on superintelligence. The latter refers to systems whose intelligence far surpasses human intelligence in several respects. Such systems would be able to acquire all manner of skills and expertise without human intervention, train themselves for situations unknown to them and understand context. A kind of super intelligent oracle that regards human beings the way we regard snails in the garden: as long as they do not bother us, they are allowed to live.

To be clear, I believe we should take this scenario seriously, and we should primarily focus on conveying our moral considerations to intelligent systems. In other words, we should not teach them rigid rules, but something about human trade-offs. This is very important: it would be best if we provided them with a ‘conscience’, otherwise they risk behaving like anti-social personalities. At the same time, I do not believe we will see even the slightest hint of consciousness in these systems in the short term. The technology is far too young for this, if creating consciousness in these systems is possible at all. It is perfectly fine to reflect on superintelligence, and we should really take it seriously, but other risks are certainly more pressing right now.

Lest I forget, I would like to add a nuance here. I am not that concerned about a system endowed with some form of consciousness taking over the world. But we will probably be affected far more often by systems programmed with a certain objective that they pursue relentlessly, without taking into account the things we consider important as humans, such as empathy and social equality. Simply because they are programmed that way. What we consider important is their blind spot. And that is NOT science fiction.

Facebook is a ‘great’ example of how an artificial intelligence system can have a thoroughly negative result. Its increasingly smart algorithms have only one goal: keeping you on the platform for as long as possible. Creating maximum engagement with the content. Collecting clicks and reactions. The system is insensitive to matters such as an objective, factual representation of events: truth is unimportant, because the system is only interested in the time you spend on the platform. Facebook does not care about the truth, with all the harmful consequences this entails.

8. Impact on the labour market

AI will put pressure on the labour market in the years ahead. The rapid increase in the quality of artificial intelligence will make smart systems far more adept at specific tasks. Recognising patterns in vast amounts of data, providing specific insights and performing cognitive tasks will be taken over by smart AI systems. Professionals should closely monitor the development of artificial intelligence, because systems are increasingly able to look, listen, speak, analyse, read and create content.

There are therefore certainly people whose jobs are in the danger zone and who will quickly have to adapt. However, the vast majority of the population will work alongside artificial intelligence systems. And remember: many more new jobs will be created, although they are harder to imagine than the jobs that will be lost. Social inequality will nevertheless increase in the years ahead as a result of the divide between the haves and the have-nots. I believe we as a society will have to look after the have-nots: the people who can only perform routine manual work or routine brainwork. We should remember that a job is more than just the salary at the end of the month. It offers a daytime pursuit, a purpose, an identity, status and a role in society. What we want to prevent is a group of people emerging in our society who are paid and treated as robots.

In short, it is becoming increasingly important for professionals to adapt to the rapidly changing work environment. Moore’s law ensures there will be an ever greater distance between humans and machines.

9. Autonomous weapons

As recently as this summer, Elon Musk of Tesla warned the United Nations about autonomous weapons controlled by artificial intelligence. Along with 115 other experts, he pointed to the potential threat of autonomous military equipment. This makes sense: these are powerful tools that could cause a great deal of damage. And it is not just actual military equipment that is dangerous. As technology becomes ever cheaper and more user-friendly, it becomes available to everyone, including those who intend to do harm. One thousand dollars will buy you a high-quality drone with a camera. A whizz-kid could then install software on it that enables the drone to fly autonomously. Facial recognition software is already available that would enable the drone camera to recognise and track a specific person. And what if the system itself starts making decisions about life and death, as is already the case in war zones? Should we leave this to algorithms?

And it is only a matter of time before the first autonomous drone appears that combines facial recognition with a 3D-printed rifle, pistol or other gun. Watch the video Slaughterbots to get an idea of this. Artificial intelligence makes it possible.

10. Everything becomes unreliable – e.g. fake news and filter bubbles

Smart systems are becoming increasingly capable of creating content – they can create faces, compose texts, produce tweets, manipulate images, clone voices and engage in smart advertising.

AI systems can turn winter into summer and day into night. They are able to create highly realistic faces of people who have never existed.

Open-source deepfake software can paste pictures of faces onto moving video footage, making it look as though you did something on video that never actually happened (read my report on deepfakes here (Dutch)). Celebrities are already being affected, because those with malicious intent can easily create pornographic videos starring them. Once this technology becomes slightly more user-friendly, it will be child's play to blackmail an arbitrary individual: take a photo of anyone and turn it into vile pornography. One e-mail would then be enough: “Dear XYZ, in the attached video file you play the starring role. I have also downloaded the names and data of all your 1,421 LinkedIn connections and could mail them this file. Transfer 5 bitcoins to the address below if you want to prevent this.” This is known as faceswap video blackmail.

Artificial intelligence systems that create fake content also entail the risk of manipulation and conditioning by companies and governments. Content can be produced at such speed and on such a scale that opinions are influenced and fake news is hurled into the world with sheer force, specifically targeted at the people who are most vulnerable to it. Manipulation, framing, controlling and influencing. Computational propaganda. These practices are reality now, as we saw in the case surrounding Cambridge Analytica, the company that gained access to the data of 87 million American Facebook profiles and used it for a corrupt, fear-spreading campaign to help bring President Trump to power. With artificial intelligence, companies and governments with bad intentions have a powerful tool in their hands.

What if a video surfaces featuring an Israeli general who says something about wiping out the Palestinians, with background images of what appears to be waterboarding? What if we are shown videos of Russian missiles being dropped on Syrian cities, accompanied by a voice recording of President Putin casually talking about genocide? Powder keg -> fuse -> spark -> explosion.

Please have a brief look at this fake video of Richard Nixon in order to gain an accurate impression of this.

And how do we prevent social media algorithms from serving us ever more ‘tailor-made’ content, thereby reinforcing our own opinions in an echo chamber that reverberates ever more strongly? How do we avoid a situation where various groups in society increasingly live in their own filter bubble of ‘being right’? Individual filter bubbles can be created this way on a massive scale, resulting in a great deal of social unrest. The sketch below shows how little it takes for such a bubble to form.
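This is a minimal simulation of the feedback loop behind a filter bubble; the content categories, the 90% click probability and the exploit-only recommender are hypothetical simplifications. A system that only shows you more of what you clicked before narrows your world step by step.

```python
# Hypothetical filter-bubble simulation: an exploit-only recommender.
import random
from collections import Counter

random.seed(0)
categories = ["politics A", "politics B", "sports", "science"]
clicks = Counter({c: 1 for c in categories})  # broad interests at the start

for _ in range(50):
    # The recommender always serves the category clicked most so far...
    shown = clicks.most_common(1)[0][0]
    # ...and the user clicks what is shown 90% of the time.
    if random.random() < 0.9:
        clicks[shown] += 1

print(clicks)  # one category dominates: the bubble has closed
```

Real recommenders are far more sophisticated, but the self-reinforcing loop, engagement feeding recommendations feeding engagement, is the same mechanism.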


And what should we think of ‘voice cloning’? It is now possible to simulate somebody's voice with software. The result is not yet perfect, but the quality is improving all the time. Identity fraud and cybercrime are lurking. Criminals will have voicemail messages generated by software containing payment instructions they have scripted themselves. This is social engineering (the use of deception to manipulate individuals into disclosing confidential or personal information for fraudulent purposes) through voice-cloning cybercrime. And it is already a reality.

11. Hacking algorithms

Artificial intelligence systems are becoming ever smarter, and before long they will be able to distribute malware and ransomware at great speed and on a massive scale. They are also becoming increasingly adept at hacking systems and cracking encryption and security, as recently happened with the CAPTCHA mechanism. We will have to take a critical look at our current encryption methods, especially as the power of artificial intelligence keeps increasing. Ransomware-as-a-service is constantly improving thanks to artificial intelligence, and other computer viruses, too, are becoming smarter by trial and error.

A self-driving car, for example, is essentially software on wheels. It is connected to the Internet and can therefore be hacked (this has already happened). This means a lone nut in an attic room could cause a tragedy such as the one in Nice. I envisage ever smarter attack software becoming available, as easy to use as online banking is today.

In hospitals too, more and more equipment is connected to the Internet. What if the digital systems there were hacked with so-called ransomware, software that blocks entire computer systems until a ransom is paid? It is terrifying to imagine somebody causing a widespread pacemaker malfunction, or threatening to do so.

12. Loss of skills

We are losing more and more human skills through our use of computers and smartphones. Is that a pity? Sometimes it is, sometimes it is not. Smart software makes our lives easier and reduces the number of boring tasks we have to perform: navigating, writing by hand, mental arithmetic, remembering telephone numbers, forecasting rain by looking at the sky, et cetera. None of these are immediately crucial, and handing skills over to technology has been going on for centuries; almost nobody knows how to make fire by hand anymore, for example. Still, in my view it is important to ask: aren't we becoming excessively dependent on new technology? How helpless do we want to be without the digital technology that surrounds us?

And, not unimportantly: given that smart computer systems will increasingly understand who we are, what we do and why we do it, and will offer us customised services, isn't tolerating frustration an important human skill? Being patient? Settling for slightly less than ‘hyper-personalised’?

At work, we are increasingly assisted by smart computer systems that can read the emotions and state of mind of others; this is already the case in customer service, for example, and experiments are being conducted in American supermarkets. To what extent are we losing the skill to make these observations ourselves and to train our own antennae? Will we ultimately become less adept at reading our fellow human beings in a physical conversation?

To an extent this is already happening, of course, because we increasingly communicate through our smartphones. Has this made us worse at reading our conversation partner in a face-to-face conversation? In short, which of our skills, practical and emotional, do we want to leave to smart computer systems? Are attention, intimacy, love, concentration, frustration tolerance and empathy aspects of our lives that we are willing to hand over to technology to an ever greater extent?

In this respect, AI is a double-edged sword among technological developments: razor-sharp on both the potentially positive and the potentially negative side.

Have I overlooked something or are you interested in a lecture on the benefits and/or drawbacks of artificial intelligence? Feel free to contact me.
