Artificial intelligence: blessing or concern? (part 2)
Artificial Intelligence (AI) is increasingly used in our society: not only to write texts, but also to recognize faces, detect (potential) fraudsters and locate tumors. Is this a development we should embrace, or would it be wiser to be cautious and hit the brakes?
In our previous blog we showed that AI is increasingly permeating our society. In this blog we will take a closer look at the advantages, but also the disadvantages.
Advantages, but also disadvantages
AI offers numerous advantages. For example, it can provide better public health, safer cars, customized services, and cheaper products that last longer. It ensures that you read books and listen to music that suits your taste. And it can create employment, improve machine maintenance, increase production and quality, improve customer services and save energy. An estimated 11-37% of labor productivity growth by 2035 will be related to AI (EP Think Tank, 2020).
Using AI to prevent disinformation and cyber-attacks can strengthen democracy. It can also promote diversity, for example by reducing bias in recruitment, or help devise defense and counter-attack strategies against hacking and phishing in cyber warfare.
AI will also be used more often in crime fighting and criminal justice. For example, the flight risk of prisoners can be estimated more accurately, and crime and terrorist attacks can be predicted or even prevented. AI is already being used to detect and respond to illegal and inappropriate behavior on online platforms.
In addition to these advantages, there are also numerous risks. Consider, for example, an algorithm for detecting welfare fraud based on, among other things, language, residential area, gender and indications of financial and psychological problems. The court ruled that this system was 'insufficiently transparent and verifiable', and that there were 'insufficient safeguards to protect the right to respect for private life', one of the requirements of the European Convention on Human Rights.
Liability is also a concern: who is to blame when AI causes damage? In the event of an accident involving a self-driving car, for example, should the damage be paid by the owner, the car manufacturer or the programmer? There is also the danger of creating online 'echo chambers' or 'myth traps' by only showing content that someone likes, based on previous online behavior (confirmation of conspiracy thinking). Or take realistic fake videos and audio fragments (so-called 'deepfakes'), which can ultimately lead to polarization and manipulation of elections. Germany saw the first lawsuit over a fake interview generated by AI.
The use of AI in the workplace is expected to lead to the loss of many jobs. Although AI can also create jobs, the European Parliament's Think Tank expects 14% of jobs to be automated and 32% of jobs to undergo significant change.
Greater chance of abuse
Less harmless is that unbalanced access to information can lead to abuse: an online store can, without the customer's knowledge, use AI to predict, based on previous online behavior or other data, how much the customer is willing to pay, and then raise the price or tailor its message accordingly. AI can also be used to track and profile people associated with certain actions or beliefs, for example animal or environmental activists.
However, perhaps the most important shortcoming of AI is that its results depend heavily on the design and the data with which it is trained. Both design and data can be intentionally or unintentionally unbalanced or biased. Moreover, using numbers to represent a complicated social situation can suggest that AI is factual and precise when it is not ('mathwashing'). Current AI text generators, for example, can confidently assert utter nonsense.
When AI is not trained properly, factors such as ethnic origin, gender and age can influence its decisions, for example during a job application process or when granting a loan. We have seen in the Netherlands what using AI to profile people can lead to: parents with a non-Dutch-sounding name were already suspected in advance.
This appears to be a structural problem. AI must be trained with lots of examples and patterns. A camera at a bridge is trained to recognize a boat with positive examples (a boat) and negative examples (no boat). And because the world is changing rapidly, these systems have to learn continuously, for example with a new type of boat.
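The training loop described above can be sketched in a few lines. This is a toy illustration under our own assumptions: the hand-picked features and the simple perceptron are invented for the example, and a real camera system would learn from pixels with a neural network.

```python
# Toy sketch of training a boat/no-boat classifier from labeled examples.
# Features and data are hypothetical, chosen only to illustrate the idea
# of learning from positive and negative examples.

def train(examples, epochs=20, lr=0.1):
    """Perceptron-style training on (features, label) pairs."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:  # label: 1 = boat, 0 = no boat
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred          # nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [elongated shape, on water, moving slowly]
positives = [([1.0, 1.0, 0.8], 1), ([0.9, 1.0, 0.5], 1)]   # boats
negatives = [([0.2, 1.0, 0.1], 0), ([0.1, 0.0, 0.9], 0)]   # debris, cyclist
w, b = train(positives + negatives)
```

The sketch also shows why continuous learning is needed: a new type of boat whose features fall outside the trained examples can be misclassified until the system is retrained with fresh positives.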
Here, too, AI turns out to be trained in an unbalanced way. The map below shows where the data used to train AI comes from: a bloated United States and a tiny Africa.
This map comes from the Internet Health Report 2022 of the Mozilla Foundation, the open-source organization behind the Firefox browser. It shows that more than 60 percent of the datasets used come from the United States. Data from Russia is negligible, there is hardly any data from South America, and virtually none from Africa. What the map does not even show is that just 12 organizations (only two of them outside the U.S.) produce the datasets used in more than 50 percent of cases (the top three: Princeton, Stanford and Microsoft).
This means that the language, images, behavior and faces on which programs such as ChatGPT and DALL-E are trained are quite one-sided, and that the models and algorithms that result from them are just as one-sided.
In addition, American society is far removed from our own. The question is whether we want our public administration and welfare state to be influenced by AI trained on the social and racial relations of a country with fundamentally different views on issues such as discrimination, sexuality and freedom of expression.
That is why a number of prominent scientists and industrialists recently published an open letter calling for a temporary pause in AI development. Politicians should first consider the conditions under which AI may be admitted and used. Italy has even (temporarily) banned the use of ChatGPT. Google CEO Sundar Pichai also advocates restraint.
There are also scientists who see no evidence for the claim that AI helps “people, businesses and communities” to “unlock their potential and open up new possibilities that can improve billions of lives”.
Perhaps AI can help people mainly in technical fields. In creativity and culture, in any case, outrage about fake photos and AI-made 'interviews' currently outweighs acceptance.
The European Parliament has now set up a committee for 'artificial intelligence in the digital age' to investigate the impact of AI. Furthermore, with the Artificial Intelligence Act, the European Union is trying to outline a framework for artificial intelligence, as the GDPR does for the collection and processing of data. The legislation divides artificial intelligence into four groups: minimal, limited, high and unacceptable risk. An algorithm regulator would then have to supervise the use of algorithms.
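The four-tier structure can be illustrated with a small sketch. The example systems and their tier assignments below are our own illustrative assumptions, not the Act's actual annexes, which define the categories in legal detail.

```python
from enum import Enum

class Risk(Enum):
    MINIMAL = 1       # e.g. spam filters: no extra obligations
    LIMITED = 2       # e.g. chatbots: transparency duties
    HIGH = 3          # e.g. credit scoring: strict requirements and oversight
    UNACCEPTABLE = 4  # e.g. government social scoring: prohibited

# Hypothetical lookup table for illustration only; the real classification
# follows the Act's annexes, not a mapping like this.
EXAMPLES = {
    "spam filter": Risk.MINIMAL,
    "customer-service chatbot": Risk.LIMITED,
    "CV-screening tool": Risk.HIGH,
    "government social scoring": Risk.UNACCEPTABLE,
}

def allowed(system: str) -> bool:
    """Systems in the 'unacceptable risk' tier may not be deployed at all."""
    return EXAMPLES[system] is not Risk.UNACCEPTABLE
```

The design point of the Act is visible even in this sketch: obligations scale with the tier, and only the top tier is an outright ban.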
Finally, there is one bright spot. A new study shows that people become smarter and more creative when they have to compete against AI, at least when it comes to the board game Go. The successes of the computer program AlphaGo appear to encourage human Go players to perform better.
In any case, this research shows that humans are not completely defenseless against AI, but adapt to it and come up with creative solutions. And that, in any case, is something with which we can (continue to) distinguish ourselves from AI...