30 March 2023

Temporary stop on AI?


More than 1,100 prominent figures from science and tech have signed an open letter stating that developments in AI (artificial intelligence) are moving so fast that a pause is needed. In addition to numerous academics, the signatories include well-known names from the tech world, such as Elon Musk and Steve Wozniak. The pause should last at least six months and be used to establish rules for dealing with increasingly complex and sophisticated AI.
In the open letter, the signatories write that AI systems can already compete with humans in certain tasks. They question whether we should be developing non-human minds that might eventually become smarter than humans and replace them. Questions like these, they argue, should first be answered by politicians.
The temporary stop should apply to all AI more powerful than GPT-4, the latest model behind the text generator ChatGPT.
Of course there are quite a few snags to the proposal. Can the development of a product by commercial companies be slowed down or stopped at all? AI has become a battleground between Microsoft (with ChatGPT) and Google, and the question is whether either wants to stop developing while the competitor carries on. After all, AI is being used in the search-engine battle between Google and Microsoft, and in the battle between their office applications: Gmail vs Outlook, Word vs Google Docs. The stakes are therefore (too) high.

Cause for concern

The development of general-purpose AI such as ChatGPT is booming, while legislation lags behind. Recent reports show that ChatGPT-like models display remarkably capable behaviour. Microsoft is investing billions in the development of AI tools, but the outside world does not know exactly what the systems are trained on or how they are built.
That is cause for concern. One-sided training of AI can lead to prejudice, incorrect information or discrimination, even in systems that are already in use. One example is a dark-skinned student who was not recognized by anti-cheating cameras during an exam, because the system had been trained only on students with a lighter skin colour.
In addition, incorrect answers delivered with authority (by systems like ChatGPT) are accepted by people as "the truth", because people tend to trust automation: "It comes from a computer, so it must be right."

Another point is that OpenAI (the creator of ChatGPT) seems to be using society as one big laboratory, observing how people react to and interact with AI. Competitors now feel an unjustified urgency to integrate AI into their own products, for fear of being too late.
Many signatories therefore have few illusions that the letter will help, but signed it anyway because they are concerned. A temporary halt to AI development is not the solution, but: "It is important to talk about it now, because we are on the eve of the risky scenarios."
For the time being, however, there does not seem to be any real pause in AI development. In addition to Microsoft and Google, communication platforms Slack and Discord, payment service Stripe and language app Duolingo are all working on integrating ChatGPT.
