The Italian data protection authority has announced its intention to investigate OpenAI, the Microsoft-backed US start-up behind the AI chatbot ChatGPT. The tool, which answers questions in natural, human-like language and can mimic different writing styles, has been used by millions of people since its launch in November 2022. Microsoft has invested billions of dollars in the technology, which was recently added to Bing and will be incorporated into its Office suite of apps. However, the Italian regulator has raised concerns over a data breach involving user conversations and payment information, and over the way personal data is collected and stored.
The Garante, the Italian data protection regulator, announced that it would block OpenAI's chatbot and investigate its compliance with the General Data Protection Regulation (GDPR), which governs how personal data is used, processed and stored. In response, OpenAI said it is committed to complying with privacy laws and has disabled ChatGPT for Italian users at the Garante's request. The regulator has raised concerns over the lack of a legal basis to justify the mass collection and storage of personal data, the exposure of minors to unsuitable answers, and the collection of data to train AI algorithms.
Italy is not the first country to restrict ChatGPT; the tool is already blocked in China, Iran, North Korea and Russia. Following a complaint in the US, there have been calls for EU and national authorities to investigate ChatGPT and similar chatbots. The concerns raised include the potential for deception and manipulation of individuals, with a lack of regulation increasing the risk. Ursula Pachl, deputy director general of the consumer advocacy group BEUC, warned that society is “currently not protected enough from the harm” that AI can cause.
This incident underscores the importance of regulatory compliance for companies operating in Europe, where businesses must meet the EU's stringent data protection rules. The Information Commissioner’s Office, the UK’s independent data regulator, has said it will support developments in AI while also challenging non-compliance with data protection laws. And while the EU works on the world’s first dedicated AI legislation, BEUC has warned that the AI Act could take years to come into force, leaving consumers at risk in the meantime.