Article Details
Scrape Timestamp (UTC): 2024-09-12 12:16:54.897
Source: https://www.theregister.com/2024/09/12/google_ai_model_inquiry_eu/
Original Article Text
EU kicks off an inquiry into Google's AI model

Privacy regulator taking a closer look at data privacy and PaLM 2

The European Union's key regulator for data privacy, Ireland's Data Protection Commission (DPC), has launched a cross-border inquiry into Google's AI model to ascertain whether it complies with the bloc's rules.

The probe is part of broader efforts by the DPC and its peers in the European Union (EU) and European Economic Area (EEA) to regulate the processing of EU and EEA data subjects' personal data in AI models.

The DPC is concerned about whether Google fully complied with its Data Protection Impact Assessment (DPIA) obligations, an evaluation that EU regulators require data controllers to perform before they ingest large amounts of personal data in a systemic way. A DPIA defines the scope, context, and purposes of data processing and assesses whether that processing might result in a high risk to the rights and freedoms of individuals.

According to the DPC: "A DPIA assessment is a key process for building and demonstrating compliance, which ensures that data controllers identify and mitigate against any data protection risks arising from a type of processing that entails a high risk.

"It seeks to ensure, among other things, that the processing is necessary and proportionate and that appropriate safeguards are in place in light of the risks."

The obligation to perform the assessment falls under the umbrella of the General Data Protection Regulation (GDPR), and the probe relates to Google's processing of personal data in developing its foundational AI model, Pathways Language Model 2 (PaLM 2).

A Google spokesperson told El Reg: "We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions."

Google is not alone in having its AI ambitions come under regulatory scrutiny.
In August, X agreed to suspend the processing of personal data from EU and EEA users' posts to train its Grok AI, against the backdrop of an urgent High Court application. In June, Meta paused its plans to train AI models on EU users' Facebook and Instagram posts in response to a request from the Irish DPC.

Using personal data to train models and process prompts is a potential privacy minefield for AI companies as far as the EU is concerned. However, AI models will be of little use to EU and EEA users without that data: what is culturally significant in the US, for example, might not apply in Germany. As this latest inquiry shows, EU and EEA regulators are closely monitoring how the tech giants train their models and use citizen data.
Daily Brief Summary
Ireland's Data Protection Commission (DPC), the EU's key data privacy regulator, has launched a cross-border inquiry into Google's AI model, PaLM 2, to investigate compliance with EU privacy laws.
The investigation aims to ensure that Google's data handling practices, particularly the ingestion of personal data for AI training, adhere to the General Data Protection Regulation (GDPR).
The DPC is scrutinizing whether Google's Data Protection Impact Assessment (DPIA) adequately addresses data protection risks associated with the PaLM 2 model.
A DPIA is mandatory under the GDPR for high-risk data processing and is designed to identify and mitigate such risks; it evaluates the necessity and proportionality of the processing and the safeguards in place.
Other tech giants like X and Meta have also faced regulatory scrutiny in the EU over how they use personal data to train their AI models.
Google has said it takes its obligations under the GDPR seriously and will work constructively with the DPC to answer its questions.