Digital markets and data protection guidelines’ “silence” on AI training prevents effective competition, SCiDA team warn
The SCiDA team say ignoring AI services when shaping digital markets would be a fundamental mistake
A “silence” in new data guidelines risks preventing fair competition in AI development and allows dominant firms to strengthen their artificial intelligence, experts have warned.
The rules risk creating uncertainty about the use of data and giving advantageous access to some companies, researchers believe.
Responding to the consultation on the Joint Guidelines on the Interplay between the Digital Markets Act and the General Data Protection Regulation, the Shaping Competition in the Digital Age (SCiDA) team say the lack of policy on AI training creates enforcement uncertainty and risks enabling practices which could clearly violate the DMA’s restrictions on data combination and cross-use.
The SCiDA team say ignoring AI services when shaping digital markets would be a fundamental mistake, as it would mean that the key driving forces of competition fall outside the scope of regulatory scrutiny.
Without clear requirements for AI training and deployment, gatekeepers can freely combine massive datasets to train superior models while competitors cannot access equivalent data, creating structural barriers to effective competition in AI development.
The consultation response is by Oles Andriychuk, Pavlina Hubkova and Anush Ganesh from the University of Exeter; Rupprecht Podszun, Kena Zheng and Sarah Hinck from Heinrich Heine University Düsseldorf; and Jasper van den Boom from Leiden University.
The study says the guidelines represent an important step toward coherent GDPR-DMA application, but there are critical gaps which require clarification.
Conflicts will always arise when the GDPR permits interpretations that enable circumvention of the DMA.
Professor Podszun said: “As the Competition and Markets Authority has warned, the lack of clear guidelines could create feedback loops where advantageous access to AI allows firms to strengthen their position within a digital domain that generates rich data sets, which could lead them to have greater access to data required for building or improving foundation models.”
Professor Andriychuk said: “The guidelines do not do enough to define standards around anonymity. This means companies who are data ‘gatekeepers’ have ‘carte blanche’, while those who seek greater access for competitive purposes cannot obtain it.”
The SCiDA team warns that gatekeeper firms invoke prohibitively high data protection standards when doing so allows them to circumvent other laws, yet disregard those same standards entirely when it suits their business interests.
The objective of the guidelines is to provide guidance for some provisions of the DMA that concern or may entail the processing of personal data by gatekeepers or include references to GDPR concepts and definitions.
Some firms have invoked data protection and privacy requirements protected by the GDPR as a justification for circumventing compliance with the data-related obligations under the Digital Markets Act. The consultation response says the GDPR should not be compromised in the pursuit of the DMA’s objectives, and yet the DMA’s goals should not be diminished by the application of other legal frameworks, including the GDPR and ePrivacy Directive.
Dr Hubkova said: “If designations under the DMA cling to a narrow, platform-centric model of the digital economy, the regime risks becoming outdated. A more effective approach would incorporate broader theories of harm which address the evolving structures of power and dependency in AI-driven markets.”
The SCiDA team recommend new guidance stating that, when the GDPR permits multiple interpretations, gatekeepers must choose the one which least hinders DMA objectives while maintaining GDPR compliance.
AI training should be addressed explicitly, clarifying that combining data across core platform services (CPS) for AI foundation model training requires consent, and that data access obligations apply so as to enable competitive AI development with appropriate privacy safeguards.
