A Deep Dive into Privacy Concerns in the Generative AI Data Lifecycle: New Article by Mindy Duffourc, Konrad Kollnig and Sara Gerke
In a newly published article titled Privacy of Personal Data in the Generative AI Data Lifecycle, researchers Mindy Duffourc and Konrad Kollnig from the Maastricht Law and Tech Lab, along with Professor Sara Gerke from the University of Illinois, explore the significant risks and challenges that arise from the use of personal data in the development and operation of Generative AI (GenAI) models. Published in the Journal of Intellectual Property & Entertainment Law (JIPEL), this timely research highlights how personal data, including highly sensitive information, is processed in ways that can lead to privacy violations and broader societal impacts.
The article details the various ways in which personal data is fed into GenAI models to train and update them. This process raises serious data privacy concerns, as these models often handle sensitive information that can then be exposed to large audiences. One key issue is the potential for GenAI to inadvertently reveal personal data or to facilitate the profiling of individuals for purposes such as targeted advertising, surveillance, or even discrimination. The authors argue that such profiling can be harmful, as it may spread false or misleading information about individuals, further undermining their ability to control their digital identities. The article also examines how current U.S. and EU data privacy frameworks address the complexities of personal data usage across the GenAI lifecycle. While some protections are in place, the authors argue that more must be done to govern the collection, flow, and use of personal data, particularly in the GenAI context.
For a more detailed exploration of these topics, the full article is available to read here.