If you have been in business or part of the working world during the past five years, you have probably run into GDPR. This European law outlines rules and boundaries around data collection, and it likely disrupted your policies, marketing, websites, CRM and external communication. The journey felt endless, mostly because it was unexpected and wide-reaching; thankfully, we’re done with all that!

But don’t relax just yet: GDPR is due for a reboot.

GDPR took effect in 2018. Back then, we were all still living in the quaint, easy world where data collection was mostly about cookies, getting off unwanted mailing lists and freedom from ad profiling. The law forced companies to explain how personal data was collected and used. We still see its effect in every “agree” pop-up we swat away when visiting a website.

AI has transformed the entire data landscape in the past three years.

AI models “learn” you: Today’s AI systems don’t just collect and store your data; they absorb it, remix it and encode its patterns. They don’t just remember what you have done in the past; they are designed to predict your future behavior from the patterns they find. GDPR regulates only the use of the data itself, not what happens when your behavior and artifacts become part of a model’s internal weights.

Consent doesn’t matter anymore: You can refuse all the cookies in the world, and AI can still easily guess your gender, income, location, age, state of mental and physical health and political leanings from subtle patterns. GDPR never imagined a world in which data is inferred rather than collected.

Data is multimodal: The current law addresses biometrics like fingerprints and face scans. But to AI, everything is biometric: our walking patterns, our emotional signals during calls, the predictable shopping behavior triggered by late-night alcohol and ice-cream consumption. In our daily lives, Humans give off much more data than the addresses and emails we type into an order form, and AI consumes it all: text, images, voice, video, location, emotion, biometric signals and “vibes”.

Synthetic data is built from real people: As AI begins to run out of “real” Human data, it has been engineered to generate its own: fake faces that look like you but are not you, fake voices that sound like you, assembled from hours of Fireflies recordings, and fake opinions that trace back to your behavior. There is currently no clause covering synthetic replicas and model-generated profiles.

Universal purpose: GDPR rules govern “data use for the stated purpose.” But “purpose” today is an entire usage universe as AI translates, recommends, predicts and analyzes. The law focuses on company behavior but not on open-source weights. A model trained anywhere in the world on EU data, outside GDPR jurisdiction, can be deployed in EU markets with no accountability.

Post-training data deletion is basically impossible: GDPR promises the “right to be forgotten”. But with AI models released as pre-trained “black boxes”, deleting a person from a training set is nearly impossible. There is no way to find the original source and link it to a single Human artifact.

AI agents and autonomous workflows: The law treats data as static and unchanging. But AI agents increasingly act autonomously and, thanks to edge computing, process data on the fly, long before it hits a data center. They fetch data, transform it, combine it, feed it to APIs and act on it without leaving a footprint.

If Europe wants to continue to champion user rights, there will need to be an AI version of GDPR for a world with autonomous machines that never sleep. Here’s an idea of what we can expect, and how to prepare your business now.

Explicit rules for AI training on personal data: Expect clarified rules on when training an AI model (even a Small Language Model) on personal data counts as processing under the law. As you train your in-house models now, document exactly which datasets feed which models, and update that documentation with every new parameter you add. In your processes, separate training, fine-tuning and analytics: just saying “it’s R&D” will probably not be sufficient.
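What “document which datasets feed which models” could look like in practice is still an open question; here is a minimal sketch of an in-house training registry, with all names and fields purely illustrative assumptions, not drawn from any regulation:

```python
# Hypothetical sketch of a dataset-to-model training registry.
# All names and fields are illustrative, not taken from any regulation.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    model_name: str              # e.g. an in-house Small Language Model
    purpose: str                 # "training", "fine-tuning" or "analytics"
    datasets: list[str]          # every dataset that fed this model version
    contains_personal_data: bool
    legal_basis: str             # e.g. "consent", "legitimate interest"
    recorded_on: date = field(default_factory=date.today)

registry: list[TrainingRecord] = []

def log_training_run(record: TrainingRecord) -> None:
    """Append a run so every model version stays traceable to its data."""
    registry.append(record)

# Example: a fine-tuning run is recorded separately from analytics use.
log_training_run(TrainingRecord(
    model_name="support-slm-v3",
    purpose="fine-tuning",
    datasets=["crm_tickets_2024", "public_faq"],
    contains_personal_data=True,
    legal_basis="legitimate interest",
))
```

Even a lightweight record like this makes the training/fine-tuning/analytics split explicit instead of hiding it under “R&D”.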

Stronger rights around AI profiling and inference: We will likely see new guidance covering inferred data, not just data that is explicitly collected. This will specifically target health, sentiment, emotion, sexuality, political views, gender, location, income and more. There may be a special category for sensitive inferences, even if the original input was “normal”. Map what your models are inferring, not just what you collect. Add a line to your DPIAs[1] that explicitly refers to “sensitive inferences” and the ways you block or minimize them.
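As a starting point for that mapping, here is a minimal sketch of a sensitive-inference inventory; the category names and the model-to-inference mapping are illustrative assumptions you would fill in by hand during an audit:

```python
# Hypothetical sketch of a "sensitive inference" inventory for a DPIA.
# Category names and the model-to-inference mapping are illustrative.
SENSITIVE_CATEGORIES = {
    "health", "sentiment", "emotion", "sexuality",
    "political_views", "gender", "location", "income",
}

# Filled in by hand during the audit: what each model actually infers,
# regardless of what it was fed as input.
model_inferences = {
    "churn-predictor": {"income", "location", "purchase_frequency"},
    "support-router": {"sentiment", "language"},
}

for model, inferred in model_inferences.items():
    flagged = inferred & SENSITIVE_CATEGORIES
    if flagged:
        print(f"{model}: document sensitive inferences {sorted(flagged)} "
              f"and how you block or minimize them")
```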

Stronger guardrails around children and AI: There will probably be extra restrictions for systems that target and interact with minors, including profiling, nudging, recommenders and education tools. Age bracketing is one way of preventing adults from communicating directly with children in any digital space.

The recent Roblox lawsuit is forcing the company to digitally silo each user in a variety of ways; it had to explain the 23,000 incidents of potentially harmful content it flagged to the National Center for Missing and Exploited Children in 2025.[i] Treat “may be under 18” as high risk by default. Turn off profiling and personalization for minors unless you have a rock-solid reason to keep them on.

Model unlearning requirements: You may need to show that a person can remove their data from training sets when they exercise their right to be forgotten and refuse to be studied or have their behavior tracked. Keep training data traceable and versioned rather than in one giant bucket. Before you implement your agents via external parties, choose tools and vendors that already offer data deletion or unlearning features.
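“Traceable and versioned rather than one giant bucket” can be as simple as sharding training data by source and keeping a manifest of which data subjects appear in which shard; the layout below is a minimal sketch under that assumption, with all file and user names invented:

```python
# Hypothetical sketch: training data kept in per-source shards, with a
# manifest mapping each shard to the data subjects it contains. A
# "delete me" request then maps to concrete files, not one giant bucket.
manifest = {
    "shard_001.jsonl": {"user_17", "user_42"},
    "shard_002.jsonl": {"user_99"},
}

def shards_to_drop(subject_id: str) -> list[str]:
    """Shards containing this subject: exclude and retrain, or unlearn."""
    return [shard for shard, subjects in manifest.items()
            if subject_id in subjects]

print(shards_to_drop("user_42"))   # -> ['shard_001.jsonl']
```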

We still have time. We all know how slowly the wheels in Brussels turn; the new law is not rolling out tomorrow, but enforcement, guidance and small targeted changes are going to push everyone toward the same thing: know what your AI is doing, with whose data, for what purpose, with what risk, and with what fallback when someone says “delete me” or “explain this.” Start building that discipline now, and when the new rules drop, you will be adjusting, not panicking.

The time to think about GDPR 2.0 is before you implement your MCPs and AI tracking agents; it will save you and everyone in the organization a pounding migraine.

Need help with AI Integration?

Reach out to me for advice – I have a few nice tricks up my sleeve to help guide you on your way, as well as a few “insiders’ links” I can share to get you that free trial version you need to get started.


No eyeballs to read or watch? Just listen.

Working Humans is a bi-monthly podcast focusing on the AI and Human connection at work. Available on Apple and Spotify.

About Fiona Passantino


Fiona helps empower working Humans with AI integration, leadership and communication, maximizing connection, engagement and creativity for more joy and inspiration in the workplace. A passionate keynote speaker, trainer, facilitator and coach, she is a prolific content producer, host of the podcast “Working Humans” and award-winning author of the “Comic Books for Executives” series. Her latest book is “The AI-Powered Professional”.