Rethinking Consent in the Age of AI

As artificial intelligence (AI) continues to evolve and integrate into our daily lives, the ethical and legal implications are drawing increasing scrutiny. From generative AI tools to predictive analytics, AI systems now power mainstream products and services from healthcare to retail. One question that now demands attention is how consent should work in this new environment, specifically how individuals give permission for their data to be used in training and operating AI systems.

Consent has long been a foundational principle of data protection law. Under the General Data Protection Regulation (GDPR) and similar frameworks, consent ensures that individuals retain control over their personal information. It fosters trust between users and organisations and is one of the clearest signals of respect for individual rights. However, the application of consent in the context of AI presents unique challenges.

Why consent is complicated

AI systems require vast amounts of data to function effectively. Yet much of this information is collected without individuals’ knowledge. Instead, it often comes from diverse and sometimes opaque sources: social media, online purchases, mobile apps, or public records. This raises questions about transparency and adequacy. Can consent really be considered informed if people are unaware their data is part of an AI training set? Can it be considered specific if the future uses of that data are impossible to predict?

Adding to the complexity, a single, standardised consent mechanism cannot adequately serve the diversity of AI use cases. Just as privacy training for employees must be tailored to the risks different teams face (marketing teams managing cookies face different risks than product teams analysing behavioural data), consent should be tailored to the ways AI is applied.

Nuance is essential. It means moving away from static, generic disclosures and toward contextual systems that reflect how data is processed in real time, especially as AI consent requirements overlap with existing privacy regulations.

Consent must be dynamic

Unlike traditional software, AI models are dynamic. They learn and adapt as new data flows in. An algorithm designed to improve shopping recommendations might later be applied to predictive analytics in a new domain. This raises a consent problem: the permission originally given by a user may not extend to a secondary use.

To address this, organisations must treat consent not as a one-time checkbox but as a continuous relationship. As AI systems evolve, so too must the processes for communicating with users and securing their ongoing agreement.

The solution lies in moving from a mindset of “permission” to one of “partnership.” This requires a proactive and transparent approach: one that clearly communicates how data will be used, ensures that consent is specific and informed, and allows individuals to withdraw their consent easily at any time.

Achieving this means embedding privacy into the very foundation of how AI systems are built and maintained. Practices like Privacy by Design, regular Data Protection Impact Assessments (DPIAs), and appointing privacy champions within teams should become the norm. DPIAs can highlight when a new AI feature materially changes the use of personal data, triggering re-consent or added safeguards. Privacy champions can help ensure practices stay aligned with both regulation and user expectations.

How technology can help

Thankfully, technology can help. Consent management platforms give organisations the ability to manage user preferences dynamically, ensuring that changes are captured and respected in real time. AI auditing tools can monitor how data is used, track model evolution, and flag when consent boundaries risk being crossed.

For example, a consent management system might automatically prompt users to review their permissions when a model begins applying their data to a new purpose. Similarly, an auditing tool might reveal “model drift,” showing when an AI system starts producing outputs beyond its original scope. These mechanisms help ensure that consent is treated as part of a living, responsive process rather than a one-off transaction.
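To make the idea concrete, here is a minimal Python sketch of such a purpose-bound consent check. It is purely illustrative: the names (ConsentRecord, requires_reconsent) and the purpose labels are hypothetical, not the API of any real consent management platform.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only; not any real platform's API.

@dataclass
class ConsentRecord:
    """The purposes a user has explicitly agreed to."""
    user_id: str
    consented_purposes: set[str] = field(default_factory=set)

def requires_reconsent(record: ConsentRecord, model_purpose: str) -> bool:
    """Flag when a model applies the user's data to a purpose
    outside the scope of the original consent."""
    return model_purpose not in record.consented_purposes

# Consent was originally given for shopping recommendations only.
record = ConsentRecord("user-123", {"shopping_recommendations"})

# The model is later repurposed for predictive analytics in a new domain.
if requires_reconsent(record, "predictive_analytics"):
    # A real system would queue a prompt asking the user to review
    # their permissions before this processing proceeds.
    print(f"Re-consent needed from {record.user_id} for 'predictive_analytics'")
```

An auditing tool watching for model drift would supply the other half of the loop: detecting that a model’s outputs now serve a new purpose and feeding that purpose into a check like this one.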

Consent can lead to competitive advantage

As AI technologies become more deeply ingrained in society, robust and meaningful consent mechanisms are vital. After all, people’s willingness to use AI tools, and by extension the sustainability of AI as a tool for change, depends on trust.

Organisations that view consent as a static legal formality risk regulatory penalties and the erosion of user confidence. In contrast, those that prioritise transparency, user empowerment, and ongoing oversight will build stronger, more durable relationships. They will be seen as leaders in an industry where ethics and compliance increasingly shape competitive advantage.

Consent for a sustainable future

The future of AI demands that organisations rethink how consent is obtained, maintained, and respected. By moving from permission to partnership, companies can shift from seeing consent as an obstacle to seeing it as an opportunity to strengthen user trust.

This transition requires continuous communication, adaptive processes, and technological tools. But above all, it requires a cultural shift: an understanding that individuals are not passive data points but active stakeholders in the AI ecosystem.

Emilie Kuijt, Data Protection Officer, AppsFlyer

Emilie Kuijt is a legal specialist (PhD) with a strong interest in connecting business and policy work. She is currently Data Protection Officer at AppsFlyer.
