When AI Meets Identity

Navigating Opportunity and Responsibility

March 5, 2026

#PrivacyInPracticeCA

Introduction: AI as a Tool and Challenge

Artificial intelligence has transformed identity verification over the past several years. AI-powered systems can detect sophisticated fraud attempts that rule-based systems miss. They can verify identity at a scale that is impractical for human review alone. They can adapt to emerging threats in ways static systems cannot.

But AI also creates privacy considerations that require careful management. AI systems typically require training data, often personal information. They can make decisions that are difficult to explain or challenge. They can exhibit biases that discriminate against particular groups.

Thoughtful organizations treat AI as a powerful tool that requires deliberate governance to serve beneficial purposes while preventing harmful ones.

This article examines AI in Canadian digital trust and identity: the genuine value AI provides, the privacy considerations it creates, and how organizations can implement AI responsibly.

Where AI Adds Genuine Value

AI capabilities have materially improved identity verification. Acknowledging this value is essential for balanced assessment.

Fraud Prevention

AI detects sophisticated attacks that rule-based systems miss. Modern fraud attempts use techniques specifically designed to evade rule-based detection: synthetic identities assembled from real and fabricated elements, documents manipulated to preserve apparent authenticity, and behavioural patterns designed to appear normal.

AI systems trained on large datasets of legitimate and fraudulent activity can identify subtle patterns that distinguish fraudulent transactions from legitimate ones. They can detect anomalies that human reviewers and rule-based systems would miss. This capability genuinely protects individuals and organizations from fraud.

The threat landscape continues to evolve. Attackers use increasingly sophisticated techniques, including their own AI capabilities. Canada’s National Cyber Threat Assessment 2025-2026 concluded that AI technologies are lowering the barriers to entry and enhancing the quality, scale, and precision of malicious cyber activity.[1] Organizations that rely solely on static rules face growing vulnerability to adaptive attacks.

Speed and Scale

AI enables verification volumes that human review cannot match. A large financial institution may need to process millions of identity verification events daily. Human review of each event is economically and practically impossible.

AI systems can process these volumes while maintaining consistent application of verification criteria. They can provide near-instant decisions that support customer expectations for immediate service. Speed and scale enable services that would otherwise be impractical.

Deepfake Defence

AI-powered detection is increasingly essential for identifying synthetic media. Generative AI has made it possible for anyone with a computer to create convincing fake images, videos, and audio. The National Cyber Threat Assessment noted that generative AI tools enable threat actors to create realistic audio and visual content impersonating trusted individuals.[2] Traditional verification approaches that assume the authenticity of documents and biometrics are vulnerable to these attacks.

AI-powered liveness detection can identify synthetic faces. Document analysis can detect AI-generated or manipulated documents. Behavioural analysis can identify patterns inconsistent with genuine user interaction. These capabilities represent essential defences against an emerging threat category.
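As a conceptual sketch only (the signal names, weights, and thresholds below are hypothetical, not any provider's implementation), this multi-signal defence can be illustrated as fusing liveness, document, and behavioural scores, with a floor on each signal so a strong score in one channel cannot mask a clear failure in another:

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Scores in [0, 1]; higher means more likely genuine. Names are illustrative."""
    liveness: float   # face liveness / anti-spoofing model output
    document: float   # document-authenticity model output
    behaviour: float  # behavioural-consistency model output

def fuse_signals(s: VerificationSignals,
                 weights=(0.4, 0.4, 0.2),
                 floor=0.3,
                 threshold=0.7) -> str:
    """Combine independent detector outputs into a single decision.

    The per-signal floor prevents, e.g., a flawless document from
    overriding a failed liveness check.
    """
    # Reject outright if any individual signal falls below the floor.
    if min(s.liveness, s.document, s.behaviour) < floor:
        return "reject"
    score = (weights[0] * s.liveness
             + weights[1] * s.document
             + weights[2] * s.behaviour)
    if score >= threshold:
        return "accept"
    return "refer_to_human_review"  # borderline cases get human oversight

print(fuse_signals(VerificationSignals(liveness=0.9, document=0.85, behaviour=0.6)))
# -> accept
```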

Identity verification providers, including Canadian organizations, are developing these defensive capabilities. The same AI techniques that enable deepfake creation can be applied to deepfake detection. Organizations that invest in these capabilities protect their customers and the broader ecosystem.

Privacy Considerations: What Requires Attention

AI creates specific privacy considerations that responsible organizations must address. These are not reasons to avoid AI, but factors to manage thoughtfully.

Training Data

AI systems learn from data. Identity verification AI typically requires training data that includes personal information, such as real identity documents, biometric samples, and transaction patterns. How this training data is obtained, used, and protected raises important privacy questions.

The OPC’s 2024-2025 survey found that 88% of Canadians are at least somewhat concerned about personal information being used to train AI systems.[3] This concern is well-founded. Training data can be misused, breached, or retained beyond its training purpose. Research has also demonstrated that AI systems can reproduce elements of their training data in unexpected ways. Researchers at Google DeepMind showed in 2023 that adversaries can extract gigabytes of training data from production language models, including models aligned to prevent such outputs.[4]

Privacy-preserving approaches to training are available and maturing:

Federated learning trains models across distributed datasets without centralizing personal information. The AI learns from data that never leaves user devices or originating systems. The European Data Protection Supervisor has identified federated learning as a potentially important technique for privacy-by-design compliance, while noting that it does not eliminate all data protection risks and should be combined with other privacy-enhancing technologies.[5] A minimal sketch of the core idea appears after this list.

Synthetic data provides training datasets that preserve statistical properties without containing real personal information.

Differential privacy adds calibrated mathematical noise to training processes, limiting what can be inferred about any individual record.

Data minimization uses only the minimum training data necessary for the specific purpose.
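To make the first of these techniques concrete, here is a minimal federated-averaging sketch in Python with NumPy. It is a toy illustration of the core idea only: model updates, not raw records, leave each data holder. Production systems layer on secure aggregation and other protections.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data.

    Only the updated weights leave the client; raw records stay local.
    """
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients, each holding its own private dataset (never centralized).
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for round_ in range(20):
    # Each client trains locally, then sends only its weights back.
    local_ws = [local_update(global_w.copy(), X, y) for X, y in clients]
    # The server aggregates by averaging (FedAvg); it never sees the data.
    global_w = np.mean(local_ws, axis=0)

print("aggregated model weights:", global_w)
```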

Adoption of these techniques in the identity verification sector varies. Some organizations have begun incorporating these approaches, while for others, deployment remains aspirational. Organizations selecting AI providers should inquire about their training data practices and prefer providers that demonstrate privacy-preserving approaches.

Bias and Fairness

AI systems can exhibit biases that produce discriminatory outcomes. If training data underrepresents certain demographic groups, the resulting system may perform poorly for those groups. If the training data reflects historical discrimination, the system may perpetuate it.

The FPT Joint Resolution explicitly warned against the increased risk of discrimination in digital identity systems.[6] This warning applies particularly to AI-powered components. The U.S. National Institute of Standards and Technology’s Face Recognition Vendor Test, the most comprehensive scientific evaluation of face recognition performance across demographic variables, found that the majority of algorithms tested exhibited demographic differentials, with some algorithms 10 to 100 times more likely to misidentify a face from certain demographic groups.[7]

Bias in identity verification has concrete consequences. A system that more frequently rejects legitimate users from particular demographic groups can impose real harm: denied access to services, additional friction and delay, and the dignitary harm of being treated differently. Addressing these risks is both a fairness imperative and a business necessity.

Regular bias audits, testing of system performance across demographic groups, and remediation of disparities when found are important components of responsible practice. The OPC’s 2025 Guidance for Processing Biometrics emphasized the importance of accuracy and testing for biometric systems, including testing by individuals or entities with appropriate expertise.[8] Ongoing monitoring is preferable to one-time assessments, recognizing that bias can emerge over time as systems, populations, and attack patterns evolve.

Transparency and Explainability

AI systems can be challenging to explain. Deep learning models may produce accurate results through processes that even their developers cannot fully articulate. This “black box” quality creates challenges for transparency and accountability across the industry. Canada’s federal, provincial, and territorial privacy commissioners have identified explainability as a core requirement, stating that organizations should make AI tools explainable to users and provide users with the opportunity to access and correct their personal information.[9]

When identity verification decisions affect individuals (denying access, triggering additional scrutiny, or flagging fraud), those individuals reasonably want to understand why. The FPT Joint Resolution emphasized that the privacy implications of identity ecosystem design, functions, and information flows should be transparent to all system users.[10] Achieving this transparency with some AI approaches remains an active area of development.

While legitimate business interests in protecting proprietary systems exist, meaningful transparency about verification decisions supports trust. Organizations can provide explanations at appropriate levels of abstraction without revealing security-sensitive implementation details.

Explainable AI techniques are advancing. Organizations should prioritize approaches that balance capability with explainability and prepare for rising regulatory expectations for AI transparency.
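One pragmatic pattern, sketched below with hypothetical signal names and messages, is to map internal model signals to user-facing reason codes: individuals learn why a decision was made at a useful level of abstraction, while security-sensitive internals stay protected.

```python
# Hypothetical mapping from internal model signals to user-facing
# reason codes: explanatory without exposing detection internals.
REASON_CODES = {
    "doc_quality_low": "The document image was unclear; please resubmit a sharper photo.",
    "liveness_failed": "We could not confirm a live capture; please retry the selfie check.",
    "data_mismatch":   "Some submitted details did not match the document provided.",
}

def explain_decision(triggered_signals: list[str]) -> list[str]:
    """Translate internal signals into explanations an applicant can act on."""
    return [REASON_CODES[s] for s in triggered_signals if s in REASON_CODES]

# Unmapped internal features are never surfaced to the user.
print(explain_decision(["liveness_failed", "internal_model_feature_417"]))
```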

Function Creep and Surveillance

AI capabilities developed for identity verification could be repurposed for surveillance. A system designed to verify identity could, without proper safeguards, also be used to track individuals. Biometric recognition capabilities could become monitoring capabilities. Behavioural analysis for fraud detection could be applied to other purposes, such as behavioural profiling.

The FPT Joint Resolution addressed this directly: personal information in an identity ecosystem should not be used for purposes other than assessing and verifying identity, and ecosystems must not allow tracking or tracing of credential use for other purposes.[11]

Preventing function creep requires deliberate architectural and governance controls. Purpose limitation should be enforced technically, not just by policy. Access controls should prevent unauthorized use cases. Audit mechanisms should detect attempted misuse.

Organizations should consider not just what their AI systems do, but what they could do if repurposed. Building constraints into the architecture is more reliable than relying solely on policies to be followed.

Responsible AI Practices: What Good Looks Like

Organizations committed to responsible AI in identity verification can draw on several emerging practices. These practices support both public trust and effective risk management.

Privacy-Preserving Techniques

Federated learning and on-device processing keep personal data distributed rather than centralized. Synthetic data and differential privacy enable training without individual-level exposure. These techniques are increasingly available and represent the direction of responsible practice, though adoption across the identity verification sector is still maturing.

Privacy-preserving AI also reduces organizational risk. Data that is never centralized cannot be breached from central systems. Organizations that adopt these techniques protect themselves while protecting users.

Regular Bias Testing

Bias audits across demographic groups should be conducted regularly, not once at launch. Performance metrics should be disaggregated to identify disparities. Remediation processes should address identified disparities. Where possible, results should be documented and shared to advance industry practice.

Bias testing should be ongoing because bias can emerge over time. Changes in user populations, attack patterns, and system updates can all introduce or exacerbate bias. Continuous monitoring is preferable to one-time audits.
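As a simplified illustration of disaggregated metrics (the group labels, data, and disparity tolerance below are all hypothetical), the following sketch computes per-group false rejection rates for legitimate users and flags groups that diverge from the best-performing group:

```python
from collections import defaultdict

def false_rejection_rates(events):
    """events: (demographic_group, was_legitimate, was_rejected) tuples."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rejections, legitimate total]
    for group, legitimate, rejected in events:
        if legitimate:
            counts[group][1] += 1
            counts[group][0] += int(rejected)
    return {g: rej / total for g, (rej, total) in counts.items() if total}

def flag_disparities(rates, max_ratio=1.5):
    """Flag groups whose FRR exceeds the best group's rate by a set ratio."""
    baseline = min(rates.values())
    return [g for g, r in rates.items() if baseline and r / baseline > max_ratio]

# Synthetic audit data: group_b's legitimate users are rejected more often.
events = [("group_a", True, False)] * 97 + [("group_a", True, True)] * 3 \
       + [("group_b", True, False)] * 92 + [("group_b", True, True)] * 8
rates = false_rejection_rates(events)
print(rates, "->", flag_disparities(rates))  # group_b flagged: 8% vs 3% FRR
```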

Purpose Limitation by Design

AI systems should be architecturally constrained to their intended purposes. Access controls, audit logging, and technical limitations should prevent unauthorized use cases. Systems should be designed so that misuse is difficult, not just prohibited.
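A minimal sketch of purpose limitation by design (the purpose names and audit log below are illustrative, not a reference implementation): every call must declare a permitted purpose and is logged, so repurposing the capability for tracking fails technically rather than merely violating policy.

```python
import datetime

ALLOWED_PURPOSES = {"identity_verification"}  # tracking is simply not in the set
AUDIT_LOG = []

class PurposeViolation(Exception):
    pass

def verify_identity(subject_id: str, purpose: str) -> bool:
    """Gate the capability on a declared, allowed purpose; log every call."""
    AUDIT_LOG.append((datetime.datetime.now(datetime.timezone.utc), subject_id, purpose))
    if purpose not in ALLOWED_PURPOSES:
        raise PurposeViolation(f"purpose '{purpose}' is not permitted")
    return run_verification_checks(subject_id)

def run_verification_checks(subject_id: str) -> bool:
    return True  # stand-in for the actual verification pipeline

verify_identity("user-123", "identity_verification")   # permitted
try:
    verify_identity("user-123", "location_tracking")    # blocked by design
except PurposeViolation as e:
    print(e)
```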

Meaningful Transparency

Organizations should provide clear explanations of how AI is used in verification processes. When decisions affect individuals, appropriate explanations should be available. Organizations should prepare for increasing regulatory expectations around AI explainability.

Human Oversight

AI decisions with significant impact should include appropriate human oversight. This does not mean human review of every transaction, which would defeat the scaling benefits of AI. It means human oversight of edge cases, appeals processes, and system-level performance.
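One common pattern, sketched below with illustrative thresholds, is confidence-based routing: high-confidence decisions are automated, while borderline scores and all appeals are queued for human review, preserving scale without removing oversight.

```python
def route_decision(model_score: float, is_appeal: bool = False) -> str:
    """Route by model confidence; humans handle edge cases and appeals.

    Thresholds are illustrative and would be tuned to measured error rates.
    """
    if is_appeal:
        return "human_review"   # appeals always reach a person
    if model_score >= 0.95:
        return "auto_accept"    # high-confidence pass
    if model_score <= 0.05:
        return "auto_reject"    # high-confidence fail
    return "human_review"       # ambiguous middle band

for score in (0.99, 0.50, 0.02):
    print(score, "->", route_decision(score))
```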

Ongoing Governance

AI systems require ongoing attention, not just initial design. Models can drift as populations and attack patterns change. New capabilities may create new risks. Regular review and updates are essential.
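One widely used drift signal is the population stability index (PSI), which compares the model's score distribution at deployment with recent scores. The sketch below uses synthetic data and rule-of-thumb thresholds for illustration only.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and recent scores.

    Common rule of thumb (illustrative): < 0.1 stable, 0.1-0.25 watch,
    > 0.25 likely drift warranting investigation.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # keep all scores in range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) / division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.7, 0.10, 10_000)  # scores at deployment
recent = rng.normal(0.6, 0.15, 10_000)    # scores after a population shift
print(f"PSI = {population_stability_index(baseline, recent):.3f}")
```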

The Generative AI Challenge

Generative AI deserves particular attention. The same technologies that enable deepfake attacks are transforming the landscape of identity verification threats.

Synthetic Identity at Scale

Generative AI enables anyone to create convincing fake documents, photos, and videos. What previously required specialized skills and expensive tools now requires only access to freely available AI models.

This accessibility dramatically increases the potential scale of synthetic identity attacks. Synthetic identities, assembled from combinations of real and fabricated elements, can now include compelling AI-generated photos and documents. Organizations must assume that attackers have access to generative AI and design defences accordingly.

The threat extends beyond simple document forgery. Generative AI can create increasingly convincing synthetic faces, voices, and documents, challenging traditional verification approaches. The entire verification chain is potentially vulnerable.

Arms Race Dynamics

Generative AI creates an arms race in identity verification. Defensive AI identifies synthetic content; attackers use AI to create more convincing synthetics; defenders develop more sophisticated detection; attackers respond.

This dynamic is not new. Security has always involved the evolution of offence and defence. But generative AI accelerates the cycle and increases the capability available to attackers. Organizations must plan for continuous capability evolution, not static defences. They must monitor the frontier of generative capabilities and update their defences correspondingly.

Detection of AI-generated content, artifact analysis, and multi-modal verification all provide defensive capabilities. Organizations that invest in these capabilities can work to stay ahead of the threat.

Collaborative Defence

“Artificial intelligence is accelerating the digital world at extraordinary speed — strengthening fraud detection, defending against deepfakes, and enabling trusted digital interactions at a scale we’ve never seen before. But as AI becomes more powerful, the responsibility to deploy it thoughtfully grows just as fast. Privacy-protecting AI must become the foundation of digital identity systems, ensuring that innovation does not fuel a generative AI ‘arms race’ that erodes trust.

The real opportunity lies in collaboration. Identity verifiers across the ecosystem have a unique ability to share intelligence, strengthen collective defences, and build secure, transparent, and accountable systems. When done responsibly, digital credentials can unlock safer and more inclusive participation in the digital economy — giving citizens confidence, choice, and convenience while protecting the integrity of high-value transactions across Canada.” 

– CJ Ritchie, Executive Advisor,
Cybersecurity and Government and Public Sector Practice, EY Canada

Identity verification providers across the ecosystem are investing in defences against generative AI. These investments serve not just individual organizations but the broader ecosystem. Shared threat intelligence and collaborative defence improve outcomes for everyone.

DIACC supports information sharing about emerging threats and defensive techniques. A breach at any organization erodes trust in the entire ecosystem, making collective defence a shared interest.

DIACC’s Role and Direction

DIACC is working to support a collaborative governance approach to AI in identity verification.

DIACC’s working groups are exploring opportunities to:

  • Develop best-practice guidance on the use of AI in identity verification, drawing on emerging standards and leading practices from multiple jurisdictions.
  • Create assessment frameworks that organizations can use to evaluate their AI practices, with a view to supporting future certification processes.
  • Monitor emerging developments in AI capability and governance, informing updates to guidance as the landscape evolves.
  • Support information-sharing on AI threats and defences, recognizing that collective security benefits everyone.

Canadian providers have an opportunity to demonstrate responsible AI practices that contribute to evolving international standards. Organizations that invest in responsible AI governance can build trust and differentiate themselves in markets increasingly concerned about AI practices.

The Vision: AI That Serves Privacy

AI capabilities are too valuable for fraud prevention and scalable verification to forgo. The goal is AI that serves privacy rather than undermines it.

This requires deliberate effort. Default AI development practices often conflict with privacy protection. Training data centralization, opaque decision-making, and the potential for function creep must be actively addressed.

But responsible AI is achievable. Privacy-preserving techniques exist. Governance frameworks can constrain misuse. Transparency can support accountability. Organizations that invest in responsible AI demonstrate that capability and ethics can coexist.

DIACC is committed to supporting responsible AI in Canadian digital trust and identity verification. This investment serves everyone’s interests by protecting individuals, managing organizational risk, and building the trust that enables AI benefits to be realized.

Next Week

Article 7 examines Learning from EUDI: International Lessons for Canadian Success
What Canada can learn from the EU Digital Identity Wallet – successes to emulate and challenges to anticipate.

Footnotes

[1] Canadian Centre for Cyber Security, National Cyber Threat Assessment 2025-2026. The assessment states: “AI technologies are almost certainly lowering the barriers to entry and enhancing the quality, scale, and precision of malicious cyber threat activity.”

[2] Canadian Centre for Cyber Security, National Cyber Threat Assessment 2025-2026. The assessment notes that generative AI tools enable cyber threat actors to create realistic audio and visual content impersonating trusted individuals (i.e., deepfakes).

[3] Office of the Privacy Commissioner of Canada, 2024-2025 Public Opinion Research on Privacy Issues. The survey found: “88% are at least somewhat concerned about their personal information being used to train AI systems.” See also: Prioritizing privacy in a data-driven world: 2024-2025 Annual Report to Parliament on the Privacy Act and the Personal Information Protection and Electronic Documents Act.

[4] Nasr, M., Carlini, N., Hayase, J., Jagielski, M., Cooper, A.F., Ippolito, D., Choquette-Choo, C.A., Wallace, E., Tramèr, F., and Lee, K. (2023). “Scalable Extraction of Training Data from (Production) Language Models.” Google DeepMind. Published at ICLR 2025.

[5] European Data Protection Supervisor, TechDispatch #1/2025: Federated Learning, June 2025. The EDPS notes that federated learning “can potentially bring certain advantages from a personal data protection perspective,” but “it should not be taken for granted that FL solves all the problems, as some risks will persist.”

[6] Federal, Provincial and Territorial Privacy Commissioners and Ombuds with Responsibility for Privacy, Joint Resolution on Digital Identity, September 20-21, 2022, St. John’s, Newfoundland and Labrador. The resolution warns that “The benefits of a digital identity ecosystem must not come at unacceptable consequences, such as: … increased risk of discrimination.”

[7] Grother, P., Ngan, M., and Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. NIST Interagency Report 8280. National Institute of Standards and Technology. The study evaluated 189 algorithms from 99 developers using 18.27 million images.

[8] Office of the Privacy Commissioner of Canada, Guidance for Processing Biometrics — for Businesses, August 11, 2025. The guidance addresses accuracy and testing requirements, accountability, and privacy-by-design principles for organizations using biometric technologies.

[9] Federal, Provincial and Territorial Privacy Commissioners, Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies, December 7, 2023.

[10] Federal, Provincial and Territorial Privacy Commissioners and Ombuds with Responsibility for Privacy, Joint Resolution on Digital Identity, September 20-21, 2022, St. John’s, Newfoundland and Labrador. The resolution states: “The privacy implications of identity ecosystem design, functions and information flows should be transparent to all users of the system.”

[11] Federal, Provincial and Territorial Privacy Commissioners and Ombuds with Responsibility for Privacy, Joint Resolution on Digital Identity, September 20-21, 2022, St. John’s, Newfoundland and Labrador. The resolution states: “Personal information in an identity ecosystem should not be used for purposes other than assessing and verifying identity or other authorized purpose(s) necessary to provide the service. Ecosystems must not allow tracking or tracing of credential use for other purposes.”

The Privacy Scorecard

A practical tool for measuring digital identity services against the FPT privacy principles. Assess your organization’s implementation across architecture, policy, user experience, and ecosystem coverage. It is not a compliance checklist or legal advice. Use it to spark conversation, explore unfamiliar concepts, and identify areas worth digging into further.

Access the Privacy Scorecard

Follow the Series