Nov 3, 2025 in News, Policy and Positions by DIACC

DIACC AI Consultation Submission to the Federal Government

October 31, 2025 – Canada has the opportunity not only to develop world-class AI capabilities, but also to build an ecosystem where AI innovation and responsible deployment are enabled by a strong foundation of digital trust, identity, authentication, and interoperability. DIACC’s mission is to accelerate the adoption of digital trust by enabling privacy-respecting, secure, interoperable digital trust and identity verification services through the DIACC Pan-Canadian Trust Framework (PCTF).

In this submission, we outline how investments in trust infrastructure, standards, and verification can help deliver four key outcomes: scaling Canadian AI champions, attracting investment, supporting adoption, and fostering responsible, efficient deployment of AI systems.

About DIACC

The Digital ID and Authentication Council of Canada (DIACC) is a non-profit public–private coalition created following the federal Task Force for the Payments System Review. DIACC’s mission is to accelerate the adoption of digital trust by enabling privacy-respecting, secure, and interoperable identity systems.

DIACC is the steward of the Pan-Canadian Trust Framework (PCTF)™ — a public- and private-sector, industry-developed, standards-based, technology-neutral framework designed to enable scalable, certifiable digital trust infrastructure that meets the needs of governments, businesses, and individuals.

The DIACC PCTF has been developed in collaboration with experts from federal, provincial, and territorial governments as well as industry and civil society. It supports verifiable credentials, authentication services, fraud prevention, and information integrity across the Canadian digital economy.

Scaling Canadian AI champions and attracting investment

A major barrier for Canadian AI firms is not solely algorithmic innovation, but the ability to build scalable, trusted solutions that can be easily integrated with government and industry systems — particularly in regulated sectors. To scale, Canadian AI companies must demonstrate trustworthiness, security, privacy compliance, identity/credential verification, and interoperability — all of which raise costs and complexity when the underlying infrastructure is fragmented or weak.

Further, investors increasingly look for ventures that combine technical sophistication with strong risk management, data provenance, identity assurance, and governance frameworks; Canada can differentiate itself by emphasizing trusted AI ecosystems.

Recommendations:

  • Recognize identity, authentication, verification and trust-framework services (e.g., the DIACC PCTF) as critical infrastructure to underpin secure and trustworthy AI ecosystem scaling — and include funding streams, procurement support and regulatory recognition accordingly.
  • Introduce targeted incentives (grants/tax credits) for Canadian AI firms that embed standards-based verifiable credentials, identity proofing and interoperability from day one — thereby lowering investor risk and improving export readiness.
  • Foster public-private collaborations where government platforms adopt standards-based digital credentials (for authentication, identity verification, data-sharing) and invite Canadian AI firms to build on those platforms — this creates domestic anchor opportunities and global reference cases.
  • Promote and fund initiatives that allow Canadian AI firms to "export trust" by aligning Canada’s trust-framework credentials with international equivalents (e.g., UK identity frameworks), so that Canadian-built AI solutions come with built-in identity/credential assurance for global markets.

Enabling adoption of AI across industry and government

Adoption by industry and government is facilitated when the infrastructure for authenticating, verifying identity, sharing data, and managing credentials is streamlined and standards-based. AI solutions deployed in real-world workflows often hinge on knowing who is interacting, what credentials they hold, and which data sources are valid — not just on the AI model itself.

Fragmentation in identity verification, digital credentials and interoperability across jurisdictions (federal/provincial/territorial) also increases friction, slows procurement and reduces the number of “ready” integration points for AI vendors.

Recommendations:

  • Deploy a reusable digital credential/single sign-on system for government services (federal, provincial, municipal) modelled on widely used private-sector login tools. This makes it easier for government agencies and vendors (including Canadian AI firms) to plug in.
  • Encourage government procurement frameworks to demand standards-based trust services (identity proofing, verifiable credentials) as part of AI solutions — thereby embedding adoption readiness from the procurement side.
  • Provide standardized capability services from the public and private sectors (identity/credential verification, verifiable data sources, API hubs) that AI firms can access in a privacy-respecting, consent-based way — rather than each firm reinventing them — reducing cost and time-to-market.
  • Support industry-government collaborations in regulated sectors (e.g., health and finance) where trust and identity verification matter first — by creating pilot environments that leverage trustworthy identity and credentials as the foundation for AI deployment.

Building safe, reliable and trustworthy AI systems, and strengthening public trust

Public trust in AI is undermined when the authenticity of interactions, data, and verified identities cannot be reliably determined — for example, synthetic identities, manipulated documents, fraud-enabled onboarding, and unverified credentials all erode trust and impede safe AI deployment.

Identity assurance, verifiable credentials, and trustworthy provenance of data and interactions are vital to enable AI in environments where safety, ethics, regulation, and accountability matter (e.g., financial decisions, cross-border labour credentials).

A standards-based trust framework such as DIACC’s PCTF can support traceability, transparency and audit capability in AI workflows, making systems safer, more explainable, and more investable.

Recommendations:

  • Fund the adoption and certification of privacy-respecting, standards-based identity, verification, and credential-issuance systems (e.g., the DIACC PCTF) across sectors that will use AI.
  • Recognize identity verification, credentialing and data provenance as core components of AI governance frameworks (not just “nice to have” add-ons), and include them in AI risk-assessment, certification and procurement guidance.
  • Invest in research and development of identity and credentialing tools that are specifically tailored for AI use cases (e.g., verifying data source authenticity).

Building enabling infrastructure, including data, connectivity and skills

While data and connectivity are widely recognized as enablers of AI, the infrastructure of trust is equally critical — identity frameworks, verifiable credentials, authentication services, and certification of trust services — without which data sharing, inter-jurisdictional collaboration, and large-scale deployment face bottlenecks.

Digital sovereignty is also critical. Canada must ensure that infrastructure (cloud, data centres, identity/trust services) aligns with domestic values, jurisdictional control and regulatory frameworks in order to attract both domestic and foreign investment that values provenance and security.

Recommendations:

  • Invest in Canadian-based trust infrastructure, including domestic cloud and data centres, specifically for identity/credential/trust-services, to support AI readiness, digital sovereignty and economic resilience (as previously recommended by DIACC).
  • Ensure that interoperability standards for identity, credentials, and trust services are integrated into AI infrastructure planning — enabling cross-sector and cross-jurisdiction data flows, credential reuse, and reduced duplication of onboarding/verification.
  • Support development of shared digital identity and credential hubs, which can serve as infrastructure building blocks for AI-enabled systems, enabling smaller firms or remote/Indigenous communities to access AI infrastructure.
  • Link infrastructure investment to skills and operational readiness, and include training programs for identity/trust-service management, credential issuance and verification, and interoperable system design, ensuring the human infrastructure is aligned with the technical.

Conclusion

Scaling Canada’s AI champions, attracting investment, accelerating adoption, and building safe and trusted AI systems all rest on a foundation of digital trust, verifiable identity, credentialing and interoperability. By recognizing and investing in trust infrastructure as a core enabler alongside data and connectivity, Canada can create a differentiated and competitive AI ecosystem.

DIACC welcomes further collaboration with federal partners and key stakeholders to implement standards-based trust frameworks, support interoperable credentialing and enable Canada’s AI ecosystem to flourish on the global stage.

Thank you once again for the opportunity to provide this input.

Joni Brennan
President, DIACC
