Apr 20, 2026 in News, Papers & Projects by DIACC
Disclaimer
This guide is published by the Digital ID & Authentication Council of Canada (DIACC) for general informational purposes only. It does not constitute legal, regulatory, financial, or professional cybersecurity advice. Organizations should assess recommendations in light of their risk profile, regulatory obligations, operational environment, and existing security posture. DIACC encourages readers to consult qualified professionals before making security investments or policy decisions. References to specific technologies, standards, or product categories are illustrative and do not constitute endorsement of any particular vendor or solution.
Small and medium-sized enterprises face disproportionate exposure to AI fraud. Enterprise-grade biometric injection defence is often cost-prohibitive for organizations with fewer than 500 employees and limited IT budgets. But the most effective defences against deepfake-enabled fraud are procedural, not technological, and many cost nothing to implement.
This guide provides fourteen practical actions that can be deployed within days. Most require only staff time and process change; the heaviest investment is in organizational discipline, not technology.
A note on terminology: While this guide uses “deepfake” as the most accessible term, technical specialists will recognize that the underlying threat is broader. Deepfakes are one attack vector within the category of biometric injection attacks, which include manipulated photos, synthetic videos, and compromised camera inputs. State-of-the-art defences focus on Injection Attack Detection (IAD) across multiple layers. The procedural controls in this guide complement, rather than replace, certified technical detection systems.
The urgency is real. The Canadian Anti-Fraud Centre recorded $638 million in reported fraud losses in 2024, up from $577 million in 2023. Only 5–10% of fraud is reported, suggesting actual losses may be an order of magnitude higher. Equifax Canada data shows synthetic identity fraud in credit applications nearly tripled in a single year.
This guide is written for:
Cost Scale
$ = Staff time and process change only: no technology procurement required
$$ = Modest direct cost: per-user credentials, consultant hours, or policy premium adjustments
$$$ = Significant investment: technology procurement, vendor evaluation, or infrastructure change
Cost: $ (staff time) | Impact: High
For any changes to banking details, high-value invoice payments, or wire instructions, organizations should call back using a pre-existing number already recorded in their CRM or vendor file. The number provided in the email, message, or video call that initiated the request should never be used for verification.
This single control would have prevented the $25.6 million Arup deepfake loss. A finance employee joined a video call where every participant, including the purported CFO, was an AI-generated deepfake. Had the employee called the CFO’s known direct line before authorizing the transfers, the fraud would have collapsed.
Important: When receiving a call-back, verify the caller’s identity before sharing any confidential information. Fraudsters increasingly use vishing (voice phishing) to impersonate legitimate companies. Until strong caller authentication is widely deployed, treat inbound calls with appropriate skepticism and never volunteer sensitive data unless you initiated the call to a number you independently verified.
Implementation: Write a one-paragraph policy. Distribute it to relevant staff. Treat compliance as an operational priority.
Cost: $ (staff time) | Impact: High
Organizations should consider implementing a mandatory delay period – 24 hours is a common benchmark – on any “urgent” request to change payroll deposits, redirect vendor wire instructions, or modify banking details, regardless of who appears to be making the request.
AI-driven fraud relies on urgency. Any control that introduces a deliberate pause into high-risk workflows degrades the attacker’s ability to exploit real-time deception. If a request is legitimate, a short delay will not matter. If it is fraudulent, that delay may be the difference between loss and prevention.
Implementation: Document this as formal policy so employees have organizational backing to resist pressure. The phrase “our policy requires a hold period on all payment changes” removes the individual from the decision and makes social engineering significantly harder.
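The hold-period rule above is procedural, but teams that track payment-change requests in software can enforce it mechanically. A minimal sketch, assuming a 24-hour window (the function names and the window length are illustrative, not from any specific product):

```python
from datetime import datetime, timedelta

# Illustrative hold period; set this to match your documented policy.
HOLD_PERIOD = timedelta(hours=24)

def may_execute(requested_at: datetime, now: datetime) -> bool:
    """A payment-detail change may only be executed once the hold has expired."""
    return now - requested_at >= HOLD_PERIOD

# Example: a change requested two hours ago is still on hold.
requested = datetime(2026, 4, 20, 9, 0)
print(may_execute(requested, requested + timedelta(hours=2)))   # False
print(may_execute(requested, requested + timedelta(hours=25)))  # True
```

The point of encoding the rule is that no individual can waive it under pressure; the system, not the employee, says no.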
Cost: $ (staff time) | Impact: High
No single individual, regardless of seniority, should have unilateral authority to initiate a wire transfer or payment above a defined threshold. The appropriate threshold will vary by organization, but implementing dual authorization for transactions above a level that reflects your normal operating pattern is a well-established treasury management control.
Implementation: Review your banking platform’s authorization settings and configure dual-signatory requirements for transactions above your chosen threshold. If your current platform does not support this capability, this should be a factor in your next provider evaluation.
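Most banking platforms implement dual authorization natively, but the logic itself is simple. A hedged sketch of the control (the threshold value and role names are placeholders for illustration):

```python
# Illustrative dual-authorization check: transfers above the threshold
# require two distinct approvers before release.
THRESHOLD = 10_000  # placeholder; set to reflect your normal operating pattern

def release_allowed(amount: float, approvers: set[str]) -> bool:
    """Return True only if enough distinct approvers have signed off."""
    required = 1 if amount <= THRESHOLD else 2
    return len(approvers) >= required

print(release_allowed(5_000, {"cfo"}))              # True
print(release_allowed(50_000, {"cfo"}))             # False: second approver needed
print(release_allowed(50_000, {"cfo", "ap_lead"}))  # True
```

Using a set of approver identities (rather than a count) matters: the same person approving twice must not satisfy the requirement.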
Cost: $ (staff time) | Impact: High
For long-term client and vendor relationships, particularly in legal, accounting, and real estate, organizations should establish a non-digital “safe word,” authentication phrase, or challenge-response protocol during initial in-person or high-trust onboarding. This key should then be required for all future remote identity verifications involving sensitive instructions.
This simple protocol defeats any deepfake that has not compromised the pre-shared secret. A threat actor can clone a client’s voice, generate a convincing video of the client’s face, and produce forged documents, but they cannot know a phrase exchanged privately in a meeting room.
Selecting strong secrets: Choose authentication phrases that cannot be guessed through social engineering: avoid children’s names, birthdays, and publicly available information. Use random phrases or inside references known only to the parties involved. Never transmit the secret digitally after initial establishment, and refresh it periodically (e.g., annually) through secure channels.
Implementation: At your next in-person meeting with key clients and vendors, agree on a challenge-response phrase. Record it securely in your relationship management system (not in email). Apply it consistently.
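If you want a phrase that is genuinely unguessable rather than an inside reference, generate it with a cryptographically secure random source. A minimal sketch using Python’s standard `secrets` module (the short wordlist here is illustrative; in practice use a large published wordlist such as a diceware list):

```python
import secrets

# Illustrative wordlist only; a real deployment should use a large,
# well-known wordlist so phrases have enough entropy.
WORDS = ["harbour", "tundra", "quartz", "lantern", "meadow", "cobalt",
         "orchid", "granite", "falcon", "ember", "willow", "basalt"]

def make_shared_phrase(n_words: int = 4) -> str:
    """Build a random challenge phrase from a CSPRNG, not guessable personal facts."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

phrase = make_shared_phrase()
print(phrase)  # e.g. "cobalt willow harbour ember"
```

Generate the phrase during the in-person meeting, record it in the restricted system of record, and never send it over the channels an attacker might later impersonate.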
Cost: $$ (per-user credential cost plus IT configuration) | Impact: High
Standard SMS and email-based multi-factor authentication is increasingly vulnerable to AI-driven social engineering, SIM swap attacks, and real-time phishing proxies. Organizations should evaluate phishing-resistant authentication methods, such as FIDO2-based security keys, passkeys, or other hardware-backed credentials, that cannot be bypassed through deepfake social engineering, intercepted via SIM swap, or defeated by AI-generated phishing.
The key capability to prioritize is phishing resistance: authentication that is cryptographically bound to the legitimate service and cannot be replayed or intercepted by an attacker, even one using sophisticated real-time social engineering.
Verifiable Credentials (VCs) offer additional advantages. They are phishing-resistant and carry identity attributes that can support richer verification workflows. As the VC ecosystem matures and more services support credential presentation, organizations should monitor PCTF-certified VC solutions as a next-generation option.
Implementation: Identify employees with access to financial systems, client data, or identity verification workflows. Evaluate phishing-resistant authentication options compatible with your existing platforms (most major cloud productivity suites and banking portals now support multiple phishing-resistant methods). Budget for both primary and backup credentials per user.
Cost: $ (IT staff or consultant time) | Impact: Medium-High
Generative AI has made phishing emails grammatically perfect and contextually convincing. Human-layer detection is no longer a reliable primary defence. Email authentication protocols, specifically SPF, DKIM, and DMARC, help ensure that spoofed emails purporting to come from your organization’s domain are flagged or rejected by receiving mail servers.
Implementation: Work with your IT provider or email hosting service to configure SPF, DKIM, and DMARC records for your email domain. A phased approach is recommended: start with DMARC in monitoring mode to verify legitimate mail flows, then move to enforcement. Most major email platforms provide configuration guidance at no cost.
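As an illustrative sketch, the three records typically look like the following in DNS zone-file notation. Everything here is a placeholder: the domain, the DKIM selector name, the mail-provider include, the truncated public key, and the reporting address are all supplied by your actual email provider.

```
; SPF: authorizes your provider's servers to send mail for your domain
example.com.                       IN TXT "v=spf1 include:_spf.mailprovider.example -all"

; DKIM: publishes the public key your provider signs outbound mail with
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"

; DMARC: start in monitoring mode (p=none), review reports, then tighten
_dmarc.example.com.                IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Moving the DMARC policy from `p=none` to `p=quarantine` and eventually `p=reject` is the enforcement step; do it only after the monitoring reports confirm all legitimate senders are covered.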
Cost: $ to $$ (travel, scheduling, delayed onboarding) | Impact: Medium
Wherever geographically feasible, organizations should consider conducting initial identity verification with a new client or high-value vendor in person. This session can establish a trusted reference baseline: a known phone number, a pre-shared authentication protocol (Action 4), a verified signature, and direct personal contact without a screen.
This baseline anchors all future remote interactions. If someone contacts you claiming to be this person but cannot produce the pre-shared key, or if the phone number doesn’t match, you have a reliable signal that something may be wrong.
Implementation: Build this into your client and vendor onboarding process. For existing high-value relationships where onboarding was entirely remote, consider scheduling an in-person session, or an independently verified equivalent, when practicable.
Cost: $ (review) to $$ (policy upgrade or endorsement) | Impact: Critical for loss recovery
Many standard cyber insurance policies exclude losses arising from an employee’s “voluntary” transfer, even when a deepfake impersonation induced that action. This coverage gap means organizations may be uninsured against the most likely AI fraud scenario they face.
Demonstrating due diligence through security controls such as those outlined in this guide may reduce premiums or improve coverage terms. Insurers increasingly use account classification to assess risk; organizations without basic procedural defences may face higher costs or exclusions.
Implementation: Request your insurer’s specific policy language on social engineering coverage, funds transfer fraud, and AI-enabled impersonation scenarios. Verify whether deepfake-induced “voluntary” transfers are explicitly covered. If they are not, discuss endorsement options with your broker or evaluate alternative providers. The premium difference is typically modest relative to the exposure.
Cost: $ (policy + leadership alignment) | Impact: High
Organizations should explicitly define which requests can never be approved via email, video call, or messaging alone, regardless of who appears to be making the request.
AI fraud succeeds when perceived seniority overrides process. Deepfake impersonation attacks regularly exploit “I’m the CEO, just do it” scenarios. A documented “no exceptions” rule removes ambiguity and protects employees from pressure.
Implementation:
Create a one-page authority matrix stating:
Reinforce this rule verbally in leadership meetings, so staff know it is genuinely supported.
Cost: $ (process documentation) | Impact: High
The first occurrence of any sensitive action (first wire to a vendor, first payroll change for an employee, first change of authorized signatory) carries disproportionate risk.
AI fraud actors often exploit “first-time” workflows because organizations lack historical patterns to spot anomalies.
Implementation: Require a second communication channel, not just a second approver, for all first-time sensitive actions (e.g., phone call plus system approval, or in-person plus platform approval). Document this as a permanent onboarding rule, not a discretionary check.
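The distinction between "two approvers" and "two channels" is easy to encode. A hedged sketch, where the channel names are illustrative:

```python
# Illustrative check: a first-time sensitive action needs confirmations
# from two DISTINCT channels (e.g. "phone" + "system"), not merely two people.
def confirmed(is_first_time: bool, channels: set[str]) -> bool:
    """Return True when enough distinct confirmation channels were used."""
    required = 2 if is_first_time else 1
    return len(channels) >= required

print(confirmed(True, {"system"}))           # False: a second channel is required
print(confirmed(True, {"system", "phone"}))  # True
print(confirmed(False, {"system"}))          # True: routine repeat action
```

Because `channels` is a set, two approvals arriving over the same compromised channel still count as one, which is exactly the property this control is meant to guarantee.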
Cost: $ (training materials) | Impact: Medium-High
Employees often comply with fraudulent requests not because they are convinced, but because they do not know how to safely refuse or delay a request that appears to come from someone senior or carries manufactured urgency.
Providing pre-approved refusal language gives staff a powerful defensive tool against social engineering.
Implementation:
Distribute 3–5 approved phrases such as:
Reinforce that using this language is compliance, not obstruction.
Cost: $ (CRM configuration) | Impact: Medium
Organizations often store “verified” phone numbers and contacts across emails, spreadsheets, and ad hoc notes, making it easier for fraudsters to inject false information over time.
Implementation:
Create a single, restricted source of truth for:
This can be a secure CRM field or internal system, but not email or chat. Restrict editing rights and audit changes quarterly.
Cost: $ (staff time) | Impact: Medium
Many staff still assume video = real. A short, focused briefing on current AI impersonation capabilities materially improves skepticism without inducing fear.
Implementation:
Run a 15-minute internal briefing covering:
This should be framed as permission to slow down, not an added burden.
Cost: $ (tracking process) | Impact: Medium
Near-misses (attempted fraud that didn’t succeed) often go unrecorded, wasting valuable intelligence.
Implementation:
Create a lightweight internal log for:
Review quarterly to detect patterns and adjust policies. This creates learning without blame.
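A near-miss log does not need special tooling; a consistent schema is what makes quarterly pattern review possible. A minimal sketch using Python’s standard `csv` module (the field names and the sample entry are illustrative, not a prescribed schema):

```python
import csv
import io
from datetime import date

# Illustrative near-miss log schema; adapt the fields to your environment.
FIELDS = ["date", "channel", "impersonated_party", "request_type", "outcome"]

buf = io.StringIO()  # stands in for a real log file
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "date": date(2026, 4, 20).isoformat(),
    "channel": "video call",
    "impersonated_party": "CFO",
    "request_type": "wire instruction change",
    "outcome": "blocked by call-back verification",
})
print(buf.getvalue())
```

Even a spreadsheet with these five columns, reviewed quarterly, turns attempted fraud into reusable intelligence about which channels and pretexts attackers favour.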
This guide addresses immediate, low-cost procedural defences. It is not a substitute for a comprehensive security architecture. Organizations should also evaluate:
During standard video calls, some techniques may reveal a real-time deepfake, such as asking the person to pass a physical object in front of their face or to turn their head rapidly to a full profile view. These checks can sometimes cause visible artifacts in the deepfake overlay.
However, the reliability of visual detection is degrading as the underlying technology improves. Research indicates that face-swap tools are specifically adapting to handle occlusion and rapid motion. Studies have found that only a fraction of people can correctly identify all deepfakes when presented with a mix of real and synthetic content.
Professional-grade detection: Visual inspection alone is insufficient for high-stakes decisions. Certified Injection Attack Detection solutions employ multiple layers of defence:
Visual inspection should be treated as a supplementary signal, not a primary control. Organizations that rely solely on video calls for high-value decisions should plan to evaluate PCTF-certified liveness detection and injection attack defence capabilities as budgets and operational needs allow.
If this guide is useful to your organization, three steps follow:
DIACC – Where Digital Trust Means Business
contact@diacc.ca | diacc.ca