Author: Mélissa M’Raidi-Kechichian. Additional contributions made by members of DIACC’s Outreach Expert Committee.
Digital identity aims to make our lives easier: it lets us avoid long waiting lines to prove who we are and access online spaces securely while safeguarding our personal data. It also enables organizations to establish their online identity and protect users against online threats. Falling under the umbrella of the broad tech ecosystem, digital identity is evolving along with the responsible tech movement, which is itself growing and already shaping Artificial Intelligence (AI) governance, as seen in the principles of the UNESCO Recommendation on the Ethics of Artificial Intelligence and the EU AI Act. According to All Tech Is Human, a US-based think tank dedicated to growing the responsible tech ecosystem, responsible tech means ensuring that technology is aligned with the public interest, reducing the harms of technology, and promoting diversity in the field. While the responsible tech movement may be less a framework than a school of thought, it has led many practitioners to question what responsible tech practices mean for users’ data privacy.
Along these lines, a growing number of citizens (i.e., users) became aware of how precious and sensitive their personal data is and how it is being used, for better (easier and faster access to bureaucracy) and for worse (the centralization of individual identifiers in insecure and opaque systems, data shared without consent with data brokers, or fuelling Weapons of Math Destruction*, for instance), especially thanks to the work of scholars and data activists. The latter shed light on the social implications of AI in an accessible, plain-language way. Scholars and activists explained the risks of technology and the lack of regulation over personal data in plain English and with relatable use cases, through documentaries and workshops.
The concept of “good” digital identity aligns with DIACC’s mission to provide the framework for a robust, secure, scalable, and privacy-enhancing digital identification and authentication ecosystem for Canadians, but it has yet to be defined and standardized by the industry. Building on Kim Cameron’s legacy of digital identity work, best known through his Laws of Identity, this blog explores the possibilities and meanings behind the concept of “good digital identity”.
“Good” is an adjective whose meaning is subjective, depending on the point of view of a practitioner (or user, for that matter). Since the term has not yet been defined and standardized by the digital identity industry, we interpret “good” as describing a digital identity practice that is beneficial for most and does not harm or disadvantage any users. Before getting into the principles of good digital identity, note that “consent” is a recurring term in this blog piece. Here, consent refers to a user’s clear and informed decision to grant a third party access to their data. This consent should not be transferred from one third party to another without the user’s knowledge and consent.
The first principle of good digital identity is privacy, which draws together the first three of Kim Cameron’s Laws of Identity. Users should have control and consent over how their data is used, and data sharing should be limited to the minimum required to access a service, depending on the context of use.
Interoperability, which echoes Cameron’s fifth law, should be a core component of good digital identity. Users should be able to choose between different providers of digital identity solutions, and these distinct entities should be able to trust each other within a common framework.
Digital identity should have a social and collective benefit rather than a solely individual benefit. Therefore, digital identity must be inclusive and serve traditionally marginalized communities that have been excoded** rather than encoded in digital “solutions” in the past. That means moving consciously and with care rather than moving fast and breaking things, until or alongside a legal framework that guarantees protection of users’ rights, integrity, and privacy.
Digital identity should be optional, offered as an alternative to traditional identity verification, until its widespread use is proven efficient, safe, and secure for every user, so that traditional identity verification gradually falls into disuse on its own. It is important to recognize that many people are worried about digital identity, what it means for them, and what it implies. In some cases, this stems from personal experiences as minorities (in the Canadian context, an example would be Indigenous communities specifically, who have historically been victims of harms committed by public entities); in others, it stems from disinformation and unfounded narratives, or from a lack of accessible public resources about digital identity. One might argue that maintaining both processes is an avoidable financial cost and that traditional methods should have a sunset date instead. However, users should be left with a choice regarding digital identity.
Digital identity should be understandable by any user, regardless of their background, so that they can make informed decisions and give informed consent. This means that resources explaining digital identity and how it affects users must be available in plain English and use relatable use cases as examples. Explainability not only empowers users; it also leaves less room for the speculation that feeds disinformation.
Fairness and trust should be the guiding thread of good digital identity. Fairness and trust come from acknowledging and fixing mistakes. To that end, good digital identity should include a mechanism that bridges the gap between users and providers, so that users can share feedback and providers can adapt their practices when needed, avoiding the ongoing, unnecessary harms that have historically accompanied the rollout of new technologies.
Good digital identity should promote transparent data processing, storage, and use. Transparency allows for greater accountability and therefore also aligns with fairness and trust.
Scalability was formulated by Kim Cameron in his seventh law of identity. Good digital identity should allow users to share only the desired and necessary parts of their identity, and should therefore be scalable across contexts and needs, following data minimization principles.
Regularly auditing digital identity solutions should be part of good digital identity practices. These audits should be comprehensive but also meaningful and relevant for the users.
In this blog, we explored what the concept of good digital identity could mean, following principles that would lead to a practice that is beneficial for most and does not harm or disadvantage any user. However, good digital identity should ultimately be defined collectively by practitioners in the field, so that a shared definition can be standardized and adopted. Beyond actionable principles, good digital identity should also promote a culture where a diversity of voices participates, and where users’ consent, choice, and control over their personal data prevail.
- * Weapons of Math Destruction is a term coined by data scientist Cathy O’Neil in her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. The term describes mathematical models (algorithms) that attempt to quantify human traits and characteristics, whose data sourcing and processing are opaque, and that, as a consequence, cause large-scale damage to one or more communities.
- ** Excoded is a term coined by Dr. Joy Buolamwini of MIT in her work at the Algorithmic Justice League, which she founded. It refers to the minorities excluded from the data that composes the models powering technologies. The excoded are usually those most negatively impacted by said technologies.