Remarks of Pam Dixon at the First Digital Trust Convention held at OECD in Paris; WPF co-sponsor

The first Digital Trust Convention was held in Paris at OECD Headquarters on 15 November, 2024. The event focused on how to establish trust in people and information in digital spaces, using approaches, instruments, and measures that are effective and sustainable. The conversation included the challenges created by synthetic content generated or impacted by AI. The event was sponsored by the OECD, VDE, the World Privacy Forum, Mila Quebec AI Institute, the Partnership on AI, and others. Dr. Sebastian Hallensleben of VDE and CEN-CENELEC served as program chair and lead organizer of the Convention. 

WPF Executive Director Pam Dixon was in Paris to participate in person. Dixon’s prepared remarks delivered at the Convention included three key points: 

Solutions must first do no harm: Solutions to digital information problems and synthetic content must improve the situation without creating additional harms, risks, or privacy problems. Content provenance systems tied to legal digital identity, for example, can be a recipe for misuse and serious challenges to trust down the road. See point 2. 

Identity and synthetic content cautions: Proposals for embedding digital identity, especially digital forms of legal identity, into synthetic content via metadata or other means are often put forward by well-meaning people and groups. Frequently, proponents of embedded digital ID may be unaware of the broader socio-legal-technical context of digital identity ecosystems and the related challenges. Embedding forms of legal identity into synthetic content is a controversial and problematic proposal. (This does not refer to ensuring account holders are known.) In many jurisdictions, there are meaningful and appropriate legal boundaries regarding how legal identity, which is often mandatory, may be repurposed.

Long-standing identity guardrails exist to facilitate trust in identity ecosystems and to prevent political and other abuses of identity, digital and otherwise. These guardrails must be respected; synthetic content challenges should not prompt a rush toward rampant, unfettered use of legal identity, especially when such uses introduce risk and when state-of-the-art behavioral analysis is rapidly advancing. For example, signals-based approaches to data analysis for detecting and addressing coordinated inauthentic behavior are a large and productive area of research. See as a starting place The coordination network toolkit: A framework for detecting and analyzing coordinated behavior on social media (T. Graham, S. Hames, E. Alpert, May 2024, Journal of Computational Social Science, open access).

Identifying social media account holders is a policy that has been in place for many years. The conversation here, by contrast, relates to proposals and discussions regarding embedding digital ID and/or biometrics into metadata across multiple content types. This is a different matter than identifying account holders.

Socio-technical context must be respected: Jurisdictions must be able to adopt solutions appropriate for their context. Solutions originating in the global north may suit the global north, but they may be a poor fit for global majority jurisdictions, which have different needs based on their contexts. Each jurisdiction must have the elbow room it needs to develop solutions that work in its socio-technical context. Solutions that create divisions of “standard makers” and “standard takers” must be avoided. 

Synthetic content challenges, including mis- and disinformation, deepfakes, and other misleading content, require robust, multifactorial solutions at deeper layers of the infrastructure that will provide long-term systemic improvements. These solutions should reduce risks, including risks to privacy. Extensive, infrastructure-level solutions, such as those mandated by the Supreme Court of India in response to structural trust and privacy problems in India’s complex Aadhaar digital ecosystem, are more in line with the level and breadth of solutions needed to address digital trust issues in synthetic content. For additional background, see the many research publications from 2020 onward. A broad starting point is Digital India: Past, Present and Future (Springer Nature, 2024). 

From left to right, panelists include:

Winston Ojenge, Head of Digital Economy Program, African Center for Technology

Pam Dixon, Founder and Executive Director, World Privacy Forum

Tim Clement-Jones, Liberal Democrat House of Lords Spokesperson for Science, Innovation, and Technology, House of Lords, UK

Mark-Boris Andrijanic, Senior Fellow, Atlantic Council; Vice President, Kumo.AI

Monica Brezzi, Head, Governance Indicators and Performance Division, OECD

Melisa Basol, Moderator, Founder of Pulse