
Photo of a person using ChatGPT on a laptop, seen from over their shoulder

Experts propose guidelines for ethical AI development in open letter

Global non-profit the World Ethical Data Foundation has released a set of guidelines for the voluntary self-regulation of the tech industry as it continues to develop artificial intelligence.

The proposed guidelines were presented in an open letter from the group, which unites tech workers and academics interested in the ethical development of new technologies.

The framework takes the form of a checklist of 84 questions that designers, developers and teams working on AI products should ask themselves as they progress through their tasks.

Questions ask both individuals and companies to take responsibility

The checklist is divided into three sections, one dedicated to each stage of the development process: training, building and testing.

Within those sections, there is a further breakdown into questions for the individual working on the AI, questions for the company, and questions for everyone.

In the training category, there is a focus on the provenance and attribution of data, including questions such as: "is there any protected or copyrighted material in the training data?", "can I cite my source of the training data?", "do I feel rushed or pressured to input data from questionable sources?", "what is the group's intent for training this model?" and "what are the likely biases that could be amplified through the training data being added to the model?"

Under building, there are broader questions about the goals of the project, such as: "what is the intended use of the model once it is trained?", "what are potential unintended uses/consequences?", "how can the model be shut down and under what circumstances must that happen?" and "are there any self interested stakeholders or realities of funding that may stop that from happening?"

In testing, the questions relate to the adequacy of evaluation methods, and include prompts such as: "what instruction did taggers receive before they tag the data that might impact their opinion?" and "if the data is tagged by people, who are the people, are they being humanely treated?"

Group aims to set "a healthy tone" for the industry

Throughout the sections, there is an emphasis on considering data sources and copyright, though nothing is outright prohibited, and on reflecting on the make-up and diversity of the team involved.

There is also repeated reference to considering the European Union's Artificial Intelligence Act and other regulation either proposed or already in place.

The World Ethical Data Foundation describes the document as an "open suggestion" rather than an open letter, and says it is a "version one" that will continue to evolve with input from the public.

"This is an open suggestion designed to clarify the process of building AI by exposing the steps that go into building it responsibly," the authors say. "It is written from the frontlines by the actual builders, users, and stakeholders who have seen the value and damage Artificial Intelligence (AI) can deliver."

"The goal is to set a healthy tone for the industry while making the process understandable by the public to illuminate how we can build more ethical AI and create a space for the public to freely ask any question they may have of the AI and data science community."

Among the prominent signatories to the letter are intellectual property lawyer Elizabeth Rothman, author and activist Cory Doctorow, University of Dubai Center for Future Studies director Saeed Aldhaheri, and current and former employees of Facebook, Google, Amazon, Disney and Bank of America.

Letter the latest in a series of correspondence

Open letters have become the lingua franca of the AI industry in recent months, as the sector confronts the consequences of its rapid advancement in the absence of public or government oversight.

In March, some of the biggest names in technology, including Elon Musk, Apple co-founder Steve Wozniak and Stability AI founder Emad Mostaque, signed an open letter calling for a moratorium on AI development for at least six months to allow for investigation and mitigation of the technology's dangers.

This was followed in May by another letter urging action to mitigate "the risk of extinction from AI", this time with signatories including AI pioneer Geoffrey Hinton and OpenAI CEO Sam Altman.

More recently, and in direct response to the "AI doom" narrative, BCS, The Chartered Institute for IT in the UK, published a letter arguing that AI is a "force for good" while backing calls for regulation at both government and industry level.

The discussion has also arisen in our AItopia content series, with tech artist Alexandra Daisy Ginsberg supporting calls for a moratorium similar to that achieved for genetic engineering.

The photo is by Matheus Bertelli.


AItopia
Illustration by Selina Yau

This article is part of Dezeen's AItopia series, which explores the impact of artificial intelligence (AI) on design, architecture and humanity, both now and in the future.