Are meaningful participation and accountability in AI possible?


Prompt to ChatGPT-4: A black and white illustration featuring 10 diverse individuals from around the world, each deeply engrossed in learning about Generative AI.

The release of ChatGPT and similar chatbots like Google’s Bard and Anthropic’s Claude has re-ignited the discussion around artificial intelligence (AI) in social, work, financial, health, education, legal, and most other spheres. A common challenge with these and other AI and machine learning (ML) systems is their lack of transparency and accountability. Algorithms decide what we see in our social media feeds. CV screeners select which job applicants move forward. Automated decisions in our financial systems determine who is eligible for a loan. All of these opaque decision-making systems are out of reach for most of us, yet they have a huge impact on our lives and our societies.

How might we create more space for participation by those outside of the tech field in the design, testing, deployment, and assessment of these models? Who should participate, when, how, and why? Would meaningful participation help improve accountability for these systems? What would greater accountability look like and what might it achieve?

On October 23 we gathered a group of interested (and interesting!) people together for a Technology Salon on these themes, catalyzed by lead discussants: Megan Colnar, Executive Director of Accountable Now; Aarathi Krishnan, UNDP’s former Senior Advisor on Strategic Foresight and Risk Anticipation; and Amira Dhalla, Privacy and Security Rights Expert. The Rockefeller Foundation hosted us in their lovely new meeting space.

Key takeaways from the conversation:

Algorithms are not new!

They’re in the mainstream conversation now, but algorithms have been deeply embedded in our lives for ages: Netflix, Spotify, Amazon, social media, and other online platforms use them all the time to recommend content and purchases. The issue is that people don’t understand how algorithms work and how they are used on us.

This is especially true for historically oppressed communities because of deep-seated power imbalances in our societies. See Bad Input, for example, a series of short documentaries that show the effects of discriminatory algorithms in financial lending, facial recognition, and healthcare. Technology, while touted as a tool of liberation, is just as often used for oppression, noted one discussant. We need only look to Afghanistan, Pakistan, Ukraine, and the current conflict in the Middle East for examples of this.

Prompt to ChatGPT-4: Illustrate the concept of elastic perceptions that can take into consideration diverse views and values of groups and individuals.

Meaningful participation in AI is hard.

Some non-profits are building their own systems, and this helps get “closer to right,” but it’s really difficult. Because people don’t entirely understand how algorithms work, they might not be able to fully participate. In past public consultations, “communities felt they were being talked at, not listened to,” as one discussant said. Public hearings have been tried, but who can access and speak at these sessions is itself a matter of privilege.

We need to think outside the box and figure out what it looks like to do participation right. As one discussant said, we also need to move beyond Western, individualistic notions of ethics, values, and standards to more elastic perceptions that stretch to include and prioritize other views and visions.

Where are the spaces for participation?

“When is participation ideally needed? How much and for how long?” asked one Salon participant. A discussant also questioned how and where people can truly participate in something as technical and complex as creating AI models. Drawing on a real case, they asked: can an Ethiopian farmer interact in a meaningful way with a high-tech project that uses drones, satellite footage, big data, and AI models? What would that look like? Isn’t it better to think about participation at the point where AI decisions affect that farmer?

What kind of participation?

“Ladders of participation” are often used to plan and rank different kinds of participation efforts, from “education” (less active, and lower on the ladder) to decision-making, which sits at the top. For AI, however, a case might be made for investment in “education,” said some Salon participants, given how important it is for people to better understand this highly technical field if they are going to participate more meaningfully at technical levels.

Participation is possible at many other levels, too. For example: designing the entire exercise (What is the AI for? What does it do?); giving input into what the AI “wrapper” or user interface looks like and making sure it’s intuitive for this hypothetical Ethiopian farmer; giving input into what needs to be customized for different users and locations; and agreeing on the necessary accuracy rate for the decisions being made by AI and determining where there is more or less appetite for error. These are some of the conversations that often go missing, and where participation would be hugely beneficial.

The whole AI process could be mapped out or visualized, for example, and varying kinds of participants and types and levels of participation could be considered at many different points. Care needs to be exercised, however, in terms of “representation”: who gets to represent other people? Who decides who participates?

Prompt to ChatGPT-4: Create a black and white illustration of a regulation with teeth, with a few accent colors.

‘Participation washing’ is a real problem.

Meaningful participation requires trust – yet public trust in organizations and government is at a historic low, so considerable effort needs to go into building it. Good practices include avoiding one-off participation, ensuring clear communication, and helping people understand more about what they are participating in. All of this requires time and budget.

What tends to happen instead is tokenistic or extractive participation. Tech companies ask people to participate with no recognition of their time and effort. Civil society organizations are expected to organize participation in these processes without sufficient budget. And with so many platforms reducing their trust and safety teams, voluntary “nice to have” efforts like participation and safety are cut from the budget, it was noted. Some call this “the ebbs and flows of the market,” said a participant, “but we can’t have ebbs and flows when it comes to human rights.”

Additionally, community concerns are already very well documented. How do we avoid wasting people’s time and ensure that their inputs are meaningful to them and also add value to the AI process? Unfortunately, participation can amount to rubber-stamping or unwitting endorsement. As one discussant said, participation doesn’t always lead to accountability, but accountability is necessary for participation. It’s important to level the playing field with clear commitments, specific consequences for not following standards or fulfilling agreements, and mechanisms with teeth. Without laws that obligate companies and governments to provide space for civil society and/or public participation in these processes, participation will always be voluntary, insufficiently resourced, and conducted at the whim of tech companies.

Who is best positioned to push for participation and accountability?

Various theories of change were debated. One Salon participant felt that academics and journalists are the most equipped and effective at making real and lasting change. This idea was backed by examples of academic research that has led to changes implemented by AI companies: model cards, the Gender Shades paper on facial recognition, and the hiring audit requirement that became law in New York City all came out of academic research.

Another said it can be difficult, however, for journalists to access insider information at technology companies in order to conduct investigative journalism: “People are afraid of getting sued.” In addition, the media industry is not good at measuring change unless it rises to the level of a congressional hearing, meaning that other kinds of change might not be recognized and valued (see this Salon on tech-focused journalism).

Others looked to civil society, given its vast experience tackling complex issues. Consider civil society’s past role in calling out monopolistic power, the surrender of public goods to the private sector, capitalist overreach, and issues related to privacy, abortion rights, and big tech, said one discussant. Values-based standards could lead us to some kind of agreement on how to address AI concerns.

One person questioned the role of civil society overall and the ability of civil society movements to effect lasting change through protest. However, another countered that “social movements aren’t only protest.” Civil society normally uses a range of strategies and tactics, including engagement with the media, soft internal advocacy with tech companies, and insider work on legislation and policies. Movements work to create leverage and openings for change, and “protest creates visibility and demand.” Leverage is a key concept, said another; it brings moral suasion, making these entities feel that there are reputational costs if their policies don’t live up to their aspirations.

Others asked why all the labor of fighting for freedom was put on civil society, when states and multilateral organizations should also be held to account for the decisions they make when they design and implement systems. One participant emphasized a multi-stakeholder approach that involves civil society, the private sector, and government, saying that lessons from the past 20 years of Internet governance processes need to be taken into consideration.

Prompt to ChatGPT-4: Illustrate the idea that “It’s not the tools, it’s the system. There are many levers and roots to change.”

It’s not the tools, it’s the system.

From a systems perspective, said another discussant, “remember that it’s not the AI tools themselves, it’s the systems within which they are applied. This is a complex system. There are many levers and roots to change. Change that isn’t grounded in society has a shorter shelf life. We have lots of evidence about social movements creating change. Let’s look at Black Lives Matter, at Chile ousting a dictator, at other social movements. What are the mechanisms that are driven by the public and that become real and drive greater accountability?”

Finally, it was raised that big tech companies are currently self-governed and driven by profit maximization. Governments are far behind in their understanding of how tech works, and they do a poor job of regulating the industry because they don’t understand it and are swayed by vested interests. There is a role for civil society in working with governments to enact regulations, as well as in holding governments accountable for doing so via social movements.

Even if we improve participation and accountability, models will still fail in many contexts and at many points in time, pointed out one discussant. Who is accountable for harms that emerge 3 months, 6 months, or 6 years down the line? Models often do not account for local complexities. “We need to understand the nature of harm, inequity, injustice, and oppression that systems are designed to privilege, or we’ll keep making mistakes we’ve made before. How do we make these systems more decolonial?”

There are lots of examples and resources to draw from!

Salon participants shared a number of resources that can help us further these ideas and this work.

Our next Salon

Our next Salon, on December 4th from 9-11am ET, will explore how AI is contributing to spreading (and reducing) mis- and disinformation and other online harms, with lead discussants from Witness, Wikimedia, and the UN.

Technology Salons run under the Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic or provide funding support to Salons in NYC, please get in touch!
