Tips and tensions: What does it look like to design organizational AI policies? (Part 2)
Over the past three years, internal use of commercial AI tools has grown. Given the array of ethical and operational challenges and uncertainties related to AI, many non-profit and philanthropic organizations are designing and rolling out guidance and policies on internal AI use. The process has proven both enlightening and challenging, requiring research, repeated conversations, and collective reflection across teams. Creating a policy that meets the needs of the organization and can also stay up to date is harder than it might sound, considering how quickly the AI landscape shifts!
At our November 10th, 2025, Technology Salon NYC, about 30 of us gathered in person to unpack some of the tensions in defining ethical, sustainable, and responsible internal policies, guidance, or positions on the use of AI. We were joined by a stellar group of conversation catalysts – Damini Satija, Amnesty Tech; Brooke Watson Madubuonwu, American Civil Liberties Union; Deborah Brown, Human Rights Watch; Peter Micek, Access Now; and Leah Frazier, Lawyers’ Committee for Civil Rights Under Law – all of whom are working on their organizations’ internal AI policies and governance. The Tech Salon session piggybacked nicely on the online session we ran a week earlier (read more at the link!) as part of The MERL Tech Initiative (MTI)’s Community of Practice.
Below we share a summary of the key takeaways from the Tech Salon. Note: We run Salons under the Chatham House Rule, so we have not attributed the points below to individual speakers or Salon participants.
1. Slow adoption is not anti-innovation: it’s rights-respecting
Many of the individuals who attended are part of organizations that do human rights advocacy and research in areas connected to tech, digital rights, artificial intelligence, and more. Consequently, ensuring alignment between their internal uses of AI and their advocacy and research was among their biggest priorities when designing policies. In other words, “making sure organizations’ insides match their outsides.”
Achieving this consistency is not without its challenges: those working on small tech policy teams inside large organizations sometimes struggle to be heard. “There is still a disconnect between the work we are doing and the rest of the organization,” lamented one person. Other Salon participants noted that bringing broader ethical considerations and human rights concerns into the process of drawing red lines around the use of AI takes time and requires engagement across different areas of the organization. Crucially, a sentiment that echoed throughout the gathering was that, although the pressure to adopt AI seems to be increasing all around us, as a sector we should be comfortable taking the time needed to make thoughtful, rights-based decisions aligned with our values and missions. A “fear of missing out” or of “falling behind” shouldn’t overshadow the broader values and goals of a sector that works on social issues, often with vulnerable people and groups, and that includes protecting human rights.
As one of the speakers put it, “slow adoption of AI is not ‘anti-innovation’ – it’s rights respecting.” During the conversation, we were invited to reflect on the fact that if an organization can’t roll out AI with appropriate impact assessments, meaningful consultation of impacted communities, and identification of rights-respecting pathways, then perhaps adopting AI is not responsible in that case. Adoption of AI can and should be slow, measured, and specific.
Most organizations at the Salon that are setting policy are doing so out of concern about potentially causing harm to their constituencies and consequently losing their trust. They are also worried about AI – with its tendency towards inaccuracies, hallucinations, bias, and cheapened writing quality – detracting from their credibility and making their work less effective. Wider issues that some felt have not been sufficiently explored included the climate effects of AI, the harms from data centers, and the labor issues related to the data work that fuels AI.
2. Staff engagement and AI literacy are a key part of policy development
Organizations described feeling intense pressure to adopt AI, making it difficult to set a policy that might restrict it. They are also finding widespread, uninformed uptake of GenAI among staff, with little understanding of the consequences. Many participants spoke about unevenness in how different teams and staff members across the same organization use AI: the tools they choose, their reasons for using AI, how formally they decide to use tools, and how transparent they are about the decision to use AI in their work.
As such, one of the biggest challenges is understanding how staff are actually using, or are interested in using, AI. Based on the experience of the participants, this requires creating safe spaces for people to share how they are using these tools, avoiding ‘shaming’ people (either for using AI or for not using it), and engaging different teams in the process – not just teams already involved with or sensitive to issues relating to technology and AI, but people across the organization with varying levels of knowledge and experience of AI use and AI ethics.
A good practice shared was to put guidance and guidelines in place before rolling out a formal policy. As mentioned above, designing a policy may take time, so one good way to start is by creating at least some level of guidance about appropriate tools and uses. One participant shared that they have an internal working group that works with staff members, based on their needs and challenges, to determine whether the use of AI is appropriate in a given situation. They created a Vendor Questionnaire where staff and vendors explain why GenAI is needed, and the working group evaluates whether the AI tool meets accuracy, bias, and transparency requirements (a rough sketch of this kind of intake-and-review process appears below). The questionnaire is also a way to understand the demand for AI tools and how people envision AI could help them in their work. For all of this, having internal capacity to do this work with staff is key.
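To make the intake-and-review loop concrete, here is a minimal sketch of how such a process might be modeled in Python. This is purely illustrative – the field names, criteria, and logic are our own assumptions for the sake of the example, not the actual Generative AI Vendor Questionnaire linked in the resources below.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIRequest:
    """Hypothetical intake record for a proposed GenAI use (illustrative only)."""
    requester: str
    tool_name: str
    why_genai_is_needed: str   # the justification the working group reviews
    data_sensitivity: str      # e.g. "public", "internal", "confidential"
    notes: list = field(default_factory=list)

def working_group_review(request, meets_accuracy, meets_bias, meets_transparency):
    """Sketch of the review step: approve only if every named criterion is met."""
    criteria = {
        "accuracy": meets_accuracy,
        "bias": meets_bias,
        "transparency": meets_transparency,
    }
    for name, ok in criteria.items():
        if not ok:
            request.notes.append(f"Does not meet the {name} requirement")
    return all(criteria.values())

# Example: a hypothetical request to use a GenAI tool for drafting summaries.
req = GenAIRequest(
    requester="Communications team",
    tool_name="ExampleSummarizer",
    why_genai_is_needed="Speed up first drafts of public event summaries",
    data_sensitivity="public",
)
approved = working_group_review(req, meets_accuracy=True, meets_bias=True,
                                meets_transparency=False)
print(approved, req.notes)  # False ['Does not meet the transparency requirement']
```

In practice this is a shared document and a human review conversation rather than code; the point is simply that every request records a justification and is checked against the same named criteria before approval.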
The importance of education and AI literacy was also raised repeatedly at the Salon as a complement to any AI Policy or Guidance. “Staff don’t yet all have a clear understanding of what is internal, what is confidential, what is public data,” said one person; yet policies tend to set parameters around whether AI can be used on these different kinds of data.
Some organizations are thinking about the use of AI within harm reduction frameworks. “People are going to use it, so it’s like ‘I’d rather have you do it in the house.’ And then how can we mitigate the harms? People have really different perspectives,” said one person. Others wondered whether simply having a policy or approving an internal tool could backfire by encouraging more AI use, when the real goal is to promote thoughtful and responsible uptake of AI, if at all, and an understanding of when it is actually useful.
3. Addressing “Trojan Horse” and “Shadow” AI
Large organizations can turn off access to a specific chatbot or prohibit the download of a tool or the upload of data, and IT teams are able to see what platforms people log onto from their institutional devices. It is very hard, however, to know what people are doing on their personal devices and whether they are using unauthorized AI (or “Shadow AI”) on them. Small organizations without IT teams, or those where people use their own devices, do not have the capacity to restrict or manage this as easily. Large institutions with low tech capacity or pressure to be innovative may also be lax about, unaware of, or unconcerned by the risks of AI integration, especially if they are trying to drive efficiencies with these tools.
Having any control over the use of AI within existing platforms is increasingly a challenge. As one person said, “If someone is signing on to a GenAI or LLM platform, that is one thing. But integration of AI into Salesforce and Google and others is another thing.” There is no space for organizations to negotiate terms when working individually with large platforms, and there are concerns about companies such as Asana turning on Asana Intelligence and then using an organization’s internal data to train AI and sell it to others.
One person working at a private sector company spoke about AI notetakers being automatically added to all calls, including notetakers whose presence goes undetected and undisclosed, which could be illegal. Concerns were raised about AI notetakers “producing notes and errors about what people are saying, yet it’s there, just sitting on their servers, waiting for a subpoena.” Rather than being asked to justify the use of GenAI, one person said, people in their private sector environment are being asked to justify not using AI, which they find very concerning.
Another person expressed concern at the unprecedented access to personal and sensitive data that Google AI tools have, particularly where these have been turned on for Gmail at universities. “Google’s Gemini can pull out information, and students don’t realize that. They might be sending emails with questions about their immigration status, their mental health, or other health issues. There is no clear disclosure about it. People are using chatbots as a diary with no idea that data is leaking elsewhere. Students are unaware of the risks.”
How do organizations handle AI integrations and embedded AI? One person said they work closely with their IT team, which turns all embedded AI off by default; when AI gets integrated into approved platforms, IT works to stay on top of the changes. They also keep a close eye on vendors and have a policy of cancelling contracts if they disagree with the use of AI. The option of refusal is not always open to small organizations, however, and addressing the issue organization by organization is not a sustainable solution, as some noted. “There is really no space to negotiate terms, but we can’t build power in the space if we don’t negotiate the terms. They are hiring people to deal with us; they are staffing up. We need to address this as a sector, not one by one, as individually we have no power.”
Addressing the challenges with AI has highlighted that big tech companies own all our data, which reinforces the importance of data protection as a collective action question. “This is not really about whether individuals put data into a system or trust EULAs [end user licensing agreements]. We need government policies to protect us and a push towards regulations.” One person raised the need for a major policy initiative – an AI Civil Rights Act – with guardrails for those who develop and deploy AI. This might include aspects like pre-deployment assessment of an AI tool, post-deployment assessment, regular monitoring, and a paper trail so that people can vindicate their rights if a tool harms them. A clear position for many organizations was that, in the absence of regulation, writing and enforcing internal policies is quite difficult.
4. Addressing partner and community use of AI
Several organizations mentioned challenges in addressing the use of AI by partner organizations – a big question for which no one had a clear solution. “We have a whole AI policy, but now we need to talk about what to do when working with partners and consultants. What do we do if partner work does not meet our policy?” One organization, for example, has a practice of hiring local artists and videographers to produce images and audiovisual content. A partner went against those norms by using AI-generated images, provoking discussions about whether the project visuals needed to be redone. Another Salon participant wondered, “Are we being clear enough?” At the same time, they noted, “It’s a cost issue and partners can do things cheaper with AI… There is a huge economic crisis for NGOs now because of government cuts, and lots of people see AI as an efficiency tool, a cost saver.”
Widespread use of AI in the legal sector, where many Salon participants are working, was raised as another issue. “Legal conferences are competing to use more and more AI. They love it!” In one instance, a partner used AI without disclosing it, and on further review, it was discovered that cited cases had been hallucinated. In the human rights and legal spaces, “we have to assume our counterparts are using AI and GenAI, and we need to acknowledge that we have limited control over this.”
When it comes to community or public use of AI, it is even more difficult to create and implement policies related to AI use. One person described an IT security help desk that has seen a 40% increase in “really well-crafted questions and help requests,” which the organization assumes are being written by AI. A legal help desk at an organization has also seen an increase in AI-generated requests. In both cases, many of the requests fall outside the scope of the help desks and have created extra work for teams who need to sort through what is a real request and what is AI-generated.
In the midst of these challenges with AI-generated content, it was recognized that AI can also help overcome equity issues related to language and writing. Many partners use AI for translation and for proposal writing, which usually needs to be done in English, and some organizations are hesitant to adopt policies that totally prohibit its use because, in some cases, AI can support equity, inclusion, and accessibility. For example, notetakers are sometimes used for accessibility reasons, and an organization may be legally obliged to allow them in these cases.
Readings and resources:
- Part One of our Tips and Tensions discussion
- This Generative AI Vendor Questionnaire, created by the American Civil Liberties Union.
- This AI Vendor Assessment Tool, created by Revolution Impact and The MERL Tech Initiative for decision-makers who work in the international development, humanitarian, and social impact sectors and who need to assess AI vendors.
- A big discussion point at the end of the Salon was the impacts of AI and more data centers on local communities. The Computer Says Maybe podcast episode “Let Them Eat Compute” was recommended as a good overview of these issues in the US and ways that local communities are fighting back.
Technology Salons run under Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic, please get in touch! Please contact us if you would like to discuss sponsoring a Salon or offering financial support for our work!