
As an organizational developer keenly focused on AI integration, I observe that civil society organizations (CSOs) globally are grappling with a complex, urgent task: defining clear, ethical, and practical positions on the use of artificial intelligence. This is not merely an IT challenge; it is a fundamental discussion about organizational values, risk tolerance, and future relevance. The resulting policies must be implementable, responsible, sustainable, and capable of adapting to the constantly shifting landscape of AI development.
The Foundational Struggle: Literacy and Shifting Sands
One of the foremost challenges in policy creation is the significant variation in AI literacy and understanding across staff. Organizations struggle to promote openness regarding AI use without creating an environment of shame or judgment, which can prevent staff from disclosing their experimentation or asking vital questions. Compounding this difficulty is the constant evolution of AI tools, making policy guidance quickly obsolete. The required effort to track these changes can feel like a full-time job for internal support staff. Furthermore, defining an appropriate organizational risk appetite for AI usage across different departments remains a hurdle for many.
Navigating Policy Design and Process
To establish effective internal agreements, a multi-stakeholder, participatory approach is essential, especially in larger organizations: it ensures input from leadership, human rights experts, technology teams, translation teams, and operational staff. Organizations often choose different starting points for their policy frameworks, ranging from high-level principles to specific use-based or tool-based rules. A successful method combines these approaches: begin principles-led to lay a foundation, then follow with a phased, sequenced rollout focused on specific tools and implementation toolkits. A critical best practice is to open the process with staff consultations and anonymous surveys to understand how people are currently using tools, what problems they are trying to solve, and what their desired use cases and hard lines might be.
The Core Tension: Values Versus Efficiency
A significant tension CSOs navigate is the conflict between realizing immediate efficiency benefits and adhering strictly to organizational ethics and human rights standards. Chronically under-resourced staff often see immediate value in leveraging AI. However, many tools, particularly those provided by dominant technology companies, are viewed as inherently unethical because of their supply chain issues, environmental impact, or labour implications. CSOs must therefore reconcile the use of tools they feel little ability to resist, given big tech’s market dominance, with the need to “walk the talk” on principles they advocate externally, such as rigorous governance and human rights compliance.
Defining Boundaries and Red Lines
Internal policies must clearly delineate boundaries, particularly concerning sensitive information. Common hard lines include prohibiting the input of partner data or sensitive personal data into generative AI tools. Policies often emphasize supply chain considerations, aiming to do no harm to privacy, information integrity, the environment, and the intellectual property (IP) of creatives such as local artists and photographers. A key analytical distinction drawn by some organizations is between “discrete” tools (standalone tools the organization deliberately chooses to adopt) and “integrated” AI (features embedded within standard platforms such as email software), recognizing that different principles must apply to each.
Fostering an Ethical and Productive Culture
To manage internal dynamics, organizations must overcome the potential for a “snob and shame” dynamic, where some staff are fearful while others are heavy users. A crucial lesson is that one of the simplest ways to enable ethical AI use is to tell staff they do not have to use it. This reduces pressure and encourages open disclosure, helping to avoid “shadow use”, where staff use tools secretly and potentially compromise data. Policies should enable curiosity and creativity while empowering people to say a “hard no” when a tool conflicts with their personal or ethical perspective. Organizations should also provide training on effective use, treating AI as a “thought partner” and ensuring users apply critical judgment to its output.
Designing for Adaptability
Given the rapid pace of change, policy must be inherently iterative. Iteration cycles should be established, with some organizations planning periodic reviews on a six-month cadence. This adaptive approach requires critical feedback loops between policy guidance and implementation experience. It is also important to integrate new research on AI’s impact (such as fresh human rights analyses or growing evidence on environmental effects) to continually refresh existing policies. Above all, a critical recommendation is to give the organization permission to move slowly and be measured, grounding the policy work in the mantra that responsible innovation is not stifling; it is about creating space for the right innovation.
Best Practices for Practical Guidance
To make ethical principles implementable, organizations should adopt a function-specific lens, recognizing that different uses (e.g., communications drafting versus processing large datasets) require different ethical norms. Practical tools that have proven successful include decision trees that help staff determine appropriate use cases, simplifying complex choices. To mitigate security risks, organizations should steer staff toward services that support single sign-on, review API access, and extend security protocols to third-party tools rather than playing a futile game of “whack-a-mole” by blocking everything. Additionally, organizations benefit from proactively defining what “success” or “good” looks like for any internal AI experimentation.
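To make the idea of a decision tree concrete, the sketch below shows one way such guidance might be encoded. The criteria, tool names, and conditions are illustrative assumptions, loosely drawn from the red lines discussed above; they are not any organization’s actual policy.

```python
# Hypothetical sketch of an "appropriate use" decision tree for staff.
# The criteria and tool names are illustrative assumptions, not real policy.

APPROVED_TOOLS = {"internal-assistant"}  # tools cleared through an internal review

def ai_use_guidance(tool: str, has_partner_data: bool,
                    has_personal_data: bool, human_review_planned: bool) -> str:
    """Walk a simple decision tree and return guidance for the staff member."""
    if has_partner_data or has_personal_data:
        return "No: partner or sensitive personal data must not go into generative AI tools."
    if tool not in APPROVED_TOOLS:
        return "Pause: request a review before using an unapproved tool."
    if not human_review_planned:
        return "Conditional yes: a human must review the output before it is used."
    return "Yes: proceed, and disclose the AI assistance where relevant."

# Example walk through the tree: approved tool, no sensitive data, human review planned.
print(ai_use_guidance("internal-assistant", has_partner_data=False,
                      has_personal_data=False, human_review_planned=True))
```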
Structural Transparency and Accountability
Transparency is key not only in external documents that require disclosure of AI use but also internally. Organizational agreements should outline the policy decision-making process itself, ensuring that policy updates and approvals do not become a “tech team black box”. For complex tasks, such as evaluation or research where data integrity and replicability are paramount, organizations must set clear guidance and expectations at the start of any project and adjust continuously based on performance and usage feedback. Finally, a formally adopted policy provides leverage to push back against external pressures, such as donor demands to use AI when doing so would violate internal ethical commitments.
Establishing AI policy is akin to setting a course in shifting currents. It demands not a perfect, static map, but rather a robust, values-based compass that allows the organization to sail deliberately, continuously adjusting its sails based on constant feedback and new knowledge.
Inspired by a discussion today on MERLtech.org. Text and Image supported by AI.