There is a lot of general advice available online on writing AI policies and governance frameworks. Here we have curated a list of steps to help you write your internal and external AI policies, as well as a governance framework your organisation can use to direct how those policies are applied and continuously evaluated. After all, if there is one thing that is certain about AI, it is that it will keep changing, and your AI policies should reflect that!
This may seem like a lot of new advice, but none of it needs to be very extensive. The less you use AI, the simpler your policies can be; the more you use it, the easier it will be to spot potential risks and adjust your policies accordingly. The important things are to involve everyone, manage risk, be transparent, and keep updating your policies.
1) Assess the scope of your AI use and goals and involve all stakeholders
List all your organisation’s current uses of AI, all the ways your stakeholders would like to use it, and the ways they are reluctant to use it. This will give you a sense of current and future risks (e.g., if someone is using AI to draft content, intellectual property may be a risk to mitigate so that output does not infringe on others' IP rights).
Involve all stakeholders in this assessment: everyone from your CEO to your IT team, to volunteers, and most definitely the people involved in and affected by projects in the communities you work with. Each group may see different uses, impacts, and risks of AI. Broad stakeholder involvement also demonstrates that the policy is not only about managing risk but also about maximising AI's usefulness.
The OECD keeps a list of national AI policies for countries around the world here: OECD’s live repository of AI strategies & policies – OECD.AI. This is worth consulting to review existing AI policies in the places your organisation works, as you will want to make sure their principles are reflected in your own policies.
2) Generate your policy
This should be a clear list of guidelines on when and how employees can use AI and when its use is prohibited. It should specify how human oversight will be used to evaluate AI output. And it should make clear what AI work product you want saved in shared documents and folders, and how (e.g., you might want to label everything AI was used for so it does not become an IP risk later). Be sure to encourage reporting of misuse.
You will also want to be clear with employees about how you want them to fact-check anything generated by AI, which AI tools are permitted (Claude, for instance, can cite sources, making fact-checking easier and helping to avoid plagiarism), and how you will ask humans to check output for bias.
A great guide to dealing with bias in AI is here: Tackling AI bias – Writer
UNICEF has developed its own guidance for creating an AI policy when it involves children here: Policy guidance on AI for children | Innocenti Global Office of Research and Foresight (unicef.org). Much of this applies to adults in the humanitarian context as well.
The SCVO (Scottish Council for Voluntary Organisations) has its own AI policy development advice for NGOs here: AI organisational policies – SCVO
3) Be transparent
Your policy should make clear how AI is used and how the generated content can be used. This might involve labelling all AI-generated content, both internally and externally, to ensure transparency and avoid confusion about which content is human-generated and which is AI-generated. Transparent public policies also help build trust with the wider public.
4) Continue to educate employees/partners
Because policies are quickly forgotten, it is important to implement some education and training. This could involve training videos, live Q&A sessions, or even internal podcasts (one of our members does this!).
5) Create a governance framework
This is a set of rules for how AI should be both used and managed. It should include a focus on data security, transparency, ethical issues, and privacy.
You’ll want to be sure to cover: (1) the purpose of the policy, (2) how it will be communicated to stakeholders, (3) ethical considerations, (4) legal considerations, (5) how the policy will be monitored and enforced, (6) data privacy and security measures, (7) how AI decisions will be validated, and (8) what processes will be in place for accountability.
As global organisations, there are a few specific issues you may want to address in your governance framework, depending on how your organisation will use AI:
- Consent: Cultural and language barriers can make consent to data collection and processing tricky; have a plan for how to make consent meaningful in these contexts.
- Decisions: Under the GDPR, individuals have the right not to be subject to decisions made about them solely by automated means, including AI; this can cover decisions about the distribution of development assistance.
- Diversity: Explore partnerships with AI firms headquartered in the majority world.
The ICRC has developed several data protection resources that recommend how data protection principles should be applied by humanitarian organisations. Here is their Data Protection in Humanitarian Action Handbook: Handbook on data protection in humanitarian action | ICRC
The OECD also has its own set of principles for global AI use that you may want to consider: AI Principles Overview – OECD.AI