Create a Nonprofit AI Policy

Whether you like it or not, many of the staff at your community benefit organization may already be using artificial intelligence (AI) tools in their daily work. And they should! AI and machine learning have enormous potential to streamline work, eliminate repetitive tasks, connect systems, analyze data and more. Just shutting down AI use entirely may not be practical. That means you need an AI policy. And you need it now.

Making an AI policy is the right thing to do. Your staff deserve guidelines on what they can and can’t do. Your customers, audiences, donors and service recipients deserve to know how you are using their data and interacting with them. Your board will want protection against the legal and regulatory risks of using AI.

The good news is that many examples of AI policies are already posted on websites for you to use as starting points. Assemble a cross-functional group, including direct service providers and/or front-line staff if your organization has them, and start a discussion. Create a draft, circulate it, finalize it and make a plan to communicate it to your organization.

Here are some starting principles (adapted from a great article on Causewriter.ai):

  1. Trust and Transparency: Explain how AI is used in decision-making, ensure those decisions are auditable, and plan ways to inspect AI-driven decisions after the fact. Disclose how data is used and stored.

  2. Privacy and Data Security: Explain how the organization protects user privacy. Do not store or use data beyond its stated purpose or duration. Detail how users can opt out of sharing their data. Consider integrating safeguards against AI-enabled phishing and fraud into your overall IT security policies.

  3. Responsibility and Accountability: Establish who is responsible and accountable for the outputs of AI systems. This includes identifying the roles of various stakeholders like developers, testers, and management in ensuring ethical use of AI.

  4. Addressing Bias and Fairness: AI training data often contains implicit bias, such as more images of White people than of people of color. Explain how you plan to actively identify and mitigate biases in algorithms and training data. This is crucial to prevent AI systems from perpetuating or amplifying societal biases.

  5. Ethical Standards for Vendor Selection and AI Development: Ensure the safety and security of the AI systems themselves, protecting them from cybersecurity risks and making sure they do not cause physical or digital harm. This may involve creating standards for the AI vendors you choose.

  6. Adaptability and Continuous Learning: AI ethics is a rapidly evolving field. Commit your organization to continuous learning, and adapt your policies and frameworks as new ethical challenges and technological advances emerge. Create a regular review cycle and procedure for your AI policy, which will likely need to evolve faster than your other policies.

  7. Legal and Regulatory Compliance: Although it may seem obvious, show how you will ensure compliance with laws and regulations, align your AI practices with legal standards and prepare for future regulation in the field of AI ethics. Because AI laws and regulations are changing rapidly, fold this legal review into your regular AI policy review cycle.

  8. Specific Policies for Your Particular Work: Look at the ways AI use may impact your organization’s core work, especially where it could affect the integrity of your mission. You can look to both nonprofit and for-profit organizations for examples. Media organizations such as Wired publish statements on how AI will be used for writing, editing text, editing photos and generating ideas. Healthcare providers such as Cigna discuss how they might use AI to improve patient outcomes.
