Don’t Let AI Fool Your Vision: The 6-Step Guide To Creating An AI Policy
Build guardrails before you scale, and protect your mission from the hidden risks of AI.
In one of my recent notes, I shared an idea, and soon received a challenge.
That sparked a whole chain of thoughts in me: it started with a tiny question, grew into a new note idea, and then exploded into even more reflections.
Looking at it from another person’s perspective, I realized there were so many important issues we hadn’t considered.
I was lucky to receive a flood of thoughtful responses, insightful takes from brilliant minds. I truly appreciated every one of them.
When Joel shared his ideas about creating a healthy AI Use Policy, something clicked. So today, I’m excited to welcome him as a guest.
He’s the creator of Leadership in Change, where he shares practical tools to help modern leaders thrive through change.
You can check out his newsletter below 👇
Joel’s taking the lead from here. Hope you enjoy!
As “AI” goes mainstream, with ChatGPT, Gemini, Claude, and other LLMs all over the media, leaders fear falling behind while remaining unsure how to move forward. I have spoken to many leaders across different organizations, friends with whom I have brainstormed for years, and I see the same pattern over and over: they are accelerating AI adoption, but sadly, often rushing in without a map.
A few months ago, a church leader told me something I haven’t forgotten:
“We’re automating everything we can: emails, social posts, reports. It’s saving us hours every week.”
I asked, “How are you avoiding misuse as you automate? Do you have an AI policy?”
She paused.
That pause? It’s exactly where most leaders are right now.
AI promises speed without sacrifice. And after years of burnout, tight budgets, and digital overwhelm, that’s a deeply tempting offer. It feels like breathing room. Efficiency. Relief.
But here’s what we’re seeing in the wake of that relief:
Ethical questions are being postponed
Cultural consequences are being ignored
Risks of misuse are being overlooked
And the deeper risk isn’t what AI does to us, but what WE do with AI. The responsibility is on your shoulders, and on mine.
If your organization is using or planning to use AI in any form (and I venture to say at least some in your team are using it), now is the time to ask:
Do we have a clear, shared policy that protects our mission? Or are we scaling without guardrails?
In this post, I’ll walk you through a simple, practical framework to create an AI Acceptable Use Policy (AUP) that your team can actually use. Whether you're a nonprofit, ministry, or mission-aligned business, this will help you adopt AI without losing what makes your work human.
📉 The Facts We Can’t Ignore
Let’s name the urgency. Most mission-driven organizations still haven’t laid the groundwork for AI. Maybe it was a lack of prioritization, lack of expertise, or lack of time, but the risk is the same.
In fact, I recently ran across the following statistic:
72% of organizations do not yet have AI policies or guidelines in place.
— Diligent, 2024 survey
This isn’t just a tech issue; it’s a leadership tension.
Diligent’s survey of leaders across nonprofits, education, healthcare, and local government found a telling pattern: strategic governance is lagging behind AI adoption. These aren’t fringe voices either. Among those surveyed, 30% were board presidents or chairs, 25% were superintendents, and nearly 19% were board secretaries or administrators.
The pressure to “keep up” with AI is real. But when innovation races ahead of values, risk increases dramatically and leaders inevitably have to turn to damage control.
Why Guardrails Matter for Mission-Driven Work
You’re not running a tech company. You’re stewarding a vision. So the way you adopt AI has to reflect your values, your mission. It cannot function in a silo.
Here’s what can go wrong:
You won’t know what’s AI-generated. No system = no visibility. Your team may unknowingly publish untracked, unlabeled content.
You risk putting out false or unchecked data. AI doesn’t fact-check. Unreviewed content could mislead donors, partners, or your board.
You won’t be ready if regulations tighten. If laws require AI disclosure, you’ll have no audit trail for what was human vs. machine.
You’ll lose clarity and voice across your team. Without shared rules, people default to what’s fastest, not what’s faithful or accurate.
But with the right framework, AI can become a force multiplier for what matters most.
What a Healthy AI Use Policy Looks Like
Most organizations don’t need a 30-page legal doc.
They need a clear, shared agreement about how AI gets used, and where it doesn’t.
Here’s a simple, 6-part outline I’ve used with nonprofits, ministries, and values-driven orgs to start building their AI policy with clarity and integrity:
1. Purpose Statement
Clearly articulate why your organization is engaging with AI.
“Our organization uses AI tools to enhance productivity, improve access to information, and support creative workflows. We are committed to using these tools ethically, transparently, and in ways that reflect our mission and values.”
2. Guiding Principles
List 2–4 guiding principles that apply to all AI use; these serve as decision filters for your organization. For example:
Human-focused - Does this tool enhance or erode the human element of our work?
Transparency - Would our stakeholders feel misled if they knew AI was involved in a particular output?
3. Approved Use Cases
Name the areas where AI is welcome as a support tool.
✅ Examples:
Drafting internal summaries
Generating content ideas for human review
Analyzing trends or donor data
Translating content for multilingual audiences
4. Prohibited Use Cases
Clarify what should never be automated or outsourced.
Examples:
Hiring or performance evaluations
Original donor communications (without human review)
Verification of stats or quotes in external communications
5. Review & Oversight Process
Build in accountability; don’t let AI become “set it and forget it.”
All AI-generated content must be reviewed by a designated human lead before publication.
6. Dissemination & Team Training
Don’t just write the policy; teach it.
Even the clearest policy won’t help if your team hasn’t seen it, doesn’t understand it, or can’t apply it. Every team member should be trained on your AI policy, not just tech or comms teams.
Here are some ideas:
Introduce this policy in new staff onboarding
Host short, annual refresher workshops or brown-bag sessions
Share real examples of good and poor AI use
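To see how the six parts fit together on one page, here’s a bare-bones skeleton (my own condensed sketch drawn from the steps above, not the full template linked below). The bracketed placeholders are yours to fill in:

```markdown
# [Organization Name] AI Acceptable Use Policy (v0.1, last reviewed [date])

## 1. Purpose
One or two sentences on why we use AI, tied to our mission and values.

## 2. Guiding Principles
- Human-focused: does this tool enhance or erode the human element of our work?
- Transparency: would stakeholders feel misled if they knew AI was involved?

## 3. Approved Uses
- Drafting internal summaries
- Generating content ideas for human review
- [add your own]

## 4. Prohibited Uses
- Hiring or performance evaluations
- Original donor communications without human review
- [add your own]

## 5. Review & Oversight
All AI-generated content is reviewed by [designated human lead] before publication.

## 6. Training
Covered in new staff onboarding; refreshed in [annual workshop or brown-bag session].
```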
Bottom Line
Your AI policy doesn’t need to be perfect.
It just needs to be real, readable, and rooted in your mission.
Ready to Build Yours?
If you're ready to turn these six steps into something your team can actually adopt, I’ve created a free AI Policy Template to get you started.
It’s simple, customizable, and built for mission-driven teams. Whether you’re in a church, nonprofit, or small values-aligned business, this will help you draft your policy without starting from scratch.
👉 Download the AI Policy Template (PDF)
Use it as a base. Share it with your board. Tweak it with your team. The important part is to start with clarity.
TL;DR – If You Only Remember This:
Ethical leadership in AI begins before the first tool is installed.
Guardrails protect your mission.
Most leaders feel rushed into AI; guardrails slow you down just enough to lead wisely.
Curious, what’s one thing you would never delegate to AI?
Really appreciate Joel for sharing his insights. If his take resonates with you, go ahead and hit that subscribe button!