Ethical AI Guardrails
Staff are using AI in different ways, with no shared rules. Leadership needs answers it can defend to a board, a funder, or the team. We help your organization make the decisions, draft the working rules, and own the policy.
What this solves
Your team is using AI, and the policy conversation keeps getting stuck. Concerns about data security, erosion of stakeholder trust, and dissonance between your mission and AI use all point to the same need: values-driven guardrails.
Generic AI policy templates answer the question “what should an AI policy say?” That’s the wrong question. The right question is: where does AI belong in our work, where does it not, and what would a careful person decide on a Tuesday morning? Templates can’t answer that for your organization. The Guardrails process produces rules that come out of conversations with the people who use AI in your work, anchored to your values, and pressure-tested against real scenarios. The output looks like a policy. The value is in the decisions that produced it.
What we deliver
Your team leaves with working rules it can use, your board can read, and your funders won’t dismiss. The core deliverable is the AI Policy & Decision Guide, written for your specific organization, with companion materials staff actually use.
- AI Policy & Decision Guide (v1.0). A 4-to-6-page working policy, marked “designed to evolve” with a documented next review date.
- Tool tier list. The AI tools your team actually uses, with conditions, account guidance, and last-verified dates.
- Data handling rules (green / yellow / red). Plain-language data classification with examples from your work, plus a personal-vs-organizational-account fallback rule for tools your org hasn’t formally approved yet.
- Values-based decision criteria. Five to seven “what we protect” statements drawn from your mission, used as the standard when a specific rule doesn’t apply.
- Where automation belongs and where it doesn’t. An explicit boundary statement covering AI as assistant, analyst, operator, agent, or proxy, with defaults set for your work.
- Human review standards. What requires review, who reviews, and at what stage. Allowed / restricted / not-yet / prohibited use lists.
- Staff one-pager. Short staff-facing summary your team can read in two minutes.
- Stress test report. Three real scenarios from your work tested against the draft policy, with revisions where the policy was unclear, silent, or contradictory.
- Policy Owner and review cadence. A named person inside your team who owns the policy after we leave, plus the schedule for keeping it current.
- Rollout notes. What to say to staff, board, and funders in the first 30 days.
Delivered as a Google Doc set (editable) plus PDF (shareable). Includes a 30-day check-in.
Available if scoped during the fit call:
- Fuller role guides for managers, programme staff, operations, communications, or external collaborators
- Risk register specific to your work
- Rollout pack for managers (talking points, FAQ, escalation paths)
- A 90-to-120-minute team workshop to brief staff on the new rules
- Board briefing or external responsible AI statement
- Public-facing AI use page
- NIST AI RMF alignment notes (without claiming certification or compliance)
- Deeper tool or vendor review (scoped separately where appropriate)
How it works
Two to six weeks from kickoff to v1.0, plus a 30-day check-in. Built around your mission, not someone else’s template.
Understand
We learn how your team uses AI today, plus the concerns and goals driving the engagement.
- Short fit call (free)
- Async readiness intake
- Review of any existing draft policy
- Short survey of key internal stakeholders
Convert
We turn the concerns and values surfaced in the Understand phase into decision criteria and working rules.
- One facilitated work session with a small group of leaders and staff
- Values to “what we protect” statements
- Programmability boundary: where automation belongs in your work
- Tool tiers, data classification, and agency-ladder defaults
Draft, test, and hand off
We deliver v1.0 of the AI Policy & Decision Guide.
- Drafted policy with companion materials
- Stress-tested against three real scenarios from your work
- Handoff session with your Policy Owner
- 30-day check-in to learn what surfaced after rollout
Where automation belongs and where it doesn’t
The next two years of AI aren’t only about better drafting. They’re about AI taking action: sending messages, updating records, running multi-step workflows, talking to people on your behalf.
Mission-driven organizations have a specific question to answer: where does it serve our work to make things more software-like, and where would that flatten the trust, dignity, voice, or judgment our work depends on? Guardrails helps you decide that on purpose, before the choice gets made for you by whichever staff member tries the new feature first.
Built to survive serious staff questions
The stress-test scenarios come from your real work, surfaced during intake; they usually fall into the themes below. We test the draft against scenarios that hit these themes, then revise the policy wherever its answers are unclear, silent, or contradictory.
Themes we test against
- Are we using AI to replace human expertise we say we value?
- What data should never go into these tools?
- When do we still pay translators, reviewers, or subject-matter experts?
- How do we justify AI use given environmental concerns?
- Who is accountable if AI-assisted work is wrong?
- Would we be comfortable explaining this use to a funder, partner, journalist, or affected community?
How concerns are handled
- Some become clear rules
- Some become review gates
- Some are assigned to an owner
- Some require legal, security, vendor, or board review
- Some remain open and are scheduled for later review
The goal is not to make every ethical concern disappear. The goal is to decide how each concern should be governed.
Best fit if…
- Your staff is already using AI informally
- The organization has started and tabled conversations about AI policy
- Leadership wants shared AI rules
- Staff have thoughtful concerns about AI, and leadership wants a process that takes those concerns seriously
- The organization is trusted with sensitive information, relationships, or populations, including work involving:
  - Vulnerable communities
  - Public claims
  - Research
  - Advocacy
  - Donor relations
- The team needs to decide which vendors and tools are acceptable
- Staff need clarity to handle edge cases independently
- Board and funders are raising questions about AI
What we do, and what we don’t
What we do
- Facilitate decisions across leadership, operations, and programme staff
- Turn values and concerns into usable decision criteria
- Draft working rules in plain language
- Create data, tool, review, and escalation guidance
- Test the policy against serious staff questions before handoff
- Produce materials your team can roll out and update
What we don’t
- Provide legal advice
- Conduct security audits
- Review vendor contracts
- Issue compliance certifications
- Sign off on regulatory exposure
- Configure tool admin settings (available as add-on)
- Guarantee that any specific funder, regulator, or auditor will accept the result
Where any of those reviews are needed, we’ll flag it and recommend a next step.
Who delivers this
- Dan Garmat and Caroline Stauss, founders and directors of AlignIQ.
- 10 years of data science experience working with high-sensitivity data.
- Advanced degrees in statistics, communications, and curriculum and teaching.
- We help purpose-driven teams use AI ethically and practically.
Confidentiality and boundaries
We use public information and client-approved, non-sensitive materials. We do not ask you to share confidential, privileged, client-identifiable, donor-identifiable, HR, or safeguarding-sensitive data. If sensitive material is relevant, we design around it using redacted examples, categories, or process descriptions. Sensitive, confidential, privileged, or client-identifiable information should not be entered into public AI tools.
Pricing
Starter engagement from €4,500 + VAT. This covers the standard Starter scope: facilitated work session, draft policy and companion materials, stress test against three real scenarios from your work, handoff session with your Policy Owner, and a 30-day check-in.
Larger engagements (broader stakeholder groups, fuller role guides, risk register, rollout pack, team workshop, board briefing, multi-department rollout, or deeper tool guidance) are scoped after a fit call. What shifts the rate: team size, data sensitivity, number of stakeholders involved in approval, tool-guidance depth, and how much rollout support you want.
Smaller nonprofits and grant-funded teams may qualify for community-rate options where capacity allows. When budget is constrained, the container is smaller, not lower-quality: fewer stakeholders, lighter documentation, narrower tool guidance, or shorter follow-up.
Prices listed in EUR. USD pricing available for US and international clients on request. Excludes legal advice, security audit, vendor contract review, and compliance certification.
Book a fit call
A short, no-cost call. We’ll confirm whether Ethical AI Guardrails fits your situation, identify the right scope, and clarify whether wAI-Finder, training, workflow support, or outside review would be a better next step.
Frequently asked questions
We already have a draft policy. Is that wasted?
No. Most teams that come to us have something. We start with what you’ve drafted, identify the decisions that are still unresolved, and build from there. The existing draft becomes part of the intake and review process.
Will the policy hold up to a board or funder review?
It is structured to be defensible: each rule is tied to a decision the team has made and can explain. Most boards and funders want to see that the team has thought it through, not that the policy mirrors a specific template. Where formal legal, security, or compliance review is needed, we’ll flag it.
Do you write the policy alone, or do we?
Together. We facilitate the decisions, surface tradeoffs, and draft language. The decisions are yours. The result is a policy your team can explain because they made the calls, not a document they inherited.
What is the deliverable actually called?
The AI Policy & Decision Guide. The offer is called Ethical AI Guardrails because it’s about the working rules and the process that produces them, not only the document. The Guide is the central artifact; the companion materials support it.
What if we’re not sure policy is the right next step?
Then wAI-Finder (AI Risk & Opportunity Triage) may be the better starting point. Guardrails fits best when policy, data boundaries, tool choices, or staff rules are already the live issue.
What if staff still disagree after rollout?
Some concerns will not be fully resolved in a short policy project. We help the team decide which concerns become rules, which become review gates, which need an owner, which require outside review, and which should be revisited as AI changes. The policy is marked v1.0, designed to evolve.
What if our staff doesn’t follow the rules?
That is often a signal that the rules don’t fit how the team works, or that rollout needs more support. Our co-build process is designed to surface this before the policy is published. We also produce staff-facing rollout notes so the rules arrive with context, not just text.
How do we keep this current as tools change?
The policy is built so your team can update it. The decision criteria, data stoplight, tool guidance, and review cadence are documented at a level that survives many tool changes. Each tool entry has a “last verified” date and a re-verification cadence. For larger shifts, advisory follow-on or a refresh engagement is available.
Can you align to NIST AI RMF or another framework?
Yes, where useful and separately scoped. We can add NIST AI RMF alignment notes without claiming certification or compliance. The goal is to show how your decisions relate to a framework, not to replace formal review.
