Improve one recurring workflow with AI support, human review, and reusable patterns.
For proposals, reports, research briefs, board updates, funder communications, and other repeated outputs where quality and trust matter. Most first engagements run as a 4 to 6 week pilot. Scoped after wAI-Finder.
A fit call is for buyers who already know which workflow they want to improve.
What AI does, and what humans still own
What AI does
- Structures source material
- Retrieves past work and approved language
- Drafts first versions
- Checks against requirements
- Flags missing evidence
- Creates variants and tone adjustments
- Prepares review-ready outputs
What humans still own
- Judgment calls
- Final quality decisions
- Stakeholder relationships
- Strategic framing
- Sign-off on every high-stakes output
- Anything sensitive, confidential, or values-laden
AI drafts are not delivered as final outputs. Human review is built into the workflow at the points where it actually matters.
The four workflow tracks
Pick one workflow to upgrade first. Most nonprofit and mission-driven teams fall into one of these four tracks. Knowledge retrieval and past-work reuse runs through all of them.
Proposal, grant, and funder-response workflows
RFP review, past-performance retrieval, proposal outlines, first drafts, compliance and reviewer-pass checks, funder-response packages.
Reporting, MEL, board, and funder-update workflows
Metrics summaries, programme reports, board updates, funder updates, variance narratives, QA checks against report requirements.
Research-to-brief and research-to-public-output workflows
Long research reports turned into public briefs. Litigation or policy research turned into communications. Source synthesis with argument preservation. Cited summaries.
Communications and content-suite workflows
Newsletter drafts, stakeholder emails, campaign materials, content variants, translation and tone variants, consistent boilerplate across multiple assets.
Knowledge retrieval and past-work reuse
Finding prior proposals, approved language, past reports, case examples, policies, and reusable templates. Most engagements include this as a layer underneath the chosen track, regardless of which one you pick.
What you get
- Current-state workflow map: where time is lost, where quality breaks
- Future-state workflow design with AI support at the right steps
- Source and document structure (which inputs go where, in what form)
- Data-boundary plan: what can go into the workflow, what must be redacted, and what should stay out
- Prompt and instruction set, written and reviewed for your context
- Reusable output template
- Review gates and a quality checklist
- Measurement plan with a real baseline
- Handoff notes
- Improvement backlog: things to refine after the experiment ends
How the 4 to 6 week experiment runs
Most first workflow upgrades run as a 4 to 6 week pilot across four phases. The engagement is time-boxed and scoped up front, so the team knows exactly what it is committing to.
Reusable patterns
The point of the experiment isn’t a one-off success. It’s a set of patterns the team can apply to similar work later. By the end, you have a working template, a prompt set, a review checklist, and a documented process the team can repeat or adapt to other workflows in the same track.
Good fit and not a fit
A workflow upgrade pays back when the work is repeatable and the outputs have a clear shape. It’s a poor fit when the value is mostly in live relationship-building or judgment that cannot be meaningfully formalised.
Good fit
- Recurring outputs the team produces over and over
- Inputs and outputs that are well-defined (you know what “good” looks like)
- A clear human reviewer who signs off before publication
- Repeatability across at least 4 to 5 instances over the next 6 months
- Much of the work involves assembling, drafting, checking, or adapting material against clear criteria
Not a fit
- One-off projects with no second iteration in sight
- Work where the value depends mainly on live relationship-building, negotiation, or judgment that cannot be meaningfully formalised
- Workflows where the only data path involves sensitive or confidential material that can’t enter the AI tools you use
- Outputs where success is measured entirely by human judgment that’s hard to formalise
- Outputs your team doesn’t have time to standardise before AI support is added on top
Quality, review, and measurement
Every workflow is built around human review at the points where review actually matters. Review gates and quality checklists are part of the system, not an afterthought.
What we measure (as targets, not guarantees)
- Hours saved per output, vs the baseline we measured in week 1
- Number of revision cycles, before and after
- Time-to-first-draft
- Repeated manual steps eliminated
- Quality checks passed on first review
- Risk events flagged and addressed (e.g., sensitive data inputs caught and rerouted)
A note on agentic AI
Some workflow upgrades include early agentic patterns: reviewer assistants, structured draft checks, source-grounding, task routing, or reusable assistant workflows. We use these patterns where they’re a clear fit. We don’t use them to look impressive. The goal is a workflow that produces better outputs faster, with appropriate human review.
Confidentiality and boundaries
We use public information and client-approved, non-sensitive materials. We do not ask you to share confidential, privileged, client-identifiable, donor-identifiable, HR, or safeguarding-sensitive data. If sensitive material is relevant, we design around it using redacted examples, categories, or process descriptions. Sensitive, confidential, privileged, or client-identifiable information should not be entered into public AI tools.
Not legal advice. Not a security audit. Not a tool reseller relationship.
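To make the redaction idea concrete, here is a minimal sketch of the kind of pattern-based pre-screen a data-boundary plan might specify, assuming hypothetical patterns and a placeholder donor-ID format; a real plan is tailored to your material and combines process rules with simple checks like this.

```python
# Minimal sketch of a pattern-based pre-screen (hypothetical patterns and placeholder formats).
# It catches obvious identifiers before text is pasted into a public AI tool; it is not a
# substitute for the data-boundary decisions made during the engagement.
import re

PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "donor ID (placeholder format)": re.compile(r"\bDNR-\d{4,}\b"),
}


def screen(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were caught."""
    caught = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            caught.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, caught


cleaned, flags = screen("Contact jane@example.org about donor DNR-20417.")
print(cleaned)  # identifiers replaced with placeholders
print(flags)    # ['email address', 'donor ID (placeholder format)']
```

Anything the pre-screen catches is a prompt for a human decision, not an automatic green light to proceed.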
Pricing
Scoped after wAI-Finder. The base engagement is a fixed-scope 4 to 6 week pilot for one recurring workflow, not an open-ended automation build. What shifts the range: how high-stakes the output is, how complex your source material is, how many stakeholders sign off, the level of accountability on the output, and how much implementation support your team wants during staff handoff. Prices listed in EUR. USD pricing available for US and international clients on request. Excludes VAT where applicable. Smaller nonprofits and grant-funded teams may qualify for community-rate options where capacity allows.
If you already know which workflow you want to improve and you’re confident in the brief, request a fit call to scope the engagement directly without going through wAI-Finder first.
Two ways to start
Most teams start with wAI-Finder so the workflow choice and the success measures are evidence-based. Some teams already know which workflow they want to improve, in which case a fit call is faster.
Frequently asked questions
Why do we start with wAI-Finder?
Because the right workflow to upgrade isn’t always the obvious one. wAI-Finder produces an evidence-based recommendation: which workflow has the highest value-to-risk ratio, which one the team will actually adopt, and what the success measures should be. Skipping it works if you already know all of that.
Do we need an AI policy before doing this?
Not always. If the workflow uses only public or low-risk materials, we can often design safe boundaries inside the workflow itself. If staff are already using AI with sensitive data, or if the workflow touches client, donor, HR, safeguarding, or confidential material, we may recommend AI Governance QuickStart before or alongside the workflow upgrade.
Will this work with the AI tools we already pay for?
Usually yes. We design around the tools you already have access to (and the data boundaries that come with them). Where a different tool is materially better for your workflow, we’ll flag it and explain the tradeoff, but we don’t sell or resell tools.
Which track is right for us if we have several candidates?
That’s exactly what wAI-Finder is for. If you’d rather decide internally, the rough heuristic is: pick the workflow that’s most repetitive, most time-consuming, and where the inputs and outputs are well-defined. Avoid workflows where most of the value is in live relationship-building, negotiation, or judgment that cannot be meaningfully formalised.
What if the workflow doesn’t save us time?
That’s why we measure a week 1 baseline and a week 4 test on the same kind of output. If the experiment shows no time savings and no quality improvement, the post-experiment memo says so and explains why. Not every workflow is a fit for AI support, and finding that out early is itself useful.
Who on our side owns the workflow after handoff?
Whoever does the work. We design the handoff session around the team that will actually run the workflow week after week, not the leader who commissioned it. The artefacts (template, prompt set, checklist) are documented for the doers, not for executives.
Can we upgrade more than one workflow at a time?
Yes, though we usually recommend running them in sequence. The first workflow becomes a working pattern the team understands. The second one is faster because the team knows what review gates look like and how to measure success.
What does “agentic” actually mean for this work?
Practical, bounded patterns that string a few AI steps together with checkpoints: a reviewer assistant that flags missing requirements, a draft check that compares output against a checklist, a research helper that retrieves past work with citations, or a workflow that routes a draft through several review stages with clear stop points. Not autonomous agents acting unsupervised.
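To make one of these patterns concrete, here is a minimal sketch of a checklist-based draft check, assuming a hypothetical set of requirements and a hypothetical sample draft; real checklists come out of the workflow design, and anything the check flags still goes to a human reviewer.

```python
# Minimal sketch of a checklist-based draft check (hypothetical requirements and draft text).
# Each entry pairs a plain-language requirement with a simple test; a human reviewer makes
# the final call on anything the check flags.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Check:
    requirement: str
    passes: Callable[[str], bool]


CHECKS: List[Check] = [
    Check("Names the funder's stated priority area",
          lambda draft: "climate resilience" in draft.lower()),
    Check("Includes a budget summary section",
          lambda draft: "budget summary" in draft.lower()),
    Check("Stays under the word limit (placeholder: 900 words)",
          lambda draft: len(draft.split()) <= 900),
]


def review(draft: str) -> List[str]:
    """Return the requirements the draft does not yet satisfy."""
    return [check.requirement for check in CHECKS if not check.passes(draft)]


if __name__ == "__main__":
    # Hypothetical draft that meets two requirements but omits the budget summary.
    sample_draft = (
        "Our programme strengthens climate resilience for coastal "
        "communities over twelve months."
    )
    for flagged in review(sample_draft):
        print(f"Flag for human review: {flagged}")
```

The same shape extends to reviewer-pass checks: each requirement stays readable to the team, and the output is a list of flags for a person to act on, not an automatic rewrite.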
