Why a familiar conversation about ethical AI feels different when it comes from the C-Suite
Ethical AI. Responsible use. Human oversight. None of this is new. Most people working in social impact have heard versions of it for a couple of years.
If I had to sum up AlignIQ, I’d say we’re a business that sets up AI impact frameworks for companies that make “making impact” their business. We’ve said this so many times we risk yawns from the repetition (if not from the tautology of the sentence).
But there’s a new messenger in town, and that’s what’s making all the difference, especially for the legitimacy of our business. Scott Brighton’s Forbes piece, “Ethical AI Starts Here: Lessons From The Nonprofit Playbook,” didn’t really tell me anything new, but it reframes AI ethics away from academics, compliance, and tech thinkpieces. Instead, it treats ethics as executive leadership practice: top-down, on behalf of a corporate foundation, and published in the mainstream.
For the past few years, ethical AI has largely lived in a document-based, not a do-based, context. What makes this Forbes piece feel different is that it models leadership rather than simply piling on recommendations. Far from a compliance checklist, this is a nonprofit CEO telling other moneyed people, those with skin, leadership, or just plain money in the foundations game, that there is a responsible way to lead the AI revolution within a nonprofit organization.
Why should I care? Policies can be delegated. Leadership cannot.
When ethical AI is communicated as an executive stance rather than an operational requirement, it changes how organizations internalize it. The question stops being “Are we following the rules?” and becomes “Is this aligned with how we be?” That puts AI ethics where it ought to be: out of the weeds of legal review and into strategic decision-making.
Ethical AI can stop being a side conversation! It can stop being a constraint!
It’s part of business operations.

Lemme work in some juicy jump-outs that wouldn’t just be backlinks to our website.
“Trust has always been the nonprofit world’s currency, but even that can lose value when technology outpaces responsibility.”
This connects to the sense that AI is moving so fast that no reflection can catch up to it. Do folks risk chucking the baby with the bathwater (AI slop!)? Or do nonprofits need strategies to use AI for more than speed or efficiency, actually turning creative ideas into practicable ways of moving forward?
“By modeling ethical AI now, nonprofits set the standard for every sector.”
Hard agree. I don’t think most companies use the nonprofit playbook to do business, but I do think that businesses, in their efforts to raise capital and get good PR, research how to operationalize do-goodery. And there are plenty of reputable companies that serve as poster children for doing AI wrong: Deloitte, Adobe, and McDonald’s (at least famous in the Netherlands for this knee-slapper) are just a few whose examples other companies want to avoid.
“Governance is second nature in the nonprofit world. Funders and boards already expect full transparency, and that same discipline must guide how organizations adopt AI.”
If anything, this settles my nerves about product-market fit. There should be plenty of nonprofit orgs that like us, that really like us and what we do.
BUT what it signals could also help us shift our positioning. If nonprofit CEOs are publicly articulating AI norms, then organizations no longer need hype translators or basic “what is AI” explainers. Luckily, we also specialize in operationalizing values, translating them into workflows, and building shared literacy across org teams.
Ultimately, when this CEO writes publicly about ethical AI, he’s signaling that AI isn’t to be feared, but it does require deliberation. Leading by example may run even deeper than infrastructure. Staff with clashing ideas about using AI, and middle managers telling individual contributors different things about what is or isn’t allowed: these are things to move past. Staff should now ask better questions of each other… not just ChatGPT.
Every month, I try to seek out a solid “AI for Good” story as evocative content we can wrestle with, and I thought December would be interesting because so many organizations are kicking into high fundraising gear and may be building their PR machines around solid narratives AND tech acceleration.
I guess this month, the real story was that a leader used a mainstream platform to tell us that he wants to use AI responsibly! Lowkey, this is how AI actually gets aligned with human values.
Wanna know more about what we be about?
Why I Started AlignIQ: A Vision of Human-Centric AI: Ethical AI frameworks matter at a leadership level.
Aligning AI With Purpose: Our Do’s & Don’ts: Responsible use of ethical AI.
Delegating to AI: A Small Experiment in Letting Go: Human oversight as the real norm-setter.

