by Dan Garmat + Caroline Stauss
The landscape of nonprofit fundraising is shifting dramatically as artificial intelligence reshapes how people discover and donate to causes they care about. In a recent conversation, we explored insights from the Whole Whale podcast hosted by George Weiner (Chief Whaler [?!]) and Nick Azulay (Senior Digital Strategy Manager). Appropriately for us, their podcast focuses on data, tech, and AI trends in the nonprofit sector. Now that we're slithering out of 2025 and galloping into '26 (thanks, Chinese zodiac), Dan and Caroline want to piggyback on this conversation: what do these changes mean for organizations trying to make a difference?

The Shadow Donation Problem
One of the more concerning developments is what’s being called “shadow donations.” This occurs when platforms like GoFundMe create optimized pages for well-known organizations like the Red Cross—but these pages aren’t actually operated by those nonprofits. By using search engine optimization tactics, these intermediary pages rank higher than the actual nonprofit websites, meaning donations flow through third parties who take a cut before the money reaches its intended destination.

This raises an important caution: while AI tools can be helpful for researching organizations that align with your values, it’s increasingly important to verify you’re donating directly through official channels rather than intermediary platforms.
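If you maintain donation links for your organization (or build tools that surface them), this check can even be automated. Here's a minimal sketch, assuming a hand-maintained allowlist of official domains; the domains shown are illustrative, and real verification should start from the nonprofit's own site or a registry like the IRS Tax Exempt Organization Search:

```python
from urllib.parse import urlparse

# Hypothetical allowlist you maintain yourself; these entries are examples.
OFFICIAL_DONATION_DOMAINS = {
    "redcross.org",
    "donate.redcross.org",
}

def is_official_donation_link(url: str) -> bool:
    """Return True only if the link's host is an official domain (or a subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DONATION_DOMAINS)

print(is_official_donation_link("https://donate.redcross.org/give"))  # True
print(is_official_donation_link("https://gofundme.com/red-cross"))    # False: intermediary
```

A plain allowlist is deliberately strict: anything not explicitly official, including lookalike and intermediary pages, fails the check.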
The Rise of Answer Engine Optimization (AEO)
As more people turn to ChatGPT and similar tools instead of traditional search engines, a new challenge has emerged. Previously, someone searching for nonprofit information would land on an organization’s website, browse around, and potentially sign up for emails or make a donation. Now, AI tools extract information directly and present it to users without driving traffic to the actual sites.
This shift from Search Engine Optimization (SEO) to Answer Engine Optimization (AEO), sometimes called GEO (generative engine optimization), already shows measurable impacts: less site traffic, fewer email subscribers, and fewer donors. The problem is that unlike traditional SEO, there's no clear playbook yet for how nonprofits can optimize for AI-powered search.
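There may be no playbook yet, but you can at least measure the shift. Here's a minimal sketch that tallies hits from known AI crawlers in a standard combined-format web access log; the user-agent strings are real at the time of writing, but this list is partial and changes fast, so check each vendor's documentation:

```python
from collections import Counter

# Known AI crawler/assistant user-agent substrings (partial, fast-moving list).
AI_AGENTS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot"]

def count_ai_hits(log_path: str) -> Counter:
    """Tally requests from AI user agents in a combined-format access log."""
    counts: Counter = Counter()
    with open(log_path, errors="ignore") as log:
        for line in log:
            for agent in AI_AGENTS:
                if agent in line:
                    counts[agent] += 1
    return counts

# Usage (the path is an example; point it at your own server's access log):
# print(count_ai_hits("/var/log/nginx/access.log"))
```

Tracking these counts alongside your human referral traffic gives you a rough read on how much of your content is being consumed by answer engines rather than visitors.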
Quality Over Quantity: The Content Pendulum
For years, the advice was to create high volumes of content, like listicles, blog posts, and “10 ways to…” articles designed to rank in search results. But AI can now generate this “Tier 2 content” instantly, flooding the internet with AI slop.
The prediction we discussed was that the pendulum is swinging hard back toward quality. Organizations need to focus on "Tier 1 content": stuff that's based on genuine knowledge, expertise, and mission-driven insights from real people with real experience. This authentic content can stand out in an increasingly artificial landscape.
The Return to Personal Connection
Perhaps counterintuitively, as the digital world becomes more automated, person-to-person contact is becoming more valuable than ever. While AI can handle many of the routine tasks in development work, the ability to create sincere, personal connections that cut through the noise requires untemplate-able human skills.
This means nonprofit professionals need to develop new approaches to relationship-building that require genuine creativity and human connection. As we discussed in Wanted: Nonprofit Staff, the future of nonprofit work requires balancing technical capability with deep human judgment.
Measuring What Matters
In times of uncertainty and resource constraints, it’s tempting to track everything. But staying sane and effective means focusing on what truly matters. The recommendation: if you’re going to track anything, operationalize your mission.
Rather than trying to maintain all your previous measurements during a period of cutbacks, identify what impact you’re still having and create new metrics that reflect that reality. When you can bring your attention back to your mission through concrete, measurable indicators, you have both a grounding mechanism and data on whether your strategies need to change.
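As a toy illustration of what "operationalize your mission" might look like in practice, here's a minimal sketch; the food-bank example, metric names, and numbers are entirely made up:

```python
from dataclasses import dataclass

@dataclass
class MissionMetric:
    """One concrete, measurable indicator tied directly to the mission."""
    name: str
    value: float
    target: float

    @property
    def on_track(self) -> bool:
        return self.value >= self.target

# E.g., a food bank might operationalize "reduce hunger" as meals per dollar
# and households reached, rather than carrying every legacy KPI through a cutback.
metrics = [
    MissionMetric("meals_per_dollar", value=2.7, target=2.5),
    MissionMetric("households_reached", value=1180, target=1500),
]

for m in metrics:
    status = "on track" if m.on_track else "revisit strategy"
    print(f"{m.name}: {m.value} (target {m.target}) -> {status}")
```

The point isn't the code; it's that two or three mission-grounded numbers you actually review beat a dashboard of everything you used to track.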
Creating a Practical AI Policy
Here's a reality check: staff at your organization are already using AI tools, whether or not you have a policy in place. The goal shouldn't be to pile up so many rules that people feel unsafe experimenting. Instead, set boundaries and constraints that enable safe experimentation.
Key elements of a practical AI policy include:
- Use accounts with data training turned off: This is the strongest protection you can implement
- Define how to handle protected information: Be clear about what shouldn't be shared with AI tools (see the redaction sketch below)
- Invest in proper tools: For nonprofits, this means taking advantage of discounts (20% from OpenAI, up to 75% from Anthropic) and using team accounts where data protection is built in
- Enable experimentation: The technology is too powerful to leave unused because of per-person costs
The alternative—relying on individuals to remember to turn off data training—creates unnecessary risk for a relatively small cost savings.
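To make the protected-information rule concrete, here's an illustrative sketch of scrubbing obvious donor PII before text leaves your organization. The patterns are deliberately crude assumptions, not a real solution; production use calls for a dedicated PII tool and human review:

```python
import re

# Crude, illustrative patterns only; real policies need proper PII tooling.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious donor PII with placeholders before sharing with AI tools."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Donor Jane Doe (jane@example.org, 555-867-5309) pledged $500."
print(redact(note))
# -> "Donor Jane Doe ([EMAIL], [PHONE]) pledged $500."
# Note the name still slips through: regexes alone aren't a policy.
```

Pairing a scrub step like this with training-disabled team accounts covers both directions: what goes in, and what the vendor does with it.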
The “Meat Tokens” Concept
One fascinating framework is thinking about "meat tokens versus intelligence tokens": essentially, human mental energy versus AI processing power. There's a limited amount of mental energy we can expend day in, day out. AI tools, while not 100% reliable, can now perform at a PhD level in a few limited situations.
The strategic question becomes: what work truly requires your best human judgment, and what can you safely delegate to an AI assistant? By conserving your “meat tokens” for work that genuinely requires your unique perspective and expertise, you can have greater overall impact.
Looking Ahead to 2026
One common concern folks have about using AI is the environment. Resource usage isn't nearly the problem the water-bottle TikTokers want you to think it is (they know it takes more water to pump up the water they're dumping out to make a point, right?). The reality is that constructing data centers represents the largest environmental cost, not operating them. For nonprofits trying to maximize their impact with limited resources, the question isn't whether to use AI tools, but how to use them strategically while maintaining the human elements that make your work meaningful.
Two other major threats loom for the coming year:
- Content dilution (AI slop): This is guaranteed to be an issue. Guard against it through proper training and clear policies.
- Platform dependency: The dismantling of regulation in 2025 means we haven't yet seen all the impacts on business ethics and platform behavior. Expect 2026 to be challenging in this regard.
By the end of 2026, total AI avoidance will likely be impractical for most organizations. Instead, come up with routines to use AI tools thoughtfully while preserving what makes your organization valuable: your authentic voice and actions.
The organizations that will thrive won't be those that resist these changes or those that thoughtlessly automate everything. They'll be the ones that strike the right balance: leveraging AI for efficiency while doubling down on the real human connections and mission-driven work that no algorithm can replicate.
[In this post, we used AI for polish, not purpose.]

