AI is incapable of the curiosity and empathy that come from experiencing the world, so it can’t help with the core work of social missions or scientific research. Wanting to learn about and improve the world grows out of our experience, but the work to do those things doesn’t always need a direct connection to that primary motivation, as we know all too well. Some of that work can be delegated to those who don’t share the vision, and this is where AI can shine. As Christian Cawley points out in a TechRadar piece, AI appropriately handles dozens of mini-tasks and, in the process, amplifies the core impact of research work. As the article says:

“Study and research are being supercharged by AI, which has heralded a new age of industrial-scale theoretical exploration.”

Mission-amplification comes because “the heavy lifting of manual research is now being done by AI.” Meanwhile, “the researcher remains in control, defining tasks and rules.” In a publish-or-perish world, skillfully drawing lines between what computers can do and what humans must do is necessary to be ready for tomorrow. “Monitoring processes and overseeing AI is vital to ensuring the end results meet the demands of the research,” and doing that well involves evolving knowledge and skills. This supports the case that the most mission-critical work is unsafe to fully automate, since “AI results must be balanced with human judgment, insight, and creativity.”

So, among the tasks that don’t require human empathy and curiosity, which can AI do in research, and which can’t it? And can the answers be applied more generally to “supercharge” the core mission work of any nonprofit or B Corp?

Tasks to consider delegating

First, consider the tasks the article says AI already helps with successfully. Here we’ve added analogous tasks for any mission. These example analog tasks, although valuable and sometimes rewarding, can consume time and crowd out the team’s unique contribution. They’re tasks a team might like to delegate to a competent assistant, grouped here by stage of the research process:

Discover

| Action | Research task (mentioned by TechRadar) | Analog for nonprofits / B Corps |
| --- | --- | --- |
| Automate | literature reviews | scans for best practices and funder priorities |
| Identify | related content | adjacent case studies, model programs, or potential funders & partners |
| Simplify | texts with NLP | policies, grant drafts, and forms into plain-language versions |
| Decipher | complex docs/datasets | extract requirements and deadlines from RFPs & regulations |
| Bridge | disparate academic disciplines | popular methods carried from one sector to another |

Hypothesize

| Action | Research task (article) | Analog for nonprofits / B Corps |
| --- | --- | --- |
| Generate | hypotheses at an accelerated rate | proposals for program tweaks to lift enrollment and engagement |
| Reveal | gaps & overlooked hypotheses | underserved segments & opportunities |
| Triage | viability of ideas, allowing for “shotgun” multi-hypothesis testing | pilot ideas before field tests |
| Model | theoretical approaches | scenario cost/impact trade-offs |

Analyze

| Action | Research task (article) | Analog for nonprofits / B Corps |
| --- | --- | --- |
| Collect | data | ingest public data, CRM info, and exported logs |
| Mine | data for correlations | surface patterns in donations, attendance, or intake forms |
| Analyze | & process rapidly | generate descriptive stats for outcomes by site or segment |
| Detect | new patterns | early-warning signals for risk of dropout or high service demand |
| Interpret | beyond human capacity | multiple joined data sources to see who benefits and who is left out |
| Visualize | data | executive “bird’s-eye” overviews & equity breakouts for boards |
| Synthesize | data to test new methods | dashboards |

Write

| Action | Research task (article) | Analog for nonprofits / B Corps |
| --- | --- | --- |
| Provide feedback | on writing clarity | tighten grant narratives & check readability |
| Prep | citations & bibliography | complete references in evaluation reports |
| Preprocess | to accelerate the peer-review step | a compliance sweep for required disclosures or citations & a pre-mortem simulation of a tough reviewer |
| Detect | citation issues | dubious or dated sources before publication |

Govern & Operate

| Action | Research task (article) | Analog for nonprofits / B Corps |
| --- | --- | --- |
| Summarize | papers & reports | 80-page research reports or RFPs into action lists or briefs |
| Find | potential reviewers | outside advisors or ethics reviewers |
| Handle | previously tedious steps | de-dupe, reformat, batch tagging, redaction, transcription, OCR, form prefill, email templating, codebook generation, data-entry validation |
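To make the “previously tedious steps” row concrete, here is a minimal sketch, in Python, of the kind of de-dupe and data-entry-validation pass a team might delegate. The contact-list shape and the `name` and `email` column names are hypothetical, standing in for an exported CRM file:

```python
import csv
import io
import re

def dedupe_contacts(rows):
    """Drop rows whose normalized email already appeared (keep the first)."""
    seen, kept = set(), []
    for row in rows:
        key = row["email"].strip().lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(row)
    return kept

def validate_row(row):
    """Flag common data-entry problems; returns a list of issues (empty = clean)."""
    issues = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", row["email"].strip()):
        issues.append("bad email")
    if not row["name"].strip():
        issues.append("missing name")
    return issues

# Tiny inline sample standing in for an exported CRM file.
sample = "name,email\nAda Lovelace,ada@example.org\nAda L.,ADA@example.org\n,not-an-email\n"
rows = list(csv.DictReader(io.StringIO(sample)))
clean = [r for r in dedupe_contacts(rows) if not validate_row(r)]
print([r["name"] for r in clean])  # the duplicate and the invalid row are dropped
```

Whether this step goes to a script, a volunteer, or an AI tool, the point of the table stands: it is rote work that consumes capacity without needing the mission’s human judgment.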

The analogs show how uneven the set of tasks AI tools can handle really is. Their capabilities are sufficient for some organizations and workflows, but not for others. Finding the opportunities available today means marrying reported successful use cases to your own workflows and testing whether the AI tool can, in fact, free up capacity for the human aspects of your work.

This is an act of applied creativity that benefits from experimentation and experience. But acquiring that experience depends on safe experimentation, which in turn depends on understanding where people need to keep oversight when working with these tools.

Tasks the AI assistant can’t do

TechRadar’s piece is clear that while today’s new crop of advanced tools can augment work for a mission, humans must set purpose, verify, and decide. It identifies the required human oversight steps throughout the research process that are relevant to any effort to enlist AI’s assistance in creating social value.

Discover

| Human oversight step | Impact / what it changes | Risk if skipped |
| --- | --- | --- |
| Define tasks & rules for AI | Keeps scope & data aligned to mission & within policy | Irrelevant outputs; privacy mistakes |
| Maintain healthy skepticism | Prevents plausible nonsense from spreading | Acting on wrong summaries & translations |
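The first row, defining tasks and rules for AI, can be as simple as an explicit allow-list checked before any work is handed to a tool. A minimal sketch, with hypothetical task and data-class names:

```python
# Hypothetical responsible-use rules: which task types may be delegated to AI,
# and which data classes must never be sent to an external tool.
RULES = {
    "allowed_tasks": {"summarize", "literature_scan", "dedupe"},
    "protected_data": {"client_pii", "health_records"},
}

def may_delegate(task, data_classes):
    """True only if the task is in scope and touches no protected data."""
    in_scope = task in RULES["allowed_tasks"]
    touches_protected = bool(set(data_classes) & RULES["protected_data"])
    return in_scope and not touches_protected

print(may_delegate("summarize", ["public_reports"]))  # True: in scope, public data only
print(may_delegate("summarize", ["client_pii"]))      # False: touches protected data
print(may_delegate("draft_policy", []))               # False: task not in scope
```

Even a check this small operationalizes the table’s point: scope and data policy are decided by people before the tool runs, not after.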

Hypothesize

| Human oversight step | Impact / what it changes | Risk if skipped |
| --- | --- | --- |
| Select and own hypotheses | Focuses tests on real, achievable impact questions that merit testing | Chasing shiny but low-impact ideas |
| Design ethical pilots | Protects people; yields reproducible results | Biased samples; consent gaps |

Analyze

| Human oversight step | Impact / what it changes | Risk if skipped |
| --- | --- | --- |
| Higher-level analysis, strategy, & creativity | Allows for contextualizing outputs & seeing implications | Dashboard theatre; misallocated effort |
| Monitor & oversee AI | Catches bias, performance decay, and out-of-scope use | Quiet errors at scale |
| Guard against bias & inaccuracies | Protects equity and research integrity; keeps analyses fair and reliable | Skewed conclusions; inequitable resource allocation; credibility loss |

Write

| Human oversight step | Impact / what it changes | Risk if skipped |
| --- | --- | --- |
| Verify & challenge results | Preserves accuracy and trust | Hallucinated facts; misfit bland tone |
| Authorship & originality guardrails | Reliability, trustworthiness, and originality of output | Plagiarism risk; credibility loss |

Govern & Operate

| Human oversight step | Impact / what it changes | Risk if skipped |
| --- | --- | --- |
| Set guidelines & governance (responsible-use policies) | Shared rules, literacy, and enforceable practice as a foundation | Inconsistent practices; reputational risk; lost opportunities for responsible users |
| Protect privacy & security | Protects people and orgs | Data leaks; contractual breaches |
| Ongoing stakeholder dialogue | Ensures AI remains an advanced tool | AI becomes a replacement for original thought and decision-making |
| Beware over-dependence | Preserves human agency and critical thinking; prevents deskilling and false certainty | Passive use of AI; weaker problem-solving; misplaced trust in plausible but wrong outputs |
| Balance with human judgment | Aligns outputs with context, values, and stakeholder standards; sustains legitimacy | Mechanistic decisions; tone/fit problems; loss of trust with funders and communities |
| Make final decisions | Accountability and fairness | Optimizing metrics over mission |


While these seem like common sense, they require uncommon intentionality in practice. As the piece argues, “responsible use must be governed, and that is where guidelines are required.” Setting those guidelines tends to be led by human experts with an eye toward changing the world.

Conclusion

In the same way AI is supercharging research, it can supercharge any mission: just as science forms a hypothesis and tests it, socially beneficial work forms a hypothesis and tests it. Yet for social missions, there’s no hope of today’s AI experiencing genuine empathy, so the human drive to learn and improve the world remains irreplaceable. However, not all of the work of an organization with a social mission requires this level of human sensitivity.

To identify the parts of workflows that can be delegated to AI, experimentation is required. Experimentation benefits from understanding the limitations of these systems and where human oversight is needed. That, in turn, benefits from clear rules and guardrails, and establishing those is the work of AI governance.

More on humans’ necessity to be the true creators.

More on humans’ necessity to be humane creators.

More on our founding human’s drive to build AlignIQ.

[In this document, we used AI for polish, not purpose.]

