Part 1: Introduction
AI is useful for charities now. Not in theory, not with caveats, not in 2027. Practical applications work and most organisations can start getting value out of them today. That's a shift from when we wrote the first edition of this playbook a year ago, when a lot of charity leaders hadn't tried AI themselves yet. Most now have, and the conversation has moved on from "should we?" to "where does this actually help?"
In the past year we've worked with charities doing things with AI that nobody would have expected two years ago. We've also spent time with organisations still trying to figure out where to start. Both of those experiences shaped this book. If you're already well into your AI journey, there's plenty here on the harder organisational questions. If you haven't started yet, it's never too late, and the starting points are easier than they were a year ago.
Here's what we've learned. The organisations getting the most from AI don't start with the technology. They start with something they already understand well: feedback they can't get through fast enough, donors they're losing without knowing why, case notes that sit unread because nobody has time. AI turns out to be good at exactly that kind of work. But it only gets you somewhere useful if someone who understands the problem is driving it.
The harder bit is organisational. It always is. The technology mostly works. But changing how people work, getting data into shape, that's the real job, and it's not a new problem.
This technology is moving fast enough that nobody can predict the detail of where it goes. But that's not a reason to wait. The organisations getting the most from AI started with small experiments and built from there. You can do that at any point, and the starting points are better now than they were a year ago.
This edition is structured around how charities actually work. Parts 1 to 3 cover the context and the organisational questions that apply regardless of what you're using AI for. Then you'll find practical guidance by function: fundraising, service delivery, operations, impact measurement, communications, and data. Each section includes recipes for specific problems, available as step-by-step guides at AI Recipes for Charities. Start with a problem you have, find the matching section, and see what's involved.
Pick a problem. Try a recipe. See what you learn.
1. What's changed since the first edition?
The most significant change is direct experience. When we wrote the first edition, many charity leaders either hadn't used generative AI themselves, or were using it infrequently. Now almost everyone has tried it, formed their own impressions, and is asking harder questions about where it adds value.
The technology has moved fast, fast enough that changes which would normally take years have happened in months. A year ago, AI was good at generating text when asked. Now it can sustain complex, multi-step work: building applications, analysing datasets, managing workflows across multiple systems. Things that required a developer six months ago can now be done by someone who understands the problem.
Last year we wrote that "AI is also now multimodal - you can upload images, documents, spreadsheets and audio files for analysis", which seems a bit quaint now. The tools have stretched so far beyond the it-can-also-look-at-pictures-now stage that what felt cutting-edge last year already feels routine.
AI has become part of the standard tools charities use: Microsoft 365 Copilot, Salesforce Einstein, features within Google Workspace. Many organisations are paying for AI without consciously adopting it. Whether that's delivering value is another question.
AI now takes actions, not just generates content. These "agentic" systems browse the web, send emails, update databases, and complete multi-step tasks. They're in the tools charities already use, and people are adopting standalone agents independently, often before their organisation has a policy for them. We cover agentic AI in more detail later.
The sector context has shifted too. Financial pressures have intensified: redundancies, funding cuts, increased demand. AI could help stretched teams do more, but implementing it requires investment that feels impossible right now. We've tried to address this with practical advice and implementation guides.
2. The pace is not slowing down
The capabilities available now are well beyond what existed when we wrote the first edition of this playbook, and that was only a year ago.
The recent leaps have been particularly striking. Tools like Claude Code can now build functional applications, create internal tools, and automate workflows without deep technical expertise. AI can write, test, and debug code across entire projects, not just suggest snippets. To give one example: Hearing Dogs for Deaf People used AI-assisted prototyping to test three new revenue concepts in a three-week sprint, producing financial projections, mock product photography, and functional websites. None turned out to be viable - but finding that out quickly and cheaply, before committing real investment, was the point.
This isn't just about coding. The models themselves have become substantially more capable at reasoning, at handling complex tasks, at working with large amounts of information. Features that felt experimental a few months ago are now reliable enough to build on.
One way to see how much has changed: the research organisation METR tracks how long AI can work independently on real tasks that require judgment and problem-solving. In mid-2024, the best models could reliably handle tasks of about three to five minutes. By early 2025, that had tripled to around fifteen minutes. By late 2025, it had reached roughly half an hour. The trajectory is exponential, with capability doubling roughly every seven months (METR).

For charities, this creates a genuine challenge. How do you plan when things shift this quickly? How do you make investment decisions when the tools might be different in six months? There's no easy answer, but the organisations navigating this well tend to share some characteristics: they have someone watching the horizon, they build in regular review points, and they hold their goals firmly while staying flexible about how they get there.
The risk isn't just moving too fast. It's also moving too slow - waiting for things to settle when they might not settle for years, and falling further behind organisations that are learning by doing.
The risk isn't just moving too fast. It's also moving too slow - waiting for things to settle when they might not settle for years.
3. The enthusiasm isn't universal
Not everyone is convinced. A quarter of UK workers fear losing their jobs to AI in the next five years (Acas). Forty percent say they'd be fine never using it again. The anxiety isn't unfounded: over 50,000 job cuts in 2025 were directly attributed to AI, with the tech sector hit hardest (Challenger, Gray & Christmas). The roles most affected are those involving routine information processing.
The gap between how experts and the public see AI is stark. Seventy-six percent of AI experts believe the technology will benefit people, but only 24% of the general public agree. Forty-three percent think AI is more likely to harm them than help them (Pew Research, 2025). This matters for charities because your beneficiaries and supporters are part of that public, not the expert group.

The gap between leadership and staff is striking. Forty percent of C-suite executives report saving eight or more hours a week with AI. Two-thirds of non-management workers say they're saving less than two hours, or nothing at all (SectionAI, January 2026). People are being asked to adopt tools that might make their roles redundant, while also being told those same tools will make their work better. That's a difficult message to reconcile.
There's also some scepticism about whether AI is delivering on its promises. Microsoft has cut Copilot sales targets. Gartner predicts 30% of generative AI projects will be abandoned after proof-of-concept (Gartner, July 2024). Talk of an AI bubble is now constant.
Much of the big investment is chasing artificial general intelligence: systems that could match human capabilities across all domains. That may or may not arrive. But the practical applications in this playbook, using AI to analyse feedback or forecast demand, are more modest. They work now, and they don't depend on the next breakthrough.
The practical applications in this playbook work now, and they don't depend on the next breakthrough.
4. Infrastructure dependency and the environmental impact
The dominant AI tools are American: OpenAI, Anthropic, Google, Microsoft. Most charities already depend heavily on US-controlled infrastructure for email, cloud storage, CRM, and collaboration tools. AI adds another layer to an existing dependency.
With an unpredictable US administration, this carries risks that would have seemed abstract a few years ago. What happens if policy changes affect access, pricing, or data handling? There's no easy answer, but this is probably a conversation for board level: understanding where your organisation is building dependency on infrastructure you don't control, and what that means for resilience.
The environmental impact of AI is real, and it's growing. Charities with sustainability commitments, and those working on environmental issues, need to understand what's happening, even if individual organisational use is a small part of the picture.
The scale of the problem
Data centres currently consume around 415 terawatt-hours of electricity globally, roughly 1.5% of all electricity used worldwide. The IEA projects this will more than double by 2030, reaching 945 TWh, equivalent to the entire electricity consumption of Japan. AI is the main driver of that growth, with electricity demand from AI-optimised data centres expected to quadruple by the end of the decade (IEA).
Water is a growing concern too. Data centres need large volumes for cooling, and consumption is rising fast - a single large data centre in Iowa used a billion gallons in 2024 (Google). In absolute terms the current scale is small: all US data centres together account for less than 0.01% of freshwater consumption, and a single AI prompt uses about two millilitres (Masley, 2025). But the growth rate matters, particularly in regions already under water stress where data centres compete with homes and farms for supply.
Here in the UK, data centre expansion is putting pressure on both power networks and water supplies, particularly in South East England. Thames Water has flagged the significant volumes of water required for data centre cooling in the region. The Royal Academy of Engineering published a report in 2025 calling for mandatory environmental reporting by data centres and urging the government to embed sustainability as a criterion in AI policy and procurement.
The tension for environmental charities
There's a particular irony for charities working on environmental issues. AI can help you do your work better, analysing satellite imagery, modelling climate impacts, processing environmental data at scale, as Surrey Wildlife Trust's Space4Nature project demonstrates. But using it contributes to the very problem you're trying to solve.
This isn't a reason to avoid AI. It's a reason to be deliberate about when and how you use it, and to be transparent about the trade-offs. The same applies to any charity with net zero commitments or sustainability policies. AI should be part of those conversations, not exempt from them.
Individual charity use of AI is a tiny fraction of global AI energy consumption. The systemic questions (how data centres are powered, where they're built, how they're regulated) are policy questions, not organisational ones. But charities have a voice in those conversations, and environmental charities in particular have a responsibility to engage with them. The UK government's AI Opportunities Action Plan, published in January 2025, contained no mention of the environment. That's a gap worth pushing on.
We cover what this means practically for your organisation, including how to factor AI into sustainability reporting, in Part 2.
5. We can see (some of) the consequences now
When we wrote the first edition, much of the discussion about AI's impact was speculative. Now we’ve all witnessed the changes happening.
People are using AI for support. According to a Harvard Business Review analysis of thousands of online discussion posts, "therapy and companionship" is now the number one use case for ChatGPT, ahead of "organise my life" and "find purpose" (HBR, April 2025). AI is available at 2am, doesn't have a waiting list, doesn't judge, and responds immediately.
The tech industry is responding to this demand, developing AI products specifically for emotional support. And there's mounting evidence it can help: eight weeks of regular use of Therabot, a chatbot created by researchers at Dartmouth College, reduced symptoms in users with depression by 51% (NEJM AI, March 2025). Many participants reported feeling the chatbot cared about them.
Whether this is good or bad is complicated. AI might give inaccurate advice or miss safeguarding concerns. But when waiting lists are months long and services are overstretched, people are turning to what's available. For charities providing advice or support services, this raises difficult questions. Should you build your own AI-powered support, use AI to extend what you already offer, or focus on what AI can't do?
How people find information is changing. AI-generated answers now appear at the top of search results. People ask ChatGPT questions they would previously have typed into Google. The direction is clear: AI summaries can cause an 18-64% decline in organic traffic (Bounteous), and around 69% of searches now result in no clicks at all as users get what they need directly from the search engine, up from 56% a year earlier (Similarweb).
For charities, the impact is already measurable. Between 2024 and 2025, charity website traffic growth slowed from 15% to 12.5%. More concerning: charities experienced the single biggest drop in ranking pages of any sector analysed, falling 28.9% (Tank, October 2025). AI Overviews now appear in one in five searches (Pew Research), and when they do, click-through rates drop by roughly 30% (BrightEdge). Google Ad Grants campaigns report average drops of 47% in click-through rates and 60% in actual clicks (Torchbox). A study of 17 US nonprofits found organic traffic down 13% despite brand name searches increasing by 19% - people are looking for these organisations, they're just not landing on their websites (M+R).

The shift is changing how people look for charities too. They're no longer searching "donate to homeless charity." They're asking: "Which charities are actually making a difference for homeless people?" AI systems answer by reading and synthesising information across the web. Whether your charity is part of that answer depends on choices you're making now about your digital estate. For charities that exist to provide information and support, this is a strategic question. If people increasingly get answers from AI instead of from you, what does it mean for your role? We address this in detail in Part Two.
Knowledge itself is being shaped. AI-generated content now makes up a significant proportion of what's published online. Synthetic voices narrate podcasts. AI-generated images illustrate articles. As AI systems increasingly mediate our access to information, there's a risk that understanding of issues gets filtered or distorted in ways that are hard to detect.
6. Technology might be catching up with our problems
One of our observations from the past year is that technology might actually be catching up with some longstanding organisational problems. AI is particularly good at tasks that used to require either significant human time or specialist expertise: making sense of unstructured text, cleaning messy data, processing documents at scale, spotting patterns across large volumes of information. These are exactly the bottlenecks many charities have lived with for years because the solutions weren't affordable or accessible.
This doesn't mean AI solves everything. But it does mean some problems that felt intractable might now be worth revisiting.
The most important thing you can bring to AI isn't technical knowledge, it's deep understanding of your own work. The person who's been running your helpline for ten years, the fundraiser who knows why certain appeals work, the programme manager who understands why cases get stuck: these are the people who can identify the problems worth solving.
The most important thing you can bring to AI isn't technical knowledge, it's deep understanding of your own work.
7. AI literacy or AI solutions?
Should you be investing in AI literacy across your organisation, or in specific AI solutions that solve problems? The honest answer is probably a bit of both.
Generic AI literacy has limits. Training everyone on prompting techniques or how large language models work doesn't automatically translate into better fundraising, better service delivery, or better data management. The range of AI use cases and application types is vast. We identified 89 recipes for this collection and could easily have found hundreds more. Expecting anyone to understand all the applications is unrealistic.
What matters more is domain-specific understanding. Fundraisers don't need to understand AI in theory but they do need to understand what AI can do for fundraising: personalising appeals, spotting lapsing donors, analysing campaign performance. Service delivery teams need to understand the capabilities of AI for triaging enquiries, summarising case notes and scaling support. The recipes are organised by function for exactly this reason.
For individual productivity, people do need enough literacy to use the tools themselves: knowing what AI is good at, where to be sceptical, and how to get useful outputs. This comes from experimenting with real work in their own domain, not from generic training. The National Fire Chiefs Council found this when building AI capability across UK fire services. Generic AI training would have meant nothing to fire professionals. What worked was connecting AI to their actual operational challenges: analysing building inspection reports, improving fire detection, making better resource decisions. Sixty professionals came away able to evaluate AI tools critically and engage with vendors from a position of knowledge rather than uncertainty.
For embedded workflows, literacy probably matters less. When AI is implemented well, the team using it doesn't need to know anything about AI. Breast Cancer Now's survey transcription system handles 20+ different NHS trust form variants. The team interacts with it as part of their normal workflow. They don't need to understand what's happening underneath. They're experts in their programme and the investment was in building the solution, not in building everyone's AI skills.
The measure of success isn't necessarily how many people have been trained. It's whether the work is better.
The measure of success isn't how many people have been trained. It's whether the work is better.
8. The shift to agentic AI
Until recently, most AI use in charities was conversational. You'd ask a question, get an answer, paste in a document and request a summary. The AI generated content, but you decided what to do with it.
Agentic AI works differently. Instead of generating responses, it takes actions: sends emails, updates databases, books appointments, moves files, executes code, browses the web. You give it a goal, and it figures out the steps to achieve it, choosing which tools to use along the way.
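To make "agentic" less abstract, here's a deliberately simplified sketch of the loop that sits underneath these systems. Everything in it is invented for illustration - the model is faked, the "tools" just return strings, and nothing is actually sent anywhere - but the shape is the point: the model is asked what to do next, the chosen tool is run, and the result is fed back in until the goal is declared done.

```python
# A deliberately stripped-down sketch of the loop behind an "agent".
# Everything here is illustrative: the model is a stand-in function and
# the tools only return strings, so nothing real happens.

def fake_model(goal: str, history: list[str]) -> str:
    """Stand-in for an LLM deciding the next step; a real system calls an API."""
    if not history:
        return "TOOL:search_supporters overdue thank-you letters"
    if len(history) == 1:
        return "TOOL:draft_email thank-you to the supporters found"
    return "DONE: drafts are ready for a human to review and send"

TOOLS = {
    "search_supporters": lambda arg: f"Found 12 supporters matching '{arg}'",
    "draft_email": lambda arg: f"Drafted email: '{arg}' (saved to drafts, not sent)",
}

def run_agent(goal: str) -> None:
    history: list[str] = []
    while True:
        decision = fake_model(goal, history)
        if decision.startswith("DONE"):
            print(decision)
            break
        _, rest = decision.split("TOOL:", 1)
        tool_name, arg = rest.split(" ", 1)
        result = TOOLS[tool_name](arg)  # this is where actions actually happen
        history.append(result)
        print(f"{tool_name} -> {result}")

run_agent("Thank supporters we haven't written to this quarter")
```

Real agent products wrap permissions, logging and human checkpoints around exactly this loop, which is where the governance questions later in this section come from.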
Agent capabilities are now part of the tools charities use: Copilot in Microsoft 365, Agentforce in Salesforce, features in your CRM that take actions on your behalf. If you've seen Copilot offer to send an email or book a meeting, you've already used agentic AI. Watching AI actually do things rather than just say things can feel genuinely strange. We've been working with this technology for a while now and it still catches us off guard sometimes.
The adoption has been rapid. OpenClaw, an open-source AI agent that connects to email, messaging, and calendars, was built by a single developer and went from a weekend project to 145,000 GitHub stars in ten weeks. Anthropic's Agent Teams feature coordinates multiple AI agents on shared work. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by the end of 2026, up from less than 5% in 2025 (Gartner, August 2025).
Multi-agent systems take this further. Rather than one AI agent working alone, multiple agents coordinate with each other: one researching, another drafting, a third reviewing. Anthropic's published research shows multi-agent systems outperforming single agents by 90% on research tasks (Anthropic). This matters for governance because when agents coordinate with other agents, not just with humans, traceability and oversight become harder still.
Building custom agents that connect to your specific systems still requires technical expertise, but general-purpose agents like OpenClaw are accessible to non-technical users. The gap between what requires engineering and what someone can set up themselves has narrowed significantly. Many organisations are encountering agentic AI through their existing tools without consciously deciding to adopt it.
The opportunity
Agentic AI is already taking on administrative work that absorbs disproportionate time: chasing information across systems, copying data between tools, sending routine updates, compiling reports from multiple sources. The NHS Copilot trial (covered in the operations section) showed staff saving 43 minutes a day on exactly this kind of work. That's real capacity being freed for work that needs human judgment.
The risks are different
But the risks are different from conversational AI. When AI is just suggesting text, you can ignore bad suggestions. When AI is taking actions, the consequences are more immediate.
When AI is just suggesting text, you can ignore bad suggestions. When AI is taking actions, the consequences are more immediate.
Accountability becomes harder. If an agent sends an email to a service user, updates a case record, or routes an enquiry to the wrong team, who's responsible? The person who set it up? The organisation? The AI provider? These questions don't have clear answers yet.
Traceability matters more. With conversational AI, you can see the exchange: what you asked, what it said. With agents taking multiple steps across multiple tools, understanding what happened and why becomes harder. If something goes wrong, can you reconstruct the chain of decisions?
Trust boundaries need defining. What should an agent have access to? Your email? Your CRM? Your finance system? The more access you give, the more useful it becomes, but also the more damage it can do if it goes wrong. Function calling hallucination is a real risk: the AI confidently using the wrong tool, or using the right tool incorrectly.
Errors compound. With conversational AI, a hallucination affects one response. With agentic AI, inaccurate information held in an agent's memory can affect multiple decisions over time. One wrong assumption cascades through subsequent actions.
Existing risks get amplified. Bias, hallucination, privacy concerns: these all become more serious when AI is acting, not just advising. An AI that gives biased advice is a problem. An AI that takes biased actions at scale is a bigger one.
Multi-agent complexity. When multiple agents coordinate, as they increasingly can, the traceability problem multiplies. Each agent may make reasonable decisions individually, but the interactions between them can produce outcomes none was designed for. Debugging what went wrong means understanding not just what each agent did, but how they influenced each other.
What the regulators are saying
The Information Commissioner's Office published guidance on agentic AI in January 2026 (ICO, January 2026). Their message is clear: despite the language of "agency" and "autonomy," organisations remain fully responsible for data protection compliance. The AI might be taking actions, but the accountability sits with you. Governance frameworks, however, have not kept pace with the speed of deployment. The gap between what agents can do and what regulators have guidance for is widening.
Some specific concerns for charities:
Purpose limitation. There's a temptation to give agents broad access to data so they can be more helpful. But "connect to everything and see what helps" conflicts with GDPR requirements to collect and use data for specified purposes. You need tight controls on what data agents can access, not open-ended permissions.
Inferring sensitive data. Agents may infer health conditions, support needs, or other special category data even when not explicitly given it. If you're working with vulnerable beneficiaries, you need to assess whether agents could infer sensitive information and either establish proper legal basis or implement technical measures to prevent it.
The right to explanation. When AI is involved in decisions that significantly affect people, they have rights to understand how those decisions were made. With agents taking multiple steps across multiple tools, providing meaningful explanations becomes harder.
Part 2: Charity digital in the age of AI
9. The website is no longer your front door
The fundamental shift
For twenty years, charity websites have been designed as destinations. You optimised for search engines that would send people to you. You measured success by traffic, time on site, bounce rates. The website was your digital front door.
That model is breaking down. You might rank first, but if AI summarises your content, people won't click through. You're being read, but not visited.
You might rank first, but if AI summarises your content, people won't click through. You're being read, but not visited.
The shift runs deeper than how people find charities. Many charities have become trusted content providers, creating health information, mental health guidance, rights advice, support resources with proper content standards and regular review processes. People are increasingly turning directly to AI for this content instead. The carefully reviewed charity expertise still informs what AI knows, but the trust relationship is lost.
Why charities are especially exposed
Charities are performing better than most sectors on raw traffic, largely because people searching for charity information are often intent-driven: they want to donate, volunteer, verify a cause. Those searches still result in clicks, for now. Across all industries, traffic growth collapsed from 26.3% to 3.7%, while charities slowed from 15% to 12.5% (Tank, October 2025).
But the content most affected is exactly the kind charities produce most: high-trust informational content explaining complex issues, educating people about conditions, clarifying rights and entitlements. If your content explains, educates, or provides information, Google's AI increasingly wants to summarise it directly rather than send people to you.
Google Ad Grants under pressure
For many smaller charities, Google Ad Grants have been the only realistic way to compete for digital visibility. The programme provides up to $10,000 per month in free advertising, and it changed how those organisations reached people online.
But the impact of AI Overviews on Ad Grants is severe. Lower click-through rates damage quality scores, making ads more expensive to run and less likely to appear prominently. Even when they do appear, they're often positioned below the AI Overview, competing for scraps of attention. The free advertising that levelled the playing field is becoming less effective, and organisations with paid advertising budgets are better positioned to adapt.
From keywords to questions
People used to search: "donate to children's charity." Now they're asking: "Which children's charities are actually making a difference?" "How much of my donation reaches the children?" "What's the difference between Save the Children and Barnardo's?"
They're not looking for your homepage. They're looking for answers. AI systems reward content that provides clear, direct, well-structured answers. The old SEO playbook focused on keywords and backlinks. The new reality prioritises what Google calls E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness.
Charities actually have an advantage here. You have experience because you're doing the work directly. You have expertise because your staff and volunteers understand the issues deeply. But that advantage only materialises if your digital content demonstrates it in ways AI systems can understand and reference.
The agentic web
Today, AI mostly summarises information and occasionally takes simple actions. But the shift toward AI that completes tasks is already underway. A supporter can now say to their AI assistant: "Donate £50 to a homelessness charity in Bristol." The AI browses the web, assesses options, and completes the donation without the supporter ever visiting a website. Or: "Sign me up to volunteer with an environmental charity on Saturdays." The AI finds options, compares them, and submits the application on the user's behalf. These scenarios are working in limited contexts today.
This is developing faster than most predicted. The building blocks - AI agents that can browse, evaluate, and act on behalf of users - are already functional. Nobody knows the exact adoption curve, and predictions about technology timelines are notoriously unreliable. But the gap between "possible in a demo" and "available to ordinary users" has narrowed sharply, and the direction is consistent enough that it's worth preparing for now rather than waiting.
If AI becomes a significant interface between your charity and the people you serve, your digital infrastructure needs to work for AI agents, not just human visitors. That means structured data, clear service descriptions, and information that machines can parse reliably, not just content that reads well to humans.
Competition for AI visibility
If AI becomes the primary way people discover charities, being invisible to AI is being invisible to donors. This creates a new form of digital inequality. Large charities with significant digital teams can afford to restructure their digital estates for AI visibility: implementing schema markup, building API endpoints, optimising for AI citations. Smaller charities risk being filtered out not because their work is less impactful, but because their digital presence doesn't meet AI systems' requirements for structured, easily parseable information.
There's also a reputational risk. AI systems don't always represent organisations accurately. If the information AI surfaces about your charity is outdated, incomplete, or drawn from unreliable third-party sources, you lose control of how people understand your work, and you may not even know it's happening.
Not all content is equally affected today. AI struggles to replace services people can use or act on: helplines, support groups, volunteer opportunities, interactive tools, real-time information, and transactional pages for donations and applications. AI more successfully replaces informational content: explanations of conditions or rights, general guidance, background information about causes, and educational content.
If your website is primarily informational, you're most vulnerable to AI displacement right now. If it provides services, tools, and actionable next steps, you're more resilient today. But that resilience has a shelf life. As AI agents become capable of browsing websites, filling in forms, and completing transactions on behalf of users, transactional content becomes vulnerable too. A donation page is safe from an AI Overview that summarises information. It's not safe from an AI agent that can navigate to it and complete the gift without the donor ever seeing your website.
What charities need to do now
Optimising informational content for AI to cite, and service pages for AI to find, is the right immediate priority. But the longer-term strategy can't simply be "make everything transactional." Charities will increasingly need to think about how their services work when accessed through AI agents, not just how their content reads when summarised.
The good news is that charities have shown resilience. Organic traffic growth is down, but not collapsed. Charities are adapting faster than many sectors. There are concrete steps any charity can take, falling broadly into two areas: making your content work for AI systems, and understanding how AI currently sees you.
Making your content work for AI systems
The shift from keywords to questions means your content strategy needs to start with what people actually ask. Look at your support emails, helpline logs, and social media messages. Structure content around those real questions, with clear direct answers in the first paragraph and evidence-based detail following. Use headings that reflect real questions. Make it easy for AI to extract and summarise accurately.
Structured data (sometimes called schema markup) tells AI systems what your information means. It's the difference between AI reading "We helped 1,000 people" and AI understanding "This organisation provided housing support to 1,000 individuals in 2025." This is technical work, but increasingly accessible. Tools like Google's Structured Data Markup Helper can generate the code, and some website platforms include structured data features. If you're working with developers or agencies, this should be a priority conversation, focusing on organisation schema that explains who you are, service schema describing what you provide and for whom, and event schema for activities and registration.
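To give a sense of what that conversation with a developer or agency might produce, here's a minimal sketch of organisation and service schema, assembled in Python purely so it can be printed as JSON-LD. The charity name, URL and service details are invented placeholders, and the exact properties worth including should be checked against schema.org and Google's structured data documentation rather than copied from here.

```python
# A minimal sketch of organisation and service schema as JSON-LD.
# The charity name, URL and service details are invented placeholders.
import json

organisation = {
    "@context": "https://schema.org",
    "@type": "NGO",
    "name": "Example Advice Charity",  # hypothetical organisation
    "url": "https://www.example-charity.org.uk",
    "logo": "https://www.example-charity.org.uk/logo.png",
    "sameAs": [
        "https://www.facebook.com/examplecharity",
        "https://uk.linkedin.com/company/examplecharity",
    ],
    "areaServed": "Bristol, UK",
}

service = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Free housing advice helpline",
    "provider": {"@type": "NGO", "name": "Example Advice Charity"},
    "audience": {"@type": "Audience", "audienceType": "People at risk of homelessness"},
    "areaServed": "Bristol, UK",
    "availableChannel": {
        "@type": "ServiceChannel",
        "servicePhone": {"@type": "ContactPoint", "telephone": "+44-117-000-0000"},
    },
}

# Each block would sit in a page inside <script type="application/ld+json"> tags.
for block in (organisation, service):
    print(json.dumps(block, indent=2))
```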
Your content also needs to demonstrate the expertise and trust that charities have. This means showing where content comes from: author bios, credentials, review dates, citations, expert review processes, and lived experience representation. If clinicians reviewed your health information, make that visible. If solicitors were involved in your legal guidance, say so. AI systems increasingly weight content that demonstrates genuine authority.
Beyond your own site, brand mentions matter more than they used to. AI systems increasingly draw on references to your organisation across the web, even without links. Social media presence, media coverage, being referenced in podcasts and discussions, appearing in sector conversations: these all contribute to how visible you are to AI. You can't control all of this, but you can influence it through PR, partnerships, and active participation.
Understanding how AI sees you
The simplest and most revealing step is to actually search for your organisation using AI tools. Ask ChatGPT about your charity. Search your services in Google's AI Mode. See what Perplexity says about your cause area and whether you appear. The results will tell you whether AI knows you exist, how accurately it describes your work, what sources it's citing, and where information is outdated or wrong. This isn't a one-time check. AI systems update regularly, and ongoing monitoring matters.
What this means for the people you serve
The shift to AI-mediated information access has serious implications for beneficiaries. If someone in crisis searches "mental health support near me," and AI summarises options without them visiting websites, several things can go wrong. AI might miss specialised services that would be most appropriate, recommend services at capacity or with waiting lists, misunderstand eligibility criteria, or not convey urgency or safeguarding concerns.
For people already digitally excluded, AI adds another layer of complexity. Your website, for all its imperfections, is something people can learn to navigate. AI systems require different skills: knowing how to ask the right question, interpreting generated responses, recognising when something is wrong. That's a reason to maintain your website as an accessible, navigable resource - not to abandon it in favour of AI-first approaches. The people most at risk of being left behind are often the people charities exist to serve.
Most charities making decisions about AI don't actually know how their communities feel about it or whether they're using it. Are your service users already turning to ChatGPT for the kind of advice you provide? Are your volunteers comfortable with AI-assisted coordination, or does it put them off? Do the people you work with trust AI, fear it, or have no experience of it at all? A charity whose beneficiaries are digitally confident young people faces entirely different considerations from one working with older people still getting comfortable with email. The answers will vary enormously, and assumptions are dangerous. Find out.
The chatbot question
Many charities are implementing AI chatbots to handle common inquiries, triage service requests, or provide basic guidance. Some are doing this thoughtfully. WECIL, a disabled-led charity, created "Cecil from WECIL", a chatbot that acts as an Easy Read translator, turning complex legal or medical documents into accessible information for people with learning disabilities. Here, AI is explicitly designed to reduce barriers rather than create them, and it was developed by an organisation led by the people it serves.
But not all implementations are this considered. When chatbots replace the helpline worker who recognises distress in someone's voice, the advisor who asks follow-up questions that reveal underlying issues, or the person who builds rapport over multiple interactions, something important is lost. The distinction isn't chatbots versus no chatbots. It's whether the implementation centres the needs of the people using it, or the efficiency goals of the organisation running it. Human alternatives must remain accessible and visible, not hidden behind multiple failed attempts to use AI.
Co-design, not afterthought
Goose, an AI marketing platform built for heritage organisations in partnership with the Arts Marketing Association and funded by the National Lottery Heritage Fund, is a good example of what co-design looks like in practice. There was real scepticism about AI in heritage: concerns about job losses, authenticity, craft. Rather than building the most obvious solution and hoping people would adopt it, the team spent months in twice-weekly sessions with over 40 heritage organisations, building and testing functional prototypes. They built and discarded a basic chatbot (too shallow), a trusted-sources system (too brittle), a funding application assistant (too narrow), and a fully autonomous agent (too much delegation, as users wanted to retain control). The "thinking partners" concept that became Goose only emerged through this iterative process. It wouldn't have been designed correctly without the co-design, and given the sector's scepticism, it wouldn't have been adopted without it either.
Noise Solution CIC found something similar working with young people on AI analysis of reflection videos from music sessions. The technology worked effectively, but implementing it revealed a deeper challenge about trust. As they reflected: "Looking back, we would invest even more in participant co-design from the outset, ensuring that young people fully understand how AI analysis works, and feel confident about how their data is used." The trust required for young people to engage took longer to build than the technology itself.
The pattern is consistent: you can't design accessible AI services without involving the people who might be excluded by them. If your charity exists to reduce inequality, and your adoption of AI increases barriers for the people you serve, you have a fundamental misalignment between mission and method. The question to keep asking is simple: who benefits from this, and who is being left behind?
You can't design accessible AI services without involving the people who might be excluded by them.
Capacity and the digital divide
None of this is straightforward for small charities with limited digital resources. Restructuring your digital estate for AI visibility requires technical knowledge, ongoing maintenance, and resources many organisations don't have. This is where sector infrastructure matters. Collective investment in shared tools, templates, and training could prevent a digital divide where only large charities remain visible to AI systems. Organisations like CAST, Charity Digital, and Charity Excellence Framework are working on this, but the pace of change is fast and the gap risks widening.
This isn't just a technical question about search rankings. It's a justice question about who has access to support. The organisations starting to think about this now will be better placed as AI becomes how more people find charities. Even small steps, like structuring your content for AI or checking how AI describes your organisation, can make a real difference.
This isn't just a technical question about search rankings. It's a justice question about who has access to support.
Part 3: Inside your organisation
We cover the functional areas - fundraising, service delivery, communications, data - in detail later. We've chosen that focus because there's immediate value in those areas and the barriers to getting started aren't too high.
But AI raises bigger questions that go beyond individual and team-level applications. Questions about how your organisation works, how you plan, how you structure teams, how you make decisions under uncertainty. This section is less about AI specifically and more about leading an organisation when the ground is shifting.
McKinsey's 2025 State of AI research offers some useful benchmarks. Seventy-eight percent of organisations now use AI in at least one business function, up from 72% a year earlier. But more than 80% aren't yet seeing meaningful impact at an organisational level. The gap isn't about technology, it's about how organisations are adapting. The single biggest factor in whether organisations see real returns is redesigning workflows. Yet only 21% have done this, whilst most are still overlaying AI onto existing processes (McKinsey, March 2025).
10. Managing risk
Not every AI use carries the same risk. Drafting a newsletter is different from processing safeguarding referrals. Summarising meeting notes is different from triaging service users. The challenge is that most charities don't have a structured way to think about this, so either everything gets treated as high-risk (and nothing happens) or everything gets treated as low-risk (and something goes wrong).
A simple framework helps: what could go wrong, who could be harmed, how would we know if it was going wrong, and what's our fallback if it fails? Higher stakes need more oversight. Lower stakes can move faster. The key is being deliberate about which category you're in, rather than applying the same level of caution to everything.
What different risk levels look like in practice
At the lower end, individual productivity tasks like using AI to draft emails, summarise documents or brainstorm ideas carry limited risk as long as a human reviews the output. That's the consistent pattern across the case studies in this playbook: AI drafts, transcribes, or tags, and a person checks the result before it goes anywhere. That human review step is what keeps low-risk applications genuinely low-risk.
In the middle, internal workflows that touch organisational data or involve multiple people need more thought. SCVO's approach to meeting transcription is a good example of proportionate governance. They chose to use Microsoft Teams' built-in transcription with their organisational Copilot account rather than third-party tools, specifically because it kept data within their existing Office 365 environment. They developed a retention policy for raw transcripts, sought consent before recording, and explained the process to participants up front. But they were also clear about limits: they wouldn't use this for high-stakes calls where accurate records were critical. That kind of honest boundary-setting, knowing where your approach is good enough and where it isn't, is more useful than blanket policies.
At the higher end, anything that touches the people you serve directly needs the most careful handling. Noise Solution CIC's work analysing reflection videos from young people in music sessions shows what rigorous governance looks like: staff check, review and contextualise all AI outputs before they're used, there's a formal governance stage before results are shared or reported, and the team is actively working on prompt-based bias mitigation to ensure responses are informed by equity and diversity considerations. This level of oversight takes time and resource, but it's proportionate to the stakes. These are vulnerable young people's reflections on their own wellbeing.
The closer AI gets to beneficiaries, the more attention it needs
Off-the-shelf tools like ChatGPT and Claude are general-purpose. They're not built for your specific safeguarding requirements, your service users' needs, or your regulatory context. For internal productivity, that's usually fine. For high-stakes interactions with the people you serve, you may need something more considered: AI that's been tested against real scenarios, constrained to verified information, and designed with humans in control of the decisions that matter.
Breast Cancer Now's transcription system is an example of this. Processing survey data from cancer patients across 20+ NHS trust form variants requires 99% accuracy. A general-purpose chatbot isn't appropriate. The system was engineered specifically for that context, with different AI approaches used for different parts of the process depending on what each step required in terms of speed, accuracy, cost, and consistency.
Drawing clear lines
Most organisations find it useful to establish some non-negotiables early: AI won't replace human support in crisis situations, won't make decisions about access to services without human involvement, won't interact with people in ways that disguise what it is. These aren't just governance statements, they're design constraints that shape what you build and how you build it.
Once you're clear on what's off the table, other decisions get easier. The space between "definitely fine" and "definitely not" is where judgment lives, and that judgment gets better with experience. Start with lower-risk applications, learn how AI behaves with your data and in your context, and build confidence before moving to higher-stakes uses.
11. Disclosure and accountability
When we wrote the first edition, transparency about AI use felt like a significant ethical question. Should you tell people if AI helped draft that appeal? Does AI-generated content need labelling?
A year on, things have moved. AI is now embedded in so many tools and processes that drawing a clear line around "AI-generated content" is increasingly difficult. Guaranteeing anything as AI-free is becoming harder. If someone uses Copilot to restructure an email, Grammarly to polish the language, and then edits it themselves, what percentage is "AI-generated"? The question doesn't really have a useful answer.
So the more practical question is what actually matters to the people we serve? Some things probably don't need flagging, like using AI to tidy up grammar or structure a first draft. Some things clearly do. If someone thinks they're talking to a person and they're actually talking to a chatbot, that needs to be clear. If AI is influencing decisions about who gets support or how cases are prioritised, people have a right to know. If you're using AI to personalise communications in ways that might feel manipulative if people knew, that's worth examining.
The Wildlife Trusts found that being upfront about AI use actually strengthened trust. When they used AI image generation to visualise rewilded landscapes for early-stage fundraising, they clearly labelled all outputs as AI-generated. Some funders were initially concerned about "fake" images, but clear labelling combined with expert ecological validation meant people engaged with them as conversation starters, not finished plans. Transparency made the work more credible, not less.
The British Museum's experience in January 2026 shows how badly this can go without proper review. The museum shared AI-generated images on social media showing a woman contemplating exhibits. Archaeologists quickly spotted the images were artificial and raised concerns about cultural representation: the AI figure appeared in traditional East Asian clothing in some images but wore Mexican-style attire while looking at an Aztec artefact, as if all cultures were interchangeable. The museum deleted the posts, then made things worse by unfollowing critics. The damage wasn't the AI use itself. It was the absence of review before posting and the defensive response afterwards.
A useful test is: would the person on the other end want to know? If they'd feel deceived or manipulated by not knowing, tell them. If they genuinely wouldn't care, you probably don't need to flag it. The principle isn't about disclosing every use of AI. It's about maintaining the trust that your organisation depends on.
12. Data as foundation and risk
Most useful AI applications need decent data to work with. That doesn't mean perfect data, but data you can find, understand, and trust enough to act on.
Most charities we work with have data scattered across systems, inconsistent formatting, duplicate records. What counts as a "donor" or "active supporter" means different things to different teams. Everyone aspires to a "single source of truth" but in reality many organisations aren't there yet.
The problem goes beyond blocking AI adoption. AI working with bad data produces confident, plausible, wrong results. If your CRM is full of duplicates and your AI is identifying "lapsing donors," some of those predictions will be wrong in ways that waste effort or damage relationships. If your case notes are inconsistent and AI is summarising them, the summaries will inherit the inconsistencies. AI doesn't fix messy data, it amplifies it.
AI doesn't fix messy data, it amplifies it.
There's a related opportunity that's easy to overlook. Most charities collect far more qualitative data than they ever use. Free-text survey responses, feedback forms, case notes, interview transcripts, open-ended evaluation questions. This material is rich with insight, but it's historically been too time-consuming to work with properly. Extracting themes from hundreds of open-ended comments by hand takes days that most teams don't have, so the data sits in folders or gets reduced to a few hand-picked quotes in a report.
AI changes the economics of this. It can surface patterns across large volumes of qualitative data in hours rather than weeks. The Brilliant Club found this when using Miro's AI tools to analyse hundreds of open-ended comments from students, teachers and tutors, saving significant analysis time on work that would previously have been prohibitively slow. WaterAid's Performance and Insight team ran a similar exercise using Google Colab to analyse 150 supporter survey responses alongside demographic data, turning weeks of potential manual analysis into a single workshop.
The bigger prize is connecting qualitative and quantitative data in ways that weren't practical before. You might know from your CRM that 200 service users dropped off in the last quarter. But the reasons why are buried in case notes, feedback forms and exit surveys that nobody has time to read systematically. AI can bridge that gap, turning unstructured text into something you can analyse alongside your numbers. The result is a fuller picture of what's actually happening, not just what's easy to count.
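As a very rough sketch of what that bridging can look like, the snippet below asks an AI model to assign one theme to each free-text exit-survey comment so the results can be counted alongside your CRM figures. It uses the Anthropic Python SDK; the model name, file names and theme list are illustrative assumptions, and anything like this would need the human review and governance steps discussed elsewhere in this playbook before the outputs informed decisions.

```python
# A rough sketch, not a finished pipeline: tag free-text exit-survey
# comments with a theme so they can be analysed alongside CRM numbers.
# The model name, file names and theme list are illustrative assumptions.
import pandas as pd
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
THEMES = ["cost", "moved away", "service quality", "no longer needed", "other"]

def tag_comment(comment: str) -> str:
    """Ask the model to pick the single best-fitting theme for one comment."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=20,
        messages=[{
            "role": "user",
            "content": (
                "Classify this exit-survey comment into exactly one of these "
                f"themes: {', '.join(THEMES)}.\n"
                f"Comment: {comment}\n"
                "Reply with the theme only."
            ),
        }],
    )
    return response.content[0].text.strip().lower()

survey = pd.read_csv("exit_survey.csv")  # hypothetical export
survey["theme"] = survey["comment"].apply(tag_comment)
print(survey["theme"].value_counts())  # counts you can set beside your CRM figures
survey.to_csv("exit_survey_tagged.csv", index=False)
```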
The same caveat about data quality plays out across the sector. Charities that rushed into AI tools without sorting their underlying data found the AI confidently producing nonsense. The investment in data quality isn't glamorous, but it's what makes everything else possible.
The good news is that AI is making data cleaning itself more approachable. Tasks that once required specialist skills or expensive consultants can now be tackled by people closer to the work. You can use AI to find duplicates, standardise formats, spot inconsistencies and clean up years of messy data entry. Progress is possible from wherever you're starting. The important thing is to start, and to be honest about where you are.
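As an illustration of how approachable this has become, the sketch below is the kind of clean-up script an AI assistant can help a non-specialist write: it standardises emails and postcodes in a supporter export, flags exact duplicates, and sets aside softer matches for a person to review. The file and column names are assumptions - adjust them to your own CRM export.

```python
# A rough sketch of a supporter-data clean-up: standardise key fields,
# flag exact duplicates, and export softer matches for human review.
# Column names (name, email, postcode) are assumptions.
import pandas as pd

df = pd.read_csv("supporters_export.csv")  # hypothetical CRM export

# Standardise the fields we'll match on
df["email_clean"] = df["email"].str.strip().str.lower()
df["postcode_clean"] = df["postcode"].str.upper().str.replace(r"\s+", "", regex=True)
df["name_clean"] = df["name"].str.strip().str.title()

# Exact duplicates on the cleaned email address
dupes = df[df.duplicated(subset="email_clean", keep=False)]
print(f"{len(dupes)} rows share an email address with another record")

# Softer check: same surname and postcode but different emails,
# which usually needs a human to review rather than an automatic merge
df["surname"] = df["name_clean"].str.split().str[-1]
review = df[df.duplicated(subset=["surname", "postcode_clean"], keep=False)]
review.sort_values(["surname", "postcode_clean"]).to_csv(
    "possible_duplicates_for_review.csv", index=False
)
```

The last step is deliberate: automatically merging "probable" duplicates is where supporter relationships get damaged, so keep a person in that part of the loop.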
13. Skills and structure
AI is democratising certain capabilities. Data analysis that once required specialists can now be done by people closer to the work. Internal tools that traditionally needed developers can be built by people who aren't technical. Product design and build is becoming more accessible to smaller teams.
What does this mean for how you structure teams? If a fundraiser can query the CRM in plain English, what happens to the data analyst role? If a programme manager can build a simple app to track outcomes, do you still need the same development capacity? If AI can produce a first draft of almost anything, what happens to the junior roles where people learned to write?
These questions don't have settled answers yet, but there are early signs of how roles shift in practice. In the NHS's largest AI scribe trial, covering 17,000 patient encounters across nine London sites, AI handling clinical documentation gave clinicians 23.5% more time for direct patient interaction. The role didn't shrink — it shifted toward the work that matters most. Street Support Network found the same with meeting coordination: the real gain wasn't the two hours saved per meeting, it was the shift from managing logistics to being fully present in conversations.
The pattern is that AI tends to absorb the mechanical parts of a role and leave the parts that require judgment, creativity and human connection (McKinsey, November 2025). The skills that matter shift from execution to judgment. Not "can you write this?" but "is this good enough?" Not "can you build this?" but "should we build this?" Not "can you analyse this data?" but "what does it mean and what should we do about it?"
New roles are emerging too. In the corporate world, AI Ethics Officer, AI Governance Specialist, and Responsible AI Lead are now established job titles, typically sitting across compliance, privacy, or legal teams. Most charities aren't hiring dedicated roles yet, but the functions still need to exist somewhere. Someone needs to be thinking about what data can go into AI tools, how outputs are reviewed, whether the organisation's use of AI is consistent with its values, and what happens when something goes wrong. In larger charities that might eventually mean a dedicated role. In smaller ones it might mean adding AI governance to an existing remit and making sure the person has time and support to do it properly. The risk is the same as with strategic leadership: if it's everyone's job, it's nobody's.
Street Support Network's advice is to treat AI "like a junior team member: it needs a proper induction, ongoing supervision, and clear boundaries." That framing is useful because it suggests a relationship that evolves as trust builds, rather than a tool you switch on and leave alone.
For now, it's probably worth investing in AI fluency across the organisation. Not deep technical skills, but enough understanding that people can use tools effectively and spot opportunities in their own work. Significant reskilling is expected over the next few years, and organisations that start now will be better placed than those that wait.
14. How work changes
The obvious use of AI is speeding up tasks you already do. Draft this email faster. Summarise this document quicker. That's real value, but it's not the whole picture.
Redesigning a workflow means rethinking how work moves through your organisation, not just making individual steps faster. It might mean changing who does what, removing steps that only existed because of human bottlenecks, or combining tasks that used to be separate. If your grant application process involves a fundraiser drafting, a manager reviewing, a director approving, and a designer formatting, AI doesn't just speed up the drafting. It might mean the fundraiser can now produce something closer to final, reducing the review loops entirely.
What workflow change actually looks like
Blood Cancer UK's user research workflow used to follow a familiar pattern: record an interview, spend two or more hours manually transcribing it, read through repeatedly to identify themes, manually tag and code the data, write up a summary, and share it as a document that might or might not get read. With AI handling transcription and initial theme tagging through Dovetail, the mechanical steps collapsed. But the interesting shift wasn't just speed. The researchers found themselves with time to do something they'd never had capacity for: creating short video insight reels drawn from the interviews themselves. Seeing and hearing real service users moved senior leaders to act in ways that written reports never had. The workflow didn't just get faster. It changed shape entirely, and the output became more valuable.
Street Support Network's meeting workflow tells a similar story at a smaller scale. Before, the cycle was familiar to anyone in the sector: back-and-forth emails to find a time, the meeting itself where you're half-listening and half-taking notes, hours afterwards writing up actions and follow-ups. They built a three-stage workflow using existing tools: Calendly handles scheduling automatically, Krisp records and transcribes during the meeting, and an LLM processes the transcript afterwards into a summary with actions and follow-ups. The time saving averages two hours per meeting, but as they reflected, "the biggest surprise was how much mental energy was freed up. It's not just the time saved - it's the elimination of constant scheduling stress and the confidence to say 'I'll send you that briefing this afternoon' knowing you can actually deliver."
Where the human effort moves
It's worth thinking carefully about where the work shifts to. AI might handle the first draft, but someone still needs to judge whether it's good enough. AI might summarise the case notes, but someone still needs to decide what to do. AI might identify the at-risk donors, but someone still needs to build the relationship. The work doesn't disappear - it shifts from execution to judgment, from processing to decision-making, from logistics to relationships.
The work doesn't disappear - it shifts from execution to judgment, from processing to decision-making, from logistics to relationships.
Understanding where work shifts to matters because it affects resourcing. If AI handles the drafting but creates more need for quality review, you haven't saved time - you've moved it. If AI surfaces insights but nobody has capacity to act on them, the investment is wasted. The organisations getting value from AI aren't just automating tasks. They're thinking about the whole workflow and making sure the human parts are properly resourced too.
Most organisations are still overlaying AI onto existing processes, making individual steps faster without rethinking how the work flows. That's a start, but it's not where the real value lies.
15.Buy, build or wait
Do you adapt your processes to fit commercial software, or do you pursue something custom that fits your processes?
Charities have historically leaned toward buying. The costs of custom development were prohibitive for all but the largest organisations. Better to compromise on fit than spend money you don't have on building something bespoke.
That calculation is changing. The costs of custom development are falling fast. What once required a team of developers over months can sometimes be built in days. Tools like Claude Code mean that people with some technical comfort but no formal development background can create functional applications. Custom is becoming accessible to organisations that couldn't have considered it before.
But custom still requires maintenance, expertise, and ongoing attention. If you build something, you own it - including when it breaks, when it needs updating, when the person who built it leaves. There's a reason off-the-shelf exists.
The CAST experiments show all three approaches working. Blood Cancer UK bought Dovetail, an existing research platform with AI features, rather than building custom tools. The fit was good enough and the team could start getting value quickly. The Brilliant Club used Miro, a tool they already had, when it added AI clustering features - no new procurement, no new budget line, just experimenting with capabilities that appeared in something they were already paying for. Street Support Network took a different route, assembling a meeting workflow from free and low-cost tools: Calendly for scheduling, Krisp for transcription, and LLM access for processing. None of the individual tools were custom, but the workflow they built by connecting them was.
Knowing when to buy, when to build, and when to wait is part of the leadership challenge. There's no formula, but some principles help: buy when the off-the-shelf option is genuinely good enough and you need to move quickly, build when your needs are specific and fit matters more than convenience, assemble from existing tools when the components exist but nobody's put them together for your use case, and wait when you're uncertain and the cost of delay is low. The worst option is usually buying something expensive and then not using it, which is what happened to several charities that invested in Copilot licences before their internal data was ready to support it.

The worst option is usually buying something expensive and then not using it.
This is also changing how charities work with digital agencies. Many organisations have spent years in a cycle of writing requirements, commissioning vendors, managing backlogs, and waiting for sprint cycles to deliver. The overhead of managing that relationship often rivals the cost of the work itself, and product owners spend more time translating between the organisation and the delivery partner than thinking about what users actually need. AI is starting to shift this. Requirements become easier to articulate when you can generate wireframes or build a prototype that shows what you need rather than describing it in a brief. Things that used to require a support ticket and a two-week wait - a content update, a minor feature change, a data export - can increasingly be handled internally. RSPCA Coventry built a custom API feed for their website using AI-assisted development over about 40 hours, and found the result outperformed what human developers had delivered to the same brief. That doesn't mean agencies become irrelevant. Architecture decisions, security, scalability, and deep technical judgment still require specialist expertise. But the boundary shifts from 'we need you to build everything' toward 'we need you for the complex, high-stakes work.' That's a different commercial model and a different kind of partnership.
There's a subtler shift too. When development capacity was scarce and expensive, backlogs prioritised themselves. You could only afford to build the most important things, so the hard choices were partly made for you. As it becomes faster and cheaper to build, the question moves from 'can we afford this?' to 'should we build this?' That requires stronger product thinking, clearer prioritisation, and a willingness to say no to things that are possible but not valuable. The constraint shifts from capacity to judgment.
Where DIY ends
It's worth being honest about what AI-assisted development can and can't do. Tools like Claude Code and Cursor are genuinely powerful for prototyping, building simple internal tools, and automating straightforward workflows. If you need a form that saves to a spreadsheet, a dashboard, or a script that cleans up your data, someone on your team can probably build it.
But there's a gap between a working prototype and a reliable system. Connecting multiple data sources, handling edge cases, building something that works at scale, maintaining it when the person who built it moves on - these are different problems. The Breast Cancer Now transcription system processes 20+ NHS trust form variants at 99% accuracy, combining multiple AI approaches for different parts of the pipeline. That's not something you can vibe code in an afternoon. Goose went through months of iterative co-design with 40+ heritage organisations before the right approach even became clear. The front end might be buildable with AI assistance. The architecture, the data design, the understanding of what to build in the first place - that still requires experience.
The risk isn't that people try to build things themselves. That's good, and more of it should happen. The risk is that organisations assume everything is now DIY, hit a wall, and conclude AI doesn't work for them. Know which category your problem falls into before you start.
16.Innovation culture
What does innovation look like in a charity? You can't move fast and break things when trust is your currency and the consequences of failure could be felt by your beneficiaries and supporters.
You can't move fast and break things when trust is your currency.
But there's also a cost to not trying. Things may not settle for years, and the organisations learning the most are the ones experimenting now, even in small ways.
So what does innovation culture in a charity look like? It probably means permission to experiment within boundaries, safety to fail on small things, time carved out for learning, clear lines around what's off limits. It means celebrating what you learned from things that didn't work, not just celebrating successes. It means leaders who are curious rather than certain, who ask questions rather than just give answers. Virgin Money Foundation took this approach: their AI education was framed around helping people evaluate whether AI aligned with their values, not around persuading them to adopt it. The team moved from experiencing AI as external pressure to seeing it as something they could thoughtfully assess. That distinction - exploration rather than obligation - matters.
It also means being honest about constraints. Most charities don't have spare capacity sitting around waiting to be deployed on innovation. Experimentation has to happen alongside the day job, which means it needs to be protected time, not just good intentions. If it's not in the plan, it won't happen.
The culture question is real but it's not everything. You can have the most innovative culture in the sector, but if nobody has time to experiment, nothing will change. Culture enables innovation but resources make it possible.
17.Strategic leadership
AI doesn't belong to one person. It cuts across the organisation: HR is navigating skills gaps and changing roles, finance is making investment decisions, operations is seeing workflows shift. This needs to be a senior leadership team conversation, not something delegated to whoever seems most technical.
That's because AI implementation is about change management, not technology. It requires people who can make resource decisions, bring different parts of the organisation together, and maintain focus when priorities compete.
AI implementation is about change management, not technology.
The pace of change makes this harder. The person leading on AI needs to be watching the horizon, not chasing every new tool, but understanding when something significant shifts. The jump from "AI that suggests text" to "AI that builds applications" happened in months. Someone in your organisation needs to notice when these shifts create new opportunities or make previous decisions obsolete.
What leadership support enables
The most useful thing leadership can do is create space for experimentation, then pay attention to what comes back. Frimley Health's AI telephone assistant started as a single-trust experiment making post-operative follow-up calls to cataract patients. Based on results, it expanded across NHS South East, freeing around 90,000 follow-up appointments per year and cutting average cataract waiting times from 35 weeks to 10. That progression from experiment to regional infrastructure only happens when leadership treats initial trials as learning exercises, not just efficiency measures. If your board wants a practical starting point of their own, the board paper summarisation and strategic challenge recipes can help trustees engage with AI directly.
There's also a question of ambition. Most AI adoption in charities is still tactical: making existing work a bit faster, automating small tasks, saving time on admin. That's a reasonable place to start, but it won't transform what your organisation can achieve. The strategic leadership question is bigger: what could you do differently if AI worked well? What would it mean to reach twice as many people without doubling the team? The organisations asking these questions at board level, not just approving tools, are more likely to see real impact.
Planning when the ground keeps shifting
The five-year strategy has always been the backbone of charity planning. But things are moving too fast for traditional planning cycles. By the time you've written the digital strategy, consulted stakeholders, and got board sign-off, the tools have changed. The capabilities available when you finish the plan aren't the capabilities you planned for.
One approach is to hold goals firmly but methods loosely. Be clear about what problems you're trying to solve, but stay flexible about how you solve them. Build in regular review points - quarterly rather than annual. The organisations doing this well tend to have a roadmap that's explicit about what they're testing now, what they're watching, and what they've decided to leave alone for now. That's planning that creates room for learning rather than locking in approaches that might be overtaken.
This requires a different relationship with uncertainty. Not everything will work. Some investments won't pay off. The point isn't to get it right the first time - it's to learn fast enough that the mistakes are small and the wins compound. Street Support Network's advice applies here too: "You don't have to try to implement everything at once." Start with one element, learn from it, then build on what works.
The risk is that AI becomes everyone's responsibility and therefore nobody's. Or that it sits with IT when the real opportunities are in service delivery. Where AI sits matters - it's not purely a technology question.
Some charities are addressing this directly. The British Heart Foundation established an AI working group in 2023, alongside a wider community of AI users across the organisation and a formal AI strategy. They appointed a Chief Technology Officer and created a Technology Directorate, making it clear that AI sits at organisational level, not within a single team. More recently they ran structured workshops across every directorate to identify and prioritise AI use cases, ensuring that opportunities were grounded in each team's actual pain points rather than imposed from the centre. The Charity Commission highlighted BHF's approach as an example of what good looks like.
Not every charity needs a working group or a CTO. But someone needs to own the question of how AI is being used, where the risks are, and whether the organisation is learning from what it's trying. The 2025 Charity Governance Code now explicitly includes AI oversight within its Managing Resources and Risks principle. At minimum, that means boards should be asking: who is responsible for this, and are they resourced to do it properly?
There's also a question about leadership behaviour. If senior leaders aren't using AI themselves, aren't curious about it, aren't visibly learning, that sends a message. The charities where AI is gaining traction tend to have leaders who are experimenting alongside their teams, asking questions, and being honest about what they don't yet understand.
18.The investment question
At an individual level, the case is straightforward. A subscription costs around £20 per month. If the tool saves 30 minutes a week, it's paid for itself.
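As a back-of-envelope sketch of that arithmetic (the hourly staff cost here is an assumption - substitute your own figure):

    # Rough value-for-money check on an individual AI subscription.
    # The £20/month price and 30 minutes/week saving come from the text above;
    # the staff cost per hour is an illustrative assumption.
    subscription_per_month = 20.00       # £
    minutes_saved_per_week = 30
    staff_cost_per_hour = 18.00          # £ - swap in your own fully loaded rate

    hours_saved_per_month = minutes_saved_per_week / 60 * 52 / 12
    value_of_time_saved = hours_saved_per_month * staff_cost_per_hour

    print(f"Time saved: {hours_saved_per_month:.1f} hours a month")
    print(f"Worth roughly £{value_of_time_saved:.0f} against a £{subscription_per_month:.0f} subscription")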
Organisational return is harder to pin down. When AI is embedded into workflows or services, the gains are real but tangled up with other changes. Breast Cancer Now's AI transcription service can scale cost-effectively while also increasing what it can deliver. That's not a simple productivity calculation, but it is a real one.
What investment looks like in practice
It helps to think in tiers.
At the low end, many AI features are now built into tools charities already use, or available on free tiers. The Brilliant Club experimented with Miro's AI features using the 10 free credits per user per month. The financial cost was zero, but they still needed time for prompt refinement and learning what worked. Even free tools have adoption costs.
At the team level, dedicated tools start to show clearer returns. Blood Cancer UK's Dovetail subscription costs around $45 per month per user. With two users in the UX team and free access for the rest of the organisation to view reports, the annual cost is roughly £1,000. The return: faster insights, better communication with senior leaders, and a UX function with more influence across the organisation.
At the workflow level, the investment shifts from subscriptions to design and integration time. Street Support Network assembled a meeting workflow from Calendly, Krisp, and LLM access. The individual tool costs are low or free, but making them work together required thought and iteration. The return: around two hours saved per meeting, plus what they described as "the elimination of constant scheduling stress." At 100 meetings a year, that's 200 hours of capacity.
The hidden costs
The subscription fee is rarely the real cost. Staff time to learn, process redesign, change management, building organisational confidence, ongoing quality review: these add up. The Brilliant Club found that even with free tools, "prompt refinement investment" was significant. Street Support Network's advice: "Don't try to implement everything at once. Start with just one element of a workflow."
When real-world infrastructure meets AI ambition
Breast Cancer Now's experience surfaced a different kind of hidden cost. The AI transcription system can handle complex survey forms accurately and quickly. But the existing scanners can only process a handful of forms at a time, creating a new bottleneck upstream. A bit of a reality check.
This is why proof of concept work matters. You find out what actually gets in the way before you've committed significant budget. The AI worked as promised, but getting the full benefit means the rest of the process needs to adapt. For any charity exploring AI, it's worth looking at the whole pipeline, not just whether the AI can do the job. Sometimes the real cost includes upgrading a scanner.
Three levels of investment
Individual productivity is relatively cheap: subscriptions, some protected time for learning. Quick wins, but limited organisational impact.
Capability building is different. Investing in people's skills, confidence and judgment takes time and often external support, but builds an organisation that can identify and act on opportunities as they emerge. We'd like to see funders supporting this more actively.
Investing in specific workflows and services can mean higher upfront costs, but the return becomes clearer: this service now costs less to run, or reaches more people.
Why AI doesn't fit traditional models
AI doesn't fit traditional digital investment models. Subscriptions are ongoing. The hidden cost in staff time (experimentation, learning, managing change) is significant. It's less like buying a CRM and more like ongoing capability investment.
And there's the cost of not investing. As the gap widens between organisations building AI capability and those that aren't, the cost of catching up later only increases.
19.Governance
You need enough governance to enable safe experimentation, not so much that it paralyses everything. At minimum, you need clarity on: what tools are approved, what data can and can't go into AI systems, where human review is required, who can approve new uses, and how to raise concerns. Clear enough that people can make sensible decisions without asking permission for everything.
Many charities find that developing an AI policy helps codify these decisions in one place. You don't need to start from scratch: CAST and SCVO have published practical guidance, Torchbox has been collaborating with the sector on tools to help organisations work through the key questions, and our AI acceptable use policy recipe walks you through writing one. The Charity Digital Skills Report found that 48% of charities are now developing AI policies, up from 16% in 2024 (Charity Digital Skills Report 2025). That's a sign the sector is taking governance seriously, but also that many organisations are still figuring it out. Borrowing from what others have learned is sensible.
What proportionate governance looks like
How much AI output gets reviewed varies enormously. Some organisations check everything before it's used. Others review very little. The right answer depends on risk: higher stakes need more oversight. But it's worth being deliberate about where you sit, rather than letting it happen by default.
The Wildlife Trusts took a clear but flexible approach to AI-generated imagery: "Staff agree on when and how AI images can be used, for example early-stage storytelling but not final plans." That's governance that enables rather than blocks. People know the parameters without needing approval for every use.
SCVO's meeting transcription process shows risk-based thinking in action. They use Teams transcription with Copilot to generate summaries, but staff "take a moment to review the draft summary to ensure it matches your recollection." And they're clear about limits: "We would probably not use this process for a high-stakes call where an accurate record was critical." Knowing where your approach works and where it doesn't is more useful than blanket rules.
The NHS's approach to AI scribes shows what embedded governance looks like at scale. When NHS England published its registry of approved AI scribe suppliers in January 2026, the standards were built into the framework: accuracy requirements, data handling rules, and clinical review expectations, rather than a separate bureaucratic layer clinicians have to navigate alongside their actual work.
Data boundaries
Free versions of AI tools often use your inputs to train future models. Paid tiers typically don't, but check the terms. Some data shouldn't go into cloud AI tools at all: personal data about beneficiaries, safeguarding information, confidential employee information.
SCVO chose to use Teams' built-in transcription with their organisational Copilot account rather than third-party tools specifically because it kept data within their existing Office 365 environment. That kind of thoughtful tool selection is governance too.
Shadow AI
Shadow AI is real: people using free tools without telling anyone because it's easier than the official route. The solution isn't cracking down. It's making the approved route easier than the unofficial one.
The solution isn't cracking down. It's making the approved route easier than the unofficial one.
SCVO's approach worked partly because using Teams transcription was simpler than finding alternatives. When the sanctioned tool is genuinely easier, shadow AI becomes less attractive.
Governing agents
When AI is taking actions rather than just generating content, governance becomes more urgent. The risks are covered in Part 1, but the practical questions are worth thinking through before you encounter them.
Start with boundaries: what data and systems can the agent access, and what actions is it permitted to take? "Connect to everything and see what helps" isn't a governance strategy. Be specific about whether it can send emails, update records, or make bookings, and start with narrow permissions you can expand based on experience.
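As a purely illustrative sketch of what "narrow permissions" might look like written down (the agent, data sources and actions here are hypothetical, not taken from any particular product):

    # Hypothetical starting permissions for a meeting-admin agent.
    # Anything not listed is denied; widen the lists only as trust builds.
    agent_permissions = {
        "agent": "meeting-admin-agent",
        "can_read": ["shared_calendar", "meeting_transcripts"],
        "can_write": ["draft_emails"],                 # drafts only - a person presses send
        "never_touch": ["crm_records", "finance_system", "case_notes"],
        "needs_human_approval": ["send_email", "create_booking"],
        "log_every_action": True,
    }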
Traceability matters more with agents than with conversational AI. When something goes wrong, can you reconstruct the chain of decisions? Build in logging and review points. Some organisations require human approval before any agent action; others review outputs periodically. The right approach depends on stakes, but "no oversight" isn't an option for charity contexts.
And plan for failure. When the agent gets it wrong, and it will, what's the recovery plan? Who fixes it? How do you prevent the same error happening again?
Most charities aren't building custom agents yet. But your staff may already be using them. Open-source agents like OpenClaw connect to email, messaging, and calendars, and individuals are adopting them independently. Agent capabilities are also appearing in tools you already pay for: Copilot in Microsoft 365, Agentforce in Salesforce. This is the shadow AI problem applied to agentic tools - the governance questions are already here, whether or not your organisation has consciously adopted agentic AI.
20.AI and your environmental footprint
We covered the scale of AI's environmental impact in Part 1. Here's what it means in practice for your organisation.
Training vs using
It helps to understand what you're actually responsible for. Training a frontier AI model, the process of building it from scratch, consumes enormous resources. Training GPT-4 is estimated to have consumed around 50 gigawatt-hours of energy, enough to power San Francisco for three days. That cost is borne by the AI providers, not by you. You're not training a model when you use ChatGPT or Claude.
What you are doing is inference, sending queries and getting responses. This is far less energy-intensive per interaction, but it happens billions of times a day across millions of users, and the cumulative impact is significant. The IEA notes that data centres are among the few sectors where energy-related emissions are still set to grow through to 2030 (IEA).

What does it actually cost?
A typical AI query uses roughly 0.3 watt-hours of electricity, broadly comparable to what a Google search used to consume before AI was integrated into search results. A heavy user making 50 queries a day would add about the same to their annual carbon footprint as driving a car a few hundred metres. In isolation, that's negligible.
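For the sake of the arithmetic, the annual electricity for that heavy user works out as follows, using the per-query figure above (which applies to simple text prompts):

    # Annual electricity for a heavy individual user, using the figures quoted above.
    queries_per_day = 50
    wh_per_query = 0.3
    kwh_per_year = queries_per_day * wh_per_query * 365 / 1000
    print(f"About {kwh_per_year:.1f} kWh a year")   # roughly 5.5 kWh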
But there are two reasons not to dismiss it entirely.
First, these numbers are per-query averages for simple text prompts. Uploading documents, processing images, generating code, or using AI agents that take multiple steps across multiple tools all consume significantly more. A query involving a long document can use ten times the energy of a simple question. If your organisation is using AI for batch processing, data cleaning at scale, or building custom applications, the footprint is larger than casual use suggests.
Second, there's a rebound effect. If AI makes content production cheaper, organisations produce more of it. If AI makes data analysis faster, you analyse more data. The per-unit cost goes down, but the total volume goes up. This connects directly to the "volume vs value" tension we've raised in the communications section. It applies to environmental impact too.
What you can do
Choose the right size tool for the job. Smaller, faster AI models use significantly less energy per query than the most powerful ones. If you're summarising a short document or drafting a simple email, you don't need the most advanced model. Many platforms now offer model choices. Using a lighter model for routine tasks is both faster and greener.
Question whether AI is the right tool at all. Not every task benefits from AI. A web search, a spreadsheet formula, or a conversation with a colleague might get you to the same answer with a fraction of the environmental cost. The Friends of the Earth guidance on AI for environmental campaigners makes this point well: consider simpler alternatives before reaching for AI (Friends of the Earth).
Be mindful of volume. Generating ten versions of something to pick the best one has an environmental cost. So does using AI to produce content you don't need. The discipline of asking "should we produce this at all?" applies to environmental impact as well as communications strategy.
Factor AI into your sustainability reporting. If your charity reports on its environmental impact, even informally, AI use should be part of the picture. You probably can't calculate exact figures yet (the reporting tools aren't mature enough), but acknowledging AI as part of your digital footprint is a start. The Royal Academy of Engineering is pushing for mandatory reporting by data centres, which should eventually make this easier.
Ask your providers about their environmental commitments. Some AI providers power data centres with renewable energy; others don't. Some are investing in water-efficient cooling technologies; others are building in water-stressed regions. This information isn't always easy to find, but it's a reasonable question for procurement decisions.
Part 4: Ingredients
21.Understanding the techniques
You don't need to understand all of AI. But it helps to know what's out there so you can recognise when something might fit your problem.
Large language models (LLMs)
These are the AI systems behind ChatGPT, Claude, and Copilot. You give them text, they give you text back. They're trained on vast amounts of human writing, which means they're good at things humans do with language: drafting, summarising, explaining, analysing, having conversations.
They're useful when you're working with unstructured information: free-text feedback, case notes, long documents, anything where the meaning isn't neatly organised into rows and columns. They can find themes in hundreds of survey responses, draft a first version of a funding bid, or help you make sense of a complicated policy document.
LLMs can be confidently wrong, making up facts or details that sound plausible but aren't true. This has improved with features like web search and grounding, but it hasn't gone away entirely. Human judgment still matters on anything important.
They also have memory limits. Within a conversation, the AI remembers what you've discussed, but that memory has a ceiling. When you start a new conversation, it starts fresh. This is why giving good context upfront matters: the AI only knows what you've told it in this conversation, plus what it learned during training.
Embedded AI: Microsoft 365 Copilot and similar tools
Many charities will encounter AI not through ChatGPT or Claude, but through tools they already use. Microsoft dominates charity IT infrastructure, and M365 Copilot is how most organisations will first experience AI at an organisational level: AI embedded directly into Word, Excel, Outlook, Teams, and other Microsoft applications. Google Workspace has similar capabilities with Gemini, though adoption in the charity sector is lower.
We cover Copilot in detail here not because it's the best AI available (standalone tools like ChatGPT and Claude are generally more capable for complex tasks), but because Microsoft's dominance in the sector means it's what most charities will be offered first.
The governance case
For many organisations, the appeal of M365 Copilot isn't the features - it's the governance. When staff use ChatGPT or Claude with personal accounts, data leaves your organisation. You have limited visibility into what's being shared, no central control, and potential compliance issues.
M365 Copilot addresses this directly. Data stays in your tenant, your existing data processing agreements with Microsoft already cover it, IT can manage access and apply existing security policies, and usage is logged. For charities handling sensitive beneficiary data or operating under strict funder requirements, this matters more than any individual feature.
What it does and what to watch for
Copilot can summarise Teams meetings, draft emails and documents, answer questions about content in your Microsoft 365 environment, and find information scattered across your files and messages. The experience is different from standalone tools: instead of copying text into a separate application, you work where you already work.
But the governance benefits don't eliminate the risks - they just change where they sit. A government evaluation found a 22% hallucination rate when Copilot was asked about documents - roughly one in five responses contained inaccuracies (DBT, September 2025). Copilot also works best when your Microsoft 365 environment is well-organised. If your SharePoint is a mess or your permissions are inconsistent, Copilot will struggle to find the right information - or worse, surface things people shouldn't see. Several charities that deployed Copilot expecting transformation found the AI confidently producing nonsense because the underlying data wasn't ready.
Cost
M365 Copilot is significantly more expensive than standalone AI tools - typically £24-30 per user per month on top of existing Microsoft 365 licensing. For a team of 20, that's £6,000-7,000 per year. Some charities are finding value by licensing selectively rather than rolling it out organisation-wide.
Copilot vs standalone tools
It's worth recognising what Copilot is becoming. Microsoft is building agentic capabilities, e.g. Copilot Studio agents that work across applications, trigger automations, and connect systems. But the M365 Copilot most charities are paying for is still primarily a personal productivity tool: drafting, summarising, finding things. The cross-system capabilities require Copilot Studio and Power Platform investment that most charities haven't made. And even with agents, Copilot respects your existing SharePoint permissions. If teams can't see each other's work, Copilot won't change that. Good governance, but AI won't break down silos your organisation has built.
Many charities end up using both. Copilot for the governed, everyday tasks - meeting summaries, email drafts, document questions. Standalone tools or custom solutions for the work that actually moves the needle: connecting data sources, building new capabilities, solving problems specific to your context. SCVO use Teams transcription with Copilot for meeting summaries, but are clear about limits: "We would probably not use this process for a high-stakes call where an accurate record was critical." The governance is there; the judgment about when to rely on it still sits with people.
Traditional statistics and rules
This is the deterministic end of AI: mathematical, predictable, gives you the same answer every time. It's been around much longer than LLMs and remains useful for different problems.
It's good for forecasting (how many people will attend this event?), spotting patterns in numbers (which months have highest demand?), and anything where consistency matters more than creativity. If you need an auditable calculation that works the same way every time, this is where you look.
The limitation is that it needs structured data: numbers in columns, categories that are consistent. It can't handle ambiguity or make sense of messy text.
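To make that concrete, a deterministic forecast can be as simple as averaging the last few months. A minimal sketch in Python, with invented attendance figures:

    # Simple deterministic forecast: average of the last three months' attendance.
    # The numbers are invented; the point is that the same inputs always give
    # the same answer, and the calculation is easy to explain and audit.
    monthly_attendance = {
        "Jan": 112, "Feb": 98, "Mar": 121, "Apr": 109, "May": 117, "Jun": 124,
    }
    last_three = list(monthly_attendance.values())[-3:]
    forecast = sum(last_three) / len(last_three)
    print(f"Forecast for next month: {forecast:.0f} attendees")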
Classical machine learning
This sits between the other two. You train a model on historical data and it learns patterns it can then apply to new situations. It's good for prediction and classification: which donors might stop giving, which enquiries should go to which team, which cases might need extra attention.
It needs decent historical data to learn from, and usually some technical capacity to build and maintain. But once it's working, it can process large volumes consistently and improve over time as you feed it more data.
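A minimal sketch of what that looks like in practice, using scikit-learn with invented donor data - a real model would learn from your own CRM history, and the features here are purely illustrative:

    # Classical machine learning: predict which donors might lapse.
    # Data and features are invented for illustration only.
    from sklearn.linear_model import LogisticRegression

    # Each row: [months since last gift, gifts in last two years, average gift in £]
    X = [[2, 8, 25], [14, 1, 10], [1, 12, 40], [18, 2, 15],
         [3, 6, 30], [24, 1, 5], [2, 10, 50], [16, 3, 20]]
    y = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = lapsed, 0 = still giving

    model = LogisticRegression().fit(X, y)

    new_donor = [[11, 2, 20]]      # a donor the model hasn't seen before
    print(f"Estimated lapse risk: {model.predict_proba(new_donor)[0][1]:.0%}")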
Vision
AI can now see, not just read. You can upload images, photographs, documents, screenshots, and AI can describe what's in them, extract text, analyse content, answer questions about what it's looking at.
This is useful for digitising handwritten forms, extracting information from documents that aren't machine-readable, analysing photos from events or services, or making visual content accessible. The quality is now good enough to be useful for real work, though still worth checking on anything important.
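A minimal sketch using Anthropic's Python library as one example (other providers work similarly; the model name is illustrative, and check your data boundaries before sending real forms to any cloud tool):

    # Vision via an API: ask a model to read a scanned form.
    # Assumes the anthropic library is installed and an API key is set in the
    # environment; the model name is illustrative - check what's current.
    import base64
    import anthropic

    client = anthropic.Anthropic()

    with open("scanned_form.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                {"type": "text",
                 "text": "Extract the name, date and postcode written on this form."},
            ],
        }],
    )
    print(response.content[0].text)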
Tool use and agents
This is AI that doesn't just generate text but connects to other systems: querying your database, pulling information from your CRM, searching the web, reading documents. The AI decides which tools it needs to answer your question, uses them, and brings the results together.
Agents go further. They don't just gather information but take actions: sending emails, updating records, completing multi-step tasks. Rather than giving you an answer and leaving you to do something with it, an agent can do the thing.
For charities, this opens up possibilities for automating workflows that currently require someone to manually move information between systems. But it also raises governance questions: if AI is taking actions on your behalf, what oversight do you need? Many organisations are already encountering agents through tools like Microsoft Copilot, or through staff using open-source agents like OpenClaw to manage email and calendars. The adoption is often happening before the organisation has consciously decided to use agentic AI, which makes the governance questions urgent rather than theoretical.
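Under the hood, most tool use follows the same loop: the model asks for a tool, your code runs it, and the result goes back to the model for a final answer. A simplified sketch with Anthropic's Python library - the donor lookup is a hypothetical stub, and the model name is illustrative:

    # Tool use: the model requests a lookup, your code performs it, and the
    # result is handed back to the model. The donor lookup here is a stub.
    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-3-5-sonnet-20241022"   # illustrative - check what's current

    tools = [{
        "name": "get_donor_record",
        "description": "Look up a donor's giving history by name",
        "input_schema": {"type": "object",
                         "properties": {"name": {"type": "string"}},
                         "required": ["name"]},
    }]

    messages = [{"role": "user", "content": "Has Jane Smith given anything in the last year?"}]
    response = client.messages.create(model=MODEL, max_tokens=500, tools=tools, messages=messages)

    if response.stop_reason == "tool_use":
        tool_call = next(block for block in response.content if block.type == "tool_use")
        # Your code does the actual lookup (stubbed here) and returns the result.
        result = {"name": "Jane Smith", "last_gift": "2025-03-02", "total_giving": 450}
        messages += [
            {"role": "assistant", "content": response.content},
            {"role": "user", "content": [{"type": "tool_result",
                                          "tool_use_id": tool_call.id,
                                          "content": str(result)}]},
        ]
        response = client.messages.create(model=MODEL, max_tokens=500, tools=tools, messages=messages)

    print(response.content[0].text)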
Edge AI
Edge AI means AI running on a device rather than in the cloud: your phone, your laptop, a sensor, a camera. The AI is right there rather than sending data somewhere else to be processed.
This matters for privacy, since data doesn't leave the device. It matters for speed, since there's no round trip to a server. And it matters for access, since it can work offline.
You're already using edge AI without thinking about it. The face recognition on your phone, voice assistants, photo features, predictive text. As smaller models get more capable, more useful AI can run locally.
For charities, edge AI is relevant when you're working with sensitive data you don't want to send to the cloud, when connectivity is unreliable, or when you need real-time processing. It's still emerging as a practical option for custom applications, but worth knowing about.
Synthetic data
Synthetic data is fake data that behaves like real data. Instead of using actual donor records or beneficiary information, you generate artificial records that have the same patterns, distributions, and relationships as the real thing, but don't relate to any real person.
This matters for charities because so much of your data is sensitive. You can't paste beneficiary case notes into ChatGPT to experiment with analysis techniques. You can't share real donor data with a consultant to test a new approach. You can't let junior staff practice on live databases.
Synthetic data gives you a way to experiment, test, and learn without the risk. You can try out AI techniques, build prototype tools, train staff, and validate approaches, all without touching real personal data.
The limitations are real though. Synthetic data is generated based on patterns in your real data, which means it inherits the biases and gaps. If your real data underrepresents certain groups, your synthetic data will too. And models that work well on synthetic data don't always perform the same way on real data. It's useful for experimentation and development, but you still need to validate with real data before relying on anything in practice.
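A minimal sketch of the idea - generating artificial donor records that mimic the shape of real ones without describing any real person. The distributions here are invented; in practice you'd derive them from your own data:

    # Generate synthetic donor records with realistic-looking patterns.
    # The distributions are invented for illustration.
    import random

    random.seed(42)

    def synthetic_donor(i):
        return {
            "donor_id": f"SYN-{i:04d}",                    # clearly not a real identifier
            "age": max(18, int(random.gauss(52, 14))),
            "months_since_last_gift": random.choice([1, 2, 3, 6, 12, 24]),
            "average_gift": round(random.lognormvariate(3, 0.6), 2),
            "gift_aid": random.random() < 0.7,
        }

    synthetic_records = [synthetic_donor(i) for i in range(1000)]
    print(synthetic_records[0])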
22.Choosing the right type of AI
Not all AI works the same way, and choosing the right type matters.
Large language models like ChatGPT and Claude are probabilistic - they predict likely responses rather than calculating fixed answers. This makes them flexible and creative, but it also means you might get different outputs each time. For some tasks that's fine. For others it's a problem.
In some domains, variability is expected. Ask two marketers to write a campaign and you'll get two different approaches. Here, an LLM's tendency to produce varied outputs fits reasonably well.
In other domains, consistency matters more. Health information, benefits advice, safeguarding assessments. You want the same question to get the same answer, regardless of who's asking or when. Here, deterministic approaches - traditional statistics, rules-based systems, classical machine learning - are often safer.
What level of accuracy do you need?
The Breast Cancer Now transcription system needed to hit 99% accuracy. That sounds impressive, but it also means one in a hundred fields might be wrong. Whether that's acceptable depends on what you're doing with it. For surfacing themes across thousands of responses, a few errors won't change the picture. For individual clinical data, the stakes are different.
The government's Consult tool found that human reviewers made no changes to the AI's categorisation for 60% of responses - but humans only agreed with each other 62% of the time (i.AI, GOV.UK). For that task, AI variability was within acceptable bounds. But that level of agreement wouldn't be good enough for a charity providing health guidance.
Combining approaches
The Learning Lab, a spinout from Guy's and St Thomas' NHS Foundation Trust, faced this directly when building AI-powered clinical assessments. The question wasn't "how accurate can we make the AI?" but "what needs to be reliable, and what can vary?"
The question wasn't "how accurate can we make the AI?" but "what needs to be reliable, and what can vary?"
The clinical logic had to be deterministic, so that patient vitals, valid treatments, and deterioration patterns couldn't vary. But the patient dialogue could use an LLM - that's where natural conversation benefits from flexibility.
Many real applications work this way. The Breast Cancer Now transcription system uses different approaches for different parts of the process depending on what's needed: speed, accuracy, cost, consistency. The engineering is in knowing which tool fits which part of the job.
Which model?
There are hundreds of models with different strengths, speeds, and costs. Some are fast and cheap, good for simple tasks at high volume. Others are slower and more powerful, better for complex reasoning. Some can run on your own computer rather than in the cloud - useful when privacy matters or you need to work offline.
For most charities using off-the-shelf tools, you don't need to worry about this. ChatGPT and Claude make choices about which models to use behind the scenes. But if you're building something custom, or working with a technical partner, the choice of model matters.
23.From conversation to code
The simplest way to use AI is through a chat interface: type a question into ChatGPT or Claude, get an answer. This works well for one-off tasks, exploring ideas, or processing small amounts of information.
But chat interfaces have limits. If you need to process 500 documents the same way, you don't want to paste them in one by one. If you need AI as part of an ongoing workflow, you don't want someone manually copying and pasting every time. This is where programmatic approaches come in.
Embedded AI tools
Between standalone chat interfaces and full programmatic approaches, there's a middle ground: AI embedded in tools you already use. Microsoft 365 Copilot in Word, Excel, and Teams. Google's Gemini in Workspace. Notion AI. Canva's Magic tools.
These don't require any technical setup - the AI is just there, in the applications you're already working in. For many charities, this is where AI will have the most immediate impact: not through a separate tool, but through capabilities that appear in familiar software.
The limitation is that you're working within what the vendor has built. You can't customise how it works or connect it to other systems. And the pricing models (often per-user subscriptions) can add up quickly across an organisation.
APIs
An API (Application Programming Interface) lets you connect to AI services directly from code or other tools. Instead of typing into a chat window, you send requests programmatically and get responses back. This opens up automation: process a batch of files overnight, connect AI to your CRM, build it into a workflow that runs without manual intervention.
Most AI providers offer APIs: OpenAI, Anthropic, Google. The pricing is usually based on usage rather than a monthly subscription, which can work out cheaper at volume or more expensive if you're not careful.
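A minimal sketch of the batch pattern - summarising a folder of documents in one run. This assumes Anthropic's Python library and an API key in your environment; the model name is illustrative, other providers' APIs follow the same shape, and the usual data boundaries apply:

    # Batch processing via an API: summarise every .txt file in a folder.
    # Don't put personal or sensitive data through this without checking
    # your data boundaries first.
    from pathlib import Path
    import anthropic

    client = anthropic.Anthropic()
    Path("summaries").mkdir(exist_ok=True)

    for path in Path("feedback_documents").glob("*.txt"):
        text = path.read_text()
        response = client.messages.create(
            model="claude-3-5-haiku-20241022",   # illustrative - a lighter model is fine here
            max_tokens=300,
            messages=[{"role": "user",
                       "content": f"Summarise the key themes in this feedback:\n\n{text}"}],
        )
        Path(f"summaries/{path.stem}_summary.txt").write_text(response.content[0].text)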
Google Colab
Colab is a free tool from Google that lets you run Python code in your browser without installing anything. It's a good middle ground: more powerful than chat interfaces, but you don't need to be a developer to use it. Many of our recipes use Colab for tasks like data cleaning, analysis, or batch processing.
The WaterAid supporter analysis we described in the data section used Colab to keep data and analysis code together. You can see each step, rerun it, adapt it. It's particularly useful when you want to do something repeatable with your own data.
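The kind of task that sits comfortably in a Colab notebook - a small cleaning sketch with pandas. The file and column names are invented; adapt them to your own export:

    # A typical Colab-sized job: tidy up an exported supporter spreadsheet.
    import pandas as pd

    df = pd.read_csv("supporter_export.csv")

    df["email"] = df["email"].str.strip().str.lower()          # normalise emails
    df["postcode"] = df["postcode"].str.strip().str.upper()    # tidy postcodes
    df = df.drop_duplicates(subset="email")                    # remove duplicate records
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce", dayfirst=True)

    df.to_csv("supporter_export_clean.csv", index=False)
    print(f"{len(df)} cleaned records saved")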
Claude Code and similar tools
Claude Code and similar tools (like Cursor or GitHub Copilot) let you use AI to help write code. You describe what you want, the AI generates code, you run it. This is useful when you have some technical comfort but aren't a full-time developer, or when you want to build something quickly without starting from scratch.
How scale shapes the approach
The right tool depends partly on volume. For a handful of documents or a one-off analysis, chat interfaces are fine. For tens or hundreds of items, you probably want something like Colab where you can batch process. For thousands, or for ongoing automated workflows, you're looking at APIs and proper engineering.
The goal is matching the approach to what you're trying to do. The recipes are organised with this in mind: some are designed for quick wins in a chat window, others assume you'll be working in Colab or with code.
If you're not sure where your task sits, start simple. If you find yourself doing the same thing repeatedly or hitting limits, that's a signal to explore more programmatic approaches.
Part 5: By function
24.AI for fundraising
Income is under pressure across the sector. Individual giving is declining, competition for grants is intensifying, and fundraising teams are being asked to do more with less.
AI can help with the tactical work: reviewing applications, identifying prospects, personalising appeals. But it can also support a shift in how fundraising operates: understanding donor behaviour at scale, predicting where income is at risk, focusing limited capacity on the relationships that matter most.
What we're learning
The strongest evidence so far is in three areas: grants work, predictive analytics, and donor communications.
On the grants side, AI is already making a practical difference for stretched teams. Tools like FreeWill's grant assistants are showing significant impact, with some reporting up to 60% reduction in time spent on administrative drafting. The AI handles scaffolding and structure, freeing fundraisers to focus on the case-making that actually wins funding. The National Lottery Community Fund took this further, building a proof-of-concept AI application assistant to help people applying for grants. The idea was straightforward: applicants often struggle with funder jargon and eligibility questions, so TNLCF trained an AI assistant on their programme data to answer questions like "Am I eligible?" or "What does question 5 mean?" in real time. It worked well for basic queries, but testing revealed it could be inaccurate on nuance, meaning applicants might not always get the advice they need. The project remains ongoing, but it shows where grants support is heading: AI handling the routine guidance so funding officers can focus on the applicants who need more complex help.
For predictive analytics, large UK charities using tools like Dataro and ProspectAI are moving away from blanket appeals. AI models analyse years of CRM data to identify propensity to give, allowing teams to focus personal outreach on the small percentage of donors most likely to become major supporters. The point isn't to send more emails. It's to know where to invest relationship-building time.
Where AI gets more interesting is in donor communications. Muslim Charity, working with Giving Analytics, used AI to generate individually tailored fundraising emails during Ramadan 2024. Rather than sending the same template to their entire database, they used a large language model to analyse individual donor histories and generate thousands of unique emails, each reflecting that donor's specific relationship with the charity. The results were striking: donations per email increased over three-fold compared to their standard templated approach, and not a single email was marked as spam. The work won the Chartered Institute of Fundraising's "Most Powerful Insight Using AI/ML" award. Notably, this worked with imperfect donor data. The team didn't wait for a perfect CRM before experimenting.
Understanding supporter motivation at scale is another growing area. Most charities collect far more qualitative feedback than they ever properly analyse. Survey responses, donor comments, open-ended feedback forms: this material is rich with insight but historically too time-consuming to work with systematically.
London Funders tested this directly. They used Copilot to analyse responses from over 130 grantees, grouping answers into broad themes that correlated with their grant programme areas. It saved significant time and the themes broadly matched what staff expected. But the AI grouped similar-sounding themes too loosely and missed nuance in how organisations described their work. Their advice: "Be specific in what you ask and make sure you also know the data a bit first."
Both the Muslim Charity and London Funders examples point to a broader pattern. AI is genuinely useful for turning large amounts of donor and supporter data into something you can act on. But it works best when someone who already understands the context is guiding the process, checking outputs, and catching the places where the AI smooths over important distinctions.
Where the value is
The most immediate value is in grants and trusts work: reviewing applications before submission, tailoring to specific funders, prioritising where to spend limited bid-writing time. These are tasks that generative AI already handles well, they don't require specialist tools or technical setup, and the risk of getting it wrong is low because a human reviews the output before it goes anywhere. If you're not using AI for grant applications yet, that's the place to start. But it's worth being thoughtful about how. IVAR's research with CAST and the Technology Association of Grantmakers highlights that funders are noticing the shift: application volumes are increasing, and AI-generated bids can be formulaic, making it harder to spot genuinely distinctive or community-led ideas. The charities getting most from AI in this space aren't using it to generate entire applications. They're using it to structure thinking, check alignment with funder priorities, and free up time for the parts that matter most: making the case in their own voice.
Individual giving applications, particularly predicting which donors might lapse, are higher value but need more investment. They require decent historical data and some technical capacity. If your CRM is solid and you've got a few years of donor history, these are worth exploring. If your data is a mess, fix that first.
The bigger opportunity is in shifting from reactive to predictive fundraising. Rather than responding to who gave last month, understanding the patterns that predict future giving, future lapsing, future major gift potential. Muslim Charity's Ramadan campaign shows what's possible when you combine predictive insight with genuinely personal communication. But the gap between that kind of success and where most charities are today is still significant, and the trust implications are real.
Tensions and trade-offs
Personalisation vs authenticity. AI makes it easy to tailor communications at scale, but fundraising depends on relationships and trust. If supporters knew exactly how their appeal was crafted, would they feel respected or manipulated? Muslim Charity's experience suggests that genuinely personalised communications, ones that reflect a real relationship rather than just inserting a first name into a template, can strengthen rather than undermine trust. But getting it wrong is easy. Mass "personalisation" that feels robotic, where donors sense they're a number in a database, is worse than not personalising at all. Donors are increasingly protective of their data, and the AI fundraising tools that succeed long term won't be the flashiest but the most trustworthy.
Efficiency vs relationship. AI can identify who to talk to and when, but major donor work is fundamentally human. The risk is that teams optimise the transactional parts, the segmentation, the timing, the follow-up triggers, while neglecting the relational work that actually converts. AI is at its best here when it surfaces opportunities for human connection, not when it replaces it.
Prediction vs action. Knowing which donors might lapse is only useful if you do something about it. The AI part is often easier than the organisational part: having the capacity, the processes, and the culture to act on what you learn. A model that predicts lapse but sits unused is just expensive data science.
Data quality matters, but it's not a prerequisite for everything. Predictive models need clean data to learn from. But Muslim Charity's campaign succeeded with imperfect data because the approach was well designed. The question is whether your data is good enough for what you're trying to do, not whether it's perfect.
Where to start
When: New to this
Immediate value, no setup required, low risk. A good first taste of what AI can do for fundraising.
When: Good donor data
Higher investment, higher return. Knowing who's about to stop giving is only useful if you can act on it, so make sure you've got the capacity to follow through.
When: Small team stretched across trusts and foundations
When you can't apply for everything, these help you focus limited capacity where it's most likely to pay off.
When: Want to improve donor communications
Start with a specific campaign or segment rather than trying to personalise everything at once. Muslim Charity focused on Ramadan, their peak giving period, where the investment would have the greatest impact.
When: Drowning in qualitative feedback from donors or grantees
Start with what London Funders did: upload anonymised survey responses, ask AI to identify themes, but make sure someone who knows the data is checking the outputs.
25.AI for service delivery
Service delivery is where AI gets most interesting and most difficult. The potential value is significant: stretched teams, complex caseloads, information scattered across systems, demand that outstrips capacity. But the stakes are higher. These applications touch vulnerable people directly, and getting it wrong matters more.
What we're learning
Most of the rigorous evidence comes from healthcare, but the findings are strong and the patterns transfer directly to charity service delivery. The largest NHS AI scribe trial to date, led by Great Ormond Street Hospital across 17,000 patient encounters at nine London sites, found a 23.5% increase in direct patient interaction time and halved the time to complete initial notes. NHS England has since backed wider deployment, publishing a registry of 19 approved AI scribe suppliers. The documentation pattern is now well-evidenced at scale: AI handles the writing, humans get more time with patients.
Kingston Council piloted AI-powered case note assistants for social workers. By automating the drafting of administrative notes, they returned several hours a week to social workers for direct client contact. The AI handled the documentation burden; the humans got more time for the work that actually matters. Citizens Advice SORT built something similar with "CaseNote", a workflow that live-transcribes client calls and drafts case notes automatically. Across a six-week pilot covering around 2,000 advice sessions, it halved average write-up time while maintaining quality scores. The same team has now launched ConvoCoach, an AI role-play training tool where AI personas simulate real clients (who might be stressed, upset, or frustrated) to prepare advisers before they handle live cases. Funded by Money and Pensions Service, it tests emotional intelligence, not just knowledge of advice issues. Alongside Caddy (real-time adviser support) and CaseNote, it gives Citizens Advice SORT a suite of three AI tools, each addressing a different part of the adviser workflow.
Moorfields Eye Hospital developed an AI clinical assistant providing multilingual support, ensuring non-English speaking patients receive accurate, real-time guidance on specialist care. This isn't replacing clinicians. It's extending their reach to people the system was struggling to serve. Limbic Access, an AI chatbot used by 45% of NHS Talking Therapies services for mental health self-referral, shows what happens when AI genuinely reaches people who weren't being reached before. Across 500,000 clinical assessments, referrals from nonbinary people increased by 179%, from Asian patients by 39%, and from Black patients by 40%. 40% of self-referrals now come outside working hours. The gain here isn't efficiency but rather that the service is reaching people it was failing before.
WECIL, a disabled-led charity, created "Cecil from WECIL", a chatbot that acts as an Easy Read translator, turning complex legal or medical documents into accessible information for people with learning disabilities. AI making services more accessible, not less human. The MND Association took accessibility further, co-creating "Mind's Eye" with people living with MND: an AI art generation tool designed specifically for people who can't use standard interfaces, controlled through eye-tracking and switches. It's been popular since launch, with users finding creative expression through a tool designed around their needs rather than requiring them to adapt to existing ones.
Age UK's Telephone Friendship Service uses AI to transcribe and scan calls for safeguarding flags. This allows a small staff team to oversee thousands of volunteer-beneficiary pairings safely, scaling the service without a linear increase in costs. The RSPCA took a different approach to scale, building a chatbot embedded in Google Chat that helps frontline staff access over 4,500 internal documents. Early estimates suggest it could save 160,000 staff hours a year by letting teams self-serve accurate answers rather than waiting for subject experts.
The failures matter too. Early general-purpose AI models gave dangerous advice when presented with mental health crises, sometimes encouraging self-harm. The problem was using off-the-shelf AI for sensitive interactions without constraining it to verified information.
A different kind of failure is illustrated by research into volunteer scheduling optimisation (Kaur et al., 2022). When researchers modelled task assignments that maximised operational coverage without considering volunteer preferences, retention dropped significantly. The algorithm treated volunteering as a logistics problem, but volunteers who consistently get tasks they don't enjoy don't come back. Both cases point to the same lesson - AI that's technically competent but doesn't understand the context it's operating in can do real harm.
Where the value is
Start with information that's currently trapped in unstructured formats. Case notes that take hours to read through for a handover. Enquiries that need classifying before they can be routed. Feedback and records that contain insights nobody has time to extract.
AI is good at this work: summarising, classifying, extracting patterns from text. It doesn't replace professional judgment, but it can surface the information that judgment needs to act on.
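To make that concrete, here is a minimal sketch of the classification pattern using the OpenAI Python library. The categories, the model name and the example enquiry are all illustrative stand-ins rather than a recommendation, and anything that looks like a safeguarding concern should go straight to a person, not into an automated route.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Illustrative categories - replace with whatever routing your service actually uses
CATEGORIES = ["housing", "debt", "benefits", "safeguarding concern", "other"]

def classify_enquiry(text: str) -> str:
    """Ask the model to place one enquiry into a single category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You classify charity service enquiries. "
                        f"Reply with exactly one of: {', '.join(CATEGORIES)}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_enquiry("My landlord has given me two weeks to leave and I don't know my rights."))
```

The same shape works for summarising case notes for a handover: swap the system prompt and pass the notes in as the user message.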
Demand forecasting and resource allocation are practical now. If you're constantly guessing how many people will need your service next month, or manually juggling staff and volunteer capacity across programmes, there are straightforward ways to improve on intuition.
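Improving on intuition doesn't have to mean a predictive model. A deliberately naive baseline like the sketch below, which assumes a simple CSV of monthly referral counts with hypothetical column names, is often enough to show whether next month is likely to be heavier or lighter than usual.

```python
import pandas as pd

# Assumes hypothetical columns "month" and "referrals" in a monthly export
history = (
    pd.read_csv("referrals.csv", parse_dates=["month"])
    .set_index("month")
    .sort_index()
)

recent_average = history["referrals"].tail(3).mean()  # average of the last three months
# The same calendar month as next month, one year earlier (assumes an unbroken monthly series)
year_ago = history["referrals"].iloc[-12] if len(history) >= 12 else None

print(f"Three-month average: {recent_average:.0f}")
if year_ago is not None:
    print(f"This time last year: {year_ago}")
```

If your intuition beats a baseline like this, fine; if it doesn't, that's worth knowing before investing in anything cleverer.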
Accessibility is an underexplored area. Translation, Easy Read conversion, multilingual support: AI can extend your reach to people your current services don't serve well. The WECIL and MND Association examples show what's possible when accessibility is the starting point rather than an afterthought.
The tensions worth thinking about
Automation vs human connection. For many charities, the human relationship is the service. AI can help with the administration around that relationship, but if it starts replacing the relationship itself, you've lost something important. The question isn't just "can AI do this?" but "should it?"
Efficiency vs safety. Faster triage and automated routing can help stretched teams serve more people. But they can create new risks. What if the AI routes someone in crisis to the wrong place? What if it misses a safeguarding concern? Speed and scale need to be balanced against the cost of getting it wrong. The mental health chatbot failures show what happens when AI is deployed in sensitive contexts without proper constraints.
Patterns vs individuals. AI is good at spotting patterns across many cases. But service delivery is often about the individual. The volunteer scheduling research is a good example: optimising for coverage while ignoring that volunteers come for connection. There's a risk of optimising for the average while failing the edge cases, and the people AI handles badly might be the ones who most need your help.
Who decides? When AI starts influencing who gets support, how quickly, and what kind, that's a significant shift. Even if a human makes the final decision, they're often working from what the AI surfaced or recommended. Being clear about where AI is shaping decisions, and building in oversight, matters more here than in most domains.
Safeguarding deserves special attention
We've included a recipe on identifying patterns in safeguarding concerns. This is advanced work and should be approached carefully. The potential value is real: spotting patterns across cases that nobody would see individually, as Age UK is doing with their friendship service. But safeguarding data is sensitive, patterns can be misleading, and there's a danger of either over-reliance or false confidence. If you're considering AI for safeguarding, get proper advice.
Where to start
When: New to this
Both are beginner-level and low-risk. They'll give you a feel for what AI can actually do with your service data without going anywhere near anything sensitive.
When: Accessibility is a priority
Translation and Easy Read conversion can extend your reach to people your current services don't serve well, without high technical investment.
When: Managing volunteers and the coordination is complex
If you're juggling dozens of volunteers across multiple programmes, these address the real coordination headaches at intermediate complexity.
When: Confident and want to tackle something more substantial
This is where the bigger value lies, but the stakes are higher too. Read the tensions section above before diving in.
26.AI for operations
Everyone was promised massive efficiency savings with AI. Whether that's being realised depends on the organisation. Most organisations haven't found a way to measure it yet, and the gains that do exist are tangled up with other changes.
What we are seeing is AI adoption in operations happening organically. Not through big transformation projects, but because someone finds a way to make a repetitive or time-consuming process easier, and it spreads from there. This mirrors a wider pattern: across all functions, individual staff are adopting AI tools informally, often ahead of organisational policy.
What we're learning
The NHS Copilot trial is the largest test of AI for administrative productivity we have. Across 30,000 workers in 90 organisations, staff saved an average of 43 minutes per day, roughly five weeks per year. At national scale, that projects to 400,000 hours saved per month (GOV.UK, October 2025). The main use cases were unglamorous but real: summarising long email chains, drafting routine correspondence, pulling together information from multiple sources.
The British Heart Foundation shows what operational AI looks like at charity scale. BHF is the UK's largest charity retailer, processing around 800,000 donated items a week across nearly 700 shops. They built a custom AI model trained on 1.9 billion rows of their own historical retail data that analyses each item's attributes and suggests optimal pricing and sales channel - in-store, eBay, or Depop. The system went from blank page to live in stores in three months using Microsoft Power Platform and Azure. Projected returns: over £1 million in additional revenue and £500,000 in Gift Aid uplift in year one (Microsoft, 2025). The volunteer angle is worth noting: shop volunteers help train the model by providing feedback on its suggestions, and staff report genuine engagement rather than resistance. As one of BHF's architects put it, "You can see the spark when they realize they're teaching the machine. It's not just automation, it's collaboration."
Operations teams in charities spend significant time on similar admin: compiling reports, chasing information, formatting documents, answering repeated internal questions. The 43 minutes a day won't apply universally, but the direction is clear. London Funders found that even simple automation delivered quick wins: using Zapier to automatically move items between categories when a status changed, or to classify incoming form submissions for easier triage. As they put it, "even small flows of changing a status automatically moving to another category has been really helpful." Their meeting transcription through Fathom saved the most time of all their AI experiments, particularly valuable for an organisation that convenes members and needs to capture rich discussion without losing the conversational quality.
On the financial side, councils are using AI for planning and forecasting. Greater Cambridge Shared Planning uses AI to process thousands of consultation responses. Local authorities are testing AI to cut housing approval timelines from 18 months to weeks. The potential for AI to handle volume and complexity in operational processes is already being demonstrated.
The cautionary note is the same as everywhere: AI only works if there's coherent data for it to work with. Charities that deployed Copilot expecting transformation found it hallucinating because their internal information was scattered across inconsistent folders, outdated PDFs, and contradictory spreadsheets. The UK government's Department for Business and Trade found the same in their Copilot evaluation: 22% of users identified hallucinations, and the trial found no robust evidence that time savings led to improved productivity (DBT, September 2025). Operational AI readiness is data and document hygiene.
Where the value is
Financial monitoring and forecasting offer the clearest returns. Cash flow prediction, spotting sustainability risks early, understanding where you're over-reliant on single funders. These aren't glamorous but they matter: the difference between seeing a problem coming and being blindsided by it.
Reporting automation is another quick win for teams with regular reporting cycles. If someone spends hours every month pulling data from your CRM, copying figures into spreadsheets, and formatting the same report, that's a candidate for automation.
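As an illustration of what that can look like, here is a small sketch using pandas to turn a CRM export into a monthly summary. The file name and column names ("donation_date", "fund", "amount") are hypothetical; the point is that once the export is consistent, the formatting step becomes a script you run rather than an afternoon you lose.

```python
import pandas as pd

# Assumes a monthly CSV export from your CRM with hypothetical column names
donations = pd.read_csv("crm_export.csv", parse_dates=["donation_date"])

summary = (
    donations
    .assign(month=donations["donation_date"].dt.to_period("M"))
    .groupby(["month", "fund"])["amount"]
    .agg(total="sum", gifts="count")
    .reset_index()
)

summary.to_csv("monthly_report.csv", index=False)
print(summary.tail())
```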
Internal tools are an underexplored area. Teams often need small things: a form that saves to a spreadsheet, a dashboard showing key stats, a way to query organisational data without knowing SQL. These used to require developer time. Now they can often be built with AI assistance, even by people who aren't technical. RSPCA Coventry's website API feed, built with AI assistance in about 40 hours (described in the buy, build or wait section), is one example of what's now possible without dedicated developer resource.
The "AI readiness" recipes are worth flagging. Before investing in AI tools, it helps to assess whether your organisation can actually use them. What's the state of your data? What features are already in tools you're paying for but not using? Should you build something custom or wait for capabilities to arrive in existing platforms?
The tensions worth thinking about
Efficiency vs oversight. Automating operational processes can save significant time. But operations often involve judgment calls, context, and exceptions. If AI is handling things automatically, who notices when something's wrong? The NHS found value in AI drafting letters, but humans still reviewed them. That balance matters, and nobody wants to spend their time checking AI's work. The goal is building in verification loops that don't just shift the burden from doing to checking.
Tool proliferation vs coherence. It's easy to end up with AI tools scattered across the organisation: this team uses Claude, that team uses ChatGPT, finance has a custom forecasting model, comms has an AI writing assistant. The result can be fragmented knowledge, inconsistent practices, and data flowing in directions nobody's tracking.
Building capability vs getting things done. Some of the operations recipes are about building AI infrastructure: setting up Claude Code, creating internal tools, chaining workflows. This builds long-term capability but takes time. Others are about quick wins: using Claude Projects for persistent context, everyday office tasks. Both have value, but be honest about your capacity for each.
Where to start
When: New to this
Before investing in anything new, check what features are already in tools you're paying for. You might be surprised how much is sitting there unused.
When: Regular reporting that eats up time
If someone spends hours every month pulling the same data into the same spreadsheet, this can deliver quick value. But your data sources need to be reasonably clean first.
When: Cash flow or financial sustainability is a concern
Intermediate-level, but these address real anxieties. The difference between seeing a financial problem coming and being blindsided by it.
When: Want to build internal capability
This opens up a lot of what's in the more technical recipes, but be realistic about the time investment. It's a commitment, not a quick win.
27.AI for impact measurement
Impact measurement has always presented an interesting challenge for charities. On the one hand, it's the absolute core of what they're about. On the other, the cost of measuring it properly can sometimes exceed the cost of delivery itself.
The result is that most charities do what they can within the resources they have, which often means focusing on what's feasible to capture: people reached, sessions delivered, reports produced. Actual outcomes, lives changed, problems solved, are harder to measure and even harder to attribute to your work.
AI doesn't solve this fundamental challenge. But it does make data capture and analysis more feasible than it used to be.
What we're learning
Space4Nature, a project from Surrey Wildlife Trust, uses satellite imagery and AI computer vision to track habitat restoration in real time. This provides verifiable environmental outcomes, not "we planted 500 trees" but actual measurable change in biodiversity. It replaces anecdotal evidence with hard data for reporting requirements like Biodiversity Net Gain.
This kind of approach, using AI to capture real-world outcomes at scale, is still uncommon, but the technology is ready. Most impact measurement in charities still relies on surveys, case studies, and self-reported data. But AI is making it easier to do more with what you've got.
Where we're seeing practical value is in making sense of qualitative data. Transcribing interviews, finding themes across feedback, extracting outcomes from narrative reports. Work that used to take days of manual coding can now be done in hours. This doesn't change what outcomes you achieved. But it means you might actually have time to understand them.
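A minimal version of that theme-finding pass, assuming a folder of anonymised transcripts saved as plain text and the OpenAI Python library, might look like the sketch below. Long transcripts may need splitting into chunks, and the output is a starting point for someone who knows the interviews, not a finished analysis.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical folder of anonymised interview transcripts saved as plain text
transcripts = [p.read_text() for p in Path("transcripts").glob("*.txt")]

prompt = (
    "These are anonymised interview transcripts from one project. "
    "List the five most common themes, each with a one-sentence summary "
    "and a short supporting quote.\n\n" + "\n\n---\n\n".join(transcripts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```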
The Masonic Charitable Foundation tested this directly, using AI to analyse monitoring reports from across their grant portfolio and extract common themes. The conclusions tallied with staff experience for the most part, but the AI also surfaced patterns that would have taken weeks to identify manually across that volume of reports. Paul Hamlyn Foundation tried something more radical: replacing written grant reports with recorded phone conversations, transcribed by AI. The idea was to reduce reporting burden on grantees while still capturing what mattered. It's an interesting experiment in whether AI can help shift impact reporting from written to conversational, though they found the approach worked better for some grantholders than others.
Noise Solution CIC created a different kind of impact data entirely. Young people record reflection videos after music sessions, and AI analyses these for wellbeing indicators and personal development markers. The approach turns natural conversation into structured data, creating evidence where none existed before. The work was successful enough to spin out into a separate organisation, Transceve, to help other charities do the same.
What does the feedback actually say?
One of the things AI is good at is analysing sentiment across large volumes of text. When you do this with beneficiary feedback, you get a more accurate picture than relying on what naturally surfaces.
It's human nature to notice and remember the positive comments. The thank you emails get shared, the success stories make it into reports. When AI analyses everything systematically, the picture may be more mixed than internal perception suggests. London Funders found this when they asked Copilot to analyse survey comments from over 130 grantees: it identified themes that correlated with their grant programme areas far faster than manual analysis would have allowed, and surfaced patterns across the full dataset rather than the handful of responses that happened to catch someone's eye.
If decisions about services are based on an incomplete sense of how they're valued, that's worth knowing. Especially when resources are tight and choices have to be made about what to continue and what to stop.
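Mechanically, the systematic pass is straightforward: score each comment, then tally. The sketch below assumes the OpenAI Python library and a placeholder model name; the comments shown are made up, and the labels are deliberately coarse.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def score(comment: str) -> str:
    """Ask the model for a one-word sentiment label for a single comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Label the sentiment of this feedback comment. "
                        "Reply with exactly one word: positive, mixed, or negative."},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# In practice these would be your anonymised free-text survey responses
comments = ["The advisor was brilliant", "Took weeks to hear back", "Helpful, but hard to reach"]
tally = Counter(score(c) for c in comments)
print(tally)  # e.g. Counter({'positive': 1, 'mixed': 1, 'negative': 1})
```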
Where the value is
Start with the qualitative data you already collect but don't have time to properly analyse. Interview transcripts sitting in folders. Free-text feedback from surveys. Narrative reports from projects. AI can find patterns across this material that would take weeks to identify manually.
Comparing your impact against sector benchmarks helps you contextualise what you're achieving. You helped 500 people this year, but is that a lot or a little for an organisation with your budget and focus? Without context, numbers don't mean much.
Generating narrative from data is useful for reporting, but use it carefully. AI can turn your numbers into fluent prose, but fluent prose isn't the same as meaningful insight. The risk is producing impressive-sounding impact reports that don't actually say much.
The tensions worth thinking about
Measuring what's easy vs what matters. AI makes it even easier to count things. But counting isn't the same as understanding impact. The temptation is to produce more metrics because you can, not because they illuminate anything. AI should help you focus on what matters, not generate more noise.
Funder pressure vs honest reporting. Funders want impact data. AI can help produce it. But there's a risk of using AI to generate the narrative funders want rather than the reality of what happened. The technology makes it easier to tell compelling stories. That's a responsibility, not just a capability.
Attribution remains hard. AI can help you understand what changed for the people you worked with. It can't prove you caused that change. The fundamental challenge of impact measurement, separating your contribution from everything else in someone's life, doesn't go away because you have better tools.
Where to start
When: New to this
Beginner-level, takes hours not days, and shows you what AI can do with qualitative data you probably already have sitting in a folder somewhere.
When: Want a reality check
It might be the most valuable hour you spend. AI is surprisingly good at poking holes in logic you've stopped questioning.
When: Qualitative data piling up
Interview transcripts sitting in folders, free-text survey responses nobody's read properly. These recipes can clear the backlog and surface insights you've been missing.
When: Need to report to funders
AI can turn your numbers into fluent prose, but fluent isn't the same as meaningful. Use this to get a draft started, not to replace honest reflection on what actually happened.
28.AI for communications
Communications is where AI has been most visibly adopted. The tools are accessible, the use cases are obvious, and much of the work involves writing. Many comms teams have been experimenting since ChatGPT launched, and often advocating for AI adoption across the wider organisation.
But obvious doesn't mean simple. There are real questions about what AI does to the craft of communications, and whether faster production actually serves your mission.
What we're learning
AI is genuinely useful for the mechanical parts of comms work. Taking a case study and reformatting it for different channels: the 200-word version for the annual report, the 100-word version for the newsletter, the social media snippets. Drafting responses to common supporter emails. Creating first drafts that a human then refines. The National Lottery Community Fund tested AI for generating press summaries of funded projects and found it easy to use and time-saving, though some summaries felt too generic even after adjusting prompts. That's a common experience: AI gets you 70% of the way quickly, but the last 30% still needs a human who knows the voice and the nuance.
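The reformatting pattern is a loop over target formats rather than anything clever. A sketch, assuming the OpenAI Python library and hypothetical channel lengths, is below; every output still needs a human edit for voice and accuracy.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

case_study = Path("case_study.txt").read_text()  # a piece you've already published

# Hypothetical target formats - swap in whatever channels you actually use
formats = {
    "annual report (200 words)": 200,
    "newsletter (100 words)": 100,
    "social post (40 words)": 40,
}

for label, words in formats.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Rewrite this case study in roughly {words} words for the {label}, "
                       f"keeping our tone and all factual details:\n\n{case_study}",
        }],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```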
What's less clear is whether this frees people up for better work, or just increases the expectation that more content gets produced. The content treadmill is real: if AI means you can publish twice as often, does that become the new baseline? London Funders found AI was better for structure and planning than actual writing, noting that "some of the language was so far removed from how the team would write" that heavy editing was needed. That's worth remembering: AI can scaffold, but voice still comes from people.
There's value in the analytical applications too. Understanding which themes resonate with supporters, tracking what people are saying about your cause on social media. These help you be more strategic rather than just faster. Google NotebookLM is proving particularly useful for charities that need to consolidate large amounts of information, scanning documents to identify key themes and spotting gaps in knowledge bases, especially in policy-heavy areas.
Goose, the AI marketing platform built for heritage organisations by the Arts Marketing Association and National Lottery Heritage Fund, takes a different approach altogether. Instead of generating content, it works as a set of "thinking partners" - AI personas that ask probing questions and help professionals work through marketing strategy. Heritage organisations were deeply sceptical about AI's impact on authenticity, so a tool that enhanced professional judgment rather than replaced it was the only approach that would have been adopted. Early testing showed these strategic conversations averaged 20 messages per session, compared to 7 for typical AI interactions - closer to genuine collaboration than content production.
And accessibility is an underexplored area. Generating Easy Read versions of documents, building translation workflows that maintain quality. AI can extend your reach to audiences your current content doesn't serve. Action for Children found AI was good at simplification but not always at appropriate simplification, reinforcing that review by the people the content is designed for remains essential.
The tensions worth thinking about
Craft vs production. Many comms people got into the work because they care about language: finding the right word, telling a story well, capturing a voice. AI can produce fluent content quickly, but fluent isn't the same as good. There's a risk of comms becoming more about editing AI output than creating something distinctive. What happens to craft when the first draft is always machine-generated?
Volume vs value. AI makes it easy to produce more. But should you? More content isn't always better content. More emails to supporters isn't always more engagement. The question isn't just "can we produce this faster?" but "should we be producing this at all?"
Authenticity vs efficiency. If supporters knew that thank you message was AI-generated, would they feel differently about it? There's no universal answer, but it's worth asking. The transparency questions from earlier in this playbook apply here directly.
Voice and distinctiveness. Everyone's using the same tools. There's a risk of charity communications converging on a generic AI voice: fluent, friendly, slightly bland. Maintaining what's distinctive about how your organisation communicates takes deliberate effort when AI is doing the first draft. Street Support Network learned this when building a custom GPT: "AI outputs weren't valuable until we 'taught' it who Street Support is - brand work had to come first." Getting clear on your voice, tone and values before bringing AI in makes the outputs far more useful.
Skills and development. If junior comms people spend their time editing AI output rather than writing from scratch, what happens to their development? How do you build craft if you're not practising it? This matters for the long-term health of comms teams, not just immediate productivity.
Where the value is
The clearest value is in reformatting and repurposing. Take content that already exists and adapt it for different channels, formats, and audiences. This is mechanical work that AI handles well, freeing humans for the strategic and creative decisions.
Accessibility applications are worth prioritising. Generating Easy Read versions, building translation workflows, making content work for audiences currently underserved. This is AI extending your reach rather than just speeding up what you already do.
Analysis can inform strategy. Understanding what themes resonate, what supporters respond to, what's being said about your cause. This helps you focus effort where it matters rather than guessing.
First drafts can be useful, but treat them as starting points, not finished work. The value is in having something to react to, not in publishing what the AI produces.
Where to start
When: New to this
Beginner-level, takes hours not days, and shows you what AI can do without touching anything sensitive. Take a case study you've already published and see how quickly AI can adapt it for different channels.
When: Accessibility is a priority
This is AI extending your reach to people your current content doesn't serve, not just making existing work faster. Still needs review by the people the content is designed for.
When: Want to be more strategic
Helps you stop guessing what works and start knowing. More useful than producing more content faster.
When: Drowning in supporter emails
Can free up real time, but the responses still need to feel human. If supporters knew it was AI-generated, would they feel differently about it?
29.AI for data
We've covered why data is often the blocker for AI adoption across organisations. This section is specifically for data teams: what AI can do for your own work.
The operational trap
Data teams in charities often get stuck in operational mode: cleaning up messes, running reports, answering ad hoc queries, maintaining systems that were set up years ago. The strategic work (finding insights that change decisions, helping the organisation understand its impact, building capability for the future) gets squeezed out.
AI can help with both. But the particular opportunity is in automating enough of the operational burden that there's actually time for strategic work. The Health Foundation explored this when they built a tool to search and synthesise data from hundreds of research outputs produced by their grant holders. The outputs existed but were scattered and hard to interrogate. Other teams immediately saw the value of a searchable library, though the Foundation found that just collating records of what had been funded presented an enormous challenge. The lesson is familiar: AI can unlock data, but only once you've done the unglamorous work of knowing what you have and where it is.
Where the value is
Data cleaning and quality is where you'll see results fastest. Checking for problems automatically, detecting duplicates, standardising messy contact data, cleaning up years of inconsistent data entry. This work has always been important but tedious. AI makes it faster and more thorough. The recipes can help you tackle the backlog that's been sitting there for years.
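A sketch of that first pass, assuming a contacts export with hypothetical column names: standardise the obvious fields, flag exact duplicates automatically, and flag only possible duplicates for a person to review.

```python
import pandas as pd
from difflib import SequenceMatcher

# Assumes a contacts export with hypothetical columns "name", "email", "postcode"
contacts = pd.read_csv("contacts.csv")

# Standardise the obvious things first
contacts["email"] = contacts["email"].str.strip().str.lower()
contacts["postcode"] = contacts["postcode"].str.replace(" ", "").str.upper()

# Exact duplicates on email are safe to flag automatically
exact = contacts[contacts.duplicated(subset="email", keep=False)]

# Near-duplicate names are only flagged for review, never merged automatically
def similar(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = contacts["name"].dropna().tolist()
suspects = [
    (a, b) for i, a in enumerate(names) for b in names[i + 1:]
    if similar(a, b) > 0.9
]

print(f"{len(exact)} exact duplicates, {len(suspects)} name pairs to review")
```

The fuzzy-matching loop is slow on very large lists, but for most charity-sized contact files it's fine, and nothing gets merged without a human looking at it.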
Processing documents in bulk is another area where AI changes what's feasible. Digitising handwritten forms, extracting information from PDFs, categorising transactions. Work that would take weeks manually can now happen in hours.
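For PDFs that already contain a text layer, the extraction step is a few lines; this sketch uses the pypdf library, and the folder and file names are placeholders. Scanned or handwritten forms are different: they need OCR or a vision model before any of this works.

```python
import csv
from pathlib import Path

from pypdf import PdfReader

# Pull the raw text out of every PDF in a folder so it can be searched or analysed later
rows = []
for pdf_path in Path("incoming_forms").glob("*.pdf"):
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    rows.append({"file": pdf_path.name, "text": text})

with open("extracted_text.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "text"])
    writer.writeheader()
    writer.writerows(rows)
```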
Then there's making data accessible to others. Building tools that let colleagues ask questions about data in plain English rather than waiting for you to run a query. This can shift the data team from bottleneck to enabler. CAST built a custom GPT called "Data Dotty" specifically for analysing survey data and generating impact insights. They found it useful for specialist tasks like spotting trends across anonymised surveys, though they noted that the output varies even with the same instructions, so consistency needs human oversight.
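One common shape for this is having the model translate a plain-English question into a query you can see and check before trusting the answer. The sketch below assumes an SQLite copy of the data, a hypothetical table, and the OpenAI Python library; in practice you'd run it against a read-only copy and always show the generated SQL, because a fluent-sounding answer built on the wrong query is worse than no answer.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()

SCHEMA = "donations(id, donor_id, amount, donation_date, fund)"  # hypothetical table

def ask(question: str) -> list:
    """Turn a plain-English question into SQL, show it, then run it against a local copy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write a single SQLite SELECT query for this schema: {SCHEMA}. "
                       f"Question: {question}. Reply with the SQL only, no explanation.",
        }],
    )
    sql = response.choices[0].message.content.strip()
    if sql.startswith("```"):  # strip a markdown fence if the model adds one
        sql = sql.strip("`").removeprefix("sql").strip()
    print("Generated SQL:", sql)  # a human should sanity-check this before trusting the results
    with sqlite3.connect("donations.db") as conn:  # query a copy, not your live system
        return conn.execute(sql).fetchall()

print(ask("What was our total income by fund last year?"))
```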
And there's genuine analytical value: spotting patterns you weren't looking for, extracting insights from datasets too small for traditional statistical approaches. AI can help you find things in data that would otherwise stay hidden. GRCC (Gloucestershire Rural Community Council) piloted a smart dashboard that collates and analyses community-level data from multiple sources. Their key finding was that "even with cutting-edge tools, trust is the foundational currency" - community partners engaged more when they saw themselves as co-owners of the data, not sources of it.
Synthetic data (covered in the Ingredients section) can help you experiment safely when you can't use real beneficiary or donor data.
The tensions worth thinking about
Cleaning vs using. There's always more cleaning to do. AI makes cleaning easier, which is good. But the risk is that you spend all your new capacity on more cleaning rather than actually using the data. At some point, good enough is good enough.
Automation vs understanding. If AI is processing your data automatically, do you still understand what you have? There's value in the manual work of data cleaning: you learn about data quality, you spot systemic issues, you understand the limitations. Fully automating might mean losing that understanding.
Accessibility vs governance. Making it easy for colleagues to query data themselves is useful. But it also means less control over how data is used and interpreted. Someone asking questions in plain English might not understand the caveats and limitations that you'd normally explain.
Where to start
When: New to this
Beginner-level and genuinely eye-opening. It shows you what's possible without any setup, and you'll probably learn something about your own data in the process.
When: Data quality is your biggest headache
That backlog of messy data you've been meaning to sort out for years? AI makes it faster and more thorough than doing it manually. Start here before trying anything more ambitious.
When: Want to enable others
Lets colleagues get answers without waiting for you to run a query. This can shift the data team from bottleneck to enabler, but think through the governance first.
When: Planning for the future
The decisions you make now about how data is collected and stored will determine what AI can do with it later. Worth an afternoon's investment before you need it.
Part 6: Conclusion
30.Final thoughts
This playbook will date. The tools will change, the capabilities will expand, some of the examples will look quaint. That's the nature of writing about technology that's moving this fast. But the underlying argument won't shift as quickly: start with a problem you understand, get your data in order, invest in people, and be honest about what works and what doesn't.
The charities getting the most from AI right now aren't the ones with the biggest budgets or the most technical teams. They're the ones that tried something specific, learned from it, and built on what worked. That's available to any organisation willing to start.
We don't know where this goes. Nobody does, and anyone who claims certainty is selling something. But the organisations that start learning now, even with small experiments, will be better placed wherever it goes.
Pick a problem. Try a recipe. See what you learn.
About the authors
Make Sense of It is an applied AI agency working with charities, foundations and social-good organisations. We built many of the systems described in this playbook, including the Breast Cancer Now survey transcription system, the Goose AI marketing platform for heritage organisations, and The Learning Lab's clinical simulation engine. We also designed and delivered the AI capability programmes for the National Fire Chiefs Council and WaterAid.
We help organisations figure out where AI adds genuine value, build the things that work, and develop the capability to keep going without us. If you've read this playbook and want to talk about what it means for your organisation, get in touch. This playbook and the AI Recipes for Charities collection are part of that work.