Getting Started with AI for Nonprofits

In the world of technology, features often get a lot of buzz for a while and then fade – take Facebook’s metaverse (still happening? Who knows…) or cryptocurrency (does anyone still have Bitcoin?). But one technology made a huge splash whose tidal wave is still building: artificial intelligence (AI), and, more specifically, consumer-focused Large Language Models (LLMs). So let’s look at how AI can be used by nonprofits.

While much of this technology might seem overwhelming at first, there are several great use cases nonprofits and community-based organizations can jump into right away to get comfortable with how these tools work and to see how they can benefit their own work.

What are LLMs?

While several types of AI systems are becoming increasingly available, Large Language Models (LLMs) are generating the most buzz right now because several companies – Google, Facebook, OpenAI – have released their models for public use. LLMs are text-based algorithms trained on large amounts of textual data to respond to text prompts in a human-like fashion. These models are built on advanced techniques – you might have seen the term “deep learning,” which refers to the neural-network approach behind them – making them powerful and versatile for everyone from everyday users to deep subject-matter experts.

These models predict the next word based on the data they were trained on. For example, if you ask for a recipe with chickpeas, the model will likely generate one that includes chickpeas because its training data contained recipes with that ingredient, and it has learned the association. If you also mention rice, the model will generate a recipe combining rice and chickpeas, because those ingredients frequently appear together in recipes it has seen. Essentially, the model predicts the most likely next word based on its understanding of word associations.
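To make next-word prediction concrete, here is a toy sketch in Python. Real LLMs use deep neural networks trained on billions of documents; this simple bigram counter, with a made-up three-sentence “corpus,” only illustrates the underlying intuition of picking the most likely next word from observed word associations.

```python
from collections import Counter, defaultdict

# A made-up three-sentence "training corpus" for illustration only.
corpus = [
    "chickpeas and rice make a hearty stew",
    "rice pairs well with chickpeas and spices",
    "roast the chickpeas and add olive oil",
]

# Count how often each word is followed by each other word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("chickpeas"))  # prints "and": it always followed "chickpeas" above
```

Scaling this from word-pair counts to transformer networks trained on vast amounts of text is, loosely speaking, what turns a toy predictor into an LLM.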

Some technical knowledge helps in understanding how these models work under the hood, but they are designed to be accessible to most people. The companies developing these models have been careful to limit the kind of text they can generate, aiming to prevent derogatory, sexist, racist, or otherwise inappropriate content. These AI models are intended as tools for individuals and organizations to enhance productivity and foster creativity.

What products exist right now?

New AI and LLM products are coming out every week. While there are variations across them, and reasons certain AIs might be a better fit for individual needs, I’ll focus on two options that are currently free (although, with time and wider adoption, this is subject to change).


First up is Google, which offers two AI tools: Bard AI and Labs AI. Bard AI is a typical chatbot where you can input prompts and hold conversations. Labs AI, on the other hand, is integrated into Google Docs and appears as a small star-and-pencil icon; clicking it opens a prompt window over the document where you can enter your query. The text Labs AI generates can then be recreated, refined, or inserted into the document. However, since both tools are still in experimental or beta mode, they may struggle when asked about topics they don’t have sufficient knowledge of.


Second is OpenAI’s ChatGPT, arguably the most talked-about AI model. Built upon OpenAI’s earlier GPT-2 and GPT-3, it works as a conversation-style chatbot, similar to Bard (which came out after ChatGPT): you provide a question or statement, and the bot predicts the best next word based on the information you gave it. With the number of people who have flocked to these technologies, the servers will occasionally be overwhelmed and you’ll be asked to come back later, but when they are available, they are incredibly powerful tools.

What are examples of helpful nonprofit use cases?

Knowing more about these LLMs is helpful, but in what practical ways could nonprofit professionals use these tools in everyday life? The possibilities are endless, but here are three high-level categories of how these two tools, in particular, can be integrated into your work.

Writing

The clearest use case for these text-based models is having them help you with writing. If you don’t provide enough information, the output can feel useless, but these models are very capable with most other types of writing that can take up your time! Two of the best examples are writing emails and writing grants.

Emails

For many professionals, writing emails can eat up a great deal of time because it can be hard to strike the right tone of professionalism for a given audience. Provide an LLM with a prompt such as “write a concise, professional email for a potential donor,” and the result will be open-ended enough to fill in the blanks and adjust to your specific use case. Similar prompts can be created with different audiences, lengths, and tones in mind. Changing the prompt slightly can produce new results that take much of the pressure off of you to do all the work! Here are a few helpful prompts for email writing:

  1. Write a concise, professional email to extend an offer to a job interview candidate.
  2. Draft a professional email to reconnect with a foundation donor contact.
  3. Draft an email explaining my nonprofit’s most recent accomplishments to a general audience.
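For teams comfortable with a bit of code, the same prompts can also be sent programmatically. Below is a minimal sketch using OpenAI’s Python library; it assumes you have installed the `openai` package and set the `OPENAI_API_KEY` environment variable, and the model name shown is an assumption you may need to swap for whichever model is available to you.

```python
import os

def build_email_prompt(audience, tone="concise, professional"):
    """Assemble an email-writing prompt for a given audience and tone."""
    return f"Write a {tone} email to {audience}."

# Only call the API if a key is configured (the call uses paid quota).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # requires: pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute any model you can access
        messages=[{"role": "user",
                   "content": build_email_prompt("a potential donor")}],
    )
    print(response.choices[0].message.content)
```

Changing the `audience` and `tone` arguments mirrors the prompt-tweaking described above: “a job interview candidate,” “a foundation donor contact,” and so on.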

Grants

As for writing grants, LLMs of course will never know everything about your organization and cannot draft anything for a specific program. However, they can assist by creating outlines and fill-in-the-blank structures for organizational descriptions or cover letters. Try “write a nonprofit description to use in a grant” and see how it provides a helpful skeleton to fill in with your own information. Stop wasting time thinking through structures, and build on models that have worked in the past. Try a few of these prompts to get started:

  1. Write a description for a nonprofit evaluation plan for a grant.
  2. Draft a list of data analytics for a program relating to women and children.
  3. Draft 10 reasons why a small nonprofit would want to apply for funding for women and children right now.

Synthesizing

One of my favorite features of these LLMs is their ability to synthesize information. Try taking several paragraphs from a document, or even from your own written meeting notes, and provide the following prompt: “Write the two main points from this text: ‘[insert notes here]’.” This lets the model take in new information and write something that gets at the core of what is being said.
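If you find yourself reusing this synthesis prompt, a tiny helper can wrap any notes in the template for you. This is a hypothetical convenience sketch – the function name and template wording are my own – whose output you would paste into ChatGPT or Bard (or send to an API).

```python
def synthesis_prompt(notes, n_points=2):
    """Wrap raw notes in the summarization prompt described above."""
    return f'Write the {n_points} main points from this text: "{notes}"'

# Example: made-up meeting notes, for illustration only.
meeting_notes = (
    "We agreed to pilot the food program in two schools starting in May. "
    "The budget review was moved to Friday, and the grant letter needs a draft."
)
print(synthesis_prompt(meeting_notes))
```

Raising `n_points` asks the model for a longer summary, which can be handy when a single meeting covered several decisions.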

This can be extremely useful when you’re trying to find the right words to summarize a meeting or write an abstract for a very long document. Let the AI take a first pass, and you can return to change what didn’t quite land. Sometimes the summary even reveals that what you thought the document was saying isn’t actually coming through!

Additionally, several companies are building on ChatGPT to create plug-ins that conduct note-taking during video calls or in-person meetings. This article provides insights into what is currently available (and currently free to use!), which might be worthwhile for freeing up your staff to participate fully in meetings without worrying about capturing accurate notes.

Researching

While most of these LLMs are trained on data only up to 2021, they can still be useful for light research and brainstorming. Asking about something that happened recently won’t be helpful, but asking the model to write a paragraph about a historical figure or a particular location will often produce clear results. Additionally, while we don’t know exactly what information is in their training data, asking about similar organizations when conducting a landscape map, or asking about social service gaps in a geographical area, can start to provide helpful insights for building programs and grants.

These AIs can also assist in brainstorming topics and can provide ideas for everything from program structure to workshop titles to market research. Try a few of these research-based prompts.

  1. Provide 5 different outlines for a new program providing food to children in schools.
  2. Draft 10 workshop titles for a program involving values and personal ethics.
  3. What are 10 organizations that provide social services to women in Seattle?

What are the risks?

Some discussion has arisen about how little we know about how these algorithms work – how a text prompt leads the AI to the conclusion it provides from the information it is given. This “black box” is both the key to and the problem with artificial intelligence: much like the human brain, we don’t fully understand how it works or why it arrives at the conclusions it does. We know what it was trained on but not how that training leads to the output. In many ways, this is similar to teaching a classroom full of students the same content: based on differences in what they had learned before the lesson, they produce varying answers to the same questions.

While this black box raises transparency concerns, we should also keep in perspective that we interact daily with algorithms we don’t understand, including the Google search engine and the Facebook news feed. While some of these do have an underlying equation you can find and attempt to dissect, it is not uncommon nowadays for algorithms to work off data and arrive at a conclusion without us fully understanding exactly how they got there.

Tools, not replacements

Importantly, these models should be considered tools, not people; they are trained on data sets with inherent biases and inequalities. While the companies behind the models are working to keep harmful output from coming out of the tools, the models will still make mistakes and may provide false information.

Many scholars and technologists have been working to “hack” these models into saying things they were not supposed to, but this is not the experience of most people using them. Still, you should always do your due diligence when working with these tools, including double-checking any facts the models provide, as they are known to produce false or misleading information.

Final Thoughts

As with other platforms and technologies that have become popular over the past few decades, artificial intelligence is a tool to be used alongside the knowledge and skills we have as humans. Resisting getting to know these tools, even just in principle, can lead to longer-term setbacks for organizations, especially nonprofits.

Keeping up with the latest trends and innovations can feel futile when we face so many other issues right now, and with no clear link between AI and providing better social services or building community in our neighborhoods, it is tempting to write these tools off as useless. But starting with these few ideas and incorporating AI into your nonprofit can improve your writing, synthesis, and research – and you will build familiarity so that when these same tools expand, there is less of an initial learning curve.

What we do

Oh, and don’t forget all our work over at Layr using AI and language models to synthesize current research. We’ve also talked about using AI in program evaluation. So jump around and read some more!