Letters of Recommendation are very biased. But we can change that.
Studies* show that letters of recommendation are biased in multiple ways, reinforcing stigma that already exists. That stigma perpetuates negative beliefs about women; trans, lesbian, gay, bisexual, queer (TLGBQ) folks; and Black, Indigenous, and People of Color (BIPOC). The bias in our writing is both conditioned by society and a part of how our own unconscious bias plays out.
One of the most documented ways bias shows up in our writing is through gendered terminology and associations. The words we use to describe someone can carry feminine or masculine associations. Take the word nurturing, for example. It isn’t an explicitly gendered word, but it’s commonly associated with the feminine. The person reading that letter of recommendation may be more likely to minimize the overall value of the person being described, because our Western, capitalist culture does not value being nurturing (and feminine-coded traits generally) to the same extent as characteristics associated with the masculine. Words like driven, decision maker, decisive, and expert are often associated with the masculine and are overvalued.
Gendered language wouldn’t be an issue if we valued someone’s capacity to nurture the same way we value someone’s expertise. The reality is that we live in an imperfect world where our biases shape what we value and the words we use to describe those values.
Thankfully, there are ways we can combat this bias.
Usually, we write a letter of recommendation because we’ve played a role in someone’s life that lets us speak to their strengths, offer insight into their values, and explain why we are recommending them. When writing these letters, though, we can inadvertently disenfranchise someone by writing something that is skewed by our own human biases.
These are my go-to tools for writing better, less biased letters of recommendation:
The Tom Forth calculator pulls out words with gendered associations to show you, the writer, whether you are using more feminine- or masculine-associated words to describe someone. It highlights the “female-associated words” and “male-associated words” in your writing. The tool is web-based and doesn’t save any part of the letter you paste in for analysis. It’s a really interesting tool to learn from: it surfaces what we consciously mean to say and what we might be saying unconsciously in our writing.
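Under the hood, a checker like this can be quite simple. Here is a minimal sketch of the idea in Python; the word lists below are illustrative examples I chose for demonstration, not the calculator’s actual lists, and the real tool may work differently.

```python
import re

# Illustrative word lists (hypothetical, not the tool's real lists).
FEMININE_ASSOCIATED = {"nurturing", "supportive", "warm", "helpful", "compassionate"}
MASCULINE_ASSOCIATED = {"driven", "decisive", "expert", "leader", "confident"}

def count_gendered_words(letter: str) -> dict:
    """Return the feminine- and masculine-associated words found in a draft."""
    words = re.findall(r"[a-z]+", letter.lower())
    return {
        "feminine": [w for w in words if w in FEMININE_ASSOCIATED],
        "masculine": [w for w in words if w in MASCULINE_ASSOCIATED],
    }

draft = "She is a nurturing mentor and a driven, decisive expert in her field."
print(count_gendered_words(draft))
# {'feminine': ['nurturing'], 'masculine': ['driven', 'decisive', 'expert']}
```

Even this toy version makes the imbalance visible: one short sentence can lean heavily toward masculine-coded praise without the writer noticing.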
This handout from Columbia University, Avoid Implicit Gender Bias In Recommendation Letters, identifies key ways we could be disenfranchising someone with a letter of recommendation. It provides suggestions on how to “ensure that similarly qualified [people]” (Note: it says men and women, but let’s change that to “people” so we are including nonbinary folks and anyone else who identifies outside of the binary) “are described in similar language, thereby avoiding unconscious bias.”
Finally, check out Letters by Lucinetic. It’s a platform that both the letter of recommendation requestor and the writer can use to share information and make the writing process easier. Lucinetic uses AI it has trained to identify the gender bias that occurs in our natural writing, and it suggests alternative text that won’t further disenfranchise the requestor. The software doesn’t write a letter of recommendation for you, but it can auto-generate some ideas based on the requestor’s key skills or areas of expertise. On the website, there’s a short video that walks you through how the platform leverages AI to make writing letters of recommendation (and cover letters) easier and to minimize how our unconscious bias shows up in our writing.
I hope this helps as you navigate writing letters of recommendation and helps illuminate some of the challenges women, TLGBQ, and BIPOC folks are facing when making these requests of mentors, bosses, and supervisors.
*Study Finds Recommendation Letters Inadvertently Signal Doubt About Female Candidates
P.S. Watch this space: I’ll be sharing more resources about ways to use AI in ethical ways that support equity and inclusion.