Case Study: Letters by Lucinetic

The Customer: Letters by Lucinetic 

Letters by Lucinetic understands that crafting the perfect letter of recommendation or cover letter can be a daunting task. By leveraging the power of AI, their platform makes writing cover letters and letters of recommendation easier, faster, and better. Both the requester and the writer of a letter of recommendation can use the platform to ease the writing process, supported by an AI writing algorithm trained to create top-quality, ethical, and unbiased letters.

The Ask:
One of Lucinetic’s founders, Olga Batygan, reached out to ask for my support in building the language the AI models would train on, language designed to challenge the biases that can show up in letters of recommendation.

The Challenge:
Letters of recommendation are one area where unchecked bias is rampant. In one sense, bias is invited: you’re being asked to build the case for why this person is the best fit for a given role. But our unchecked human bias can lead us to use different words to describe the same actions depending on the social identity characteristics someone holds. You can read more about bias in letters of recommendation in a past blog post.

In creating a service that supports writing letters of recommendation, how do we counteract the human bias we may unintentionally write into letters, bias that doesn’t serve the person or their highest good? To that end, the goal of my engagement with Lucinetic was to give a large language model (LLM) information that challenges the oppressive biases it might already hold, including but not limited to the sexism and racism that can show up in AI writing.

The Solution:
I worked closely with a Lucinetic founder and the developer programming the LLM to build a list, ordered from most common to least common, of the ways bias may show up in letters of recommendation.

Here are two specific examples of how we incorporated inclusion:

Ethical Training of LLMs:
Working with a lead developer, I offered my expertise on the words most commonly associated with bias that could disenfranchise a candidate. This helped teach the LLM to use the same strong, respected words regardless of the requester’s gender identity or race. Lucinetic then uses the AI it has trained to identify gender and racial bias in writing and to suggest alternative text that won’t further disenfranchise the person requesting the letter.
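To make the idea concrete, here is a minimal sketch of how word-level bias flagging and suggestion can work. It is illustrative only: the word list and function names below are my own hypothetical examples, not Lucinetic’s proprietary training data or code.

```python
import re

# Hypothetical, hand-curated mapping from terms that research links to
# biased letter-writing patterns (so-called "grindstone" or communal
# descriptors) to stronger, achievement-focused alternatives. The actual
# Lucinetic word list is proprietary; these entries are illustrative only.
BIAS_SUGGESTIONS = {
    "hardworking": "accomplished",
    "helpful": "effective",
    "pleasant": "professional",
    "tries hard": "delivers results",
}

def suggest_alternatives(text: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested replacement) pairs found in text."""
    found = []
    for phrase, alternative in BIAS_SUGGESTIONS.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE):
            found.append((phrase, alternative))
    return found

draft = "She is a hardworking and pleasant member of the team."
for phrase, alternative in suggest_alternatives(draft):
    print(f'Consider replacing "{phrase}" with "{alternative}".')
```

A simple lookup like this only scratches the surface; the value of the engagement was curating which words carry bias and which alternatives are equally strong for every candidate.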

Pronouns:
At the start of the Letters by Lucinetic process, we also built out functionality to ensure the person requesting a letter of recommendation can choose the pronouns they would like their recommender to use. Why is this important? When someone’s pronouns have changed over the course of their life, using the incorrect pronouns can “out” them before they are ready. Here, the requester has full control over telling the writer which pronouns are appropriate, so the writer doesn’t have to second-guess or make assumptions about which pronouns to use. In the end, this software is built and maintained to minimize how our unconscious bias shows up in our writing.
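As a sketch of how requester-chosen pronouns can flow through such a system, consider the following. The data model and field names are hypothetical, assumed for illustration rather than taken from Lucinetic’s actual software.

```python
from dataclasses import dataclass

# Hypothetical request model; the class and field names are my own
# illustration, not Lucinetic's actual schema. The requester picks
# their pronouns up front, so the writer never has to guess.
@dataclass
class LetterRequest:
    requester_name: str
    pronouns: str  # chosen by the requester, e.g. "she/her" or "they/them"

def writing_instructions(request: LetterRequest) -> str:
    """Build the guidance shown to the writer (or passed to the model)."""
    return (
        f"Write a letter of recommendation for {request.requester_name}. "
        f"Refer to them using {request.pronouns} pronouns throughout."
    )

print(writing_instructions(LetterRequest("Jordan Lee", "they/them")))
```

The design choice worth noting is that pronouns are captured once, at the point of request, and then carried through every downstream step rather than being inferred anywhere.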

The Result: 
A platform that works with the writers of letters of recommendation to leverage ethical AI in producing unbiased, high-quality letters. If you’re interested in checking out the service for yourself, visit Letters by Lucinetic.
