What Is Ethical AI?
If artificial intelligence isn’t “inherently” ethical, what do you need to know about turning to ethical AI?
In our age of big tech, guided by Mark Zuckerberg’s famous ethos of “move fast and break things,” we have created ways of working that push products into the field before we fully understand the effects those products will have on society. And despite extensive research and knowledge, even when a system is deemed unethical, it’s hard to turn back once it is already in use.
Artificial intelligence and our access to it now fall into this category.
So much of our scientific study is fueled by funding dedicated to making companies or countries wealthier or more powerful. This is where artificial intelligence began, and it has since become widely available to the public through familiar interfaces like Siri, ChatGPT, and Poe. In many ways, we have infused AI into systems that we know are biased. Some of the resources I offer below dive into studies showing that software created by humans can be just as biased as we are, if not more so.
So what is ethical AI? Briefly, ethical AI is AI that is created and reviewed by humans who take the wellbeing of other humans into consideration. It’s important that we get informed about AI platforms that build human rights, individual rights, privacy, non-discrimination, and non-manipulation into their foundations.
By diving into this journey of learning about bias in AI and the unethical aspects of AI, we can better understand how to implement more ethical AI systems. Here is a starter list of resources for learning more about this topic and hearing from people who are part of the movement to challenge unethical AI.
Dr. Joy Buolamwini has a great TED talk called “How I’m Fighting Bias in Algorithms.” Dr. Buolamwini founded the Algorithmic Justice League, which is dedicated to showing how bias shows up in algorithms and advocating for solutions to these large issues.
Dr. Safiya Noble wrote the book Algorithms of Oppression about how search engines have been built to reinforce racism.
The United Nations shares four core values that root its recommendations for ethical AI systems: respect for human rights and human dignity; living in peaceful, just, and interconnected societies; ensuring diversity and inclusiveness; and flourishing environments and ecosystems.
Letters by Lucinetic is a workflow tool with an ethically trained AI writing assistant built to support the creation of high-quality career documents, including letters of recommendation. The algorithm behind this tool has been trained with guidance from faculty, students, and DEI professionals, in recognition that the trust, legal, and social implications of generating text from freeform proprietary data input are outpacing human users.
These are a few starting places featuring entrepreneurs and organizations who are pushing AI to be and do better. Please reach out if you have resources to share on this topic or people we can uplift who are engaging in more ethical AI research and work. I’ll share more about my learning, and my work on building more ethical AI platforms, in the coming weeks and months.
Further reading and sources that inspired this post:
“The Dawn of AI Interfaces: How AI Is Rewriting Human-Technology Interactions,” by Rushabh Sheth, Forbes
“The Era of ‘Move Fast and Break Things’ Is Over,” by Hemant Taneja, Harvard Business Review