AI & Equity: Unveiling Bias and Building Bridges

Dee Lanier • February 1, 2024

We've been talking about AI in education, its potential, and the pitfalls to avoid. But one crucial conversation often gets swept under the rug: bias. We hear whispers of "identifying bias" or "ethical AI," but rarely do we get concrete tools to tackle these complex issues.


That's where my "A.M. I. R.ight?" framework comes in. Which, if any, of the following apply to you? How vigilant are we about spotting bias in ourselves? Can we sniff out our own prejudices against certain people, ideas, or even articles we mistake for truth? Do we lean towards or against things without digging deeper, unwilling to be proven wrong?

These simple yet powerful prompts help us confront our own biases, the unseen baggage we carry around. Holding up that brutal mirror is the essential first step before we even think about identifying bias in AI outputs.


Only then can we truly begin to see the bias woven into AI output. Because let's face it, these tools are trained on data sets, and data sets are inherently flawed. They're built on the biases of the people who created them, reflecting the inequalities and blind spots of our world.


So, what can we do? How can we leverage AI's power without perpetuating harmful inequities? Here are some keys:

  1. Become a bias detective: Develop critical thinking skills to question everything, including AI outputs. Don't blindly accept what machines tell you.
  2. Seek diverse perspectives: Expose yourself to voices and viewpoints different from your own. Challenge your assumptions and expand your understanding of the world.
  3. Demand transparency: Ask how AI tools were trained, by whom, and on what data. Look for red flags of bias and advocate for fairer data sets.
  4. Embrace continuous learning: This journey is never-ending. Stay updated on the latest research on AI bias and keep refining your own ability to identify and combat it.


And for those who want to delve deeper, there's more:

  • My first book, Demarginalizing Design, explores how bias infiltrates design and how we can create more inclusive solutions.
  • My upcoming co-authored book with Ken Shelton tackles the very topic of ethical and equitable AI use (title TBD). Stay tuned for updates!


Let's not let AI become another tool that perpetuates inequalities. By acknowledging our own biases, critically examining AI outputs, and demanding fairer data sets, we can build a future where technology empowers, not marginalizes. Together, we can bridge the gap and create a truly equitable AI landscape in education and beyond.

Remember, the fight against bias starts with awareness and action. Let's embark on this journey together, with open minds and critical hearts.


Speaking of transparency: yes, I did use AI to help edit this post, based on my own words. Below is a sample prompt educators and students can use:


Act like a copyeditor and proofreader and edit this manuscript according to the Chicago Manual of Style. Focus on punctuation, grammar, syntax, typos, capitalization, formatting, and consistency.
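If you want to reuse that prompt programmatically, say, in a classroom script, one way is to package it alongside a manuscript in the chat-message format most hosted AI APIs accept. This is a minimal sketch under that assumption; the function name `build_copyedit_request` is mine for illustration, not any specific product's API.

```python
# The copyediting prompt from the post, stored once so it can be reused.
COPYEDIT_PROMPT = (
    "Act like a copyeditor and proofreader and edit this manuscript "
    "according to the Chicago Manual of Style. Focus on punctuation, "
    "grammar, syntax, typos, capitalization, formatting, and consistency."
)

def build_copyedit_request(manuscript: str) -> list[dict]:
    """Package editing instructions and a manuscript as chat messages:
    a system message carrying the prompt, a user message with the text."""
    return [
        {"role": "system", "content": COPYEDIT_PROMPT},
        {"role": "user", "content": manuscript},
    ]

messages = build_copyedit_request("We've been talking about AI in education...")
print(messages[0]["role"])  # system
```

Whatever tool you use, keeping the prompt visible and versioned like this supports the transparency practice argued for above: anyone can see exactly what the AI was asked to do.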

Dee is the author of Demarginalizing Design and a passionate and energetic educator and learner with over two decades of instructional experience at the K-12 and collegiate levels. Dee holds undergraduate and master's degrees in Sociology with special interests in education, race relations, and equity. Dee is an award-winning presenter, TEDx Speaker, Google Certified Trainer, Google Innovator, and Google Certified Coach who specializes in creative applications for mobile devices and Chromebooks, low-cost makerspaces, and gamified learning experiences. Dee is a founding mentor and architect for the Google Coaching program pilot, Dynamic Learning Project, and a co-founder of Our Voice Academy, a program aimed at empowering educators of color to gain greater visible leadership and recognized expertise. Dee is also the creator of the design thinking educational activities Solve in Time!® and Maker Kitchen™️ and co-host of the Liberated Educator podcast. Dee practices self-care by reading, playing percussion, and roasting, brewing, and drinking coffee.
