
By Tchicaya Ellis Robertson, PhD with Candace Ellis Clark, BS


Close to one million Black women ages 20+ were unemployed in November 2025, a 7.1% unemployment rate, according to BLS household survey data. The official data is already loud, and it carries direct consequences for how resumes will be screened in 2026. Think about this scale in human terms: almost one million families who rely on a Black woman as head of household have lost their lunch money, bill money, transportation money, money to fill prescriptions. Yet we are still talking about the initial shock of just 300,000 Black women who exited the workforce in 2025 as if it were the end of the story.


It wasn’t. The public conversation keeps landing on a number that, according to media reporting, has at least doubled since then. The disruption is much larger than the initial job loss, and it is pouring into systems that don’t account for ongoing and long-term unemployment and labor force exits. This is no longer a warning; it is a clarion call with debilitating downstream effects, one of which falls squarely onto the desks of HR leaders responsible for finding talent to power the AI skilling revolution.


2026 hiring systems are not designed to handle algorithmic omissions like the one that resulted from the mass exodus of Black women in 2025. When a large cohort carries disruption into the market at the same time, “gap” stops being an exception and becomes a pattern. Most screening logic still treats it like an individual flaw: instability, risk, or lack of commitment. A missing job title gets read as a red flag. A nontraditional stretch of work gets treated as less credible than payroll work, even when the responsibilities were just as complex, just as demanding, and just as leadership-heavy.


This is how inequity becomes durable. Not only through layoffs or labor force exits, but through the second wave of harm that comes after: how the market interprets the people who were hit. When hundreds of thousands of Black women experience disruption in the same tight window, we should treat that as a macro labor market event, not an individual deficiency.


Now layer in what is changing inside employers. More organizations are standardizing screening. More are using structured interview guides, competency models, and AI-enabled tools to “improve consistency.” In theory, that consistency is supposed to reduce bias. In practice, it can scale the wrong assumption faster if the baseline story is incomplete.

If your systems are trained to treat gaps as risk, and a disproportionate share of Black women now have gaps, you have the ingredients for disparate impact even before you add a single biased prompt. You do not have to intend discrimination for your process to produce it. You only have to keep using a rule of thumb that was never designed for a labor market shock of this size. The problem is that this inaction is no longer legally defensible. Check out my blog on the new laws that went into effect on January 1, 2026.
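
To make the disparate-impact mechanics concrete, here is a minimal sketch of the selection-rate comparison regulators and plaintiffs typically start with, the four-fifths rule. The cohort labels and counts are hypothetical, not figures from this article.

```python
# Minimal sketch of a four-fifths (80%) rule check on screening pass-through rates.
# The cohort labels and counts are hypothetical, for illustration only.

def selection_rate(passed: int, applied: int) -> float:
    """Share of applicants a screening step advances."""
    return passed / applied

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest (reference) group's rate."""
    return group_rate / reference_rate

# Hypothetical screening outcomes after an automated "gap penalty" filter.
rates = {
    "cohort_a": selection_rate(passed=180, applied=400),   # 45% advance
    "cohort_b": selection_rate(passed=120, applied=400),   # 30% advance
}

reference = max(rates.values())
for cohort, rate in rates.items():
    ratio = impact_ratio(rate, reference)
    flag = "review for disparate impact" if ratio < 0.80 else "within 4/5 threshold"
    print(f"{cohort}: rate={rate:.2f}, impact ratio={ratio:.2f} -> {flag}")
```

The point of the sketch is the shape of the test, not the numbers: if one cohort disproportionately carries gaps and your filter penalizes gaps, the ratio drifts below the threshold without anyone intending it to.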


What the data shows, in plain view

Before we get too deep into the AI layer, we must anchor to the labor market context. Based on data published by the Bureau of Labor Statistics, the scale of the disproportionate talent disruption is impossible to dismiss. The data in Table 1 shows that Black women’s unemployment rate hovered in the mid-5% range through early 2025, then surged past 6% in April, the inflection point when their jobless rate began sharply diverging from other groups, and climbed to over 7% for the remainder of the year, peaking at 7.5% in late summer. 


Meanwhile, the unemployment rate for White women remained comparatively steady around the mid-3% range, inching from 3.2% to 3.4% between August and September. 

In other words, what started as a concerning gap turned into a chasm: a labor market disruption disproportionately affecting Black women. By mid-2025, unemployment among Black women was double that of White women, and the gap held through the back half of the year.



This is not a small gap. It is the labor market signal that your hiring stack is about to misread at scale.

The data tells you what is happening, but it doesn’t explain why. And what is happening is that Black women are absorbing a disproportionate amount of labor market instability, and the recovery pattern is not obvious in the trend.

Now here is why that matters for HR leaders.


The exit of hundreds of thousands of Black women from the workforce is not just a labor market story. It is a hiring reality.

When the rate of disruption doubles in a matter of months, not years, the downstream effect is not simply that more people are unemployed. The downstream effect is that a large cohort carries the same kinds of resume patterns into the market at the same time:

  • gaps and compressed timelines

  • pivots into contract work, caregiving, or entrepreneurship

  • leadership activity that is real but not payroll-labeled

  • credential sprints, portfolio builds, informal consulting

  • non-linear narratives that do not map to traditional “progression” templates


A human can read that as adaptation. A hiring pipeline can read it as risk.

And this is where many organizations will quietly fail their own talent goals.

Because if your screening logic treats a macroeconomic shock as an individual deficiency, you will filter out capable people precisely when you need them.

“Gap” is not the problem. The interpretation of the gap is the problem.


Many Black women did not stop producing value during periods of disruption. They redirected time into leadership and high-responsibility work that hiring systems often do not capture well, such as:

  • caregiving operations and household leadership under constraint

  • volunteer executive leadership, board and community governance

  • credentialing, apprenticeships, portfolio projects

  • consulting, venture building, informal advising and coaching

  • crisis management, resource coordination, complex logistics

These are leadership signals. They are evidence of capability. Figure 1 contextualizes this problem and offers a method for leaders to evaluate these capability signals.  


Figure 1.

But too often, hiring stacks only recognize leadership when it appears under a conventional employer and title. So the same year can be encoded as either:

  • a negative “gap,” or

  • a credible “experience narrative”

When a cohort disruption hits one group harder than others, that encoding gap becomes a selection gap.
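
To see the encoding gap in data terms, here is a minimal sketch of how the same twelve months can be represented either as a bare gap or as a structured experience narrative. The field names and signal categories are illustrative, not any vendor's schema.

```python
# Illustrative sketch: the same 12-month period encoded two ways.
# Field names and signal categories are hypothetical, not a vendor schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExperienceSignal:
    category: str        # e.g., "governance", "credentialing", "operations"
    description: str
    leadership: bool

@dataclass
class ResumePeriod:
    start: str
    end: str
    employer: Optional[str] = None        # None when the work was not payroll-labeled
    signals: list[ExperienceSignal] = field(default_factory=list)

    def encoded_as_gap(self) -> bool:
        """A naive parser that only counts payroll work reads this period as a gap."""
        return self.employer is None

    def encoded_as_narrative(self) -> bool:
        """A signal-aware parser treats documented responsibility as experience."""
        return bool(self.signals)

period = ResumePeriod(
    start="2025-01", end="2025-12",
    signals=[
        ExperienceSignal("governance", "Board treasurer, community nonprofit", leadership=True),
        ExperienceSignal("credentialing", "Completed data analytics certificate", leadership=False),
        ExperienceSignal("operations", "Ran caregiving logistics under constraint", leadership=True),
    ],
)

print("gap under naive parsing:", period.encoded_as_gap())                       # True
print("experience under signal-aware parsing:", period.encoded_as_narrative())   # True
```

Same year, same person, two encodings. Which one your stack produces determines which candidates it surfaces.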


Where AI makes this worse, quickly

Even if your organization does not use fully automated hiring decisions, most HR teams now use some combination of:

  • resume parsing and structured profiles

  • screening and ranking tools

  • LLM-based copilots that draft candidate summaries and “fit” language

  • retrieval over internal rubrics and “what good looks like” guidance

These tools shape what evidence is surfaced and how it is framed. That framing influences screening decisions, interview design, and the tone of deliberations. When those effects are patterned across a cohort, selection rates shift and disparities compound.

This is why it is so important to develop and operationalize omission-aware metrics.
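
One way to make “omission-aware” operational, sketched below under the assumption that you hold a small human-annotated sample of candidate histories: compare the signals a reviewer found against the signals the tool surfaced, and track the miss rate by cohort. The function and field names are hypothetical.

```python
# Minimal sketch of an omission-rate metric, assuming a human-annotated sample.
# All names are illustrative; plug in your own parser output and annotations.
from collections import defaultdict

def omission_rate(annotated: set[str], extracted: set[str]) -> float:
    """Share of human-identified experience signals the tool failed to surface."""
    if not annotated:
        return 0.0
    return len(annotated - extracted) / len(annotated)

# Each record: (cohort label, signals a human reviewer found, signals the tool extracted).
sample = [
    ("cohort_a", {"team_leadership", "budget_ownership"}, {"team_leadership", "budget_ownership"}),
    ("cohort_b", {"board_governance", "credential_sprint", "crisis_management"}, {"credential_sprint"}),
    ("cohort_b", {"consulting", "volunteer_exec"}, set()),
]

totals: dict[str, list[float]] = defaultdict(list)
for cohort, annotated, extracted in sample:
    totals[cohort].append(omission_rate(annotated, extracted))

for cohort, rates in totals.items():
    print(f"{cohort}: mean omission rate = {sum(rates) / len(rates):.2f}")
```

If the mean omission rate is consistently higher for one cohort, the system is not “neutral”; it is systematically failing to see one group's evidence.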


The technical failure hiding inside the hiring conversation is omission

A lot of bias work focuses on what a model says.

The higher-leverage problem here is what the system fails to recognize at all.

If a candidate’s 2025 story includes leadership, governance, operations, or credential-building, but the system does not extract it, summarize it, or treat it as relevant, the candidate is penalized before the interview begins. That is omission. And when a labor shock pushes one group into less “legible” pathways, omission becomes patterned disadvantage.


In practical terms: you can have a fair intent and still deploy a system that consistently downgrades the same group.


A direct message to HR leaders: this belongs in your 2026 hiring plan

If you are an HR leader planning for growth, retention, and leadership pipeline stability, you cannot afford to treat the exit of hundreds of thousands of Black women from the workforce as a social issue living outside your talent strategy. It is a talent supply issue, a capability recognition issue, and a business continuity issue.

Here is what “considering it” actually means:

  1. Update what your organization counts as experience

    Create and socialize a taxonomy of legitimate leadership and responsibility signals that count during disruption, including unpaid or non-linear roles.

  2. Audit your screening stack for gap penalties

    Test how your tools treat gaps and pivots. Look for automatic downgrades in seniority, “flight risk” language, or reduced interview recommendations (a minimal audit sketch follows this list).

  3. Train recruiters and hiring managers on narrative discipline

    If your internal norm is “gap equals concern,” your tools will learn it, reinforce it, and scale it.

  4. Hold vendors accountable for omission and narrative bias

    Ask vendors to demonstrate how their models recognize nontraditional leadership and how they prevent systematic downgrading of disrupted cohorts.

  5. Include the right interview questions in the hiring rubric 

    When you see resume disruptions, ask candidates what they built, led, or learned during those periods, rather than assuming that gaps signal risk.
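
Here is the audit sketch referenced in item 2: a paired (counterfactual) test that scores the same profile with and without a gap flag. `score_candidate` is a hypothetical stand-in for whatever call your screening stack actually exposes.

```python
# Minimal sketch of a gap-penalty audit, assuming you can call your screening tool
# as a function. `score_candidate` is a hypothetical stand-in for that call.
import statistics

def score_candidate(profile: dict) -> float:
    """Hypothetical stand-in for the screening stack's scoring call. Replace with
    the real API; this toy version penalizes gaps to show the audit's shape."""
    base = 0.70
    return base - (0.15 if profile.get("has_gap") else 0.0)

def paired_gap_audit(profiles: list[dict]) -> float:
    """Score each profile twice, identical except for the gap flag, and return
    the mean score penalty attributable to the gap alone."""
    deltas = []
    for p in profiles:
        with_gap = score_candidate({**p, "has_gap": True})
        without_gap = score_candidate({**p, "has_gap": False})
        deltas.append(without_gap - with_gap)
    return statistics.mean(deltas)

profiles = [{"years_experience": 8, "skills": ["ops", "budgeting"]},
            {"years_experience": 12, "skills": ["strategy", "people_leadership"]}]

penalty = paired_gap_audit(profiles)
print(f"mean score penalty attributable to a gap: {penalty:.2f}")
# A consistently nonzero penalty is the signal to investigate before deployment.
```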



Why this matters for the labor market, not just individual careers

If a cohort of roughly 800,000 Black women is economically sidelined and then re-enters through nontraditional pathways, the question is not whether employers will hire them. Employers will hire some.


The question is whether hiring systems will recognize their capability fast enough to prevent long-term scarring: lower earnings trajectories, reduced leadership representation, thinner internal pipelines, and a durable “experience penalty” that outlasts the shock that created it.

This is a solvable problem. But it is not solvable through good intentions alone.

It requires a shift in what we measure, what we count as evidence, and what our tools are allowed to omit.


And for HR leaders, addressing it is not charity. It is capacity. It is competitiveness. It is the difference between building a resilient workforce and quietly automating away the very talent you say you cannot find.


 
 
 


On January 1, 2026, a new wave of AI laws quietly turned what used to be Responsible AI best practice into something closer to civil rights compliance for employers. Illinois in particular has drawn a line in the sand with HB 3773 (Public Act 103-0804): if you use AI in employment decisions and it produces discriminatory outcomes, you can be on the hook, even if the bias comes from the way your large language model (LLM) retrieves, frames, and explains information rather than from a classic scoring model.[1][2][3]


At the same time, enterprises are increasingly deploying retrieval-augmented generation (RAG) assistants (for example, Copilot or Perplexity) on top of foundation models (for example, GPT, Gemini, Claude). Research is now showing that RAG can be “accurate but unfair”: it improves factual performance while quietly distorting who and what gets represented in answers.[4] That is exactly the sort of hidden disparate-impact risk that new state laws are set up to catch.


This blog sketches how the January 1 legislative changes interact with representational bias in RAG and LLM layers, and what that means for employers, HR tech vendors, and compliance leaders.


WHAT ACTUALLY CHANGED ON JANUARY 1, 2026


The cleanest example is Illinois. Amendments to the Illinois Human Rights Act that took effect January 1, 2026 make it a civil-rights violation for an employer, agency, or labor organization to use AI in recruiting, hiring, promotion, training, discipline, or discharge in a way that results in discrimination against protected classes.[1][2] The law:


- Explicitly defines “artificial intelligence” and “generative artificial intelligence” as machine-based systems that infer from input to generate predictions, content, recommendations, or decisions that influence employment outcomes.[1][2]

- Requires notice when AI is used to influence covered employment decisions (employees now, and prospective employees under draft rules), with rules specifying what the notice must include.[5][6][1]

- Prohibits using ZIP code as a proxy for protected classes, signaling regulator awareness of proxy discrimination and structural bias.[1][2]


Illinois is the clearest employment example, but it is part of a broader patchwork of state activity that varies by scope, theory of discrimination, and start date: 


  • Illinois’ employment AI amendments to the Illinois Human Rights Act, effective on January 1, 2026, embed AI-driven discrimination and notice duties directly into state civil-rights law. 

  • Texas’ Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149), also effective on January 1, 2026, establishes broader AI-governance obligations while framing discrimination primarily in intent-based terms alongside other prohibitions. 

  • In California, FEHA regulations on automated decision systems in employment took effect on October 1, 2025, so they were already operational before January 1, 2026.

  • Colorado’s comprehensive high-risk AI law, often discussed through a consumer-protection lens, is generally described as taking effect June 30, 2026. [3][7][8][9]


At the federal level, a December 11, 2025 executive order signaled a push toward a minimally burdensome national AI framework and directed the Attorney General to stand up an AI Litigation Task Force to challenge certain state AI laws, adding uncertainty to the state-by-state compliance landscape.[20]


Even where statutes do not explicitly say “do a bias audit,” commentary aimed at HR teams is blunt: regulators and courts will look at whether you tested your systems, documented results, and tried to correct disparate impacts before you rolled them out.[3][10]


In other words, 2026 is the year where your AI hiring stack stops being a pure “innovation” story and starts looking a lot more like a Title VII / fair-lending / health-equity problem with logs, audits, and discovery.


WHY RAG PIPELINES ARE PART OF THE DISCRIMINATION STORY


Most of the public debate still focuses on scoring models and automated employment decision tools. But in practice, many of the systems that hiring managers and HR business partners will use in 2026 are RAG-based copilots, not just rankers: assistants that search across internal documents, policies, and historical data, then generate seemingly grounded answers with an LLM.


Fairness research has started to pull at this thread. One study on fairness in RAG finds that a RAG pipeline can keep or even improve utility (e.g., accuracy or exact-match scores) while producing responses that are “less fair” across demographic groups because retrieval and context selection skew who is represented.[4] That happens at multiple layers:


- Documents and databases. If the corpus behind a recruiting or performance-management assistant under-represents women, older workers, or certain ethnic groups in “success profiles,” the RAG system will retrieve and foreground those skewed examples as if they were neutral facts.[4] Under an Illinois-style law, it becomes hard to claim “reasonable care” if your knowledge base itself is structured in ways that map cleanly onto protected-class disparities.[1][2]


- Query planning and reformulation. The choice of retriever and query-expansion strategy can tilt results toward content about historically dominant groups, even for neutral questions.[4] A recruiter copilot that expands “ideal sales leader” to patterns seen mainly in past male leaders is effectively laundering old discrimination into new recommendations.  


- Context generation and injection. How you build the context window (e.g., what gets included or what gets dropped) acts as a narrative filter. Studies show cases where RAG context disproportionately highlights one protected group, shifting perceived fairness without degrading utility metrics.[4] In HR, that can mean subtle over-exposure of one group’s achievements and under-exposure of others’.  


- Grounded answer generation. If the evidence is skewed, grounded answers can be “technically correct” yet systematically more positive, detailed, or actionable for one demographic group than another.[4][11] That kind of representational disparity is squarely in the domain of disparate impact, even if the model architecture never sees “race” or “gender” as explicit features.


For compliance leaders, the takeaway is simple: your RAG stack is part of the employment-decision pipeline. Retrieval and prompts shape what gets surfaced and how it is framed, and those narratives influence screening, interview design, and promotion guidance. When that influence is patterned, it can shift selection rates and other outcomes, placing representational bias squarely in disparate-impact territory. You cannot treat it as a neutral “knowledge layer” that sits outside the scope of bias and discrimination law.
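
One way to start checking this, sketched under the assumption that your RAG stack exposes a retrieval call and that documents carry group-relevant metadata tags (both names below are hypothetical placeholders), is to measure whose material dominates the top-k context for neutral queries.

```python
# Minimal sketch of a retrieval-representation check, assuming your RAG stack
# exposes a retrieve(query, k) call and documents carry metadata tags.
# All names and tags here are hypothetical placeholders.
from collections import Counter

def retrieve(query: str, k: int = 5) -> list[dict]:
    """Hypothetical stand-in for the retriever; swap in the real call."""
    corpus = [
        {"text": "Success profile: regional sales lead", "group_tag": "group_a"},
        {"text": "Success profile: enterprise sales lead", "group_tag": "group_a"},
        {"text": "Success profile: channel sales lead", "group_tag": "group_a"},
        {"text": "Success profile: inside sales lead", "group_tag": "group_b"},
        {"text": "Success profile: sales enablement lead", "group_tag": "group_b"},
    ]
    return corpus[:k]

def representation_share(query: str, k: int = 5) -> dict[str, float]:
    """Share of top-k retrieved context attributable to each tagged group."""
    hits = retrieve(query, k)
    counts = Counter(doc["group_tag"] for doc in hits)
    return {tag: n / len(hits) for tag, n in counts.items()}

print(representation_share("What does an ideal sales leader look like here?"))
# Logged over many neutral queries, persistent skew in these shares is the
# representational-bias signal to investigate before answers reach recruiters.
```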


THE LLM LAYER: INSTRUCTIONS, PROMPTS, AND SEMANTIC FRAMES


On top of RAG, employers are increasingly wrapping foundation models like GPT, Gemini, and Claude in custom instructions and prompts that define what “good” looks like.


Research on cultural alignment and inclusive language shows that default LLM behavior already reflects skewed values and semantic associations, often privileging Western or majority-group norms unless constrained.[12][11] In an employment context, that matters in several ways:


- Instructions as de facto HR policy. System prompts that ask a model to prioritize “executive presence,” “grit,” or “culture fit” can encode highly contested, historically biased ideas of merit. When those instructions sit behind a hiring assistant used in Illinois or similar jurisdictions, they are no longer just UX choices; they are part of the decision logic now covered by civil-rights law.[1][2]


- Knowledge recall and semantic memory. Studies of LLM cultural bias show that what “comes to mind” first in examples, analogies, and advice is often skewed toward majority groups.[12][11] In HR copilots, that means different groups may consistently receive different quality of guidance on promotion preparation, salary negotiations, or leadership style, even at the same nominal performance level.[13][11]


- Prompt engineering and flows. Enterprise deployments on ChatGPT Enterprise, Azure OpenAI (GPT-4/4.1), Gemini for Workspace, or Perplexity for internal search often standardize prompts like “rank top candidates” or “summarize this candidate’s fit.” Research on fairness in RAG underlines that these prompts act as hidden decision rules that compress past patterns into present scoring and narrative.[4][14][15]


- Semantic framing of groups. Work on gender-inclusive language generation shows that models default to stereotypical associations unless explicitly directed toward inclusive behavior.[11] In performance feedback or reference summaries, that can mean systematically different adjectives and developmental suggestions by gender or other group cues, even when the underlying data is similar.


From a regulatory perspective, these are not just UX artifacts; they are part of the “design and use” of AI that statutes and regulators will scrutinize when evaluating whether an organization took reasonable steps to prevent discrimination.[3][1]
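
A minimal counterfactual prompt audit might look like the sketch below; `generate` is a hypothetical stand-in for the deployed model call, and the cue labels and adjective lists are illustrative only.

```python
# Minimal sketch of a counterfactual prompt audit. `generate` is a hypothetical
# stand-in for whatever LLM call the copilot uses; the cue labels and adjective
# lists are illustrative only.

AGENTIC_TERMS = {"decisive", "strategic", "driven", "visionary"}
SUPPORTIVE_TERMS = {"helpful", "dependable", "pleasant", "supportive"}

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the deployed model call; replace with the real one."""
    return ("A dependable and supportive contributor." if "Candidate B" in prompt
            else "A decisive, strategic leader with a driven record.")

def framing_profile(text: str) -> dict[str, int]:
    """Count agentic vs. supportive framing terms in a generated summary."""
    words = {w.strip(".,").lower() for w in text.split()}
    return {"agentic": len(words & AGENTIC_TERMS),
            "supportive": len(words & SUPPORTIVE_TERMS)}

# Identical records, differing only in the candidate label used as a group cue.
record = "10 years of experience, exceeded targets 4 years running, led a team of 12."
for label in ("Candidate A", "Candidate B"):
    summary = generate(f"Summarize fit for a sales director role. {label}: {record}")
    print(label, framing_profile(summary))
# Systematic framing differences across matched records are the semantic-framing
# disparity this audit is meant to surface and log.
```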


WHY THIS MATTERS


To make this real, here are a few recognizable platforms and use cases:


- HR copilots on Microsoft Copilot + internal RAG. Large employers in Illinois are already exploring Copilot-based assistants over their own SharePoint or Teams content to help recruiters summarize résumés, draft outreach, and generate interview questions. The RAG corpus includes job descriptions, competency models, and past review templates. These are exactly the artifacts where representational bias hides. Under HB 3773, these systems fall squarely into the “AI in employment decisions” category, triggering notice obligations and making corpus and prompt audits part of discrimination risk management.[1][2][4]


- Recruiting assistants on ChatGPT Enterprise or Claude. Talent teams use ChatGPT Enterprise or Claude to draft job ads, design structured interview guides, and produce “ideal candidate” summaries by feeding historical success profiles. If those profiles over-index on one demographic, the assistant will too, which can influence who gets sourced or advanced even if no automated scoring is in play.[4][16]


- Knowledge-worker copilots on Perplexity or Gemini + RAG. Firms deploy Perplexity or Gemini-based search copilots over internal promotion criteria, leadership frameworks, and project case studies, answering questions like “What does partner potential look like here?” Even when these systems are “just informational,” they shape norms and aspirations; in a world where AI use in employment is explicitly regulated, that boundary looks increasingly porous.[14][17][1]


- Advisor assistants in financial and health domains. RAG deployments in financial-services and healthcare, often built on OpenAI, Gemini, or Claude APIs, use internal policies and guidance documents to support decisions about credit, treatment pathways, or support resources.[18][15][19] Representational gaps in those volumes can create systematically different information environments for different groups, raising both sectoral compliance and anti-discrimination concerns.



CONCLUSION


Representational bias in RAG and LLM layers is the next disparate-impact frontier. Laws like Illinois’ HB 3773 have not yet named RAG explicitly, but the combination of:


- broad AI definitions that clearly cover RAG-based assistants;  

- outcome-focused anti-discrimination language; and  

- emerging fairness research showing “accurate but unfair” RAG behavior  


means that employers now have a concrete reason, no, an obligation, to audit not just models and scores but also what and who their AI systems represent.[1][2][4]


For practitioners, that translates into a checklist: corpus representation reviews, retrieval and context fairness tests, instruction and prompt audits, and semantic framing evaluations, all logged and tied back to the new legal duties that arrived on January 1, 2026.
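
As one illustration of “logged and tied back,” here is a minimal sketch of a structured audit record that maps each fairness test to the duty it supports; the field names and example values are hypothetical, not a legal template.

```python
# Minimal sketch of a structured audit-log entry tying each test back to a duty.
# Field names and example values are illustrative, not a legal template.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class FairnessAuditRecord:
    test_name: str              # e.g., "retrieval_representation_check"
    system: str                 # which assistant or pipeline was tested
    cohorts_compared: list[str]
    metric: str
    result: float
    threshold: float
    passed: bool
    remediation: str
    legal_basis: str            # statute or regulation the test is mapped to
    run_date: str

record = FairnessAuditRecord(
    test_name="gap_penalty_paired_audit",
    system="recruiting-copilot-v2",
    cohorts_compared=["disrupted_2025_cohort", "continuous_employment_cohort"],
    metric="mean_score_delta",
    result=0.12,
    threshold=0.05,
    passed=False,
    remediation="Revise ranking prompt and re-test before release.",
    legal_basis="IL Human Rights Act amendments (HB 3773), eff. 2026-01-01",
    run_date=str(date.today()),
)

print(json.dumps(asdict(record), indent=2))
```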



CITATIONS

[1] 2026 Illinois Employment Law Update: New Compliance Obligations on AI, Leave, Pay, and Workplace Agreements. https://www.laboremployment-lawblog.com/2026-illinois-employment-law-update-new-compliance-obligations-on-ai-leave-pay-and-workplace-agreements/

[2] Amendments to Illinois Human Rights Act to Regulate Use of AI in Employment Decisions. https://www.jdsupra.com/legalnews/amendments-to-illinois-human-rights-act-5442348/

[3] New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption. https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption

[4] Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems. https://aclanthology.org/2025.coling-main.669.pdf

[5] Illinois Unveils Draft Notice Rules on AI Use in Employment Ahead of Discrimination Ban. https://ogletree.com/insights-resources/blog-posts/illinois-unveils-draft-notice-rules-on-ai-use-in-employment-ahead-of-discrimination-ban/

[6] Illinois Employers Face AI Transparency Deadline Despite New Executive Order. https://www.reinhartlaw.com/news-insights/illinois-employers-face-ai-transparency-deadline-despite-new-executive-order

[7] Revisiting 2026 State AI Laws That Aim to Regulate AI in Employment. https://www.thehrdigest.com/revisiting-2026-state-ai-laws-that-aim-to-regulate-ai-in-employment/

[8] New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption. https://www.jdsupra.com/legalnews/new-state-ai-laws-are-effective-on-4178820/

[10] State laws regulating AI take effect in the new year. Here’s what HR needs to know. https://www.hrdive.com/news/state-laws-regulating-ai-take-effect-in-the-new-year-what-hr-needs-to-know/807125/

[11] Gender inclusive language generation framework: A reasoning approach with RAG and CoT. https://www.sciencedirect.com/science/article/pii/S0950705125011372

[12] ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning. https://arxiv.org/html/2501.01031v1

[13] Bias & Fairness in AI Models. https://research.contrary.com/report/bias-fairness

[14] Top 10 RAG Use Cases and Business Benefits. https://www.uptech.team/blog/rag-use-cases

[15] RAG in Financial Services: Use-Cases, Impact, & Solutions. https://hatchworks.com/blog/gen-ai/rag-for-financial-services/

[16] Top 7 RAG Use Cases and Applications to Explore in 2025. https://www.projectpro.io/article/rag-use-cases-and-applications/1059

[17] Best RAG Use Cases in Business: From Finance and LegalTech to Healthcare and Education. https://www.aimprosoft.com/blog/rag-use-cases-in-business/

[18] Development and evaluation of an agentic LLM based RAG framework for evidence-based patient education. https://pmc.ncbi.nlm.nih.gov/articles/PMC12306375/

[19] Retrieval augmented generation for 10 large language models and its generalizability in assessing medical fitness. https://www.nature.com/articles/s41746-025-01519-z

[20] Ensuring a National Policy Framework for Artificial Intelligence (Executive Order), The White House, Dec. 11, 2025. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/


 
 
 
Critical Thinking and AI


The campus was alive with energy—parents carrying shopping bags, students laughing as they mapped out their next four years, and my daughter, Prestyn, planning for her future. This was our last college visit before moving in for good this fall. In one of the parent sessions, one message stood out above the rest:


Critical thinking is the skill of the future

I smiled to myself. I couldn’t agree more. After decades of leadership and workforce experience and advanced training in generative AI, I knew they were right.


Critical thinking—questioning, analyzing, challenging—has never been more essential. In this era of AI, critical thinking is no longer optional. It is the skill that will separate those who can thrive from those who will drown in information that may not tell the full story.


I couldn’t wait to meet back up with Prestyn to share yet another moment of confirmation that she had indeed chosen the right place to continue her educational journey.

 

A Simple Birthday Query


The First Question

I asked AI:

Other than my birth (LOL), what other amazing things happened on October 13?

The answer came back confidently:

  • The U.S. Navy was founded.

  • The White House cornerstone was laid.

  • Nero became Roman Emperor.


The Second Question

I typed again:

"What major events in Black history happened on October 13?”

This time, the response was different:

  • Edith Spurlock Sampson, the first African American woman appointed to the United Nations, was born in 1898.

  • Angela Davis was arrested in 1970, sparking global protests and a rallying cry for justice.

  • Maryland ratified its emancipation constitution in 1864, freeing enslaved people before the 13th Amendment.


Why These Matter

  • Edith Sampson’s appointment represented international recognition of Black women scholars and diplomats at a time when such roles were exceedingly rare.

  • Angela Davis’s arrest highlighted Black radical thought and the legal system’s treatment of activists.

  • Emancipation in Maryland before the 13th Amendment shows that progress toward freedom took place on multiple legal fronts throughout the Civil War era.

 

Two questions. Two answers. Two very different versions of history.

 

Why Did We Have to Ask Twice?

This wasn’t just an AI problem—it was a history problem.


AI learns from existing data—news archives, encyclopedias, and history books. If those sources center Eurocentric narratives while treating Black history as an afterthought, AI will mirror that bias.


The AI didn’t “forget” Edith Sampson or Angela Davis. It was never trained to consider those stories as equally important in the first place.

 

Our Rewrite Journey

I decided to turn this moment into a learning lesson with Prestyn—a story about how a simple birthday question revealed something much bigger about how history is told.


When I asked ChatGPT to help explain its biased response, the first answer placed the burden on me:

“If you want to see inclusive stories, you have to ask differently.”


That didn’t sit well with me at all! Why should I—or anyone—have to ask twice to get the full truth?


So, I pushed back:

“This isn’t about asking better questions. This is about building systems that tell better stories by default.”


ChatGPT and I had another lengthy conversation about the piece I wanted to write. In doing so, I realized the rewrite itself was an act of critical thinking—examining what was missing, challenging the first answer, and reshaping the narrative.


Critical Thinking in the AI Era

When Prestyn and I were debriefing our experiences at the university event, I looked at her and said, “This is what I want you to take with you into college…this is what I mean by critical thinking.”


Critical thinking today means:

  • Recognizing what’s left unsaid.

  • Asking why certain voices are amplified while others are erased.

  • Demanding systems that reflect the whole truth—not just the parts deemed ‘mainstream.’

 

AI Is a Mirror of Our Society

The omissions we see in AI responses are not random. They reflect centuries of historical erasure, where the contributions of Black people and women were minimized or ignored.

Unless AI is intentionally built with inclusive data, it will continue to replicate these patterns. Edith Sampson’s diplomacy, Angela Davis’s activism, and the emancipation of enslaved people in Maryland aren’t “extras.” They are history.

 

The Lesson for Students and Parents

Prestyn’s generation will grow up with AI as a constant companion—researching, learning, even writing. But AI is not the final authority. Critical thinking is.


The future belongs to those who can:

  • Question the first answer.

  • Spot the gaps.

  • Refuse to settle for half-truths.


The Lesson for Corporate Leaders

In the era of AI-driven decision-making, leaders cannot afford to delegate critical thinking to algorithms. AI is powerful, but it reflects the data—and biases—it’s built upon. True leadership requires oversight, discernment, and accountability.


The future belongs to those who can:

  • Interrogate the data behind the decisions.

  • Identify bias and its business impact.

  • Refuse to adopt technology without ethical alignment.


Quote from Corporate Leader, Reggie Romain

The Lesson for Educators and Policymakers

As AI reshapes how we learn, work, and interact, educators and policymakers hold the responsibility of ensuring future generations are prepared not just to use AI, but to question and improve it. AI cannot replace human judgment, ethics, or contextual understanding—and our systems of learning and governance must reflect that.


The future belongs to those who can:

  • Integrate AI literacy and critical thinking into education.

  • Create policies that address bias in technology.

  • Champion equitable access to both technology and truth.

 

A Call to Action

October 13, my birthday, taught me something unexpected:

The whole truth should never require a second question.


Parents – Teach your children that the most powerful thing they can do is think deeply and demand better answers.


Students – When you use AI or any other tool, don’t just accept the first thing it gives you. Ask what’s missing. Ask who’s missing.


Corporate Leaders – Refuse to adopt technology that lacks ethical alignment.


Educators and Policymakers – Equip future generations to question and improve AI and create policies that prioritize accountability and equity.


Truth isn’t just about what’s said. It’s also about what isn’t.

 
 
 