Generative AI tools such as ChatGPT, DeepSeek, Google’s Gemini, and Microsoft’s Copilot are rapidly transforming industries. Yet as these large language models become more affordable and take on pivotal decision-making roles, their inherent biases can skew results and erode public trust. Naveen Kumar, an associate professor at the University of Oklahoma’s Price College of Business, has co-authored research highlighting the critical need to address bias by developing and implementing ethical, explainable AI: strategies and guidelines that promote fairness and transparency while reducing stereotypes and discrimination in large language model (LLM) applications.
Kumar highlighted a significant trend in the AI market: “As major entities like DeepSeek and Alibaba unveil platforms that are either free or significantly cheaper, a worldwide AI pricing competition emerges,” he explained. “With cost becoming a predominant concern, questions arise about whether ethical considerations and bias regulations will maintain their importance. Alternatively, with international firms involved, might there be a swift move towards stringent regulations? We are hopeful for the latter, but it remains to be seen.”
The study Kumar contributed to pointed out that nearly one-third of survey respondents felt they had missed out on financial or employment opportunities due to biased AI algorithms. Kumar notes that while AI systems have been designed to eliminate explicit biases, subtle, implicit biases persist. As LLMs grow more intelligent, identifying these hidden biases becomes increasingly complex, underscoring the imperative for ethical governance.
Kumar stressed the importance of aligning these models with human ethical standards, especially in sectors such as finance, marketing, human resources, and healthcare, to prevent biased decisions and outcomes. “Healthcare models biased against certain demographics could worsen patient treatment disparities; recruitment algorithms might unfairly prefer certain genders or races; advertising models could continue to perpetuate harmful stereotypes,” he noted. These concerns underscore the vital need for models that uphold fairness and avoid perpetuating inequality.
While frameworks for explainable AI and ethical guidelines are being established, Kumar and his team urge scholars to forge ahead with proactive technical and organisational measures to monitor and mitigate bias in LLMs. They advocate for a balanced approach to ensure that AI applications remain effective, equitable, and transparent. “The pace at which this industry evolves generates significant tension among stakeholders, each with diverging aims. It is crucial to reconcile the concerns of developers, business leaders, ethicists, and regulators to adequately confront bias in these LLM models,” he concluded. “Striking an optimal balance across various business sectors and regional regulations is essential for achieving success.”
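One concrete form such monitoring can take is an outcome audit: comparing a model's decision rates across demographic groups. The sketch below is illustrative only and is not from Kumar's study; the data are fabricated, and the 0.8 threshold is the widely used "four-fifths rule" from US employment-selection guidance, shown here as one plausible red-flag criterion.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the per-group approval rate from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Fabricated audit data: (demographic group, did the model approve?)
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)

rates = selection_rates(decisions)
print(rates)                    # {'A': 0.6, 'B': 0.4}
print(disparate_impact(rates))  # ≈ 0.667, below 0.8 -> flag for review
```

A check like this catches only explicit, measurable disparities; the implicit biases Kumar describes require richer probes, such as counterfactual tests that vary names or demographic cues in otherwise identical inputs.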
More information: Naveen Kumar et al, “Addressing bias in generative AI: Challenges and research opportunities in information management,” Information & Management. DOI: 10.1016/j.im.2025.104103
Journal information: Information & Management

Provided by University of Oklahoma