Ethical Prompt Design for Health Equity: Preventing Hallucination and Addressing Bias in AI Diagnoses

Authors

  • Adya Mishra (Independent Researcher), Virginia, USA

DOI:

https://doi.org/10.63282/3050-9262.IJAIDSML-V6I3P102

Keywords:

Prompt Engineering, Health Equity, LLMs, AI Hallucination, Bias Mitigation, Medical NLP, Ethical AI, Human-in-the-Loop

Abstract

The use of large language models (LLMs) in healthcare is transforming how clinicians access information, generate insights, and support patient care decisions. These AI systems hold tremendous promise, offering the ability to summarize complex clinical notes, suggest differential diagnoses, and assist in managing vast amounts of medical data. However, alongside these benefits come serious ethical and practical concerns. If not carefully guided, LLMs can generate hallucinations (confident-sounding yet entirely fabricated information) that can mislead clinical decision making. Moreover, these models often inherit and perpetuate biases from the data they are trained on, potentially exacerbating disparities in care for already marginalized or underrepresented patient populations. This paper explores how ethical prompt design, the careful crafting of instructions and context given to LLMs, can help address these risks. We focus on two key challenges: reducing hallucinated AI responses in medical contexts and minimizing bias that could negatively impact care quality for certain demographic groups. To tackle these issues, we propose a human-in-the-loop framework in which clinicians and domain experts actively shape and evaluate prompts to ensure that outputs are safe, inclusive, and grounded in evidence-based medicine.
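The two mechanisms the abstract names, ethical prompt design and human-in-the-loop review, can be illustrated with a minimal sketch. The function and field names below are hypothetical and do not come from the paper; the grounding and equity instructions are illustrative stand-ins for the kind of prompt constraints the authors describe.

```python
# Illustrative sketch of the two ideas in the abstract: (1) prompts that embed
# anti-hallucination and bias-mitigation instructions, and (2) a clinician
# review gate before any model output is used. All names here are hypothetical.

GROUNDING_RULES = (
    "Answer only from the provided clinical context. "
    "If the context is insufficient, reply 'insufficient evidence' "
    "rather than guessing, and cite the context passage behind each claim."
)

EQUITY_RULES = (
    "Do not infer risk, adherence, or prognosis from race, ethnicity, "
    "insurance status, or zip code unless clinically indicated and "
    "supported by cited evidence."
)

def build_clinical_prompt(question: str, context: str) -> str:
    """Compose a prompt that places grounding and equity constraints
    before the clinical context and question."""
    return "\n\n".join([
        GROUNDING_RULES,
        EQUITY_RULES,
        f"Context:\n{context}",
        f"Question:\n{question}",
    ])

def clinician_review(draft_answer: str, approved: bool, notes: str = "") -> dict:
    """Human-in-the-loop gate: a clinician must approve the model draft
    before it is released; otherwise it is returned for revision."""
    return {
        "answer": draft_answer if approved else None,
        "status": "approved" if approved else "returned_for_revision",
        "reviewer_notes": notes,
    }
```

The design point is that the safety instructions are part of the prompt itself, not an afterthought, and that no model output bypasses the human reviewer.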

References

[1] Patil, R., Heston, T. F., & Bhuse, V. (2024). Prompt engineering in healthcare. Electronics, 13(15), 2961.

[2] Ajuzieogu, U. C. Towards Hallucination-Resilient AI: Navigating Challenges, Ethical Dilemmas, and Mitigation Strategies.

[3] Dankwa-Mullan I, Weeraratne D. Artificial intelligence and machine learning technologies in cancer care: addressing disparities, bias, and data diversity. Cancer Discov. 2022;12(6):1423–1427. 10.1158/2159-8290.CD-22-0373

[4] Yang J, Soltan AAS, Eyre DW, Yang Y, Clifton DA. An adversarial training framework for mitigating algorithmic biases in clinical machine learning. NPJ Digit Med. 2023;6(1):55. 10.1038/s41746-023-00805-y

[5] Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Human centered design to address biases in artificial intelligence. J Med Internet Res. 2023;25:e43251. 10.2196/43251

[6] Ferrara E. Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci. 2024;6(1):3. 10.3390/sci6010003

[7] Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. 2024;10(4):e26297. 10.1016/j.heliyon.2024.e26297

[8] Dankwa-Mullan I, Scheufele EL, Matheny M, Quintana Y, Chapman W, Jackson G, et al. A proposed framework on integrating health equity and racial justice into the artificial intelligence development lifecycle. J Health Care Poor Underserved. 2021;32(2):300–317. 10.1353/hpu.2021.0065

[9] Rajamani G, Rodriguez Espinosa P, Rosas LG. Intersection of health informatics tools and community engagement in health related research to reduce health inequities: scoping review. J Particip Med. 2021;13(3):e30062. 10.2196/30062

[10] Bura, C., Myakala, P. K., & Jonnalagadda, A. K. (2025). Ethical prompt engineering: Addressing bias, transparency, and fairness.

[11] Leslie Miller, C. J., Simon, S. L., Dean, K., Mokhallati, N., & Cushing, C. C. (2024). The critical need for expert oversight of ChatGPT: Prompt engineering for safeguarding child healthcare information. Journal of Pediatric Psychology, 49(11), 812–817.

[12] Patil, R., Heston, T. F., & Bhuse, V. (2024). Prompt Engineering in Healthcare. Electronics, 13(15), 2961. https://doi.org/10.3390/electronics13152961

[13] Alemanno, A., Carmone, M., & Priore, L. (2025). Prompting as an emerging skill for Healthcare Professionals. Journal of Advanced Health Care. Retrieved from https://www.jahc.it/index.php/jahc/article/view/399

[14] Kim, Y., Jeong, H., Chen, S., Li, S. S., Lu, M., Alhamoud, K., ... & Breazeal, C. (2025). Medical hallucinations in foundation models and their impact on healthcare. arXiv preprint arXiv:2503.05777.

[15] Aljohani, M., Hou, J., Kommu, S., & Wang, X. (2025). A comprehensive survey on the trustworthiness of large language models in healthcare. arXiv preprint arXiv:2502.15871.

[16] Echeverría Muñoz, D. E. (2025). Legal impact of Artificial Intelligence (AI) hallucinations (Master's thesis, Quito, EC: Universidad Andina Simón Bolívar, Sede Ecuador).

[17] Yao Zhang, Tongquan Zhou, Huifen Qiao, Taohui Li, "Ethical Issues in AI-Generated Texts: A Systematic Review and Analysis", International Journal of Human–Computer Interaction, pp. 1, 2025.

[18] Henrickson L, Meroño-Peñuela A. Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT. AI & SOCIETY. 2023:1–16.

[19] Giray L. Prompt Engineering with ChatGPT: A Guide for Academic Writers. Annals of Biomedical Engineering. 2023:1–5.

[20] Grabb D. The impact of prompt engineering in large language model performance: a psychiatric example. Journal of Medical Artificial Intelligence. 2023;6.

[21] Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems. 2023;47(1):33.

[22] Bibi, N., Khan, M., Khan, S., Noor, S., Alqahtani, S. A., Ali, A., & Iqbal, N. (2024). Sequence-based intelligent model for identification of tumor T cell antigens using fusion features. IEEE Access.

[23] G. Lakshmikanthan, S. S. Nair, J. Partha Sarathy, S. Singh, S. Santiago and B. Jegajothi, "Mitigating IoT Botnet Attacks: Machine Learning Techniques for Securing Connected Devices," 2024 International Conference on Emerging Research in Computational Science (ICERCS), Coimbatore, India, 2024, pp. 1-6, doi: 10.1109/ICERCS63125.2024.10895253

Published

2025-07-04

Section

Articles

How to Cite

1. Mishra A. Ethical Prompt Design for Health Equity: Preventing Hallucination and Addressing Bias in AI Diagnoses. IJAIDSML [Internet]. 2025 Jul. 4 [cited 2025 Sep. 15];6(3):7-12. Available from: https://ijaidsml.org/index.php/ijaidsml/article/view/231