Managing a “responsibility vacuum” in AI monitoring and governance in healthcare: a qualitative study
BMC Health Services Research

Background
Despite the increasing implementation of artificial intelligence (AI) and machine learning (ML) technologies in healthcare, their long-term safety, effectiveness, and equity remain compromised by a lack of sustained oversight. This study explores the phenomenon of a “responsibility vacuum” in AI governance, wherein maintenance and monitoring tasks are poorly defined, inconsistently performed, and undervalued across healthcare systems.

Methods
We conducted semi-structured interviews with 21 experts involved in AI implementation in healthcare, including clinicians, clinical informaticists, computer scientists, and legal/policy professionals. Participants were recruited through purposive and snowball sampling. Interviews were transcribed, coded, and analyzed using abductive qualitative methods to identify themes related to maintenance practices, institutional incentives, and responsibility attribution.

Results
Participants widely recognized that AI models degrade over time due to factors such as data drift, changes in clinical practice, and poor generalizability. However, monitoring practices remain ad hoc and fragmented, with few institutions investing in structured oversight infrastructure. This “responsibility vacuum” is perpetuated by institutional incentives favoring rapid innovation and strategic ignorance of AI failures. Despite these challenges, some participants described grassroots efforts to monitor and maintain AI systems, drawing inspiration from fields such as radiology, laboratory medicine, and transportation safety.

Conclusions
Our findings suggest that institutional and cultural forces in healthcare deprioritize the maintenance of AI tools, creating a governance gap that may lead to patient harm and inequitable outcomes. Addressing this responsibility vacuum will require formalized accountability structures, interdisciplinary collaboration, and policy reforms that center long-term safety and equity. Without such changes, AI/ML technologies designed to improve patient health may introduce new forms of harm, ultimately eroding trust in AI and machine learning for healthcare.

https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-025-13388-z