Abstract: With the rapid advancement of artificial intelligence (AI) in healthcare, clinical decision support systems (CDSS) are gradually being integrated into clinical workflows. While CDSS enhance diagnostic and therapeutic efficiency and accuracy, they have also sparked complex debates over accountability. This article systematically examines the current global landscape of accountability challenges associated with CDSS and analyzes the mechanisms underlying these challenges across five key dimensions: ambiguity in liability attribution, the “black-box” nature of algorithms, data bias, lagging legal and regulatory systems, and heterogeneity in deployment environments. To address these issues, a collaborative framework centered on whole-lifecycle governance is proposed. Specifically, the framework advocates: enhancing AI trustworthiness through technical pathways; constructing innovative mechanisms such as risk-based tiered regulation and no-fault compensation funds through legal and regulatory pathways; strengthening institutional internal controls through organizational pathways; and reshaping a new professional paradigm of “human-AI collaboration” through ethical and educational pathways. The study argues that resolving the accountability dilemmas of CDSS is not only critical for fostering trustworthy AI in medicine but also essential for safeguarding patient safety, ensuring healthcare equity, and providing clinicians with clear liability boundaries and robust professional protections amid technological transformation.