An exploration of accountability frameworks for artificial intelligence-driven clinical decision support systems

Fund project:

Supported by the National Major Science and Technology Project for Prevention and Treatment of Cancer, Cardiovascular and Cerebrovascular, Respiratory, and Metabolic Diseases (2024ZD0524100, 2024ZD0524102), the Shenzhen Medical Research Fund Project (C10120250085), and the Shenzhen Basic Research Project (JCYJ20240813160307011).

Abstract:

With the rapid advancement of artificial intelligence (AI) in healthcare, clinical decision support systems (CDSS) are gradually being integrated into clinical workflows. While CDSS enhance diagnostic and therapeutic efficiency and accuracy, they have also raised complex questions about the attribution of liability. This article systematically examines the current landscape of CDSS accountability challenges in China and abroad, and analyzes the mechanisms underlying these challenges along five key dimensions: ambiguity of responsible parties, the "black-box" nature of algorithms, data bias, lagging legal and regulatory frameworks, and heterogeneity of deployment environments. To address these issues, a collaborative governance framework centered on whole-lifecycle governance is proposed. Specifically, the framework advocates enhancing the trustworthiness of AI through technical pathways; constructing innovative mechanisms such as risk-based tiered regulation and no-fault compensation funds through legal and regulatory pathways; strengthening institutional internal controls through organizational pathways; and reshaping a new professional paradigm of "human-AI collaboration" through ethical and educational pathways. Resolving the CDSS accountability dilemma is essential both for safeguarding patient safety and healthcare equity and for providing clinicians amid technological transformation with clear liability boundaries and robust professional protections, thereby promoting the healthy development of medical AI.

History
  • Received: 2025-07-02
  • Revised: 2025-07-25
  • Published online: 2025-08-19
  • Published: 2025-08-20