Abstract: When artificial intelligence is integrated into medical decision-making, the ambiguous definition of technological autonomy and the multi-party contest over responsibility allocation give rise to legal disputes over rights and obligations as well as ethical challenges. Focusing on clinical diagnosis, treatment strategy generation, and prognosis assessment, this paper examines the data bias and deficiencies in decision traceability caused by the algorithmic “black box,” and then proposes solutions along three dimensions: defining responsible parties, standardizing the assessment of technical defects, and promoting interdisciplinary collaboration. From a legal perspective, drawing on the “high-risk system” classification under the European Union Artificial Intelligence Act and the no-fault liability principle under relevant product liability laws, a three-tier responsibility framework covering developers, operators, and medical institutions is constructed: for auxiliary systems, the operating physician bears ultimate responsibility; for decision-making systems, developers are required to provide verifiable algorithms and bear joint liability. Ethically, dynamic review mechanisms (such as mandatory disclosure of decision-tree logic and data balance metrics) are proposed to balance technological efficiency with patients’ right to informed consent.