Responsible AI to Increase Clinical Decision Trust: Explainability & Reliability of Machine Learning Models
TRUST Workshop
Biomedical and health informatics has recently seen important developments in the application of Machine Learning (ML) models, notably in support of clinical decision making. However, the integration of these computational models into clinical practice remains constrained.
A main goal is to provide solutions for machine learning explainability, enabling models to justify their outcomes and thereby supporting physicians in using model predictions effectively. A further key issue is quantifying the reliability of the predictions that ML models provide.
In this context, the proposed workshop aims to explore and promote discussion of techniques fostering explainability and/or reliability estimation of machine learning models. The main objective is to devise potential solutions that empower physicians to apply model predictions effectively in daily clinical practice.
Topics
- Machine Learning
- Explainability
- Interpretability
- Reliability
- Trust
- ...
Program (TBD)
December 9th, 2024
Call for Papers (CFP)
ICDM 2024 solicits papers (maximum 8 pages, plus up to 2 extra pages) for peer review.
Furthermore, as in previous years, papers not accepted by the main conference will be automatically forwarded to a workshop selected by the authors when they submitted to the main conference.
All accepted workshop papers will be included in the dedicated ICDMW proceedings, published by the IEEE Computer Society Press.
Workshops are expected to be held on December 9th, 2024.