Evaluation of a Machine Learning-Based Decision Support System for Medical Issues
A growing body of literature on clinical decision support systems (CDSSs) employing deep learning (DL) focuses mostly on direct comparisons between CDSSs and physicians (human versus computer). Information about how these systems perform when used as supplements to human decision-making (human versus human augmented by computer) remains limited. This research investigates the viability of explainable neural network algorithms for decision support in healthcare text analysis. We applied a defined methodology to the same medical dataset to improve the interpretability of the decisions rendered by a Convolutional Neural Network (CNN), with the aim of bolstering health specialists' confidence in black-box predictions. We employed established explainable deep learning techniques, namely Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), alongside an alternative interpretative framework, the Contextual Importance and Utility (CIU) approach. The produced explanations were evaluated through human assessment. We conducted three user studies employing explanations produced by LIME, SHAP, and CIU. Participants from various non-medical fields completed a series of evaluations through an online questionnaire and documented their perception and understanding of the given rationales. Three user groups (N = 40, 40, 40), each exposed to one of the three explanation types, were analyzed quantitatively. Our findings confirm that the CIU approach outperformed both LIME and SHAP in enhancing decision support, while also being more user-friendly and intelligible. Furthermore, CIU generated explanations faster than either LIME or SHAP. Our findings also indicate significant inconsistencies in human judgement across the different explanation methods.
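To make the evaluated setup concrete, the sketch below illustrates how post-hoc explanations of the kind shown to study participants might be generated for a text classifier (LIME shown; SHAP and CIU would be driven from the same black-box prediction function). This is a minimal illustration, not the authors' implementation: the toy corpus, the class labels, and the TF-IDF/logistic-regression pipeline standing in for the paper's CNN are all assumptions introduced here for clarity.

```python
# Minimal sketch (assumed setup, not the authors' code): producing a LIME
# explanation for a text classifier. Any model exposing predict_proba can
# stand in for the paper's CNN.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in corpus; the study used a real medical text dataset.
texts = ["patient reports chest pain and shortness of breath",
         "routine checkup, no abnormal findings",
         "persistent cough with fever for two weeks",
         "normal blood pressure and heart rate"]
labels = [1, 0, 1, 0]  # 1 = abnormal, 0 = normal (illustrative classes)

# Stand-in black-box model in place of the CNN described in the paper.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["normal", "abnormal"])
exp = explainer.explain_instance(
    "patient has fever and chest pain",  # instance to explain
    model.predict_proba,                 # black-box prediction function
    num_features=5)                      # top contributing words
print(exp.as_list())  # (word, weight) pairs, the kind of rationale users saw
```

A between-subjects design such as the one described would then show each participant group only the output of one explanation method (LIME, SHAP, or CIU) for the same set of model predictions.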
