
Comparative Study of SHAP and LIME for Explaining Deep Learning Anomalies in Time Series Data

This work presents strategies for applying Explainable Artificial Intelligence (XAI) techniques to temporal data, with a particular focus on interpreting unsupervised deep learning models. The approach is based on a deep attention-based autoencoder trained to detect anomalies in sequential data from mobile networks, using a real, labeled dataset as a case study. The SHAP and LIME techniques are analyzed, evaluating their ability to provide local explanations of model decisions at the level of individual instances. The analysis covers different time windows and combinations of relevant variables, with the aim of identifying interpretable patterns of anomalous behaviour. The results show that the two techniques have complementary characteristics and support understanding of the model from different perspectives. This comparison highlights the potential of explainability as an auxiliary tool for interpreting decisions in contexts with time-dependent data.
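The abstract describes producing local, per-instance explanations of an autoencoder's anomaly decisions. As a minimal sketch of the underlying idea (not the authors' pipeline), the following shows a LIME-style local surrogate: perturb one instance, score the perturbations with a reconstruction-error anomaly score, and fit a distance-weighted linear model whose coefficients act as per-feature attributions. The stub linear "autoencoder" and all names here are hypothetical stand-ins so the example is self-contained; a real study would use the trained attention autoencoder and the `shap`/`lime` libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained autoencoder: a fixed linear map.
# In the paper this would be the attention autoencoder over a time window.
W = rng.normal(size=(8, 8))

def anomaly_score(x):
    # Reconstruction error ||x - W x|| used as the per-instance anomaly score.
    return float(np.linalg.norm(x - W @ x))

def lime_style_attribution(x, n_samples=500, sigma=0.1):
    # LIME-style local explanation: perturb the instance, weight samples by
    # proximity, fit a linear surrogate to the anomaly score, and read the
    # surrogate's coefficients as local feature importances.
    perturbed = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    scores = np.array([anomaly_score(p) for p in perturbed])
    # Gaussian proximity kernel around the instance being explained.
    weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / (2 * sigma**2))
    sw = np.sqrt(weights)
    # Weighted least squares with an intercept column.
    X = np.hstack([perturbed, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X * sw[:, None], scores * sw, rcond=None)
    return coef[:-1]  # one local importance value per feature in the window

x = rng.normal(size=8)          # one instance (e.g. one time-window slice)
attr = lime_style_attribution(x)
print(attr.shape)               # one attribution per input feature
```

SHAP differs in how it constructs and weights the perturbed samples (Shapley-value coalitions rather than a proximity kernel), which is one source of the complementary behaviour the abstract reports.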

José Ribeiro
CIICESI, ESTG, Instituto Politécnico do Porto
Portugal

Cesar Analide
Department of Informatics, ALGORITMI Center, University of Minho
Portugal

Ricardo Santos
CIICESI, ESTG, Instituto Politécnico do Porto
Portugal

Fábio Silva
CIICESI, ESTG, Instituto Politécnico do Porto
Portugal