Comparative Study of SHAP and LIME for Explaining Deep Learning Anomalies in Time Series Data
This work presents a comparative study of Explainable Artificial Intelligence (XAI) techniques applied to temporal data, with a particular focus on interpreting unsupervised deep learning models. The approach is based on a deep attention-based autoencoder trained to detect anomalies in sequential data from mobile networks, using a real, labeled dataset as a case study. The SHAP and LIME techniques are analyzed, evaluating their ability to provide local explanations of model decisions at the level of individual instances. The analysis covers different time windows and combinations of relevant variables, with the aim of identifying interpretable patterns of anomalous behavior. The results show that the two techniques have complementary characteristics and support understanding of the model from different perspectives. This comparison highlights the potential of explainability as an auxiliary tool for interpreting decisions in contexts with time-dependent data.
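To make the setup concrete, the sketch below illustrates how SHAP and LIME can each produce local explanations for a reconstruction-error anomaly score over flattened time windows. It is a minimal illustration, not the paper's implementation: the attention autoencoder is replaced by a PCA reconstruction as a hypothetical stand-in, and the KPI names, window layout, and data are synthetic placeholders.

```python
# Minimal sketch: local SHAP and LIME explanations of an anomaly score.
# Assumptions: a PCA reconstruction stands in for the trained attention
# autoencoder; feature names and data are synthetic, not the paper's dataset.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Flattened sliding windows: 5 KPIs x 4 time steps per instance (assumed layout).
feature_names = [f"kpi{k}_t-{t}" for k in range(5) for t in range(4)]
X_train = rng.normal(size=(500, 20))
x_test = rng.normal(size=(1, 20)) + 3.0          # one anomalous instance

pca = PCA(n_components=5).fit(X_train)           # stand-in for the autoencoder

def anomaly_score(X):
    """Mean squared reconstruction error per instance (the anomaly score)."""
    recon = pca.inverse_transform(pca.transform(X))
    return ((X - recon) ** 2).mean(axis=1)

# SHAP: KernelExplainer attributes the score to each windowed feature.
background = shap.sample(X_train, 50)
shap_values = shap.KernelExplainer(anomaly_score, background).shap_values(x_test)

# LIME: local surrogate regression fitted around the same instance.
lime_exp = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="regression"
).explain_instance(x_test[0], anomaly_score, num_features=5)

top = np.argsort(-np.abs(shap_values[0]))[:5]
print("Top SHAP features:", [feature_names[i] for i in top])
print("Top LIME features:", lime_exp.as_list())
```

Comparing the two outputs on the same instance mirrors the study's methodology: SHAP distributes the anomaly score additively across windowed features, while LIME fits a local surrogate model, so agreement or divergence between their top-ranked features is itself informative.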
