Explaining Time Series

This thesis bridges two lines of work on time-series interpretability: explanations learned via the information bottleneck (TimeX++ [1]) and ReLiNet [2], which learns dynamical systems through stepwise linearization. It rigorously compares their explanatory power and critically evaluates the benchmark of [1] to advance the interpretability of temporal models.

true" ? copyright : '' }

Description

Paper [1] presents both a method, TimeX++, and a benchmark for explaining models trained on time-series data. In our own contribution [2], we proposed ReLiNet, a method for learning a dynamical system that can likewise be used to predict time series.
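At its core, [1] optimizes an information-bottleneck-style objective: a learned mask should preserve the model's prediction while retaining as little of the input series as possible. The sketch below illustrates this general idea only; `model` and `explainer` are hypothetical placeholder modules, and the fidelity-plus-sparsity loss is a simplified surrogate, not the exact objective of TimeX++.

```python
import torch
import torch.nn.functional as F

def ib_mask_loss(model, explainer, x, lam=1e-3):
    # Sketch of a generic mask-learning objective (not the authors' code):
    # keep the prediction under the mask (fidelity) while retaining as
    # little of the series as possible (sparsity as a compression surrogate).
    m = torch.sigmoid(explainer(x))   # soft mask over (time, feature) entries
    y_full = model(x).detach()        # reference prediction on the full series
    y_masked = model(x * m)           # prediction on the masked series
    fidelity = F.mse_loss(y_masked, y_full)
    sparsity = m.mean()
    return fidelity + lam * sparsity
```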

Our approach employs a stepwise linearization that supports different kinds of explanation targets: explanations of the final outcome (the setting demonstrated in [1]) as well as assessments of feature importance across the entire series or within specific windows.
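To make this concrete, the following is a minimal sketch, in the spirit of [2], of a recurrent predictor whose every step is a linear state-space update, together with the attribution such linearizations afford. All names are illustrative, and the sketch deliberately omits the stability guarantees that ReLiNet enforces.

```python
import torch
import torch.nn as nn

class StepwiseLinearPredictor(nn.Module):
    # Illustrative sketch (not the authors' code): an LSTM emits, at every
    # step, the matrices of a linear state-space update
    # x_{t+1} = A_t x_t + B_t u_t, so each prediction step is linear in the
    # state and input and can be inspected directly.
    def __init__(self, state_dim, input_dim, hidden_dim=64):
        super().__init__()
        self.n, self.m = state_dim, input_dim
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.a_head = nn.Linear(hidden_dim, state_dim * state_dim)
        self.b_head = nn.Linear(hidden_dim, state_dim * input_dim)

    def forward(self, x0, u):
        # x0: (batch, n) initial state; u: (batch, T, m) input sequence
        h, _ = self.rnn(u)
        b, T, _ = u.shape
        A = self.a_head(h).view(b, T, self.n, self.n)
        B = self.b_head(h).view(b, T, self.n, self.m)
        xs, x = [], x0
        for t in range(T):
            # one linear step: x_{t+1} = A_t x_t + B_t u_t
            x = (A[:, t] @ x.unsqueeze(-1) + B[:, t] @ u[:, t].unsqueeze(-1)).squeeze(-1)
            xs.append(x)
        return torch.stack(xs, dim=1), (A, B)  # predictions and linearizations

def final_state_attributions(A, B, u):
    # Unrolling the linear steps gives
    #   x_T = (A_{T-1} ... A_0) x_0 + sum_t (A_{T-1} ... A_{t+1}) B_t u_t,
    # so the summand for step t is the exact contribution of input u_t to
    # the final state; window or whole-series importances follow by
    # aggregating the relevant steps.
    batch, T, n, _ = A.shape
    M = torch.eye(n, device=A.device).expand(batch, n, n)
    contribs = []
    for t in reversed(range(T)):
        contribs.append((M @ B[:, t] @ u[:, t].unsqueeze(-1)).squeeze(-1))
        M = M @ A[:, t]  # extend the matrix product one step further back
    return torch.stack(contribs[::-1], dim=1)  # (batch, T, n)
```

Because every step is exactly linear, these attributions are faithful by construction rather than post-hoc approximations, which is precisely what a comparison with the learned explanations of [1] would probe.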

The task of this thesis is to apply the method from [2] to encode time series and compare the resulting linearizations and their explanatory capabilities with those presented in [1]. A critical evaluation of the strengths and weaknesses of the benchmarking approach in [1] is expected.
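Benchmarks of this kind typically score an explanation against ground-truth saliency on synthetic datasets using precision/recall-style metrics. A plausible scoring function is sketched below; it assumes a binary ground-truth mask over (time, feature) entries and is meant as an illustration, not necessarily the exact metric used in [1].

```python
import numpy as np
from sklearn.metrics import average_precision_score

def explanation_auprc(saliency, ground_truth):
    # Rank all (time, feature) entries by attributed importance and compute
    # the area under the precision-recall curve against the binary
    # ground-truth mask.
    return average_precision_score(
        ground_truth.reshape(-1).astype(int),
        np.abs(saliency).reshape(-1),
    )
```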

References:

[1] Zichuan Liu, Tianchun Wang, Jimeng Shi, Xu Zheng, Zhuomin Chen, Lei Song, Wenqian Dong, Jayantha Obeysekera, Farhad Shirani, and Dongsheng Luo. TimeX++: Learning Time-Series Explanations with Information Bottleneck. Proceedings of the Forty-First International Conference on Machine Learning (ICML 2024), 2024.

[2] Alexandra Baier, Decky Aspandi, and Steffen Staab. ReLiNet: Stable and Explainable Multistep Prediction with Recurrent Linear Parameter Varying Networks. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), 2023. https://www.ijcai.org/proceedings/2023/0385.pdf

