TY - GEN
T1 - SDD: Shape-aware Data-driven Attention Mechanism for Time Series Analysis
AU - Cao, Yanyun
AU - Zuo, Rundong
AU - Cao, Rui
AU - Choi, Byron
AU - Xu, Jianliang
AU - Bhowmick, Sourav S.
N1 - We thank Silver Ho-Fai Liu for the code review. This work was supported by the Hong Kong Research Grant Council (HKRGC), RIF R2002-20F.
Publisher copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/10
Y1 - 2025/11/10
N2 - Multivariate time series (MTS) analysis has extensive applications in various areas, such as human activity recognition, healthcare, and economics. Recently, Transformer approaches have been specifically designed for MTS and have consistently reported superior performance. In this paper, we demonstrate a software system for a recent efficient shape-aware Transformer (SDD), where time-series subsequences (a.k.a. shapes) are made available to users for investigation. First, a time-series Transformer, called SVP-T, takes shapes, together with their variable-position information (VP information), as input for training a Transformer model. These shapes are computed from different variables and time intervals, enabling the Transformer model to learn dependencies across both time and variables simultaneously. Second, a data-driven kernel-based attention mechanism, called DARKER, reduces the time complexity of training Transformer models from O(N²) to O(N), where N is the number of inputs. As a result, training with DARKER offers about a 3x-4x speedup over vanilla Transformers. In this demo, we present the first system (SDD) that integrates SVP-T and DARKER. In particular, SDD visualizes SVP-T's attention matrix and allows users to explore key shapes that have high attention weights. Furthermore, users can use SDD to select the shape input for training a new model, to further balance efficiency and accuracy.
AB - Multivariate time series (MTS) analysis has extensive applications in various areas, such as human activity recognition, healthcare, and economics. Recently, Transformer approaches have been specifically designed for MTS and have consistently reported superior performance. In this paper, we demonstrate a software system for a recent efficient shape-aware Transformer (SDD), where time-series subsequences (a.k.a. shapes) are made available to users for investigation. First, a time-series Transformer, called SVP-T, takes shapes, together with their variable-position information (VP information), as input for training a Transformer model. These shapes are computed from different variables and time intervals, enabling the Transformer model to learn dependencies across both time and variables simultaneously. Second, a data-driven kernel-based attention mechanism, called DARKER, reduces the time complexity of training Transformer models from O(N²) to O(N), where N is the number of inputs. As a result, training with DARKER offers about a 3x-4x speedup over vanilla Transformers. In this demo, we present the first system (SDD) that integrates SVP-T and DARKER. In particular, SDD visualizes SVP-T's attention matrix and allows users to explore key shapes that have high attention weights. Furthermore, users can use SDD to select the shape input for training a new model, to further balance efficiency and accuracy.
KW - efficient transformers
KW - human-in-the-loop
KW - time series analysis
UR - https://www.scopus.com/pages/publications/105023159288
U2 - 10.1145/3746252.3761483
DO - 10.1145/3746252.3761483
M3 - Conference proceeding
SN - 9798400720406
T3 - CIKM: Conference on Information and Knowledge Management
SP - 6614
EP - 6618
BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery (ACM)
CY - New York, NY, USA
ER -