TY - GEN
T1 - Robust shapelets learning
T2 - 1st Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2018
AU - Deng, Huiqi
AU - Chen, Weifu
AU - Ma, Andy J.
AU - Shen, Qi
AU - YUEN, Pong Chi
AU - Feng, Guocan
N1 - Funding Information: This work is partially supported by the NSFC under grants Nos. 61673018, 61272338, 61703443, the Guangzhou Science and Technology Founding Committee under grant No. 201804010255, and the Guangdong Province Key Laboratory of Computer Science.
PY - 2018
Y1 - 2018
N2 - Shapelets are discriminative local patterns in time series that maximally distinguish among different classes. Instead of considering full series, shapelet transformation considers the existence or absence of local shapelets, which leads to high classification accuracy, easy visualization, and interpretability. One limitation of existing methods is robustness. For example, search-based approaches select sample subsequences as shapelets, and intuitively such methods may not be accurate or robust enough. Learning-based approaches learn shapelets by maximizing discriminative ability; however, these methods may not preserve the basic shape needed for visualization. In practice, shapelets are subject to various geometric transformations, such as translation, scaling, and stretching, which may confuse shapelet judgement. In this paper, robust shapelet learning is proposed to solve the above problems. By learning transform-invariant representative prototypes from all training time series, rather than just selecting samples from the sequences, each time series sample can be approximated by a combination of the transformations of those prototypes. Based on this combination, samples can be easily classified into different classes. Experiments on 16 UCR time series datasets showed that the performance of the proposed framework is comparable to state-of-the-art methods, while learning more representative shapelets for complex scenarios.
AB - Shapelets are discriminative local patterns in time series that maximally distinguish among different classes. Instead of considering full series, shapelet transformation considers the existence or absence of local shapelets, which leads to high classification accuracy, easy visualization, and interpretability. One limitation of existing methods is robustness. For example, search-based approaches select sample subsequences as shapelets, and intuitively such methods may not be accurate or robust enough. Learning-based approaches learn shapelets by maximizing discriminative ability; however, these methods may not preserve the basic shape needed for visualization. In practice, shapelets are subject to various geometric transformations, such as translation, scaling, and stretching, which may confuse shapelet judgement. In this paper, robust shapelet learning is proposed to solve the above problems. By learning transform-invariant representative prototypes from all training time series, rather than just selecting samples from the sequences, each time series sample can be approximated by a combination of the transformations of those prototypes. Based on this combination, samples can be easily classified into different classes. Experiments on 16 UCR time series datasets showed that the performance of the proposed framework is comparable to state-of-the-art methods, while learning more representative shapelets for complex scenarios.
KW - Representative prototype
KW - Robustness
KW - Transform-invariant
UR - http://www.scopus.com/inward/record.url?scp=85057205489&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-03338-5_41
DO - 10.1007/978-3-030-03338-5_41
M3 - Conference proceeding
AN - SCOPUS:85057205489
SN - 9783030033378
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 491
EP - 502
BT - Pattern Recognition and Computer Vision - First Chinese Conference, PRCV 2018, Proceedings
A2 - Lai, Jian-Huang
A2 - Liu, Cheng-Lin
A2 - Tan, Tieniu
A2 - Chen, Xilin
A2 - Zha, Hongbin
A2 - Zhou, Jie
A2 - Zheng, Nanning
PB - Springer Verlag
Y2 - 23 November 2018 through 26 November 2018
ER -