Abstract
This paper addresses the efficient processing of subsequence matching under time warping in time-series databases. Time warping is a transformation that makes it possible to find sequences with similar patterns even when their lengths differ. Through a preliminary experiment, we first show that Naive-Scan, a basic method for subsequence matching under time warping, has its performance bottleneck in the CPU processing step. To optimize this step, we propose a novel method that eliminates all possible redundant calculations. We verify that this method is not only optimal for processing Naive-Scan but also incurs no false dismissals. Our experimental results show that the proposed method greatly improves the performance of subsequence matching under time warping. In particular, Naive-Scan, previously known to perform worst, outperforms both LB-Scan and ST-Filter in all cases once the proposed method is employed for CPU processing. This result is interesting and valuable in that the performance inversion among Naive-Scan, LB-Scan, and ST-Filter arises from optimizing the CPU processing step, which is their common performance bottleneck.
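For context, the "time warping" distance the abstract refers to is commonly computed with the textbook O(nm) dynamic-programming formulation sketched below. This is a minimal illustrative sketch only, not the paper's optimized Naive-Scan method; the function name, cost function, and variable names are assumptions.

```python
def dtw_distance(x, y):
    """Illustrative Dynamic Time Warping (DTW) distance between two numeric
    sequences of possibly different lengths, using squared point-wise cost.
    This is the classic O(n*m) recurrence, not the paper's optimized method."""
    n, m = len(x), len(y)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning x[:i] with y[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # A point may match one-to-one, or "warp" by stretching either side,
            # which is what lets sequences of different lengths be compared.
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch y
                                  dp[i][j - 1],      # stretch x
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]
```

Because the recurrence reuses the cumulative-cost cells of shorter prefixes, overlapping subsequence queries recompute many of the same cells; eliminating such redundant calculations is the kind of optimization the paper targets.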
Original language | English |
---|---|
Pages | 581-586 |
Number of pages | 6 |
DOIs | |
State | Published - 2005 |
Event | 20th Annual ACM Symposium on Applied Computing - Santa Fe, NM, United States |
Duration | 13 Mar 2005 → 17 Mar 2005 |
Conference
Conference | 20th Annual ACM Symposium on Applied Computing |
---|---|
Country/Territory | United States |
City | Santa Fe, NM |
Period | 13/03/05 → 17/03/05 |
Keywords
- Subsequence matching
- Time warping
- Time-series databases