Abstract:
To address the insufficient modeling of inter-variable dependencies and poor adaptation to spatiotemporal dynamics in multivariate time series forecasting, this paper proposes STARFormer, a Transformer network that incorporates spatiotemporal dimension reconstruction. The approach uses a segmented encoding mechanism that transforms one-dimensional temporal sequences into 2D vector matrices through dimension inversion. A dual-phase attention architecture hierarchically captures cross-temporal and cross-dimensional dependencies, strengthening temporal representation learning. In addition, a dynamic graph module models the evolving dependencies between temporal patterns and spatial structures. Experiments on five real-world datasets show that STARFormer outperforms state-of-the-art Transformer-based models on multivariate forecasting tasks.
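The segmented encoding and dual-phase attention described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the segment length, the single-head unprojected attention, and all function names here are illustrative assumptions; it only shows the data flow of reshaping each variable's series into segments (dimension inversion), then attending first across time segments per variable and then across variables per segment position.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: [n_tokens, d]; single-head scaled dot-product self-attention
    # (no learned Q/K/V projections, for illustration only)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def two_phase_attention(series, seg_len):
    # series: [T, C] multivariate series; seg_len is a hypothetical choice
    T, C = series.shape
    n_seg = T // seg_len
    # "dimension inversion": each variable becomes n_seg tokens of
    # dimension seg_len -> [C, n_seg, seg_len]
    segs = series[:n_seg * seg_len].reshape(n_seg, seg_len, C).transpose(2, 0, 1)
    # phase 1: cross-temporal attention, independently per variable
    cross_time = np.stack([self_attention(segs[c]) for c in range(C)])
    # phase 2: cross-dimensional attention, independently per segment position
    cross_dim = np.stack([self_attention(cross_time[:, s, :]) for s in range(n_seg)])
    return cross_dim.transpose(1, 0, 2)  # back to [C, n_seg, seg_len]

out = two_phase_attention(np.random.randn(96, 7), seg_len=12)
print(out.shape)  # (7, 8, 12): 7 variables, 8 segments, segment length 12
```

In a full model each phase would use learned projections and multi-head attention, and the dynamic graph module would modulate the cross-dimensional phase; this sketch only fixes the order of the two attention passes over the reconstructed 2D representation.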