
Yan Liu
[intermediate] Time Series Foundation Models: From Forecasting to Reasoning
Summary
Time series data are ubiquitous across science, engineering, medicine, and finance. Traditionally, time series analysis has focused on forecasting, using statistical and machine learning approaches tailored to specific domains and datasets. Recently, Time Series Foundation Models (TSFMs), inspired by the success of large language models, have emerged as a unified framework for representation learning, forecasting, anomaly detection, causal analysis, and reasoning across diverse temporal domains.
This lecture introduces the foundations of time series modeling and traces the evolution toward large-scale, pre-trained time series foundation models. It covers key concepts such as temporal representations, self-supervised learning, scaling laws, and cross-task generalization. The lecture also explores how foundation models extend beyond forecasting to enable reasoning, interpretation, and decision support over temporal data.
Participants will gain both conceptual understanding and practical insights into the design, capabilities, and limitations of time series foundation models, as well as their emerging applications and open research challenges.
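To make the framing concrete, below is a minimal sketch of the zero-shot forecasting workflow that pre-trained TSFMs enable, using the publicly released Chronos model of reference [3]. The checkpoint name, synthetic series, and forecast horizon are illustrative assumptions rather than lecture material.

```python
# Minimal sketch: zero-shot forecasting with a pre-trained TSFM (Chronos, ref. [3]).
# Assumes `pip install chronos-forecasting torch`; model ID and data are illustrative.
import torch
from chronos import ChronosPipeline

# Load a small pre-trained checkpoint; no task-specific training is required.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# A toy univariate context: trend plus daily seasonality (stand-in for real data).
t = torch.arange(200, dtype=torch.float32)
context = 0.05 * t + torch.sin(2 * torch.pi * t / 24)

# Sample probabilistic forecasts for the next 24 steps, zero-shot.
forecast = pipeline.predict(context, prediction_length=24)  # (1, num_samples, 24)
low, median, high = torch.quantile(
    forecast[0], torch.tensor([0.1, 0.5, 0.9]), dim=0
)
print(median[:5])
```

The same pre-trained model can be pointed at series from entirely different domains, which is the cross-task, cross-domain generalization behavior the lecture examines.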
Syllabus
1. Introduction to Time Series Modeling
- Characteristics of time series data
- Classical tasks: forecasting, smoothing, anomaly detection
- Limitations of traditional approaches
2. Foundations of Time Series Forecasting
- Statistical models (ARIMA, state-space models); see the baseline sketch after this syllabus
- Machine learning and deep learning approaches
- Datasets, evaluation metrics and benchmarking challenges
3. Time Series Foundation Models
- Temporal embeddings and encodings
- Architectural paradigms
- Pre-training objectives and datasets
- Scaling laws and generalization behavior
4. From Forecasting to Reasoning
- Anomaly detection and pattern discovery
- Temporal reasoning and interpretability
- Integration with language and multimodal models
5. Applications and Case Studies
- Finance, healthcare, climate, and industrial systems
- Decision support and monitoring
- Deployment considerations
6. Challenges and Open Research Directions
- Data heterogeneity and non-stationarity
- Evaluation standards for foundation models
- Trust, robustness, and interpretability
- Future directions in time series intelligence
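As referenced in syllabus item 2, the sketch below shows the kind of classical statistical baseline against which foundation models are typically compared: an ARIMA fit with statsmodels. The synthetic AR(1) data, model order, and horizon are illustrative assumptions.

```python
# Minimal sketch: a classical ARIMA baseline (syllabus item 2), using statsmodels.
# Assumes `pip install statsmodels numpy`; the data below are synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Synthetic AR(1) series with drift, standing in for a real dataset.
n = 300
y = np.empty(n)
y[0] = 0.0
for i in range(1, n):
    y[i] = 0.02 + 0.8 * y[i - 1] + rng.normal(scale=0.1)

# Fit ARIMA(p=1, d=0, q=0) and forecast 12 steps ahead with intervals.
model = ARIMA(y, order=(1, 0, 0))
result = model.fit()
forecast = result.get_forecast(steps=12)
print(forecast.predicted_mean[:3])
print(forecast.conf_int(alpha=0.1)[:3])  # 90% prediction intervals
```

Unlike the zero-shot TSFM sketch above, this model is fit per series and per dataset, which illustrates the limitation of traditional approaches noted in syllabus item 1.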
References
[1] Taha Aksu, Gerald Woo, Juncheng Liu, Xu Liu, Chenghao Liu, Silvio Savarese, Caiming Xiong, and Doyen Sahoo. GIFT-Eval: A benchmark for general time series forecasting model evaluation. In Proceedings of the NeurIPS Workshop on Time Series in the Age of Large Models, 2024.
[2] Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. A decoder-only foundation model for time-series forecasting. In Forty-first International Conference on Machine Learning, 2024.
[3] Abdul Fatir Ansari, Lorenzo Stella, Ali Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, et al. Chronos: Learning the language of time series. Transactions on Machine Learning Research, 2024.
[4] Ching Chang, Yidan Shi, Defu Cao, Wei Yang, Jeehyun Hwang, Haixin Wang, Jiacheng Pang, Wei Wang, Yan Liu, Wen-Chih Peng, and Tien-Fu Chen. A survey of reasoning and agentic systems in time series with large language models. arXiv preprint, 2025.
[5] Defu Cao, Furong Jia, Sercan O Arik, Tomas Pfister, Yixiang Zheng, Wen Ye, and Yan Liu. TEMPO: Prompt-based generative pre-trained transformer for time series forecasting. In Twelfth International Conference on Learning Representations, 2024.
[6] Chengsen Wang, Qi Qi, Jingyu Wang, Haifeng Sun, Zirui Zhuang, Jinming Wu, Lei Zhang, and Jianxin Liao. ChatTime: A unified multimodal time series foundation model bridging numerical and textual data. In Proceedings of the AAAI Conference on Artificial Intelligence, 39: 12694–12702, 2025.
[7] Liangwei Nathan Zheng, Chang Dong, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, and Weitong Chen. Understanding why large language models can be ineffective in time series analysis: The impact of modality alignment. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Volume 2, pages 4026–4037, 2025.
[8] Matthew Faw, Rajat Sen, Yichen Zhou, and Abhimanyu Das. In-context fine-tuning for time-series foundation models. In Forty-second International Conference on Machine Learning, 2025.
[9] Mingtian Tan, Mike Merrill, Vinayak Gupta, Tim Althoff, and Tom Hartvigsen. Are language models actually useful for time series forecasting? Advances in Neural Information Processing Systems, 37: 60162–60191, 2024.
[10] Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew G. Wilson. Large language models are zero-shot time series forecasters. Advances in Neural Information Processing Systems, 36: 19622–19635, 2023.
[11] Wen Ye, Wei Yang, Defu Cao, Yizhou Zhang, Lumingyuan Tang, Jie Cai, and Yan Liu. Domain-oriented time series inference agents for reasoning and automated analysis. arXiv:2410.04047, 2024.
[12] Xinlei Wang, Maike Feng, Jing Qiu, Jinjin Gu, and Junhua Zhao. From news to forecast: Integrating event analysis in LLM-based time series forecasting with reflection. In Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[13] Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, and Mingsheng Long. Timer: Generative pre-trained transformers are large time series models. In Forty-first International Conference on Machine Learning, 2024.
Pre-requisites
Participants are expected to have a basic understanding of machine learning concepts, familiarity with probability and statistics, and prior exposure to time series data (helpful but not mandatory).
Short bio
Yan Liu is a Professor in the Computer Science Department and the Director of the Machine Learning Center at the University of Southern California. She received her Ph.D. degree from Carnegie Mellon University. Her research interests center on machine learning for time series and its applications to geoscience, health care, and sustainability. She has received several awards, including the NSF CAREER Award, the Okawa Foundation Research Award, selection to New Voices of the National Academies of Sciences, Engineering, and Medicine, and a Best Paper Award at the SIAM Data Mining Conference. She is a fellow of IEEE and AAAI. She has served as general co-chair for KDD 2020 and ICLR 2023, program co-chair for WSDM 2018, SDM 2020, KDD 2022, and ICLR 2022, and associate editor-in-chief for IEEE Transactions on Pattern Analysis and Machine Intelligence.