Abstract: Deep learning has emerged as an effective approach to time series prediction, owing to its ability to capture intricate relationships and patterns within time series data. The conventional methodology trains an individual neural network for each task and has led to significant advances in time series prediction. More recently, multi-task learning techniques have shown advantages in performance, computation, and memory usage by leveraging shared knowledge to jointly address multiple prediction tasks. To serve as a reference for researchers selecting deep neural network architectures, this article first provides an overview of deep models for time series prediction, encompassing convolutional neural networks, recurrent neural networks, attention mechanisms, and graph neural networks, together with discussions of datasets, model characteristics, and performance metrics. It then analyzes deep multi-task time series prediction models, categorizing them by their parameter-sharing methods and the locations at which parameters are shared (i.e., where tasks interact), and reviews common frameworks for multi-task time series prediction. Finally, it summarizes the challenges facing deep time series prediction and offers insights into future research directions.