In life, time is of the essence.
That is especially true in business, where every organization must forecast sales, demand, revenue, and capacity requirements. Accurate, reliable forecasts could help organizations save (and earn) billions of dollars.
Time series forecasting is the bread and butter of business planning. It involves predicting future values from past observations collected at regular intervals, whether daily, monthly, quarterly, or annually.
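To make that concrete, here is a minimal Python sketch of one of the simplest baselines, a seasonal naive forecast that predicts each future month using the value from the same month a year earlier; the monthly figures are toy data chosen purely for illustration.

```python
import numpy as np

# Two years of toy monthly observations.
sales = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
                  115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140])

def seasonal_naive(series, season=12, horizon=6):
    """Forecast each future step as the value observed one season earlier."""
    return np.array([series[-season + (h % season)] for h in range(horizon)])

print(seasonal_naive(sales))  # [115 126 141 135 125 149]
```

Foundation models aim to beat simple baselines like this one by learning richer temporal patterns across many such series.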
Artificial intelligence is expected to accelerate and fine-tune business planning with new, faster, and smaller foundation models designed for multivariate time series forecasting. These models do not need to be massive to deliver results. Small time series foundation models, and other compact foundation models trained on high-quality curated data, are more energy efficient and can achieve the same or better results.
How can time series AI models predict the future?
Time series models can be built from scratch or adapted from existing pretrained models, and they are best suited to predicting outcomes from time series data. By comparison, large AI language models calculate relationships between words to identify patterns in data that can be projected forward to support better decisions.
Foundation time series models look for patterns in historical observations to “understand” a temporal process. These abstract representations are what allow the models to solve predictive tasks. The longer the time series, the better the forecast.
However, this kind of data raises complications that words, code, and pixels do not. First, time series data is typically continuous: think of video streaming from a self-driving car, temperature readings from a reactor, or heart rate data from a smartwatch. There is a great deal of data to process, and its sequential order and directionality must be strictly preserved.
Time series data varies widely, from stock prices and satellite images to brain waves and light curves from distant stars. Compressing disparate observations into an abstract representation is an enormous challenge.
Furthermore, the different series within a data set are often highly correlated. In the real world, complex events arise from multiple factors. For example, air temperature, pressure, and humidity interact strongly to influence climate. To predict a hurricane, you need to know how these variables influenced each other in the past to understand how the future might unfold. The calculations across correlated channels can quickly become overwhelming as the number of variables increases, especially over a long historical record.
The further back you go, the more complex these calculations become, especially if your target variable is influenced by other factors. Sales of home heaters, for example, may be tied to particular climatic or economic conditions. The more variables interact in a time series data set, the harder it is to isolate the signal that portends the future.
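To see how quickly these interactions pile up, the short pandas sketch below computes instantaneous and lagged correlations across three synthetic weather channels; the data and coupling coefficients are invented purely for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000  # hourly observations

# Synthetic, loosely coupled weather channels (illustrative only).
temperature = 20 + 5 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 1, n)
pressure = 1013 - 0.3 * temperature + rng.normal(0, 1, n)
humidity = 70 - 0.8 * temperature + rng.normal(0, 2, n)

df = pd.DataFrame({"temperature": temperature,
                   "pressure": pressure,
                   "humidity": humidity})

# Instantaneous correlations between every pair of channels...
print(df.corr().round(2))

# ...and a lagged correlation: does temperature 24 hours ago
# tell us anything about humidity now?
print(round(df["temperature"].shift(24).corr(df["humidity"]), 2))
```

With three channels there are only a handful of pairs to check, but the number of pairwise, lagged relationships grows quadratically with the channel count, which is exactly the burden described above.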
Breaking barriers in time series forecasting
Foundation AI models designed for time series forecasting can be difficult to build. The scale and complexity of multi-channel data sources, along with external variables, pose significant architectural challenges and non-trivial computational demands, making it difficult to train and update models with reasonable accuracy and the desired forecast window in a timely manner. Today, many foundation models fail to capture the trends revealed by rapidly evolving data patterns, a capability known as “temporal adaptation.” Existing foundation time series models, such as MOIRAI, TimesFM, and Chronos, rely on hundreds of millions of parameters that demand significant computational resources and runtime.
The next wave of innovation
Researchers and practitioners are working on new ways to overcome these obstacles and unlock the full potential of AI for time series prediction. Can smaller models, pretrained exclusively on limited but diverse public time series data sets, deliver greater forecast accuracy? It turns out the answer is yes!
Experimentation with “small” foundation models, with far fewer than 1 billion parameters, is already underway. Smaller models for time series forecasting (1 million to 3 million parameters) can offer significant computational efficiency while achieving state-of-the-art results in zero-shot or few-shot forecasting, in which models generate forecasts for previously unseen data sets. They can also support exogenous and cross-channel variables, critical features that existing popular methods lack.
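For a sense of what that parameter budget means, the PyTorch sketch below defines a toy multivariate forecaster and counts its weights; it is not any published architecture, just a back-of-the-envelope illustration of a model in the 1M-3M range.

```python
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """A deliberately small multivariate forecaster: maps a context window
    of `context_len` steps x `n_channels` channels to a `horizon`-step
    forecast for each channel."""
    def __init__(self, context_len=512, horizon=96, n_channels=7, hidden=256):
        super().__init__()
        self.horizon, self.n_channels = horizon, n_channels
        self.backbone = nn.Sequential(
            nn.Flatten(),                                    # (B, L*C)
            nn.Linear(context_len * n_channels, hidden),
            nn.GELU(),
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, horizon * n_channels),
        )

    def forward(self, x):                                    # x: (B, L, C)
        out = self.backbone(x)
        return out.view(-1, self.horizon, self.n_channels)   # (B, H, C)

model = TinyForecaster()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")  # ~1.16M for these settings
```

For comparison, the hundreds-of-millions-of-parameter models mentioned above are roughly two orders of magnitude larger.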
These fast, tiny pretrained general-purpose AI models can be quickly put to work on use cases like forecasting electricity demand. They are also flexible enough to extend to other time series tasks beyond forecasting. In anomaly detection, for example, these small models can be trained on data sets that include both anomalous and regular patterns, allowing them to learn the characteristics of anomalies and detect deviations from normal behavior.
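A common, generic way to turn a forecaster into an anomaly detector is to flag time steps where the forecast error is statistically unusual; the residual z-score check below is a minimal sketch of that idea, not a description of any specific product.

```python
import numpy as np

def flag_anomalies(actual: np.ndarray, predicted: np.ndarray,
                   z_thresh: float = 3.0) -> np.ndarray:
    """Flag time steps where the forecast error is unusually large.

    Assumes `predicted` comes from a model that has learned normal
    behavior, so large residuals suggest deviations from that norm.
    """
    residuals = actual - predicted
    z_scores = (residuals - residuals.mean()) / (residuals.std() + 1e-9)
    return np.abs(z_scores) > z_thresh

# Example: a single injected spike stands out against accurate forecasts.
actual = np.array([1.0, 1.1, 0.9, 5.0, 1.0])
predicted = np.ones(5)
print(flag_anomalies(actual, predicted, z_thresh=1.5))
# [False False False  True False]
```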
Increasingly, we see that these small models, combined with enterprise data, can have a big impact, delivering task-specific performance that rivals large models at a fraction of the cost. They are poised to become the “workhorses” of enterprise AI.
In the coming years, AI is expected to help drive a radical transformation of the business landscape. While most of the world’s public data already fuels today’s models, the vast majority of enterprise data remains untapped. Small, fast foundation models, with their flexibility, low development costs, and wide-ranging applications, are poised to play an important role in this shift.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in today’s tech industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.