Kaggle forecasting
Two Kaggle forecasting competitions
1. https://www.kaggle.com/c/web-traffic-time-series-forecasting/overview
GitHub: https://github.com/sjvasquez/web-traffic-forecasting
- There are two main information sources for prediction: A) year/quarter seasonality; B) past trend. A good model should use both sources and combine them intelligently.
- Minimal feature engineering. The deep learning model is powerful enough to discover and use features on its own. My task is just to help the model use the incoming data in a meaningful way.
I'll describe the implementation problems I encountered and their solutions.
1. Learning takes too much time.
RNNs are inherently sequential and hard to parallelize. Today's most efficient RNN implementation is the CuDNN fused kernels, created by NVIDIA experts. TensorFlow by default uses its own generic but slow sequential RNNCell. Surprisingly, TF also has support for the CuDNN kernels (hard to find in the documentation and poorly described). I spent some time figuring out how to use the classes in the tf.contrib.cudnn_rnn module and got an amazing result: a ~10x decrease in computation time! I also used GRU instead of the classical LSTM: it gives better results and computes ~1.5x faster. Of course, CuDNN can be used only for the encoder. In the decoder, each next step depends on customized processing of the outputs from the previous step, so the decoder uses TensorFlow's GRUBlockCell, which is again slightly faster (~1.2x) than the standard GRUCell.
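A minimal sketch of this encoder/decoder split, assuming TensorFlow 1.x (where tf.contrib.cudnn_rnn and tf.contrib.rnn.GRUBlockCell are available); the feature count and unit count here are illustrative, not the actual model configuration:

```python
import tensorflow as tf

# Encoder: CuDNN fused GRU kernel (expects time-major input [time, batch, features]).
encoder_inputs = tf.placeholder(tf.float32, shape=[None, None, 32])  # 32 input features, illustrative
cudnn_gru = tf.contrib.cudnn_rnn.CudnnGRU(num_layers=1, num_units=256)
encoder_outputs, (encoder_state,) = cudnn_gru(encoder_inputs)

# Decoder: stepwise cell, because each step consumes processed outputs of the previous step.
# GRUBlockCell is a faster drop-in replacement for the standard GRUCell.
decoder_cell = tf.contrib.rnn.GRUBlockCell(256)
```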
2. Long short-term memory is not so long.
The practical memory limit for LSTM-type cells is 100-200 steps. If we use a longer sequence, the LSTM/GRU just forgets what was at the beginning. But to exploit yearly seasonality, we should use at least 365 steps. The conventional method to overcome this memory limit is attention: we can take encoder outputs from the distant past and feed them as inputs into the current decoder step. My first, very basic positional attention model: take the encoder outputs from steps current_day - 365 (year seasonality) and current_day - 92 (quarter seasonality), squeeze them through a FF layer (to reduce dimensionality and extract useful features), concatenate them, and feed them into the decoder. To compensate for random walk noise and deviations in year and quarter lengths (leap/non-leap years, different numbers of days in months), I take a weighted average (in proportion 0.25:0.5:0.25) of the 3 encoder outputs around the chosen step. Then I realized that 0.25:0.5:0.25 is just a 1D convolution kernel of size 3, and the model can learn the most effective kernel weights and attention offsets on its own. This learnable convolutional attention significantly improved the model's results.
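A minimal sketch of this fixed-offset attention with a learnable size-3 smoothing kernel, assuming the encoder outputs are a [batch, time, hidden] tensor and pos is the absolute index of the current decoder step on the encoder timeline; in the real model the offsets are also learnable, here they are kept fixed for brevity, and all names are illustrative:

```python
import tensorflow as tf

def positional_attention(encoder_outputs, pos, offsets=(365, 92), attn_depth=64):
    """Attend to encoder outputs at fixed lags behind the current decoder step."""
    features = []
    for lag in offsets:
        center = pos - lag
        # 3 encoder outputs around the lagged step: [batch, 3, hidden]
        window = encoder_outputs[:, center - 1:center + 2, :]
        # learnable smoothing kernel, initialized to the hand-picked 0.25:0.5:0.25
        kernel = tf.get_variable("attn_kernel_%d" % lag, shape=[3],
                                 initializer=tf.constant_initializer([0.25, 0.5, 0.25]))
        kernel = tf.nn.softmax(kernel)  # keep the weights normalized
        smoothed = tf.reduce_sum(window * kernel[None, :, None], axis=1)  # [batch, hidden]
        # squeeze through a FF layer to reduce dimensionality / extract features
        features.append(tf.layers.dense(smoothed, attn_depth, activation=tf.nn.relu))
    # the concatenated features are appended to the decoder inputs for this step
    return tf.concat(features, axis=-1)
```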
But what if we just use lagged pageviews (year or quarter lag) as additional input features? Can lagged pageviews supplement or even replace attention? Yes, they can. When I added 4 additional features (3, 6, 9 and 12 months lagged pageviews) to the inputs, I got roughly the same improvement as from attention.
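Such lag features can be built with simple shifts; the exact lag lengths used here (91/182/273/365 days as approximate 3/6/9/12-month lags) are an assumption for illustration:

```python
import pandas as pd

def add_lag_features(series, lags_days=(91, 182, 273, 365)):
    """series: daily pageviews for one page as a pd.Series indexed by date.
    Returns a DataFrame with the raw series plus lagged copies as extra columns."""
    out = pd.DataFrame({"pageviews": series})
    for lag in lags_days:
        out["lag_%dd" % lag] = series.shift(lag)  # value from `lag` days ago
    return out
```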
3. Overfitting.
I decided to limit the number of days used for training to 100..400 and use the remaining days to generate different samples for training. Example: if we have 500 days of data and use a 200-day window for training and 60 days for prediction, then the first 240 days are 'free space' in which to randomly choose a starting day for training. Each starting day produces a different time series. 145K pages x 250 starting days = 36.25M unique time series, not bad! For stage 2, this number is even higher. This is an effective kind of data augmentation: models using a random starting point show very little overfitting, even without any regularization. With dropout and slight L2 regularization, overfitting is almost non-existent.
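A minimal sketch of this random-start augmentation, assuming the data is a [n_pages, n_days] array and using the 200/60-day split from the example above (names are illustrative):

```python
import numpy as np

def sample_window(series, train_len=200, predict_len=60):
    """series: [n_pages, n_days] array of pageviews. Returns one random
    training window and its prediction target."""
    n_days = series.shape[1]
    free_space = n_days - train_len - predict_len       # e.g. 500 - 200 - 60 = 240
    start = np.random.randint(0, free_space + 1)        # random starting day
    x = series[:, start:start + train_len]              # encoder input
    y = series[:, start + train_len:start + train_len + predict_len]  # prediction target
    return x, y
```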
4. How can the model decide what to use: seasonality, past trend, or both?
Autocorrelation to the rescue. It turned out to be a very important input feature. If the year-to-year (lag 365) autocorrelation is positive and high, the model should use mostly year-to-year seasonality; if it's low or negative, the model should use mostly past trend information (or quarter seasonality, if that is high). An RNN can't compute autocorrelation on its own (this would require an additional pass over all steps), so this is the only hand-crafted input feature in my models. It's important not to include leading/trailing zeros/NaNs in the autocorrelation calculation (the page either doesn't exist yet on the leading-zero days or was deleted on the trailing-zero days).
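A minimal sketch of such an autocorrelation feature, assuming the series is a 1-D array of daily pageviews with NaNs for missing days; the trimming of leading/trailing zeros follows the note above:

```python
import numpy as np

def lag_autocorr(series, lag=365):
    """Lag autocorrelation of a daily pageview series, ignoring leading/trailing zeros and NaNs."""
    s = np.asarray(series, dtype=np.float64)
    valid = np.where(~np.isnan(s) & (s != 0))[0]
    if len(valid) == 0:
        return 0.0
    s = s[valid[0]:valid[-1] + 1]             # trim leading/trailing zeros and NaNs
    if len(s) <= lag:
        return 0.0                            # not enough history for this lag
    a, b = s[lag:], s[:-lag]
    mask = ~np.isnan(a) & ~np.isnan(b)        # skip interior gaps
    a, b = a[mask], b[mask]
    if len(a) < 2 or a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])
```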
5. High variance
I used the following variance reduction methods:
- SGD weight averaging, decay=0.99 (see the sketch after this list). It didn't really reduce the observable variance, but it improved prediction quality by ~0.2 SMAPE points.
- Checkpoints were created every 100 training steps, and the prediction results of the models at the 10 last checkpoints were averaged.
- The same model was trained with 3 different random seeds, and the prediction results were averaged. Again, this slightly improved prediction quality.
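A minimal sketch of the weight averaging, assuming TensorFlow 1.x; the toy variable and loss stand in for the real network:

```python
import tensorflow as tf

# Toy model: a single trainable variable and loss, standing in for the real network.
w = tf.get_variable("w", shape=[10], initializer=tf.zeros_initializer())
loss = tf.reduce_sum(tf.square(w - 1.0))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# Keep an exponential moving average of the weights, updated after every training step.
ema = tf.train.ExponentialMovingAverage(decay=0.99)
with tf.control_dependencies([train_op]):
    train_op = ema.apply(tf.trainable_variables())

# At prediction time, restore the averaged (shadow) weights instead of the raw ones.
saver = tf.train.Saver(ema.variables_to_restore())
```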
The prediction quality (predicting the last 60 days) of my models on Stage 2 data was ~35.2-35.5 SMAPE if the autocorrelation was calculated over all available data (including the prediction interval) and ~36 SMAPE if the autocorrelation was calculated on all data excluding the prediction interval. Let's see if the model holds the same quality on future data.
Tips from the winning solutions
Congratulations to "all winners" (including the organizers)! Thank you so much for creating, maintaining, competing, and sharing your solutions! Let me summarize some things I learned from the top solutions:
- Use medians as features.
- Use log1p to transform the data, and MAE as the evaluation metric (see the sketch after this list).
- XGBoost and deep learning models such as MLP, CNN, and RNN all work. However, the performance hugely depends on how we create and train the models.
- For these deep learning models, skip connections work.
- Best trick to me: cluster the time series based on the performance of the best model, then train different models for each cluster.
- The period of stage 2 is easier to predict than the period of stage 1. This affects how we choose our best model (should it capture the weird behavior of stage 1 or not?).
- Don't wait until the last hour to submit models. For me, I overslept so I couldn't submit my best model =o= that model might have given me a gold (it boosts my CV by a margin of 0.5) :D
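A tiny illustration of the log1p/MAE tip (not any particular winner's code): train and evaluate in log1p space, then map predictions back with expm1:

```python
import numpy as np

def mae_log1p(y_true, y_pred_log):
    """MAE in log1p space: y_true is raw pageviews, y_pred_log is the model's log-space prediction."""
    return np.mean(np.abs(np.log1p(y_true) - y_pred_log))

def to_pageviews(y_pred_log):
    """Map log-space predictions back to the original pageview scale."""
    return np.expm1(y_pred_log)
```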
Various solutions (including 1st, 3rd, 4th,... places): https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/39367
2nd place solution: https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/39395
6th place:
https://github.com/sjvasquez/web-traffic-forecasting
2. Corporación Favorita Grocery Sales Forecasting