Kishan Manani - Feature Engineering for Time Series Forecasting | PyData London 2022

Kishan Manani presents:

Feature Engineering for Time Series Forecasting

To use our favourite supervised learning models for time series forecasting we first have to convert time series data into a tabular dataset of features and a target variable. In this talk we’ll discuss all the tips, tricks, and pitfalls in transforming time series data into tabular data for forecasting.

PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.

PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
Comments

Only a master can explain complex topics simply and visually! Thanks! The best forecasting presentation!

calebterrelorellana

Thank you, thank you. This information is skipped in most machine learning courses, and no one will teach you this. In practice, a lot of data has a temporal nature, yet all along you only learned how to classify cats and dogs and regress house prices.

ninjaturtle

Thank you. This was 43 minutes very well spent.

lashlarue

Genius. He makes Python and time series almost easy to understand.

julien

Amazing dump of knowledge. I have come back to this video multiple times.

anirudhsharma

This is by far one of the best, most wholesome videos on time series forecasting!!! Loved it.

HEYTHERE-kowe

Very good talk. The presenter is a great teacher!

youknowmyname

This is a truly useful session. Thank you for sharing the knowledge!

olegkazanskyi

Excellent presentation. Great work Kishan

ChandanNayak-is

dude is a PhD for a reason, awesome stuff god damn

蔡传泽

finally, someone can articulate this topic well...

zakkyang

Great presentation! Interesting and clear.

duscio

I will check out these libraries. Very informative, thanks.

vivek

Great talk, hope we'll get more content like this on practical TS.

onuragmaji

*Abstract*

This talk explores how to adapt machine learning models for time
series forecasting by transforming time series data into tabular
datasets with features and target variables. Kishan Manani discusses
the advantages of using machine learning for forecasting, including
its ability to handle complex data structures and incorporate
exogenous variables. He then dives into the specifics of feature
engineering for time series, covering topics like lag features, window
features, and static features. The talk emphasizes the importance of
avoiding data leakage and highlights the differences between machine
learning workflows for classification/regression and forecasting
tasks. Finally, Manani introduces useful libraries like Darts and
sktime that facilitate time series forecasting with tabular data and
provides practical examples.

*Summary*
*Why use machine learning for forecasting? (**1:25**)*
- Machine learning models can learn across many related time series.
- They can effectively incorporate exogenous variables.
- They offer access to techniques like sample weights and custom loss functions.
*Don't neglect simple baselines though! (**3:45**)*
- Simple statistical models can be surprisingly effective (see the sketch after this list).
- Ensure the uplift from machine learning justifies the added complexity.
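One common simple baseline is the seasonal naive forecast, which just repeats the value from one season earlier. A minimal pandas sketch (the daily series and weekly seasonal period are illustrative assumptions, not taken from the talk):

```python
import pandas as pd

# Hypothetical daily series with weekly seasonality.
y = pd.Series(
    [20, 22, 25, 24, 30, 45, 50] * 10,
    index=pd.date_range("2022-01-01", periods=70, freq="D"),
)

# Seasonal naive baseline: forecast each day with the value observed
# one season (7 days) earlier.
seasonal_naive = y.shift(7)

# In-sample MAE of the baseline (the first week is NaN and is skipped).
mae = (y - seasonal_naive).abs().mean()
print(f"Seasonal naive MAE: {mae:.2f}")
```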
*Forecasting with machine learning (**4:15**)*
- Convert time series data into a table with features and a target variable.
- Use past values of the target variable as features, ensuring no data leakage from the future (see the pandas sketch after this list).
- Include features with known past and future values (e.g., marketing spend).
- Handle features with only past values (e.g., weather) by using alternative forecasts or lagged versions.
- Consider static features (metadata) to capture differences between groups of time series.
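A minimal pandas sketch of this tabularization; the column names (store, date, sales, marketing_spend) are hypothetical, with marketing_spend standing in for a feature whose future values are known:

```python
import pandas as pd

# Hypothetical long-format data: one row per store per day.
df = pd.DataFrame({
    "store": ["A"] * 5 + ["B"] * 5,
    "date": list(pd.date_range("2022-01-01", periods=5, freq="D")) * 2,
    "sales": [10, 12, 11, 13, 14, 20, 21, 19, 22, 23],
    "marketing_spend": [1, 0, 2, 1, 0, 3, 2, 2, 1, 0],
}).sort_values(["store", "date"])

# Lag features on the target: shift *within each store* so the row for
# day t only sees sales up to day t-1 (no leakage from the future).
for lag in (1, 2):
    df[f"sales_lag_{lag}"] = df.groupby("store")["sales"].shift(lag)

# marketing_spend is assumed known in advance, so its value at the
# forecast time can be used directly as a feature.
feature_cols = ["sales_lag_1", "sales_lag_2", "marketing_spend"]
tabular = df.dropna(subset=feature_cols)  # drop rows without full history
X, y = tabular[feature_cols], tabular["sales"]
print(X.head())
```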
*Multi-step forecasting (**8:07**)*
- Direct forecasting: Train separate models for each forecast step.
- Recursive forecasting: Train a one-step ahead model and use it repeatedly, plugging forecasts back into the target series.
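A rough sketch of the recursive strategy with scikit-learn, using a deliberately tiny model whose only features are the last two lags of the target (illustrative, not the exact setup from the talk):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy series; one-step-ahead model with lags 1 and 2 as features.
y = np.array([10.0, 12, 11, 13, 14, 15, 16, 18, 17, 19])
X_train = np.column_stack([y[1:-1], y[:-2]])  # columns: lag_1, lag_2
y_train = y[2:]
model = LinearRegression().fit(X_train, y_train)

# Recursive forecasting: predict one step, append the prediction to the
# history, and reuse it to build the lag features for the next step.
history = list(y)
forecasts = []
for _ in range(3):  # 3-step-ahead forecast
    x_next = np.array([[history[-1], history[-2]]])  # lag_1, lag_2
    y_hat = model.predict(x_next)[0]
    forecasts.append(y_hat)
    history.append(y_hat)

print(forecasts)
```

The direct strategy would instead fit one such model per horizon step, each trained on the target shifted by that step.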
*Cross-validation: Tabular vs Time series (**11:32**)*
- Randomly splitting data is inappropriate for time series due to temporal dependence.
- Split data by time, replicating the forecasting process for accurate performance evaluation.
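One way to replicate this with scikit-learn is TimeSeriesSplit, which produces expanding-window splits where every test fold comes strictly after its training fold (toy data, just to show the split boundaries):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Toy feature matrix and target, already ordered by time.
X = np.arange(20).reshape(-1, 1)
y = np.arange(20)

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    # Train on an initial segment, test on the segment right after it,
    # mimicking how forecasts are produced in practice.
    print(f"fold {fold}: train up to t={train_idx[-1]}, "
          f"test t={test_idx[0]}..{test_idx[-1]}")
```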
*Machine learning workflow (**13:00**)*
- Time series forecasting workflow differs significantly from classification/regression tasks.
- Feature engineering and handling at predict time vary depending on the multi-step forecasting approach.
*Feature engineering for time series forecasting (**14:47**)*
- Lag features: Use past values of target and features, including seasonal lags.
- Window features: Compute summary statistics (e.g., mean, standard deviation) over past windows (see the sketch after this list).
- Nested window features: Capture differences in various time scales.
- Static features: Encode categorical metadata using target encoding, being mindful of potential target leakage.
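A short pandas sketch of lag and window features on a single series; shifting before rolling keeps each window strictly in the past, so the forecast day never leaks into its own features (names and window lengths are illustrative):

```python
import pandas as pd

# Hypothetical daily target series.
y = pd.Series(
    [10, 12, 11, 13, 14, 15, 16, 18, 17, 19, 21, 20, 22, 23],
    index=pd.date_range("2022-01-01", periods=14, freq="D"),
    name="sales",
)

features = pd.DataFrame(index=y.index)
features["lag_1"] = y.shift(1)    # yesterday's value
features["lag_7"] = y.shift(7)    # seasonal (weekly) lag
# Shift first, then roll: the 7-day window ends at t-1, never at t.
features["roll_mean_7"] = y.shift(1).rolling(7).mean()
features["roll_std_7"] = y.shift(1).rolling(7).std()

print(features.tail())
```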
*Overview of some useful libraries (**27:01**)*
- tsfresh: Creates numerous time series features from a data frame (see the sketch after this list).
- Darts and sktime: Facilitate forecasting with tabular data and offer functionalities like recursive forecasting and time series cross-validation.
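For tsfresh, the usual entry point is extract_features on a long-format data frame; a minimal sketch with placeholder column names:

```python
import pandas as pd
from tsfresh import extract_features

# Long-format input: one row per (series id, timestamp, value).
df = pd.DataFrame({
    "id": ["A"] * 10 + ["B"] * 10,
    "time": list(range(10)) * 2,
    "value": [1, 2, 3, 2, 4, 3, 5, 4, 6, 5,
              2, 1, 3, 2, 4, 3, 5, 6, 5, 7],
})

# Computes a large battery of summary features per series id.
X = extract_features(df, column_id="id", column_sort="time")
print(X.shape)
```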
*Forecasting with tabular data using Darts (**28:04**)*
- Example demonstrates forecasting with lag features and future known features on single and multiple time series.
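A rough sketch of what that can look like in Darts (assuming a recent Darts version; the data, model choice, and lag settings are illustrative rather than the exact notebook from the talk):

```python
import pandas as pd
from darts import TimeSeries
from darts.models import LinearRegressionModel

# Hypothetical daily target and a future-known covariate (planned promo).
dates = pd.date_range("2022-01-01", periods=100, freq="D")
cov_dates = pd.date_range("2022-01-01", periods=107, freq="D")  # extends past the target
target_df = pd.DataFrame({"date": dates, "sales": [float(i) for i in range(100)]})
promo_df = pd.DataFrame({"date": cov_dates, "promo": [float(i % 7 == 0) for i in range(107)]})

series = TimeSeries.from_dataframe(target_df, time_col="date", value_cols="sales")
promo = TimeSeries.from_dataframe(promo_df, time_col="date", value_cols="promo")

# Regression-based forecaster: lag features on the target plus the
# value of the future-known covariate at the forecast time.
model = LinearRegressionModel(lags=14, lags_future_covariates=[0])
model.fit(series, future_covariates=promo)
forecast = model.predict(n=7, future_covariates=promo)
print(forecast.values().ravel())
```

The same fit/predict pattern extends to learning across many related series by passing a list of TimeSeries objects to fit.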


Disclaimer: I used Gemini 1.5 Pro to summarize the YouTube transcript.

wolpumba

Excellent and very informative presentation. Will definitely check out Darts and sktime.

satyakiray

Super helpful presentation, thank you, will definitely be checking out your course!

kaidendubois

Great talk! How would you account for availability in your model? For example, let's say a SKU was out of stock for a portion of the training period. This could result in the sales lag feature being low for the out-of-stock SKU and high for substitute SKUs that were in stock.

Neilstube

Awesome lecture! I just have one question: at @32:38, Kishan mentions that the time indexes for different groups can be different, which is fine. But the original consolidated data (all groups included) has continuous time stamps, whereas when we consider individual groups there may be gaps in the time stamps. Would you still consider them time series? Will the rest of the process work normally under these circumstances?

onlineschoolofmath

I have a question. I have time series data for a market from 2012 to 2022, and I need to forecast the number of customers that visit the store.
But from 2020 to 2022, because of COVID-19, the number of customers dropped a lot.
In this case, if I use the last 30% of the data (from 2019 to 2022) for testing, the model can't see any data influenced by COVID-19 during training (all of it is used for testing).
Doesn't that make the forecast MAPE very high? What should I do in this case? (Sorry for my poor English.)

mingilin