Kaggle Pipelines
# Most scikit-learn objects are either transformers or models.
# Transformers are for pre-processing before modeling. The SimpleImputer class (for filling in missing values) is an example of a transformer.
# Over time, you will learn many more transformers, and you will frequently use multiple transformers sequentially.
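# A minimal sketch of using a transformer on its own; the toy frame below is made up just for illustration.
import pandas as pd
from sklearn.impute import SimpleImputer   # SimpleImputer is the current replacement for the old Imputer class

toy_X = pd.DataFrame({'Rooms': [2, 3, None], 'Landsize': [150.0, 200.0, 120.0]})
my_imputer = SimpleImputer()               # fills missing values with the column mean by default
my_imputer.fit(toy_X)                      # learn the fill-in values from the data
filled_X = my_imputer.transform(toy_X)     # apply them; returns an array with no missing values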
# Models are used to make predictions. You will usually preprocess your data (with transformers) before putting it in a model.
# You can tell if an object is a transformer or a model by how you apply it. After fitting a transformer, you apply it with the
# transform command. After fitting a model, you apply it with the predict command.
# Your pipeline must start with transformer steps and end with a model. This is what you'd want anyway.
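# Continuing the sketch above (toy_y is a made-up target, only for illustration): a model is fit the same way,
# but it is applied with predict rather than transform.
from sklearn.ensemble import RandomForestRegressor

toy_y = [100000, 150000, 120000]
toy_model = RandomForestRegressor(n_estimators=10)
toy_model.fit(filled_X, toy_y)             # model: fit on features and target
toy_model.predict(filled_X)                # model: apply with predict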
# Eventually you will want to apply more transformers and combine them more flexibly. We will cover this later in an
# Advanced Pipelines tutorial.
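# As a small taste of that (just a sketch, not the full treatment): make_pipeline chains any number of transformers
# ahead of the final model, for example imputation followed by feature scaling.
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

longer_pipeline = make_pipeline(SimpleImputer(), StandardScaler(), RandomForestRegressor())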
import pandas as pd
from sklearn.model_selection import train_test_split

# Read Data
data = pd.read_csv('../input/melb_data.csv')
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
y = data.Price
train_X, test_X, train_y, test_y = train_test_split(X, y)

from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer   # Imputer was removed from sklearn.preprocessing; SimpleImputer replaces it

# Bundle imputation and the model into one object: fit and predict run both steps in order
my_pipeline = make_pipeline(SimpleImputer(), RandomForestRegressor())
my_pipeline.fit(train_X, train_y)
predictions = my_pipeline.predict(test_X)
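# One way to sanity-check the fitted pipeline (an addition, not part of the original snippet) is to score
# the held-out predictions with mean absolute error:
from sklearn.metrics import mean_absolute_error

mae = mean_absolute_error(test_y, predictions)
print("Mean Absolute Error: {:,.0f}".format(mae))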