1) Per-sample error: measures how accurate the model's prediction is on a single sample
Per-sample error = predicted value - actual value
2) The most commonly used metric: mean squared error (MSE)
Definition: the mean of the squared per-sample errors over all samples
Interpretation: the closer the MSE is to 0, the more accurate the model
3) A more interpretable metric: mean absolute error (MAE)
Definition: the mean of the absolute per-sample errors over all samples
Interpretation: the MAE has the same unit as the dependent variable; the closer it is to 0, the more accurate the model
4) A metric derived from the MAE: mean absolute percentage error (MAPE)
Definition: the mean, over all samples, of the absolute per-sample error divided by the actual value
Interpretation: the closer the metric is to 0, the more accurate the model
5) Model explanatory power: R squared (R², r2)
Definition: the proportion of the variance of the dependent variable that can be explained by the independent variables
Interpretation: the closer the metric is to 1, the better the independent variables explain the dependent variable
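The four definitions above can be written out directly in numpy, which makes the formulas concrete before switching to sklearn.metrics later on. The arrays here are made-up illustration values, not the sample data:

```python
import numpy as np

# Hypothetical actual and predicted values, just to illustrate the formulas
y_true = np.array([100.0, 200.0, 300.0, 400.0])
y_pred = np.array([110.0, 190.0, 330.0, 380.0])

err = y_pred - y_true                        # per-sample error
mse = np.mean(err ** 2)                      # mean squared error
mae = np.mean(np.abs(err))                   # mean absolute error
mape = np.mean(np.abs(err) / y_true)         # mean absolute percentage error
# R squared: 1 minus (residual sum of squares / total sum of squares)
r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)

print(mse, mae, mape, r2)
```

These hand-rolled values match what `mean_squared_error`, `mean_absolute_error`, and `r2_score` from sklearn.metrics return on the same arrays.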
1) Load the data
import pandas as pd
import matplotlib.pyplot as plt
import os
os.chdir(r'C:\Users\86177\Desktop')
# read the sample data
df = pd.read_excel('realestate_sample_preprocessed.xlsx')
# Based on the collinearity matrix, keep daytime population (most correlated
# with price) and convert night population and age-20-39 night population into a ratio
def age_percent(row):
    if row['nightpop'] == 0:
        return 0
    else:
        return row['night20-39']/row['nightpop']
df['per_a20_39'] = df.apply(age_percent, axis=1)
df = df.drop(columns=['nightpop', 'night20-39'])
# basic overview of the dataset
print(df.shape)
print(df.dtypes)
print(df.isnull().sum())
–> Output: (the data is loaded and the collinear columns are processed directly)
(898, 9)
id int64
complete_year int64
average_price float64
area float64
daypop float64
sub_kde float64
bus_kde float64
kind_kde float64
per_a20_39 float64
dtype: object
id 0
complete_year 0
average_price 0
area 0
daypop 0
sub_kde 0
bus_kde 0
kind_kde 0
per_a20_39 0
dtype: int64
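The row-wise `age_percent` apply above can also be written as a single vectorized expression with `np.where`, which is faster on large frames. A minimal sketch, using a small made-up frame with the same column names:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the sample data, same column names
df = pd.DataFrame({'nightpop': [100.0, 0.0, 50.0],
                   'night20-39': [40.0, 0.0, 10.0]})

# Vectorized equivalent of the row-wise age_percent apply:
# np.where guards the nightpop == 0 case just like the if/else branch
df['per_a20_39'] = np.where(df['nightpop'] == 0, 0,
                            df['night20-39'] / df['nightpop'])
print(df['per_a20_39'].tolist())
```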
2) Split the dataset
x = df[['complete_year','area', 'daypop', 'sub_kde',
'bus_kde', 'kind_kde','per_a20_39']]
y = df['average_price']
print(x.shape)
print(y.shape)
–> Output: (before building the model, split the dataset and check its dimensions)
(898, 7)
(898,)
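Note that this tutorial fits and evaluates on the full dataset; in practice a held-out test split is common so the metrics reflect generalization rather than training fit. A minimal sketch with scikit-learn's `train_test_split`, using a small hypothetical frame in place of the real x and y:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in: 10 rows, 2 feature columns
x = pd.DataFrame({'area': range(10), 'daypop': range(10)})
y = pd.Series(range(10), name='average_price')

# Hold out 20% of the samples for evaluation
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42)
print(x_train.shape, x_test.shape)  # (8, 2) (2, 2)
```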
3) Build the regression model
The Pipeline workflow introduced earlier can be used directly: every step to run is wrapped inside the Pipeline, namely:
- standardization: StandardScaler()
- skew correction: PowerTransformer()
- feature expansion: PolynomialFeatures(degree=3) — with roughly 1,000 samples and 7 features, degree=3 is chosen here
- for the linear regression, lasso is used: LassoCV(alphas=list(np.arange(8, 10) * 10)) automatically trains and selects the best alpha value for the model
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
# 构建模型工作流
pipe_lm = Pipeline([
('sc',StandardScaler()),
('power_trans',PowerTransformer()),
('polynom_trans',PolynomialFeatures(degree=3)),
('lasso_regr', LassoCV(alphas=(
list(np.arange(8, 10) * 10)
),
cv=KFold(n_splits=3, shuffle=True),
n_jobs=-1))
])
print(pipe_lm)
–> Output:
Pipeline(memory=None,
steps=[('sc',
StandardScaler(copy=True, with_mean=True, with_std=True)),
('power_trans',
PowerTransformer(copy=True, method='yeo-johnson',
standardize=True)),
('polynom_trans',
PolynomialFeatures(degree=3, include_bias=True,
interaction_only=False, order='C')),
('lasso_regr',
LassoCV(alphas=[80, 90], copy_X=True,
cv=KFold(n_splits=3, random_state=None, shuffle=True),
eps=0.001, fit_intercept=True, max_iter=1000,
n_alphas=100, n_jobs=-1, normalize=False,
positive=False, precompute='auto', random_state=None,
selection='cyclic', tol=0.0001, verbose=False))],
verbose=False)
4) Check model performance
import warnings
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
warnings.filterwarnings('ignore')
pipe_lm.fit(x,y)
y_predict = pipe_lm.predict(x)
print(f'mean squared error is: {mean_squared_error(y,y_predict)}')
print(f'mean absolute error is: {mean_absolute_error(y,y_predict)}')
print(f'R Squared is: {r2_score(y,y_predict)}')
# compute MAPE
# .copy() avoids pandas' SettingWithCopyWarning when adding columns below
check = df[['average_price']].copy()
check['y_predict'] = pipe_lm.predict(x)
check['abs_err'] = abs(check['y_predict'] - check['average_price'])
check['ape'] = check['abs_err'] / check['average_price']
ape = check['ape'].mean()
print(f'mean absolute percent error is: {ape}')
–> Output: (MAPE is computed manually and separately here; no ready-made function is used)
mean squared error is: 27731808.3971612
mean absolute error is: 3764.8763555076
R Squared is: 0.671538868244777
mean absolute percent error is: 0.16143438261828635
Finally, take a look at the check data:
average_price y_predict abs_err ape
0 33464.000 40132.981180 6668.981180 0.199288
1 38766.000 34522.854322 4243.145678 0.109455
2 33852.000 32718.508030 1133.491970 0.033484
3 39868.000 39242.949615 625.050385 0.015678
4 42858.000 39242.949615 3615.050385 0.084349
... ... ... ... ...
893 40113.000 39559.520171 553.479829 0.013798
894 41806.000 48224.353820 6418.353820 0.153527
895 51895.375 37610.727152 14284.647848 0.275259
896 34546.000 44010.896534 9464.896534 0.273980
897 33595.000 32194.055127 1400.944873 0.041701
898 rows × 4 columns
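The intermediate `abs_err` / `ape` columns are useful for inspection, but the MAPE itself collapses into one numpy expression. A sketch on small made-up arrays standing in for the actual and predicted prices:

```python
import numpy as np

# Hypothetical actual / predicted prices standing in for the check frame
y = np.array([33464.0, 38766.0, 33852.0])
y_predict = np.array([40132.98, 34522.85, 32718.51])

# One-line MAPE, equivalent to building the abs_err and ape columns
mape = np.mean(np.abs(y_predict - y) / y)
print(mape)
```

As a side note, scikit-learn 0.24 and later do ship `sklearn.metrics.mean_absolute_percentage_error`, so on a recent version the manual computation is optional.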