Employee Attrition Prediction with Multi-Algorithm Fusion: An Evolution from Traditional Machine Learning to Large Models
Project Overview
This article shares a technical walkthrough of a Kaggle employee attrition prediction competition (bi-attrition-predict). The project applies several machine learning algorithms, including linear models, support vector machines, and a variety of tree-based models, to predict whether employees will leave.
Traditional Machine Learning vs. the Large Model Era
With the arrival of the large AI model era, employee attrition prediction systems are gradually evolving as well:
System Architecture Diagram
End-to-End AI Workflow Diagram
Below is the complete workflow of an employee attrition prediction AI system, covering the full path from data input to model deployment (sketched in code below):
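A minimal, runnable sketch of that flow — a stand-in, not the project's actual pipeline. It assumes attrition.csv sits in the working directory with an Attrition (Yes/No) target and a user_id column, and compresses the whole path into one pass with a single placeholder model:
# End-to-end flow in miniature: load -> encode -> split -> train -> evaluate
# (missing-value handling and deployment are omitted for brevity)
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv('attrition.csv')
# Quick stand-in for the full preprocessing step: integer-encode object columns
for col in df.select_dtypes(include='object').columns:
    df[col] = df[col].astype('category').cat.codes

X, y = df.drop(columns=['Attrition', 'user_id']), df['Attrition']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
print('AUC:', roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
The sections that follow expand each of these stages with the project's actual per-model scripts.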
Dataset Analysis
The data file attrition.csv is a typical employee attrition dataset with the following features:
- user_id: unique employee identifier
- Age: employee age
- Attrition: target variable (whether the employee left, Yes/No)
- BusinessTravel: frequency of business travel
- DailyRate: daily rate
- Department: department
- DistanceFromHome: distance from home
- Education: education level
- EducationField: field of education
- EnvironmentSatisfaction: environment satisfaction
- Gender: gender
- JobInvolvement: job involvement
- JobLevel: job level
- JobRole: job role
- JobSatisfaction: job satisfaction
- MaritalStatus: marital status
- MonthlyIncome: monthly income
- MonthlyRate: monthly rate
- NumCompaniesWorked: number of companies previously worked at
- OverTime: whether the employee works overtime
- PercentSalaryHike: percentage salary increase
- PerformanceRating: performance rating
- RelationshipSatisfaction: relationship satisfaction
- StockOptionLevel: stock option level
- TotalWorkingYears: total years of working experience
- TrainingTimesLastYear: number of trainings attended last year
- WorkLifeBalance: work-life balance rating
- YearsAtCompany: years at the company
- YearsInCurrentRole: years in the current role
- YearsSinceLastPromotion: years since the last promotion
- YearsWithCurrManager: years with the current manager
This is a classification problem with high-dimensional, mixed-type features, well suited to modeling with a variety of machine learning algorithms (a quick inspection snippet follows).
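As a quick sanity check on the mix of feature types and the class balance, a minimal snippet (assuming attrition.csv is in the working directory):
import pandas as pd

df = pd.read_csv('attrition.csv')
print(df.shape)                                      # rows x columns
print(df.dtypes.value_counts())                      # mix of numeric and object (categorical) features
print(df['Attrition'].value_counts(normalize=True))  # class balance; attrition data is typically skewed toward 'No'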
Project Structure
The project contains one script per model:
attrition_lgb.py # LightGBM model
attrition_xgboost.py # XGBoost model
attrition_catboost.py # CatBoost model
attrition_lr.py # logistic regression model
attrition_lr_threshold.py # logistic regression with threshold tuning
attrition_svc.py # SVM model
attrition_ngboost.py # NGBoost model
attrition_gbdt.py # GBDT model
along with the generated prediction files:
submit_lgb.csv # LightGBM predictions
submit_xgb.csv # XGBoost predictions
submit_cb.csv # CatBoost predictions
submit_lr.csv # logistic regression predictions
submit_lr_threshold.csv # threshold-tuned logistic regression predictions
submit_svc.csv # SVM predictions
submit_ngb.csv # NGBoost predictions
submit_gbdt.csv # GBDT predictions
Detailed Code Implementation
1. Data Preprocessing
# Data loading and preprocessing
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

# Load the data
df = pd.read_csv('attrition.csv')

# Preprocessing pipeline
def preprocess_data(df):
    # Fill missing values with each column's mode
    df = df.fillna(df.mode().iloc[0])
    # Encode the categorical variables
    label_encoders = {}
    categorical_columns = df.select_dtypes(include=['object']).columns.tolist()
    for col in categorical_columns:
        if col != 'Attrition':  # the target is handled separately
            le = LabelEncoder()
            df[col] = le.fit_transform(df[col].astype(str))
            label_encoders[col] = le
    # Encode the target; the guard lets the same function handle an
    # unlabeled test set (used in the prediction section below)
    target_encoder = LabelEncoder()
    if 'Attrition' in df.columns:
        df['Attrition'] = target_encoder.fit_transform(df['Attrition'])
    return df, label_encoders, target_encoder

# Preprocess the data
processed_df, label_encoders, target_encoder = preprocess_data(df)

# Separate features and target
X = processed_df.drop(['Attrition', 'user_id'], axis=1)  # drop the target and the ID
y = processed_df['Attrition']

# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
2. Logistic Regression Model
# attrition_lr.py
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Logistic regression model
lr_model = LogisticRegression(
    random_state=42,
    class_weight='balanced',  # handle class imbalance
    max_iter=1000
)
lr_model.fit(X_train_scaled, y_train)

# Predict
y_pred_proba = lr_model.predict_proba(X_test_scaled)[:, 1]
y_pred = lr_model.predict(X_test_scaled)

# Evaluate
lr_accuracy = accuracy_score(y_test, y_pred)
lr_precision = precision_score(y_test, y_pred)
lr_recall = recall_score(y_test, y_pred)
lr_f1 = f1_score(y_test, y_pred)
lr_auc = roc_auc_score(y_test, y_pred_proba)
print(f"Logistic Regression - Accuracy: {lr_accuracy:.4f}, Precision: {lr_precision:.4f}, "
      f"Recall: {lr_recall:.4f}, F1: {lr_f1:.4f}, AUC: {lr_auc:.4f}")
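The file list above also includes attrition_lr_threshold.py, a threshold-tuned variant of this model. Its code is not shown in the source, but the idea — pick the probability cutoff that maximizes F1 on the held-out set instead of using the default 0.5 — can be sketched as follows (reusing y_test and y_pred_proba from above):
import numpy as np
from sklearn.metrics import precision_recall_curve

# Scan the candidate thresholds implied by the validation probabilities
precisions, recalls, thresholds = precision_recall_curve(y_test, y_pred_proba)
f1_scores = 2 * precisions * recalls / (precisions + recalls + 1e-12)
best_threshold = thresholds[np.argmax(f1_scores[:-1])]  # the last P/R point has no threshold

y_pred_tuned = (y_pred_proba >= best_threshold).astype(int)
print(f"Best threshold: {best_threshold:.3f}, tuned F1: {f1_score(y_test, y_pred_tuned):.4f}")
Moving the cutoff trades precision against recall, which matters here because the attrition classes are imbalanced.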
3. SVM Model
# attrition_svc.py
from sklearn.svm import SVC

# SVM model (RBF kernel, so the decision boundary can be nonlinear)
svc_model = SVC(
    kernel='rbf',
    probability=True,         # enable probability estimates
    class_weight='balanced',  # handle class imbalance
    random_state=42
)
svc_model.fit(X_train_scaled, y_train)

# Predict
svc_pred_proba = svc_model.predict_proba(X_test_scaled)[:, 1]
svc_pred = svc_model.predict(X_test_scaled)

# Evaluate
svc_accuracy = accuracy_score(y_test, svc_pred)
svc_precision = precision_score(y_test, svc_pred)
svc_recall = recall_score(y_test, svc_pred)
svc_f1 = f1_score(y_test, svc_pred)
svc_auc = roc_auc_score(y_test, svc_pred_proba)
print(f"SVM - Accuracy: {svc_accuracy:.4f}, Precision: {svc_precision:.4f}, "
      f"Recall: {svc_recall:.4f}, F1: {svc_f1:.4f}, AUC: {svc_auc:.4f}")
4. LightGBM Model
# attrition_lgb.py
import lightgbm as lgb

# LightGBM parameters (native API; 'class_weight' belongs to the sklearn
# wrapper, so class imbalance is handled with 'is_unbalance' here)
lgb_params = {
    'objective': 'binary',
    'metric': ['binary_logloss', 'auc'],
    'boosting_type': 'gbdt',
    'num_leaves': 31,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 0,
    'random_state': 42,
    'is_unbalance': True  # handle class imbalance
}

# Build the datasets
lgb_train = lgb.Dataset(X_train, label=y_train)
lgb_eval = lgb.Dataset(X_test, label=y_test, reference=lgb_train)

# Train the model (since LightGBM 4.0, early stopping and evaluation logging
# are configured through callbacks rather than keyword arguments)
lgb_model = lgb.train(
    lgb_params,
    lgb_train,
    valid_sets=[lgb_train, lgb_eval],
    num_boost_round=1000,
    callbacks=[lgb.early_stopping(50), lgb.log_evaluation(50)]
)

# Predict
lgb_pred_proba = lgb_model.predict(X_test, num_iteration=lgb_model.best_iteration)
lgb_pred = (lgb_pred_proba > 0.5).astype(int)

# Evaluate
lgb_accuracy = accuracy_score(y_test, lgb_pred)
lgb_precision = precision_score(y_test, lgb_pred)
lgb_recall = recall_score(y_test, lgb_pred)
lgb_f1 = f1_score(y_test, lgb_pred)
lgb_auc = roc_auc_score(y_test, lgb_pred_proba)
print(f"LightGBM - Accuracy: {lgb_accuracy:.4f}, Precision: {lgb_precision:.4f}, "
      f"Recall: {lgb_recall:.4f}, F1: {lgb_f1:.4f}, AUC: {lgb_auc:.4f}")
5. XGBoost Model
# attrition_xgboost.py
import xgboost as xgb

# XGBoost model (in xgboost >= 2.0, early_stopping_rounds is a constructor
# argument rather than a fit() argument)
xgb_model = xgb.XGBClassifier(
    objective='binary:logistic',
    eval_metric=['logloss', 'auc'],
    n_estimators=100,
    learning_rate=0.1,
    max_depth=6,
    subsample=0.8,
    colsample_bytree=0.8,
    random_state=42,
    early_stopping_rounds=50,
    scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum()  # handle imbalance
)
xgb_model.fit(
    X_train, y_train,
    eval_set=[(X_train, y_train), (X_test, y_test)],
    verbose=50
)

# Predict
xgb_pred_proba = xgb_model.predict_proba(X_test)[:, 1]
xgb_pred = xgb_model.predict(X_test)

# Evaluate
xgb_accuracy = accuracy_score(y_test, xgb_pred)
xgb_precision = precision_score(y_test, xgb_pred)
xgb_recall = recall_score(y_test, xgb_pred)
xgb_f1 = f1_score(y_test, xgb_pred)
xgb_auc = roc_auc_score(y_test, xgb_pred_proba)
print(f"XGBoost - Accuracy: {xgb_accuracy:.4f}, Precision: {xgb_precision:.4f}, "
      f"Recall: {xgb_recall:.4f}, F1: {xgb_f1:.4f}, AUC: {xgb_auc:.4f}")
6. CatBoost Model
# attrition_catboost.py
from catboost import CatBoostClassifier

# Indices of the originally categorical columns (user_id was already dropped
# from X_train; after label encoding these columns hold integers, which
# CatBoost still accepts as categorical features)
categorical_features_indices = [X_train.columns.get_loc(col)
                                for col in label_encoders if col in X_train.columns]

# CatBoost model
cat_model = CatBoostClassifier(
    iterations=100,
    learning_rate=0.1,
    depth=6,
    eval_metric='AUC',
    random_seed=42,
    logging_level='Silent',
    class_weights=[1, (y_train == 0).sum() / (y_train == 1).sum()]  # handle imbalance
)
cat_model.fit(
    X_train, y_train,
    cat_features=categorical_features_indices,
    eval_set=(X_test, y_test)
)

# Predict
cat_pred_proba = cat_model.predict_proba(X_test)[:, 1]
cat_pred = cat_model.predict(X_test)

# Evaluate
cat_accuracy = accuracy_score(y_test, cat_pred)
cat_precision = precision_score(y_test, cat_pred)
cat_recall = recall_score(y_test, cat_pred)
cat_f1 = f1_score(y_test, cat_pred)
cat_auc = roc_auc_score(y_test, cat_pred_proba)
print(f"CatBoost - Accuracy: {cat_accuracy:.4f}, Precision: {cat_precision:.4f}, "
      f"Recall: {cat_recall:.4f}, F1: {cat_f1:.4f}, AUC: {cat_auc:.4f}")
7. GBDT Model
# attrition_gbdt.py
from sklearn.ensemble import GradientBoostingClassifier

# GBDT model (sklearn's reference gradient boosting implementation)
gbdt_model = GradientBoostingClassifier(
    n_estimators=100,
    learning_rate=0.1,
    max_depth=6,
    random_state=42
)
gbdt_model.fit(X_train, y_train)

# Predict
gbdt_pred_proba = gbdt_model.predict_proba(X_test)[:, 1]
gbdt_pred = gbdt_model.predict(X_test)

# Evaluate
gbdt_accuracy = accuracy_score(y_test, gbdt_pred)
gbdt_precision = precision_score(y_test, gbdt_pred)
gbdt_recall = recall_score(y_test, gbdt_pred)
gbdt_f1 = f1_score(y_test, gbdt_pred)
gbdt_auc = roc_auc_score(y_test, gbdt_pred_proba)
print(f"GBDT - Accuracy: {gbdt_accuracy:.4f}, Precision: {gbdt_precision:.4f}, "
      f"Recall: {gbdt_recall:.4f}, F1: {gbdt_f1:.4f}, AUC: {gbdt_auc:.4f}")
8. NGBoost Model
# attrition_ngboost.py
from ngboost import NGBClassifier
from ngboost.distns import Bernoulli

# NGBoost model (natural gradient boosting with a Bernoulli output distribution)
ngb_model = NGBClassifier(
    Dist=Bernoulli,
    n_estimators=100,
    learning_rate=0.1,
    verbose=False,
    random_state=42
)
ngb_model.fit(X_train, y_train)

# Predict
ngb_pred_proba = ngb_model.predict_proba(X_test)[:, 1]
ngb_pred = ngb_model.predict(X_test)

# Evaluate
ngb_accuracy = accuracy_score(y_test, ngb_pred)
ngb_precision = precision_score(y_test, ngb_pred)
ngb_recall = recall_score(y_test, ngb_pred)
ngb_f1 = f1_score(y_test, ngb_pred)
ngb_auc = roc_auc_score(y_test, ngb_pred_proba)
print(f"NGBoost - Accuracy: {ngb_accuracy:.4f}, Precision: {ngb_precision:.4f}, "
      f"Recall: {ngb_recall:.4f}, F1: {ngb_f1:.4f}, AUC: {ngb_auc:.4f}")
Model Performance Comparison and Analysis
A complete model evaluation and visualization script:
import matplotlib.pyplot as plt

# Collect the metrics of every model
results = {
    'Logistic Regression': [lr_accuracy, lr_precision, lr_recall, lr_f1, lr_auc],
    'SVM': [svc_accuracy, svc_precision, svc_recall, svc_f1, svc_auc],
    'LightGBM': [lgb_accuracy, lgb_precision, lgb_recall, lgb_f1, lgb_auc],
    'XGBoost': [xgb_accuracy, xgb_precision, xgb_recall, xgb_f1, xgb_auc],
    'CatBoost': [cat_accuracy, cat_precision, cat_recall, cat_f1, cat_auc],
    'GBDT': [gbdt_accuracy, gbdt_precision, gbdt_recall, gbdt_f1, gbdt_auc],
    'NGBoost': [ngb_accuracy, ngb_precision, ngb_recall, ngb_f1, ngb_auc]
}

# Build the comparison plots
models = list(results.keys())
fig, axes = plt.subplots(2, 2, figsize=(15, 10))
fig.suptitle('Model Performance Comparison', fontsize=16)

# Accuracy
axes[0, 0].bar(models, [results[model][0] for model in models])
axes[0, 0].set_title('Accuracy Comparison')
axes[0, 0].tick_params(axis='x', rotation=45)

# Precision vs Recall
precision_scores = [results[model][1] for model in models]
recall_scores = [results[model][2] for model in models]
axes[0, 1].scatter(precision_scores, recall_scores)
for i, model in enumerate(models):
    axes[0, 1].annotate(model, (precision_scores[i], recall_scores[i]))
axes[0, 1].set_xlabel('Precision')
axes[0, 1].set_ylabel('Recall')
axes[0, 1].set_title('Precision vs Recall')

# F1-Score
axes[1, 0].bar(models, [results[model][3] for model in models])
axes[1, 0].set_title('F1-Score Comparison')
axes[1, 0].tick_params(axis='x', rotation=45)

# AUC
axes[1, 1].bar(models, [results[model][4] for model in models])
axes[1, 1].set_title('AUC Comparison')
axes[1, 1].tick_params(axis='x', rotation=45)

plt.tight_layout()
plt.show()
Model Saving and Prediction
# Save the best model and generate predictions
import joblib

# Keep whichever of the three boosted models scored the highest AUC
if lgb_auc >= max(xgb_auc, cat_auc):
    best_model = lgb_model
elif xgb_auc >= cat_auc:
    best_model = xgb_model
else:
    best_model = cat_model
joblib.dump(best_model, 'best_attrition_model.pkl')

# Generate the submission file
# (the Kaggle competition provides a separate test.csv without the Attrition column)
test_df = pd.read_csv('test.csv')

# Apply the same preprocessing as the training set; the guard inside
# preprocess_data skips target encoding when 'Attrition' is absent
test_processed = preprocess_data(test_df)[0]
X_test_final = test_processed.drop(['user_id'], axis=1)[X.columns]  # align column order

# Predict: the native LightGBM Booster has no predict_proba(); its predict()
# already returns probabilities for binary objectives
if hasattr(best_model, 'predict_proba'):
    final_predictions = best_model.predict_proba(X_test_final)[:, 1]
else:
    final_predictions = best_model.predict(X_test_final)

# Build the submission file
submission = pd.DataFrame({
    'user_id': test_processed['user_id'],
    'prediction': final_predictions
})

# Save it
model_name = 'lgb' if best_model is lgb_model else 'xgb' if best_model is xgb_model else 'cat'
submission.to_csv(f'submit_{model_name}.csv', index=False)
Model Characteristics
1. Linear models (LR, SVM)
- Strengths: fast to train, highly interpretable
- Weaknesses: cannot capture complex nonlinear relationships (the RBF-kernel SVM partially compensates for this)
- Best fit: when the feature-target relationship is relatively simple
2. Gradient-boosted tree models (LightGBM, XGBoost, CatBoost)
- Strengths: capture nonlinear relationships and are robust to outliers
- Weaknesses: prone to overfitting and require careful tuning
- Best fit: complex features with nonlinear interactions
3. Other ensemble methods (GBDT, NGBoost)
- Strengths: combine many weak learners, usually yielding strong performance
- Weaknesses: higher computational cost and longer training time
- Best fit: when high predictive accuracy is the priority
Conclusion and Outlook
This employee attrition prediction project demonstrates the value of multi-model comparison in practical machine learning work. Implementing and comparing seven different algorithms shows that:
- Tree models usually win: on tabular classification tasks like this one, boosted tree models such as LightGBM, XGBoost, and CatBoost tend to perform best
- Feature engineering matters: data preprocessing has a significant effect on model performance
- Model selection is necessary: different datasets favor different algorithms, so the best model should be chosen empirically (see the cross-validation sketch below)
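A single train/test split can be noisy on a dataset of this size, so such comparisons are more trustworthy under cross-validation. A minimal sketch of a stratified CV harness for the sklearn-compatible models, reusing X and y from the preprocessing section (imports repeated for self-containment):
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier

# Evaluate every model on the same folds; scale inside each fold for LR/SVM
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
candidates = {
    'LR': make_pipeline(StandardScaler(), LogisticRegression(class_weight='balanced', max_iter=1000)),
    'SVM': make_pipeline(StandardScaler(), SVC(class_weight='balanced')),
    'GBDT': GradientBoostingClassifier(random_state=42),
}
for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X, y, cv=cv, scoring='roc_auc')
    print(f"{name}: AUC = {scores.mean():.4f} +/- {scores.std():.4f}")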
Directions worth exploring next:
- Further feature engineering
- Ensemble learning methods (a simple probability-averaging sketch follows this list)
- Deep learning models
- Model interpretability analysis
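As a first step toward the ensembling direction, a sketch that soft-votes by averaging the probability columns of the per-model submission files generated above (assuming each submit_*.csv holds user_id and prediction columns, as built in the saving step):
import pandas as pd

# Simple soft voting: average the predicted probabilities of several models
files = ['submit_lgb.csv', 'submit_xgb.csv', 'submit_cb.csv']
frames = [pd.read_csv(f) for f in files]
blend = frames[0][['user_id']].copy()
blend['prediction'] = sum(f['prediction'] for f in frames) / len(frames)
blend.to_csv('submit_blend.csv', index=False)
Averaging tends to help most when the base models make uncorrelated errors, for example when a linear model is blended with tree models.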
This project provides a complete solution framework for the Kaggle competition and can serve as a reference template for similar classification problems.