# Healthcare_Kaggle

The attached notebooks contain my work for a Kaggle competition: https://www.kaggle.com/competitions/ml2-usf-2023

I conducted exploratory data analysis and some feature engineering, then tried general linear models (GLMs), random forest, and XGBoost models to see which one minimized the Mean Absolute Error (MAE). The best-performing model was a linear neural network I built with PyTorch, with a test MAE of 4.18 and a competition MAE of 3.84.
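As a rough illustration of the PyTorch approach, a minimal MAE-minimizing regression network might look like the sketch below. The layer sizes, learning rate, epoch count, and synthetic data are placeholder assumptions, not the exact configuration from the notebooks.

```python
# Minimal sketch: a small feed-forward regression network in PyTorch,
# trained with L1 loss, which corresponds to the MAE metric used in the competition.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data: 1,000 samples with 10 engineered features and a continuous target.
X = torch.randn(1000, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(1000, 1)

model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

criterion = nn.L1Loss()  # L1 loss is the Mean Absolute Error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    pred = model(X)
    loss = criterion(pred, y)
    loss.backward()
    optimizer.step()

print(f"Final training MAE: {loss.item():.3f}")
```

In practice the same training loop would be run on the competition's feature-engineered data with a held-out validation split to report the test MAE.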