The attached notebooks contain my work for a Kaggle competition: https://www.kaggle.com/competitions/ml2-usf-2023
I conducted exploratory data analysis and some feature engineering before trying out generalized linear models (GLMs), random forests, and XGBoost models to see which one minimized the mean absolute error (MAE). The best-performing model was a linear neural network I built using PyTorch, with a test MAE of 4.18 and a competition MAE of 3.84.
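A model of the kind described can be sketched in PyTorch as a small feed-forward network of linear layers trained directly on MAE via `nn.L1Loss`. This is an illustrative sketch, not the competition code: the layer sizes, learning rate, and synthetic data below are assumptions standing in for the actual healthcare features and target.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the competition's tabular features and target.
X = torch.randn(256, 10)
y = X[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

# Small network of linear layers; sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.L1Loss()  # L1 loss is exactly the mean absolute error

# Full-batch training loop, minimizing MAE directly.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final train MAE: {loss.item():.3f}")
```

Training on L1 loss rather than the usual MSE aligns the optimization objective with the competition metric, which is often why it outperforms models tuned for squared error.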