Hello,
Thanks for creating this useful package. The waterfall plots are quite informative and intuitive.
I found that when I varied the base_score argument to the buildExplainer function, the predicted values printed by the showWaterfall function varied significantly. Concerned about accuracy, I compared them against the predicted values from the actual xgboost model: those varied too, but not nearly as much. Is this an error, or am I doing something incorrectly? Should the base_score passed to buildExplainer always match the base_score used in the actual xgboost model?
This is what I observed for a single predicted outcome:
base_score = 0.5: pred = 0.48 from both the xgb predict function and the explainer waterfall function
base_score = 0.2: pred = 0.43 from the xgb predict function, but 0.18 from the explainer waterfall function*
base_score = 0.85: pred = 0.53 from the xgb predict function, but 0.83 from the explainer waterfall function*
*Note: in all three examples, the xgboost model passed to the explainer was built with a base_score of 0.5, so in the 2nd and 3rd examples it differed from the base_score passed to the explainer.
Thanks for any suggestions.
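For what it's worth, here is a sketch of the arithmetic I suspect explains the numbers above. This is an assumption about how the package works, not confirmed from its source: if showWaterfall reports sigmoid(logit(base_score) + sum of leaf contributions), where the leaf contributions were extracted from a model trained with base_score = 0.5, then plugging in a different base_score shifts the prediction in log-odds space, which reproduces the .18 and .83 values I observed. The logit/sigmoid helpers below are just illustrative names:

```python
import math

def logit(p):
    """Log-odds of probability p."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Inverse of logit: maps log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# The model was trained with base_score = 0.5 (intercept logit(0.5) = 0), and
# for this row it predicts 0.48, so the leaf contributions for this row sum to:
leaf_sum = logit(0.48) - logit(0.5)

# Assumption: the explainer substitutes logit(base_score) as the intercept,
# so its waterfall prediction for each base_score I tried would be:
for base in (0.5, 0.2, 0.85):
    pred = sigmoid(logit(base) + leaf_sum)
    print(f"base_score = {base}: waterfall pred ~ {pred:.2f}")
```

Running this gives predictions close to the 0.48, 0.18, and 0.83 I saw from showWaterfall, which suggests the waterfall simply uses whatever base_score it is given as the intercept, rather than the one the model was trained with.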