As deployer/Maintainer I should be able to improve Handwritten Digits model Accuracy by experimenting with open and synthetic datasets - Iteration 3
#51
As deployer/Maintainer, I should be able to improve the Handwritten Digits model accuracy by experimenting with open and synthetic datasets - Iteration 3. The expected accuracy improvement is approximately 3% to 5%.
dileep-gadiraju changed the title from "As deployer/Maintainer I should be able to improve Handwritten Digits Accuracy by experimenting with open and synthetic datasets - Iteration 3" to "As deployer/Maintainer I should be able to improve Handwritten Digits model Accuracy by experimenting with open and synthetic datasets - Iteration 3" on Nov 23, 2022.
@dileep-gadiraju (Dec 13) - Changed and tested the code for saving the model checkpoint (save-weights-only parameter). Trained the model for 5 epochs on the existing digits dataset with data augmentation, and ran inference on the same test dataset (a sketch of this checkpoint and augmentation setup is included after the results below).
Accuracy on the NSIT dataset is ~60%, almost the same as the old checkpoint.
Accuracy on the Not-in-MNIST dataset is ~97%, a 3% increase over the old checkpoint.
Tested the new model on the existing dataset; it gives the same results, with at most a 0.3% to 0.5% difference in accuracy for some classes.
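A minimal sketch of the checkpoint and augmentation setup referred to above, assuming a TensorFlow/Keras digit classifier; the `model`, `x_train`/`y_train`, `x_val`/`y_val` variables and the checkpoint path are placeholders, not names from this repository:

```python
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Checkpoint callback that stores only the weights, not the full model graph.
checkpoint_cb = ModelCheckpoint(
    filepath="checkpoints/digits.weights.h5",
    save_weights_only=True,   # only save weights, as described in the update
    save_best_only=True,
    monitor="val_accuracy",
)

# Light augmentation for handwritten digits: small shifts/rotations keep the
# labels valid; horizontal flips are avoided because they would corrupt digits.
datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
)

# Train for 5 epochs on the augmented stream of the existing digits dataset.
model.fit(
    datagen.flow(x_train, y_train, batch_size=64),
    validation_data=(x_val, y_val),
    epochs=5,
    callbacks=[checkpoint_cb],
)

# Later, restore the weights into a model built with the same architecture.
model.load_weights("checkpoints/digits.weights.h5")
```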
Analysed the misclassifications on the NSIT dataset by drawing the images in the notebook (see the plotting sketch below). It seems the model simply fails to capture the variations between different digits.
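A notebook-style sketch for inspecting misclassified digits, assuming the trained `model` and NumPy arrays `x_test`/`y_test` with integer labels; these names are illustrative placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

# Predict on the test set and collect the indices that were misclassified.
probs = model.predict(x_test)
preds = np.argmax(probs, axis=1)
wrong = np.where(preds != y_test)[0]

# Draw a handful of misclassified digits with true vs. predicted labels.
n = min(8, len(wrong))
fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
for ax, idx in zip(np.atleast_1d(axes), wrong[:n]):
    ax.imshow(x_test[idx].squeeze(), cmap="gray")
    ax.set_title(f"true {y_test[idx]} / pred {preds[idx]}")
    ax.axis("off")
plt.show()
```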
To segregate training images (NSIT dataset) for the model, tried counting (0,0,0) pixels to eliminate overly darkened digits and unwanted lines around the digits (a filter sketch follows below).
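A sketch of that black-pixel-count filter, assuming OpenCV-loaded BGR images; the function name, paths, and threshold value are hypothetical and would need tuning on the actual NSIT crops:

```python
import cv2
import numpy as np

def is_clean_digit(path, max_black_ratio=0.35):
    """Reject images where too many pixels are pure black (0, 0, 0), which
    tends to indicate darkened digits or stray lines around the digit."""
    img = cv2.imread(path)                     # BGR image, shape (H, W, 3)
    if img is None:
        return False
    black = np.all(img == 0, axis=-1)          # True where a pixel is (0, 0, 0)
    return black.mean() <= max_black_ratio

# Example usage: keep only the images that pass the filter.
# clean_paths = [p for p in image_paths if is_clean_digit(p)]
```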