
Commit

readme edits
BenUCL committed Aug 31, 2023
1 parent e0e97d4 commit f65b564
Showing 2 changed files with 20 additions and 2 deletions.
18 changes: 18 additions & 0 deletions README.md
@@ -25,6 +25,24 @@ locations/countries
1. For embeddings only, UMAP dimensionality reduction was performed (to 10 dims) and the reduced embeddings were clustered using affinity propagation. The fidelity of clusters to the true classes was then assessed using a chi-squared test.
1. 2D UMAPs were also plotted for each embedding.

## How to run the scripts

### SimCLR training
- Go to the `code/simclr-pytorch-reefs` folder.
- Run: `python train.py --config configs/reefs_configs.yaml`
- This uses `reefs_configs.yaml`, where params can be set.
### Supervised ResNet baselines
- This gives a baseline for the upper limit a fully trained neural net reaches on each task, against which the RFs trained on the embeddings can be compared.
- Go to the evaluation folders.
- To run everything with some fixed params, use `./fully_train_resnet_all.sh`. This .sh script sets params that are passed to the config yaml for each test dataset (stored in `multiple_config_runs` under the evaluation folder); further params can be set in these yamls.
- Batch size was also important for some datasets (mainly FP and Aus, which needed a low batch size), so a batch-size sweep can be run with `./batchsize_sweep.sh`.
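A sweep like the one `./batchsize_sweep.sh` performs can be sketched as below. This is a hypothetical illustration, not the repo's script: the `--batch_size` flag and the sweep values are assumptions, and the sketch only builds and prints the commands rather than launching training.

```python
import shlex

def sweep_commands(config_path, batch_sizes):
    """Build one training command per batch size (dry run: commands only)."""
    return [
        f"python train.py --config {shlex.quote(config_path)} --batch_size {bs}"
        for bs in batch_sizes
    ]

# Print the commands the sweep would run.
for cmd in sweep_commands("configs/reefs_configs.yaml", [32, 64, 128, 256]):
    print(cmd)
```

Keeping the sweep as a command generator makes it easy to inspect or pipe into a scheduler before committing GPU time.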

## Folder structure
Needs further tidy up
```
Expand Down
4 changes: 2 additions & 2 deletions code/simclr-pytorch-reefs/configs/reefs_configs.yaml
```diff
@@ -6,10 +6,10 @@ color_dist_s: 1.0
 config_file: ''
 data: ROV
 dist: dp
-dist_address: '198.176.97.113' # changed to bens remote machine
+dist_address: '198.176.97.88' # changed to bens remote machine
 eval_freq: 30
 gpu: 1 ################# changed from 0
-iters: 16000 # num batches seen, not epochs! So 1 epoch. 54000/512 = 105.5 iters per epoch
+iters: 5400 # num batches seen, not epochs! So 1 epoch. 54000/512 = 105.5 iters per epoch
 log_freq: 5
 lr: 0.6 # was 0.6 default
 lr_schedule: warmup-anneal
```
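The `iters` comment in the config counts batches seen, not epochs. A small helper makes that arithmetic explicit, using the figures from the comment (54000 images, batch size 512):

```python
def iters_for_epochs(num_images, batch_size, epochs):
    """Convert a desired number of epochs into `iters` (batches seen)."""
    return round(num_images / batch_size * epochs)

print(iters_for_epochs(54000, 512, 1))   # → 105 iters for one epoch
print(iters_for_epochs(54000, 512, 51))  # → 5379 iters, close to the configured 5400
```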
