Commit 8b90a16

Fix typo and missing description of content in folder (#2004)
Fixes #2001.

### Description

These fixes are suggested by Cursor + claude-4-sonnet, with the prompt:

```
Please review the documentation (markdown/jupyter) in the tutorial repo for clarity, with a focus of typo fixes and language refinement
```

### Checks

- [x] Avoid including large-size files in the PR.
- [x] Clean up long text outputs from code cells in the notebook.
- [x] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [x] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [x] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

Signed-off-by: Mingxin Zheng <[email protected]>
1 parent ef0ac7d commit 8b90a16

File tree

7 files changed: +23 −12 lines


3d_regression/README.md

Lines changed: 14 additions & 3 deletions
```diff
@@ -1,7 +1,18 @@
 3D Regression
 =============
 
-How to run the 3D regression tutorial.
---------------------------------------
+This directory contains a tutorial demonstrating how to use MONAI for 3D regression tasks, specifically brain age prediction using the IXI dataset and a DenseNet3D architecture.
 
-Running this notebook is straightforward. It works well in Colab.
+## Tutorial Overview
+
+The `densenet_training_array.ipynb` notebook provides an end-to-end example of:
+- Loading and preprocessing 3D brain MRI data
+- Setting up data transforms for regression tasks
+- Training a DenseNet3D model for age prediction
+- Evaluating model performance on test data
+
+## How to Run
+
+This notebook can be run locally with Jupyter or in Google Colab. The notebook includes all necessary setup instructions and dependency installations.
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/3d_regression/densenet_training_array.ipynb)
```

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -83,7 +83,7 @@ Training and evaluation examples of 3D regression based on DenseNet3D and [IXI d
 #### <ins>**3D segmentation**</ins>
 ##### [ignite examples](./3d_segmentation/ignite)
 Training and evaluation examples of 3D segmentation based on UNet3D and synthetic dataset.
-The examples are PyTorch Ignite programs and have both dictionary-base and array-based transformations.
+The examples are PyTorch Ignite programs and have both dictionary-based and array-based transformations.
 ##### [torch examples](./3d_segmentation/torch)
 Training, evaluation and inference examples of 3D segmentation based on UNet3D and synthetic dataset.
 The examples are standard PyTorch programs and have both dictionary-based and array-based versions.
```

acceleration/distributed_training/distributed_training.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -21,7 +21,7 @@ torchrun --nproc_per_node=NUM_GPUS_PER_NODE --nnodes=NUM_NODES brats_training_dd
 
 ## Multi-Node Training
 
-Let's take two-node (16 GPUs in total) model training as an example. In the primary node (node rank 0), we run the following command.
+Let's take a two-node (16 GPUs in total) model training example. In the primary node (node rank 0), we run the following command.
 
 ```
 torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=PRIMARY_NODE_IP --master_port=1234 brats_training_ddp.py
````
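The hunk above shows only the primary-node command. For context (not part of this commit, but implied by standard `torchrun` multi-node semantics), the matching command on the second node would differ only in its rank; `PRIMARY_NODE_IP` remains a placeholder for the rank-0 node's address:

```
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=PRIMARY_NODE_IP --master_port=1234 brats_training_ddp.py
```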

nnunet/README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -1,6 +1,6 @@
 # MONAI and nnU-Net Integration
 
-[nnU-Net](https://github.com/MIC-DKFZ/nnUNet) is an open-source deep learning framework that has been specifically designed for medical image segmentation. And nnU-Net is a state-of-the-art deep learning framework that is tailored for medical image segmentation. It builds upon the popular U-Net architecture and incorporates various advanced features and improvements, such as cascaded networks, novel loss functions, and pre-processing steps. nnU-Net also provides an easy-to-use interface that allows users to train and evaluate their segmentation models quickly. nnU-Net has been widely used in various medical imaging applications, including brain segmentation, liver segmentation, and prostate segmentation, among others. The framework has consistently achieved state-of-the-art performance in various benchmark datasets and challenges, demonstrating its effectiveness and potential for advancing medical image analysis.
+[nnU-Net](https://github.com/MIC-DKFZ/nnUNet) is an open-source deep learning framework that has been specifically designed for medical image segmentation. nnU-Net is a state-of-the-art deep learning framework that is tailored for medical image segmentation. It builds upon the popular U-Net architecture and incorporates various advanced features and improvements, such as cascaded networks, novel loss functions, and pre-processing steps. nnU-Net also provides an easy-to-use interface that allows users to train and evaluate their segmentation models quickly. nnU-Net has been widely used in various medical imaging applications, including brain segmentation, liver segmentation, and prostate segmentation, among others. The framework has consistently achieved state-of-the-art performance in various benchmark datasets and challenges, demonstrating its effectiveness and potential for advancing medical image analysis.
 
 nnU-Net and MONAI are two powerful open-source frameworks that offer advanced tools and algorithms for medical image analysis. Both frameworks have gained significant popularity in the research community, and many researchers have been using these frameworks to develop new and innovative medical imaging applications.
 
@@ -73,7 +73,7 @@ Users can also set values of directory variables as options in "input.yaml" if a
 dataset_name_or_id: 1 # task-specific integer index (optional)
 nnunet_preprocessed: "./work_dir/nnUNet_preprocessed" # directory for storing pre-processed data (optional)
 nnunet_raw: "./work_dir/nnUNet_raw_data_base" # directory for storing formated raw data (optional)
-nnunet_results: "./work_dir/nnUNet_trained_models" # diretory for storing trained model checkpoints (optional)
+nnunet_results: "./work_dir/nnUNet_trained_models" # directory for storing trained model checkpoints (optional)
 ```
 
 Once the minimum input information is provided, the user can use the following commands to start the process of the entire nnU-Net pipeline automatically (from model training to model ensemble).
@@ -143,7 +143,7 @@ python -m monai.apps.nnunet nnUNetV2Runner predict_ensemble_postprocessing --inp
 --run_predict false --run_ensemble false
 ```
 
-For utilizing PyTorch DDP in multi-GPU training, the subsequent command is offered to facilitate the training of a singlular model on a specific fold:
+For utilizing PyTorch DDP in multi-GPU training, the subsequent command is offered to facilitate the training of a singular model on a specific fold:
 
 ```bash
 ## [component] multi-gpu training for a single model
````

pathology/tumor_detection/README.MD

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
 
 ## Description
 
-Here we use a classification model to classify small batches extracted from very large whole-slide histopathology images. Since the patches are very small compare to the whole image, we can then use this model for the detection of tumors in a different area of a whole-slide pathology image.
+Here we use a classification model to classify small batches extracted from very large whole-slide histopathology images. Since the patches are very small compared to the whole image, we can then use this model for the detection of tumors in a different area of a whole-slide pathology image.
 
 ## Model Overview
 
```
vista_2d/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@ The tutorial demonstrates how to train a cell segmentation model using the [MONA
 
 ![image](../figures/vista_2d_overview.png)
 
-In Summary the tutorial covers the following:
+In summary, the tutorial covers the following:
 - Initialization of the CellSamWrapper model with pre-trained SAM weights
 - Creation of data lists for training, validation, and testing
 - Definition of data transforms for training and validation
```

vista_3d/README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,9 +6,9 @@ The **VISTA3D** is a foundation model trained systematically on 11,454 volumes e
 
 The tutorial demonstrates how to finetune the VISTA3D model on user data, where we use the MSD Task09 Spleen as the example.
 
-In Summary the tutorial covers the following:
+In summary, the tutorial covers the following:
 - Creation of datasets and data transforms for training and validation
-- Create and VISTA3D model and load the pretrained checkpoint
+- Create a VISTA3D model and load the pretrained checkpoint
 - Implementation of the finetuning loop
 - Mixed precision training with GradScaler
 - Visualization of training loss and validation accuracy
```
