The TumorTrace project aims to develop an automated system for identifying and segmenting tumor regions in brain MRI images. Accurate segmentation is crucial for diagnosis, treatment planning, and monitoring of brain tumors. This project utilizes the U-Net architecture, a popular deep-learning model designed for image segmentation tasks, particularly in medical imaging.
Our goal is to provide a robust solution for automated tumor detection, which can significantly aid in clinical decision-making.
This project involves several critical steps to ensure effective brain tumor segmentation. Below is a brief outline of the process:
- Data Preparation: Collect MRI images and ground truth masks; preprocess (resize, normalize).
- Data Augmentation: Enhance the dataset with rotation, flipping, scaling, and noise (a preprocessing and augmentation sketch follows this list).
- Dataset Splitting: Divide into training, validation, and testing sets.
- Model Building: Implement U-Net with an encoder-decoder architecture (a minimal model and loss-function sketch also follows this list).
- Loss Function: Select suitable loss function (e.g., Dice loss).
- Training: Train model on training set, monitor validation performance.
- Evaluation: Assess performance on testing set using metrics like Dice coefficient.
- Post-processing: Refine masks with morphological operations.
- Visualization: Overlay results on original images.
- Deployment: Deploy for clinical use or integrate into software.
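As a concrete illustration of the preparation and augmentation steps, the sketch below loads one MRI slice and its mask, resizes both, applies random flips, small rotations/scaling, and Gaussian noise, and normalizes the result. This is a minimal sketch only: the target size, file layout, and the use of OpenCV and Albumentations are assumptions for illustration, not fixed choices of this project.

```python
import cv2
import numpy as np
import albumentations as A

IMG_SIZE = 256  # assumed target size

# Augmentation pipeline: flips, small rotation/scaling, and Gaussian noise.
# Geometric transforms are applied identically to the image and its mask.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Affine(scale=(0.9, 1.1), rotate=(-15, 15), p=0.5),
    A.GaussNoise(p=0.2),
])

def load_pair(image_path, mask_path, train=True):
    """Load, resize, optionally augment, and normalize one image/mask pair."""
    image = cv2.imread(image_path)                       # uint8, BGR
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)   # uint8, single channel
    image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    mask = cv2.resize(mask, (IMG_SIZE, IMG_SIZE), interpolation=cv2.INTER_NEAREST)
    if train:
        out = augment(image=image, mask=mask)
        image, mask = out["image"], out["mask"]
    image = image.astype(np.float32) / 255.0             # scale to [0, 1]
    mask = (mask > 127).astype(np.float32)[..., None]    # binary mask, HxWx1
    return image, mask
```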
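The model-building and loss-function steps can be sketched as follows. This is a deliberately reduced U-Net written with Keras purely for illustration; the framework, network depth, and filter counts are assumptions rather than the exact TumorTrace configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)

    # Encoder: convolutions followed by downsampling.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsampling with skip connections to the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel probability
    return models.Model(inputs, outputs)

def dice_loss(y_true, y_pred, smooth=1.0):
    """Dice loss: 1 - Dice coefficient, computed on flattened masks."""
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

model = build_unet()
model.compile(optimizer="adam", loss=dice_loss, metrics=["binary_accuracy"])
```

With the model compiled this way, the training step reduces to `model.fit(...)` on the training split while monitoring the validation split, as outlined above.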
The LGG Segmentation Dataset has been used in research by Mateusz Buda et al. and Maciej A. Mazurowski et al. on the association of genomic subtypes of lower-grade gliomas with shape features extracted through deep learning. The dataset comprises brain MRI images with manual FLAIR abnormality segmentation masks, sourced from The Cancer Imaging Archive (TCIA). It covers 110 patients from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, featuring FLAIR sequences and corresponding genomic cluster data; additional patient information is provided in a data.csv file. For comprehensive genomic details, refer to the publication “Comprehensive, Integrative Genomic Analysis of Diffuse Lower-Grade Gliomas” (https://www.nejm.org/doi/full/10.1056/NEJMoa1402121).
Dataset link: https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation
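For loading, the sketch below pairs each MRI slice with its segmentation mask. It assumes the Kaggle layout in which masks carry a `_mask` suffix (e.g. `..._1.tif` and `..._1_mask.tif`); the root path and naming pattern should be adjusted to your local copy.

```python
from pathlib import Path

def list_image_mask_pairs(root="lgg-mri-segmentation/kaggle_3m"):
    """Pair every MRI slice with its segmentation mask.

    Assumes the Kaggle layout where masks carry a `_mask` suffix,
    e.g. TCGA_XX_YYYY_1.tif and TCGA_XX_YYYY_1_mask.tif.
    """
    pairs = []
    for mask_path in sorted(Path(root).rglob("*_mask.tif")):
        image_path = mask_path.with_name(mask_path.name.replace("_mask", ""))
        if image_path.exists():
            pairs.append((str(image_path), str(mask_path)))
    return pairs

pairs = list_image_mask_pairs()
print(f"Found {len(pairs)} image/mask pairs")
```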
Accuracy measures the proportion of correctly classified instances out of the total instances in a dataset. It provides a straightforward assessment of a model's performance, especially in balanced datasets.
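For a binary segmentation mask, this is the per-pixel rate of correct predictions:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP, TN, FP, and FN count true/false positive and negative pixels.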
Loss quantifies the difference between predicted values and actual values during model training. A lower loss indicates better model performance, guiding the optimization process to improve predictions.
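For the Dice loss suggested in the outline above, the loss for a predicted mask $P$ and ground-truth mask $G$ is one minus the Dice coefficient, matching the `dice_loss` sketch shown earlier:

$$\mathcal{L}_{\text{Dice}} = 1 - \frac{2\,|P \cap G|}{|P| + |G|}$$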
Intersection over Union (IoU) evaluates the accuracy of object detection and segmentation models by measuring the overlap between the predicted segmentation and the ground truth. Values range from 0 to 1, where 1 indicates perfect overlap.
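A minimal NumPy sketch of these evaluation metrics (pixel accuracy, Dice coefficient, and IoU) on binary masks might look like this; binarizing predicted probabilities at 0.5 is an assumed convention, not a project setting.

```python
import numpy as np

def binarize(pred, threshold=0.5):
    """Turn predicted probabilities into a binary mask (assumed threshold)."""
    return (pred >= threshold).astype(np.uint8)

def pixel_accuracy(pred, target):
    """Fraction of pixels classified correctly."""
    return float((pred == target).mean())

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|)."""
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def iou_score(pred, target, eps=1e-7):
    """IoU = |P ∩ G| / |P ∪ G|, ranging from 0 (no overlap) to 1 (perfect)."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))
```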
The results of brain tumor segmentation typically include visualized segmented regions overlaid on original MRI images, allowing for clear identification of tumor boundaries. Metrics such as the Dice coefficient and Intersection over Union (IoU) are used to quantify segmentation accuracy, often showing high scores that indicate effective model performance. Additionally, post-processing techniques can enhance the quality of the segmentation masks, improving clinical usability.
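As an illustration of the post-processing and overlay steps, the sketch below removes small spurious regions with morphological opening and closing, then blends the resulting mask onto the original slice. The kernel size, color, and blending weight are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def postprocess_mask(mask, kernel_size=5):
    """Morphological opening then closing to remove speckles and fill small holes."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

def overlay_mask(image, mask, color=(0, 0, 255), alpha=0.4):
    """Blend the binary mask onto the original uint8 BGR image for inspection."""
    overlay = image.copy()
    overlay[mask > 0] = color
    return cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0)
```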
Additional example outputs in the same format are included in the project files.
In conclusion, brain tumor segmentation with advanced models such as U-Net demonstrates significant potential for enhancing diagnostic accuracy in medical imaging. The effectiveness of the segmentation can be visualized by overlaying predicted tumor regions against the ground truth, showcasing the model's ability to accurately delineate tumor boundaries. Evaluation metrics such as the Dice coefficient and IoU provide quantitative measures of performance, reinforcing the model's reliability for clinical applications.