This repository implements and experiments with MIRNet, a deep learning architecture for enhancing low-light images. The project is inspired by the work presented in *Learning Enriched Features for Real Image Restoration and Enhancement* and was developed as part of the EE610: Image Processing course under the guidance of Prof. Amit Sethi at IIT Bombay.
- Introduction
- MIRNet Architecture
- Experiments Conducted
- Results
- Dataset
- Installation
- Usage
- References
## Introduction

Low-light conditions degrade image quality, causing noise, low contrast, and color distortion. Enhancing such images is a challenging problem in computer vision. This project uses MIRNet, a multi-scale convolutional neural network that:
- Maintains spatially precise, high-resolution representations.
- Reduces noise while preserving fine details.
- Improves perceptual quality using hybrid loss functions.

The work focuses on restoring high-quality content from low-light images, using the LoL Dataset in particular.
## MIRNet Architecture

MIRNet introduces the following key components:
- Multi-Scale Residual Block (MRB): captures multi-scale contextual information while maintaining high-resolution details.
- Dual Attention Unit (DAU): combines channel attention and spatial attention for better feature refinement (see the sketch below).
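As a concrete illustration of the DAU, here is a minimal PyTorch sketch that refines a small convolutional body with channel attention and spatial attention. It is a simplified reading of the paper's design and assumes a PyTorch implementation; layer sizes and the exact wiring in this repository's notebooks may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.excite(self.pool(x))

class SpatialAttention(nn.Module):
    """Spatial attention built from channel-wise average and max maps."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        return x * self.conv(torch.cat([avg_map, max_map], dim=1))

class DualAttentionUnit(nn.Module):
    """DAU: shared conv features refined by both attention branches."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        feats = self.body(x)
        out = self.fuse(torch.cat([self.ca(feats), self.sa(feats)], dim=1))
        return x + out  # residual connection keeps the original signal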
The network is trained using:
- Loss: Charbonnier loss, a robust penalty for image restoration (extended with an SSIM term in later experiments).
- Optimizer: Adam.
- Learning rate scheduler: ReduceLROnPlateau.
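A minimal sketch of this training setup, assuming PyTorch: the Charbonnier loss, an Adam optimizer, and a ReduceLROnPlateau scheduler. The learning rate, `eps`, and scheduler hyperparameters are illustrative assumptions, not values taken from the notebooks.

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Charbonnier loss: mean of sqrt((pred - target)^2 + eps^2)."""
    def __init__(self, eps=1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, pred, target):
        diff = pred - target
        return torch.mean(torch.sqrt(diff * diff + self.eps ** 2))

# Hypothetical setup: `model` is a placeholder standing in for MIRNet.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
criterion = CharbonnierLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)  # lr is an assumption
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5
)

# One dummy training step followed by a scheduler update.
low, high = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
loss = criterion(model(low), high)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step(loss.item())  # normally called with the validation loss
```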
## Experiments Conducted

- Implemented MIRNet with the original architecture and parameters.
- Loss function: Charbonnier loss.
- Experimented with Dual Attention Units (DAU) for refining features.
- Improved SSIM and PSNR metrics.
- Added an SSIM loss alongside the Charbonnier loss to emphasize perceptual quality.
- Experimented with adding L1 and L2 regularization (see the sketch below).
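A hedged sketch of the hybrid-loss and regularization experiments: Charbonnier plus a (1 − SSIM) term, with optional L1/L2 weight penalties. It assumes the third-party `pytorch-msssim` package for the SSIM term and uses illustrative weights; the notebooks may compute SSIM and apply regularization differently.

```python
import torch
import torch.nn as nn
from pytorch_msssim import ssim  # assumed dependency: pip install pytorch-msssim

class HybridLoss(nn.Module):
    """Charbonnier + weighted (1 - SSIM); the SSIM term targets perceptual quality."""
    def __init__(self, eps=1e-3, ssim_weight=0.2):
        super().__init__()
        self.eps = eps
        self.ssim_weight = ssim_weight

    def forward(self, pred, target):
        diff = pred - target
        charbonnier = torch.mean(torch.sqrt(diff * diff + self.eps ** 2))
        ssim_term = 1.0 - ssim(pred, target, data_range=1.0)  # images in [0, 1]
        return charbonnier + self.ssim_weight * ssim_term

def l1_l2_penalty(model, l1=1e-6, l2=1e-5):
    """Optional weight regularization added to the training loss (illustrative weights)."""
    l1_term = sum(p.abs().sum() for p in model.parameters())
    l2_term = sum((p ** 2).sum() for p in model.parameters())
    return l1 * l1_term + l2 * l2_term

# Quick check with random tensors of shape (N, C, H, W) in [0, 1].
pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(HybridLoss()(pred, target))
# Inside a training step: loss = HybridLoss()(enhanced, ground_truth) + l1_l2_penalty(model)
```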
Each experiment can be found in:
- `mirnet.ipynb`: Baseline implementation.
- `mirnet_modified.ipynb`: Modified architecture.
- `mirnet_modified_SSIM_loss.ipynb`: Alternative loss functions.
- `mirnet_modified_regularisation.ipynb`: Added regularization.
## Results

| Input | Enhanced (Baseline) | Enhanced (Modified) |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
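The experiments are compared using PSNR and SSIM. A minimal evaluation sketch for a single enhanced/ground-truth pair, using `scikit-image` (an assumed utility, not necessarily what the notebooks use):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray):
    """Compute PSNR and SSIM for one (H, W, 3) image pair with values in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
    return psnr, ssim

# Example with random data; replace with real pairs from the test set.
ref = np.random.rand(256, 256, 3)
out = np.clip(ref + 0.05 * np.random.randn(256, 256, 3), 0, 1)
print(evaluate_pair(out, ref))
```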
## Dataset

The LoL Dataset was used for training and evaluation. It contains paired low-light and well-exposed images:
- Training: 485 image pairs.
- Testing: 15 image pairs.
Dataset link: LoL Dataset
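A sketch of how the paired images could be loaded for training, assuming a PyTorch `Dataset` and the common `low`/`high` sub-directory layout of the LoL Dataset; the actual directory names and preprocessing in the notebooks may differ.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class LoLPairedDataset(Dataset):
    """Loads paired low-light / well-exposed images; directory names are assumptions."""
    def __init__(self, root, low_dir="low", high_dir="high", size=128):
        self.low_dir = os.path.join(root, low_dir)
        self.high_dir = os.path.join(root, high_dir)
        self.files = sorted(os.listdir(self.low_dir))
        self.to_tensor = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),  # scales pixel values to [0, 1]
        ])

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]  # low/high pairs are assumed to share filenames
        low = Image.open(os.path.join(self.low_dir, name)).convert("RGB")
        high = Image.open(os.path.join(self.high_dir, name)).convert("RGB")
        return self.to_tensor(low), self.to_tensor(high)

# Example usage (paths are placeholders for wherever the LoL data is unpacked):
# train_set = LoLPairedDataset("LOLdataset/our485")
# loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True)
```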
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/TheShiningVampire/MIRNET_for_low_light_image_improvement.git
  cd MIRNET_for_low_light_image_improvement
  ```

- Set up the environment:

  ```bash
  python -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```
## Usage

To run the experiments, execute the following Jupyter notebooks:

- Baseline Implementation:

  ```bash
  jupyter notebook mirnet.ipynb
  ```

- Modified Architecture:

  ```bash
  jupyter notebook mirnet_modified.ipynb
  ```

- SSIM Loss Implementation:

  ```bash
  jupyter notebook mirnet_modified_SSIM_loss.ipynb
  ```