major update to the docs - still more work to do
Pradeep Reddy Raamana committed Apr 2, 2018
1 parent f5d74fb commit b25524a
Showing 56 changed files with 301 additions and 111 deletions.
2 changes: 1 addition & 1 deletion AUTHORS.rst
@@ -5,7 +5,7 @@ Credits
Development Lead
----------------

* Pradeep Reddy Raamana <raamana@gmail.com>
* `Pradeep Reddy Raamana <https://www.crossinvalidation.com>`_

Contributors
------------
7 changes: 7 additions & 0 deletions HISTORY.rst
@@ -2,6 +2,13 @@
History
=======


0.3 (2018-04-02)
------------------

* Major update with multiple new use cases.


0.1 (2018-02-08)
------------------

77 changes: 36 additions & 41 deletions README.rst
@@ -1,7 +1,9 @@
==========
visualqc
VisualQC
==========

.. image:: https://zenodo.org/badge/105958496.svg
:target: https://zenodo.org/badge/latestdoi/105958496

.. image:: https://img.shields.io/pypi/v/visualqc.svg
:target: https://pypi.python.org/pypi/visualqc
@@ -10,61 +12,54 @@ visualqc
:target: https://www.codacy.com/app/raamana/visualqc?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=raamana/visualqc&amp;utm_campaign=Badge_Grade


Tool to automate the quality control workflow of MRI segmentations (gray and white matter, cortical, subcortical and other arbitrary segmentations) produced by Freesurfer and other tools.
VisualQC: an assistive tool to improve the quality control workflow of neuroimaging data.

.. image:: docs/vqc_logo_small.png
.. image:: vqc_logo_small.png

Assessing and guranteeing the accuracy of any automatic segmentation (be it gray or white surfaces for cortical thickness, or a subortical segmentation) requires manual visual inspection. Not just one slice. Or one view. But many slices in all the views to ensure the 3d segmentation is accurate at the voxel-level. This process, in its most banal form, is quite cumbersome and time-consuming. Without any assistive tool, it requires opening both the MRI and segmentation for one subject in an editor that can overlay and color them properly, and manually reviewing one slice at a time, navigate through many many slices, and record your rating in a spreadsheet. And repeat this process for multiple subjects. In some even more demanding tasks (such as assessing the accuracy of cortical thickness e.g. generated by Freesurfer), you may need to review multiple types of visualizations, such as surface-redering with different labels colored appropriately, in addition to voxel-wise overlay on MRI. Without an automatic tool, this process allows too many human mistakes, over the span of 100s of subjects over many weeks jumping through multiple visualization software and spreadsheets. ``visualqc`` aims to reduce that to a single command to seamlessly record the ratings of accuracy and navigate through 100s of subjects with ease. All you need to do is sit back, focus your exper eye on the accuracy and ``visualqc`` takes care of the flow and bookkeeping.
Assessing and assuring the quality of imaging data, be it a raw acquisition (an fMRI run or a T1w MRI) or an automatic segmentation (gray or white surfaces for cortical thickness, or a subcortical segmentation), requires manual visual inspection. Not just one slice. Or one view. But many slices in all the views, to ensure the 3D segmentation is accurate at the voxel level. Often, looking at the raw data alone is not sufficient to spot subtle errors; statistical measurements (across space or time) greatly assist in rating the quality of an image or the severity of any artefacts spotted.

Neuroimagers familiar with `ENIGMA quality control (QC) protocols <http://enigma.ini.usc.edu/protocols/imaging-protocols/>`_ would especially find this tool much easier. In addition to integrating valuable experience and knowledge from those protocols, this tool makes it easy so you don't have to deal with multiple scripts (to generate images and combine visualizations), and no alternating between multiple spreadsheets to keep track of ratings. Additional advantages include zooming in and needing to use only a single tool to QC both cortical and subcortical segmentations.
This manual process, in its simplest form, is quite cumbersome and time-consuming. Without any assistive tool, it requires opening both the MRI and the segmentation for one subject in an editor that can overlay and color them properly, manually reviewing one slice at a time, navigating through many, many slices, and recording your rating in a spreadsheet. And then repeating this process for multiple subjects. In some even more demanding tasks (such as assessing the accuracy of cortical thickness, e.g. as generated by Freesurfer, or reviewing an EPI sequence), you may need to review multiple types of visualizations (such as surface renderings of the pial surface, or carpet plots with specific temporal statistics in fMRI), in addition to the voxel-wise data. Without an automated tool, this logistical process allows too many human mistakes, especially as you flip through 100s of subjects over many weeks, jumping between multiple visualization programs and spreadsheets. Moreover, careful use of outlier detection techniques on dataset-wide statistics (across all the subjects in a dataset) can help identify subtle errors (such as a small ROI with an unrealistic thickness value) that would otherwise go undetected.

* Free software: MIT license
* Documentation: https://raamana.github.io/visualqc
``VisualQC``, purpose-built for rigorous quality control, aims to reduce this laborious process to a single command: it seamlessly presents the relevant composite visualizations while alerting the user to any outliers, offers an easy way to record ratings, and lets you navigate through 100s of subjects with ease. All you need to do is sit back and focus your expert eye on the data; ``VisualQC`` takes care of the flow and bookkeeping.

* Free software: Apache license
* Documentation: https://visualqc.readthedocs.io.

Features
--------

* Makes the review and rating workflow seamless and easy! It is simple as: visualize the auto-generated overlay, review, zoom-in wherever you need, rate the quality, make notes and proceed to next!
* Automatically detect and flag outliers (in testing) based on over 500 measurements from Freesurfer
* Display multiple slices in multiple views, and easily navigate all subjects in a dataset
* Allows you to zoom in to any view/slice to ensure you won't miss any detail. No need to squint your eyes!
* Keyboard shortcuts to speed up the process, no need to lift your fingers!
* Allows to make arbitrary notes on the current segmentation/parcellation
* Allows you to control the transparency of overlay to your expert preference
* Allows to focus on a single or a set of arbitrary segmentations (hippocampus, or PCG or DMN etc) if necessary.

Gallery (contour)
-----------------

Some examples of how the interface looks are shown below. The first screenshot showcases the use case wherein we can review the accuracy of Freesufer's cortical parcellation against the original MRI (note that only one view is shown and one panel is zoomed-in):

.. image:: docs/vis/contour/visual_qc_cortical_contour__Pitt_0050034_v1_ns18_4x6.png
Use-cases supported
------------------------

In this screenshot, we show the user interface showing the elements to rating, notes and alerts from outlier detection module:
VisualQC supports the following use cases (a quick command-line sketch follows this list):

.. image:: docs/vis/contour/new_ui_with_outlier_alert_notes.png
* Functional MRI scans (focused visual review, with rich and custom-built visualizations)
* Freesurfer cortical parcellations (accuracy of pial/white surfaces on T1w MRI)
* Structural T1w MRI scans (artefact rating)
* Volumetric segmentation accuracy (on T1w MRI)
* Registration quality (spatial alignment) within a single modality (multimodal support coming)
* For your own important use case, feel free to `contact me <https://www.crossinvalidation.com>`_
* A few other use cases are under discussion and may be added soon.
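Each of these is served by its own command-line tool (a minimal sketch; the entry-point names below are taken from the command line usage pages later in these docs, and each tool lists its full option set via ``--help``):

.. code-block:: bash

    visualqc_freesurfer --help    # Freesurfer parcellations and surfaces
    visualqc_func_mri   --help    # functional MRI scans
    visualqc_t1_mri     --help    # structural T1w MRI scans
    visualqc_alignment  --help    # registration / spatial alignment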

In the screenshot below, we show the use case for a single label (subcortical segmentation, tissue class or cortical ROI) - shown here are hippocampus and amygdala:

.. image:: docs/vis/contour/visual_qc_labels_contour_53_Pitt_0050039_v012_ns18_9x6.png

We can also add nearby amygdala:

.. image:: docs/vis/contour/visual_qc_labels_contour_53_54_Pitt_0050036_v02_ns21_6x7.png

And you can add as many ROIs as you like:
Features
--------

.. image:: docs/vis/contour/visual_qc_labels_contour_10_11_12_13_NYU_0051036.png
Each use case aims to offer the following features:

ROIs could be from anywhere in the MRI (including big cortical labels too!). For example, let's look at Insula (label 1035 in Freesurfer ColorLUT) in the left hemi-sphere :
* Ability to zoom in to any slice displayed, to ensure you won't miss any detail (down to the voxel level), so you can rate its quality with confidence.
* Automatically detect and flag outliers during review (multivariate high-dimensional outlier detection)
* Display multiple slices in multiple views, and easily navigate all subjects in a dataset
* Keyboard shortcuts to speed up the process, no need to lift your fingers!
* Allows you to make arbitrary notes on the current review session
* Allows you to customize the visualizations to your expert preference (such as removing certain overlays, controlling the transparency, or changing how the two images are blended together).

.. image:: docs/vis/contour/visual_qc_labels_contour_1035_Pitt_0050032_v02_ns21_6x7.png
Galleries
----------

And, how about middle temporal?
* :doc:`doc/gallery_freesurfer`
* :doc:`doc/gallery_functional_mri`
* :doc:`doc/gallery_t1_mri`
* :doc:`doc/gallery_registration_unimodal`
* :doc:`doc/gallery_segmentation_volumetric`

.. image:: docs/vis/contour/visual_qc_labels_contour_2015_Pitt_0050035_v02_ns21_6x7.png

Let's just focus on axial view to get more detail:

.. image:: docs/vis/contour/visual_qc_labels_contour_2015_Pitt_0050039_v2_ns27_3x9.png
22 changes: 22 additions & 0 deletions docs/citation.rst
@@ -0,0 +1,22 @@

If you used ``VisualQC`` in your analyses or projects (whether or not its use is reported in the main text), please cite it:

- Pradeep Reddy Raamana. VisualQC: Assistive tools for easy and rigorous quality control of neuroimaging data. http://doi.org/10.5281/zenodo.1211365

Bibtex:

.. code-block:: tex

@misc{pradeep_reddy_raamana_2018_1211365,
author = {Pradeep Reddy Raamana},
title = {{VisualQC: Assistive tools for easy and rigorous
quality control of neuroimaging data}},
month = apr,
year = 2018,
doi = {10.5281/zenodo.1211365},
url = {https://doi.org/10.5281/zenodo.1211365}
}


.. image:: https://zenodo.org/badge/105958496.svg
:target: https://zenodo.org/badge/latestdoi/105958496
9 changes: 9 additions & 0 deletions docs/cli_alignment.rst
@@ -0,0 +1,9 @@
Command line usage - Alignment
---------------------------------

.. argparse::
:module: visualqc.alignment
:func: get_parser
:prog: visualqc_alignment
:nodefault:
:nodefaultconst:
9 changes: 9 additions & 0 deletions docs/cli_freesurfer.rst
@@ -0,0 +1,9 @@
Command line usage - Freesurfer
---------------------------------

.. argparse::
:module: visualqc.freesurfer
:func: get_parser
:prog: visualqc_freesurfer
:nodefault:
:nodefaultconst:
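
A minimal invocation might look like the following (a sketch mirroring the command shown on the segmentation example page; the folder and file names are placeholders for your own data):

.. code-block:: bash

    visualqc_freesurfer --in_dir /project/MR_segmentation --mri_name mri.nii --seg_name roi_set.nii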
9 changes: 9 additions & 0 deletions docs/cli_func_mri.rst
@@ -0,0 +1,9 @@
Command line usage - Functional MRI
-----------------------------------

.. argparse::
:module: visualqc.functional_mri
:func: get_parser
:prog: visualqc_func_mri
:nodefault:
:nodefaultconst:
9 changes: 9 additions & 0 deletions docs/cli_t1_mri.rst
@@ -0,0 +1,9 @@
Command line usage - T1w MRI
---------------------------------

.. argparse::
:module: visualqc.t1_mri
:func: get_parser
:prog: visualqc_t1_mri
:nodefault:
:nodefaultconst:
2 changes: 2 additions & 0 deletions docs/examples_alignment.rst
@@ -0,0 +1,2 @@
Example usage - Alignment
----------------------------
4 changes: 2 additions & 2 deletions docs/examples.rst → docs/examples_freesurfer.rst
@@ -1,5 +1,5 @@
Examples (Freesurfer)
----------------------
Example usage - Freesurfer
----------------------------

A rough example of usage can be:

docs/examples_generic.rst → docs/examples_freesurfer_generic.rst
@@ -1,14 +1,14 @@
Examples (Generic)
----------------------
Example usage - Segmentation
-----------------------------

In some use cases, you may want to overlay images from arbitrary locations without any pre-defined structure or heirarchy like Freesurfer. This is possible in ``visualqc`` by specifying:
In some use cases, you may want to overlay images from arbitrary locations, without any pre-defined structure or hierarchy like Freesurfer's. This is possible in ``visualqc_freesurfer`` by specifying:

1. path to parent folder of the images using ``--user_dir`` or ``-u`` option, which contains a seprate folder for each subject ID
1. path to parent folder of the images using ``--user_dir`` or ``-u`` option, which contains a separate folder for each subject ID
2. name of the anatomical MRI with ``--mri_name`` (or ``-m``) and
3. name of the segmentation with ``--seg_name`` (or ``-g``) that is to be overlaid on the MRI.


If you would like to review all the subjects (each with their own folder) in ``/project/MR_segmentation``, who segmentation(s) are stored in ``roi_set.nii`` whose T1/anatomical MRI is stored in ``mri.nii``. The folder heirarchy (within ``/project/MR_segmentation``) might look like this:
Suppose you would like to review all the subjects (each with their own folder) in ``/project/MR_segmentation``, whose segmentation(s) are stored in ``roi_set.nii`` and whose T1/anatomical MRI is stored in ``mri.nii``. The folder hierarchy (within ``/project/MR_segmentation``) might look like this:

.. code-block:: bash
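    # an illustrative sketch of the expected layout: each subject folder
    # contains both the anatomical MRI and the segmentation to be overlaid
    /project/MR_segmentation/
        atlas1/
            mri.nii
            roi_set.nii
        atlas2/
            mri.nii
            roi_set.nii
        sub_01/
            mri.nii
            roi_set.nii
        sub_04/
            mri.nii
            roi_set.nii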
@@ -31,10 +31,10 @@ In that case, you would issue the following command:
.. code-block:: bash
visualqc --in_dir /project/MR_segmentation --mri_name mri.nii --seg_name roi_set.nii
visualqc_freesurfer --in_dir /project/MR_segmentation --mri_name mri.nii --seg_name roi_set.nii
This will process the four subjects (atlas1, atlas2, sub_01, sub_04) sequentially, and creates an output directory called ``visualqc`` in the input directory specified ``/project/MR_segmentation``, to store the visualizations generated, along with the ratings and notes provided by the user. You can also change the output directory with the ``-o`` option. You can also limit the review to a subject of IDs, by using a predefined list by a specifiying an id list with ``--id_list`` or ``-i`` option, containing one ID per line. An example (focusing only on the 2 atlases) could like:
This will process the four subjects (atlas1, atlas2, sub_01, sub_04) sequentially, and create an output directory called ``visualqc`` inside the specified input directory ``/project/MR_segmentation`` to store the visualizations generated, along with the ratings and notes provided by the user. You can change the output directory with the ``-o`` option. You can also limit the review to a subset of IDs by supplying a predefined list with the ``--id_list`` (or ``-i``) option, containing one ID per line. An example (focusing only on the 2 atlases) could look like:
.. code-block:: bash
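    # an illustrative sketch: id_list.txt is a hypothetical text file listing
    # one subject ID per line (here the two atlases, atlas1 and atlas2)
    visualqc_freesurfer --in_dir /project/MR_segmentation --mri_name mri.nii \
                        --seg_name roi_set.nii --id_list id_list.txt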
2 changes: 2 additions & 0 deletions docs/examples_func_mri.rst
@@ -0,0 +1,2 @@
Example usage - Functional MRI
------------------------------
2 changes: 2 additions & 0 deletions docs/examples_t1_mri.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
Example usage - T1w MRI
----------------------------
6 changes: 3 additions & 3 deletions docs/file_formats.rst
@@ -1,18 +1,18 @@
Data formats and requirements
-----------------------------
===============================

``visualqc`` relies on `nibabel <http://nipy.org/nibabel/>`_ to read the input image data, and supports all the formats that nibabel can read.

**The only requirement is that the two images to be overlaid must have the same shape, in both dimensions and size.**

And, for a given subject ID, these two images must be in the same folder (although this might be relaxed in the future with a more generic input mechanism).
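
A quick way to check the shape requirement ahead of time (a sketch querying ``nibabel`` directly from the shell; the file names are placeholders):

.. code-block:: bash

    python -c "import nibabel as nib; print(nib.load('mri.nii').shape, nib.load('roi_set.nii').shape)"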

Following formats are strongly encouraged:
The following imaging formats are strongly encouraged:

- Nifti
- MGH/Freesurfer

while the following formats are theoretically supported (but are not tested regularly):
while the following formats are supported (as they can be read via ``nibabel``), they are not routinely tested:

- MINC (1/2)
- gifti
3 changes: 0 additions & 3 deletions docs/gallery_freesurfer.rst
@@ -34,7 +34,4 @@ Let's just focus on axial view to get more detail:
.. image:: vis/fs_contour/visual_qc_labels_contour_2015_Pitt_0050039_v2_ns27_3x9.png


Gallery - Freesurfer Parcellation (filled)
------------------------------------------------

Fore more visualizations e.g. those with filled labels instead of contours, refer to :doc:`gallery_freesurfer_filled`.
30 changes: 30 additions & 0 deletions docs/gallery_freesurfer_filled.rst
@@ -0,0 +1,30 @@
Gallery - Freesurfer (labels filled)
===================================================

Freesurfer's cortical parcellation against the original MRI (note that one panel is zoomed in):

.. image:: vis/fs_filled/cortical_zoomed_in.png

In the second screenshot, we show the use case for a single label (a subcortical segmentation, tissue class or cortical ROI); shown here is the hippocampus:

.. image:: vis/fs_filled/hippocampus_not_zoomed_in.png

Focusing on multiple subcortical structures:

.. image:: vis/fs_filled/subcortical_multiple.png

And you can add as many ROIs as you like:

.. image:: vis/fs_filled/subcortical_even_more.png

ROIs could be from anywhere in the MRI (including big cortical labels too!). For example, let's look at the posterior cingulate (label 1023 in the Freesurfer ColorLUT) in the left hemisphere:

.. image:: vis/fs_filled/lh-posteriorcingulate_1023.png

And, how about insula?

.. image:: vis/fs_filled/insula_1035.png

You can also combine as many cortical ROIs as you wish and zoom in on them to get every detail you need to judge their accuracy:

.. image:: vis/fs_filled/single_label_cortical_zoomed_in.png
4 changes: 2 additions & 2 deletions docs/gallery_functional_mri.rst
@@ -1,3 +1,3 @@
Functional MRI scan - artefact detection and rating
------------------------------------------------------
Gallery - Functional MRI scan - artefact detection and rating
--------------------------------------------------------------

2 changes: 1 addition & 1 deletion docs/gallery_registration_unimodal.rst
@@ -1,3 +1,3 @@
Registration - comparison of spatial alignment
Gallery - Registration: comparison of spatial alignment
--------------------------------------------------------------------

2 changes: 1 addition & 1 deletion docs/gallery_segmentation_volumetric.rst
@@ -1,3 +1,3 @@
Segmentation/ROI - anatomical accuracy evaluation
Gallery - Segmentation/ROI - anatomical accuracy evaluation
--------------------------------------------------------------------

2 changes: 1 addition & 1 deletion docs/gallery_t1_mri.rst
@@ -1,3 +1,3 @@
Structural T1w MRI - artefact detection and rating
Gallery - Structural T1w MRI - artefact detection and rating
--------------------------------------------------------------------

10 changes: 0 additions & 10 deletions docs/how_to.rst

This file was deleted.

16 changes: 12 additions & 4 deletions docs/index.rst
@@ -10,18 +10,26 @@ Contents:
installation
use_cases
recommended_usage
usage
usage_all
file_formats
how_to
interface
gallery_freesurfer
gallery_freesurfer_filled
gallery_functional_mri
gallery_registration_unimodal
gallery_segmentation_volumetric
gallery_t1_mri
examples
examples_generic
cli_freesurfer
cli_func_mri
cli_alignment
cli_t1_mri
examples_freesurfer
examples_freesurfer_generic
examples_func_mri
examples_alignment
examples_t1_mri
contributing
citation
authors
history

8 changes: 4 additions & 4 deletions docs/installation.rst
@@ -14,13 +14,12 @@ To install `visualqc`, run this command in your terminal:
$ pip install -U visualqc
This is the preferred method to install visualqc, as it will always install the most recent stable release.
This is the preferred method to install ``visualqc``, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
If you don't have `Python`_ or `pip`_ installed, the following guides can help you through the process:

.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
.. _Python: http://docs.python-guide.org/en/latest/starting/installation/


Requirements
@@ -36,6 +35,7 @@ Requirements
- scipy
- numpy
- scikit-learn
- nilearn


From sources