Neuroimaging Quality Control (niQC) special interest group to develop protocols, tools and manuals

TL;DR: We are announcing a special interest group on Neuroimaging Quality Control (niQC), to develop protocols, best practices, manuals and easy-to-use tools. Join us: https://groups.google.com/d/forum/niqc

Background

Assessing the quality of neuroimaging data, be it a raw MR acquisition (an fMRI run, or a diffusion- or T1-weighted scan) or the result of a particular image processing task (e.g. reconstructed gray or white matter surfaces), requires human visual inspection. Given the complex nature, diverse presentations, and three-dimensional anatomy of image volumes, this requires inspection in all three planes and across multiple cross-sections through each volume. Often, looking at raw data alone is not sufficient, especially to spot subtle errors; statistical measurements (e.g. across space or time) can greatly assist in identifying artefacts or rating their severity. For certain data (such as assessing the accuracy of cortical thickness estimates, e.g. those generated by Freesurfer, or reviewing an EPI sequence), multiple types of visualizations (such as surface renderings of the pial surface, or carpet plots with specific temporal statistics for fMRI) and metrics (SNR, CNR, DVARS, Euler number) need to be taken into account for proper quality control (QC). This process is time-consuming and subject to large intra- and inter-rater variability. Inter-rater variability arises from the costly training and “calibration” required between two or more raters using the same annotation protocol. Intra-rater variability stems from an individual rater’s gain in experience over time, but also from human errors, including inaccurate bookkeeping, fatigue, limitations of the annotation protocol/settings that hide or obscure imaging artefacts and other defects, changes in the annotation protocol, etc.
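To make the role of such statistical measurements concrete, below is a minimal sketch (in Python, using nibabel and numpy, with a hypothetical input file name) of two of the fMRI metrics mentioned above: temporal SNR and a simple unnormalized variant of DVARS. Established QC tools implement more careful versions of these.

```python
# A minimal sketch of two fMRI QC metrics: temporal SNR and an
# unnormalized DVARS variant, for illustration only.
import nibabel as nib
import numpy as np

img = nib.load('sub-01_task-rest_bold.nii.gz')  # hypothetical input file
data = img.get_fdata()                          # shape: (x, y, z, time)

# Temporal SNR: mean over time divided by standard deviation over time,
# summarized here as a single median across voxels for brevity.
mean_t = data.mean(axis=-1)
std_t = data.std(axis=-1)
tsnr = np.median(mean_t[std_t > 0] / std_t[std_t > 0])

# DVARS: root-mean-square of the voxelwise signal change between
# consecutive volumes (one value per pair of adjacent frames).
diffs = np.diff(data, axis=-1)
dvars = np.sqrt((diffs ** 2).mean(axis=(0, 1, 2)))

print(f'median tSNR: {tsnr:.1f}')
print(f'DVARS per frame pair: {np.round(dvars, 2)}')
```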

As datasets grow in sample size and number of modalities, there is a great need for developing appropriate quality annotation protocols and corresponding assistive tools. Quality control of neuroimaging data has been studied and reported along different dimensions, including developing algorithms to detect unusable scans built on more interpretable features (i.e. image quality metrics) or less interpretable features extracted from images, and through visual screening following a prescribed protocol and visualization tools. However, a common lesson learnt from previous research across multiple modalities is that the accuracy of these “automatic” methods is too low to be relied on for routine usage, and that manual visual inspection remains necessary.
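To illustrate what such “automatic” flagging typically looks like, here is a hedged sketch (using pandas and scikit-learn, with hypothetical file and column names) that fits an outlier detector on a table of image quality metrics (IQMs) and flags scans for mandatory visual review, rather than outright rejection:

```python
# A hedged sketch of 'automatic' flagging over image quality metrics.
import pandas as pd
from sklearn.ensemble import IsolationForest

iqms = pd.read_csv('group_iqms.csv')            # hypothetical IQM table
features = iqms[['snr', 'cnr', 'efc', 'fber']]  # hypothetical IQM columns

detector = IsolationForest(contamination=0.05, random_state=0)
iqms['flagged'] = detector.fit_predict(features) == -1

# Flagged scans are candidates for review, not rejections: as noted above,
# such automatic flags are not accurate enough to replace visual inspection.
print(iqms.loc[iqms['flagged'], 'subject_id'])  # hypothetical ID column
```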

As we attempt to build a cohesive tool, we are confronted with many open questions, such as:

  • what kind of preprocessing needs to be applied before review?
  • what are the widely accepted definitions for old and recently discovered artefacts?
  • what are the acceptable grades of artefact severity?
  • what quality metrics should be used for what QC purpose?
  • what steps of different pipelines must be QCed?
  • when can we rely on automatic methods?
  • should we enforce QC to be a requirement for all publications (during peer-review)?
  • how do we reduce differences in the way QC processes are reported?

These aspects are yet to be widely and openly discussed with a view towards defining best practices. Such open discussion and wider consultation are necessary for the development of protocols. Establishing a task force would facilitate this much-needed discussion and encourage wide participation.

In addition to the above discussion, there is a clear need for developing educational resources, such as comprehensive manuals (covering all modalities of neuroimaging data and different approaches to QC and curation), as well as easy-to-use tools that integrate educational components well with the developed manuals and protocols.

Hence, we have come together to announce this special interest group on Neuroimaging Quality Control (niQC), with the following overarching goals:

  • develop a comprehensive manual for quality control of neuroimaging data,
  • develop guidelines and best practices for conducting and reporting QC,
  • publish protocols for different use cases,
  • develop easy-to-use tools implementing those guidelines and protocols,
    • integrating manuals and educational components when possible.

Join us: https://groups.google.com/d/forum/niqc

Frequently Asked Questions (FAQ)

  • Do we really need more tools? How does this compare to existing tools?
    • There are many great tools now for preprocessing your neuroimaging data (structural, functional or diffusion), but their design and primary purpose tend to be batch processing. Some tools produce a useful set of diagnostics to QC their outputs and to troubleshoot the pipeline. However, each of these pipelines is only one of many ways to preprocess and analyze one’s data. For a given dataset, there are many ways to process and analyze it, but QC is common to them all. Hence, separating the QC tools from the processing tools could be valuable. It can also lead to the development of tools that are more comprehensive (broader in scope) and more sustainable (not all types of QC are important for a given pipeline).
  • QC of one modality/processing step is complicated enough, and you aim to target all of neuroimaging data. With so many use cases, isn’t the scope just too big?
    • Yes, this is certainly a bigger effort than a typical assistive tool. As I read through the QC literature on T1w, fMRI and DWI, I am realizing there are many common threads:
      • all of them are imaging data on a rectangular grid,
      • artefacts due to MR physics (ringing, ghosting, etc.) are common to them,
      • non-MR artefacts due to the participant (motion, etc.) are also common to them; in addition, the origins of many artefacts are shared, even though they vary in their manifestation in specific instances,
      • the ways to detect and rate artefacts are also common, e.g. visual inspection to judge SNR/CNR/anatomical accuracy,
      • animation through volumes, over time or over gradients, to reveal more subtle or complicated artefacts,
      • spatial alignment checks are common to all modalities, e.g. against an anatomical reference, or against other volumes of the same modality from the same session (see the first sketch after this FAQ).
  • Is developing the tool(s) the only goal?
    • No. In addition to building a tool to assist in quality control, we hope to contribute towards education via a comprehensive manual and regular workshops. For example, the review tool could produce examples of a particular type of artefact when requested, so a [novice] rater could be more confident about their rating. A good way to present an artefact would be to show several different variations of it, clean examples without this artefact present, some background information on the source of this artefact, and how to grade its severity.
  • Okay, we have good tool(s), well-defined manuals and some consensus on their importance and implementation. So what? What’s the guarantee they will be used in practice?
    • We hope to encourage adoption of the best practices with the following activities:
      • raising awareness of the importance of QC at various stages of processing,
      • organizing regular QC workshops and technical demos around versioned protocols (e.g. visualqc-func-mri-raw-epi v0.42; see the second sketch after this FAQ),
      • working with editors and reviewers to utilize these versioned protocols, minimizing misunderstandings of data quality and helping to improve and harmonize it,
      • utilizing manual ratings and QC data produced by publications on open datasets to demonstrate the utility of such common tools,
      • crowd-sourcing the QC of ever-larger datasets, and using the manual ratings to further improve outlier detection and automatic alert strategies.
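As a concrete illustration of the alignment checks mentioned in the FAQ above, here is a minimal sketch (in Python, using nibabel and numpy, with hypothetical file names) that scores the agreement between two volumes via histogram-based mutual information, assuming both have already been resampled onto the same grid:

```python
# A minimal sketch of a spatial alignment check via mutual information.
import nibabel as nib
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two 1-D arrays."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float((pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])).sum())

# Hypothetical file names; both volumes assumed to be on the same grid.
epi = nib.load('mean_epi_in_t1w_space.nii.gz').get_fdata().ravel()
t1w = nib.load('sub-01_T1w.nii.gz').get_fdata().ravel()

# A value that is low relative to other sessions acquired with the same
# protocol suggests a misalignment worth a closer visual look.
print(f'MI(EPI, T1w) = {mutual_information(epi, t1w):.3f}')
```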
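And as a sketch of what a versioned protocol could look like in machine-readable form, so that ratings reported in a publication can cite an exact protocol version, here is a hypothetical record (the field names are illustrative, not an agreed standard):

```python
# A hypothetical machine-readable, versioned QC protocol and rating record.
import json

protocol = {
    'protocol_id': 'visualqc-func-mri-raw-epi',  # from the example above
    'version': '0.42',
    'views': ['axial', 'sagittal', 'coronal', 'carpet'],
    'metrics': ['tSNR', 'DVARS', 'framewise_displacement'],
    'severity_grades': ['none', 'mild', 'moderate', 'severe'],
}

rating = {
    'protocol': f"{protocol['protocol_id']} v{protocol['version']}",
    'scan': 'sub-01_task-rest_bold',             # hypothetical scan ID
    'rater': 'rater-03',
    'grade': 'mild',
    'artefacts': ['ghosting'],
}
print(json.dumps(rating, indent=2))
```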

Current members

All members of the niQC Google group are considered members of the SIG.

Here is the original list of members we started the SIG with:

  • Pradeep Reddy Raamana
  • Stephen C. Strother
  • Ben A. Inglis
  • Jean Chen
  • Robert Cox
  • Pierre Bellec
  • Anisha Keshavan
  • Lei Liew
  • Oscar Esteban
  • Jean-Baptiste Poline
  • Sean Hill
  • Simon B. Eickhoff
  • David Kennedy
  • Simon Duchesne
  • Ted Satterthwaite
  • Thomas Nichols
  • Martin Lindquist
  • Michael Milham
  • Joset A. Etzel
  • Erin Dickie
  • Ali Golestani
  • David Rotenberg
  • You?

Acknowledgements

I’d like to acknowledge the comments and help from Ben A. Inglis, Stephen C. Strother and Oscar Esteban on earlier versions of this document, which improved it significantly.
