Running MRIQC

MRIQC is a BIDS-App [BIDSApps], and therefore it inherently understands the BIDS standard [BIDS]. Before moving forward, please make sure you have read and understood NiPreps's introductory documentation.

Containerized execution with Docker and Singularity/Apptainer

For containerized execution with Docker or Singularity/Apptainer, please follow the documentation on the NiPreps site, which contains tips and troubleshooting guidelines for both Docker and Singularity/Apptainer. In addition to container-specific guidelines, the documentation also includes specific help for processing DataLad-managed datasets.
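
For quick reference, a single-subject participant-level run with Docker typically looks like the following sketch (the local paths and the image tag are placeholders; pin the specific version you intend to use):

docker run -it --rm -v /path/to/bids-root:/data:ro -v /path/to/output:/out nipreps/mriqc:latest /data /out participant --participant-label S01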

The rest of this documentation page applies to both bare-metal and containerized execution modes.

MRIQC can fetch data in DataLad datasets

As of version 22.0.3, MRIQC bundles DataLad, enabling automatic data fetching in DataLad datasets. Employing this feature in containerized environments may lead to somewhat obscure errors (see, for example, nipreps/mriqc#1307). If you intend to use DataLad datasets, please carefully read NiPreps's help for processing DataLad-managed datasets.

Alternatively, this feature can be disabled by adding --no-datalad-get to the command line. This will separate DataLad management from MRIQC’s operation, which can be an effective way of debugging issues and averting erroneous conditions.
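
For example, if the dataset's files have already been fetched (e.g., with datalad get), the participant level can be run with DataLad management disabled:

mriqc bids-root/ output-folder/ participant --no-datalad-get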

Troubleshooting

If you encounter problems, please check our NiPreps Guidelines for Singularity or Apptainer. Common tips and guidelines that used to be found within MRIQC's or fMRIPrep's documentation sites have been relocated to the general NiPreps website.

A BIDS Apps command line interface

MRIQC follows the BIDS Apps standard command line interface:

mriqc bids-root/ output-folder/ participant

That simple command runs MRIQC on all the T1w and BOLD images found under the BIDS-compliant folder bids-root/. The trailing participant keyword indicates that the first-level analysis should be run (i.e., extracting the IQMs from the images found within bids-root/). The second-level (group) analysis is run automatically if no particular subject is selected for analysis.

Note

If the argument --participant-label is not provided, then all subjects will be processed and the group-level analysis will be executed automatically, without the need to run the group command separately.

To specify one particular subject, the --participant-label argument can be used:

mriqc bids-root/ output-folder/ participant --participant-label S01 S02 S03

That command will run MRIQC only on the subjects indicated: only bids-root/sub-S01, bids-root/sub-S02, and bids-root/sub-S03 will be processed. In this case, the group level will not be triggered automatically. The group-level results (the group report and the CSV table of features) are then generated with:

mriqc bids-root/ output-folder/ group

Examples of the generated visual reports are found in The MRIQC Reports.

Warning

By default, MRIQC attempts to upload anonymized quality metrics to a publicly accessible web server (mriqc.nimh.nih.gov). The uploaded data consist only of calculated quality metrics and scanning parameters; all personal health information and participant identifiers are removed. We collect these data to build normal distributions for improved outlier detection. If you do not wish to participate, you can disable the submission with the --no-sub flag.
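
For example, to run the participant level with the submission of IQMs disabled:

mriqc bids-root/ output-folder/ participant --no-sub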

Command line interface

MRIQC 25.1.0.dev18+g35d2055: Automated Quality Control and visual reports for Quality Assessment of structural (T1w, T2w) and functional MRI of the brain.

IMPORTANT: Anonymized quality metrics (IQMs) will be submitted to MRIQC's metrics repository. Submission of IQMs can be disabled using the --no-sub argument. Please visit https://mriqc.readthedocs.io/en/latest/dsa.html to review MRIQC's Data Sharing Agreement.

usage: mriqc [-h] [--version] [-v] [--species {human,rat}]
             [--participant-label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
             [--bids-filter-file PATH] [--session-id [SESSION_ID ...]]
             [--run-id [RUN_ID ...]] [--task-id [TASK_ID ...]]
             [-m [{T1w,T2w,bold,dwi} ...]] [--dsname DSNAME]
             [--bids-database-dir PATH] [--bids-database-wipe]
             [--no-datalad-get] [--nprocs NPROCS]
             [--omp-nthreads OMP_NTHREADS] [--mem MEMORY_GB] [--testing] [-f]
             [--pdb] [-w WORK_DIR] [--verbose-reports] [--reports-only]
             [--write-graph] [--dry-run] [--resource-monitor]
             [--use-plugin USE_PLUGIN] [--crashfile-format {txt,pklz}]
             [--no-sub] [--email EMAIL] [--webapi-url WEBAPI_URL]
             [--webapi-port WEBAPI_PORT] [--upload-strict] [--notrack]
             [--ants-float] [--ants-settings ANTS_SETTINGS]
             [--min-dwi-length MIN_LEN_DWI] [--min-bold-length MIN_LEN_BOLD]
             [--fft-spikes-detector] [--fd_thres FD_THRES] [--deoblique]
             [--despike] [--start-idx START_IDX] [--stop-idx STOP_IDX]
             bids_dir output_dir {participant,group} [{participant,group} ...]

Positional Arguments

bids_dir

The root folder of a BIDS valid dataset (sub-XXXXX folders should be found at the top level in this folder).

output_dir

The directory where the output files should be stored. If you are running group level analysis this folder should be prepopulated with the results of the participant level analysis.

analysis_level

Possible choices: participant, group

Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.

Named Arguments

--version

show program’s version number and exit

-v, --verbose

Increases log verbosity for each occurrence; debug level is -vvv.

--species

Possible choices: human, rat

Use appropriate template for population

Options for filtering BIDS queries

--participant-label, --participant_label, --participant-labels, --participant_labels

A space delimited list of participant identifiers or a single identifier (the sub- prefix can be removed).

--bids-filter-file

A JSON file describing a custom BIDS input filter using PyBIDS, with the form {<suffix>:{<entity>:<filter>,…},…} (see https://github.com/bids-standard/pybids/blob/master/bids/layout/config/bids.json for valid entity names).
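
As an illustration, the following sketch of a filter file restricts the T1w query to session 01 and the BOLD query to a resting-state task (the keys and entity values are illustrative; adjust them to your dataset):

{
    "t1w": {"session": "01"},
    "bold": {"session": "01", "task": "rest"}
}

It would then be passed with --bids-filter-file filter.json (the file name is hypothetical).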

--session-id

Filter input dataset by session ID.

--run-id

DEPRECATED - This argument will be disabled. Use --bids-filter-file instead.

--task-id

Filter input dataset by task ID.

-m, --modalities

Possible choices: T1w, T2w, bold, dwi

Filter input dataset by MRI type.

--dsname

A dataset name.

--bids-database-dir

Path to an existing PyBIDS database folder, for faster indexing (especially useful for large datasets).

--bids-database-wipe

Wipe out previously existing BIDS indexing caches, forcing re-indexing.

--no-datalad-get

Disable attempting to get remote files in DataLad datasets.

Options to handle performance

--nprocs, --n_procs, --n_cpus, -n-cpus

Maximum number of simultaneously running parallel processes executed by MRIQC (e.g., several instances of ANTs' registration). However, when --nprocs is greater than or equal to the --omp-nthreads option, it also sets the maximum number of threads that simultaneously running processes may aggregate (meaning, with --nprocs 16 --omp-nthreads 8, a maximum of two 8-CPU-threaded processes will be running at a given time). Under this mode of operation, --nprocs sets the maximum number of processors that can be assigned work within an MRIQC job, which includes all the processors used by currently running single- and multi-threaded processes. If None, the number of CPUs available will be automatically assigned (which may not be what you want in, e.g., shared systems like an HPC cluster).

--omp-nthreads, --ants-nthreads

Maximum number of threads that multi-threaded processes executed by MRIQC (e.g., ANTs' registration) can use. If None, the number of CPUs available will be automatically assigned (which may not be what you want in, e.g., shared systems like an HPC cluster).
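
For instance, on a 16-CPU node, the following sketch caps MRIQC at 16 parallel processes with at most 8 threads each, so at most two 8-threaded processes run concurrently (the memory bound is illustrative):

mriqc bids-root/ output-folder/ participant --nprocs 16 --omp-nthreads 8 --mem 32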

--mem, --mem_gb, --mem-gb

Upper bound memory limit for MRIQC processes.

--testing

Use testing settings for a minimal footprint.

-f, --float32

Cast the input data to float32 if it’s represented in higher precision (saves space and improves performance).

--pdb

Open Python debugger (pdb) on exceptions.

Instrumental options

-w, --work-dir

Path where intermediate results should be stored.

--verbose-reports
--reports-only
--write-graph

Write workflow graph.

--dry-run

Do not run the workflow.

--resource-monitor, --profile

Hook up the resource profiler callback to nipype.

--use-plugin

Nipype plugin configuration file.
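
A minimal sketch of such a configuration file, assuming Nipype's MultiProc plugin (the resource values are illustrative):

plugin: MultiProc
plugin_args:
  n_procs: 8
  memory_gb: 16

It would then be passed with --use-plugin plugin.yml (the file name is hypothetical).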

--crashfile-format

Possible choices: txt, pklz

Nipype crashfile format

--no-sub

Turn off submission of anonymized quality metrics to MRIQC’s metrics repository.

--email

Email address to include with quality metric submission.

--webapi-url

IP address where the MRIQC WebAPI is listening.

--webapi-port

Port where the MRIQC WebAPI is listening.

--upload-strict

Abort the run if the submission of IQMs fails (by default, upload errors do not interrupt MRIQC's execution).

--notrack

Opt-out of sending tracking information of this run to the NiPreps developers. This information helps to improve MRIQC and provides an indicator of real world usage crucial for obtaining funding.

Specific settings for ANTs

--ants-float

Use float number precision on ANTs computations.

--ants-settings

Path to JSON file with settings for ANTs.

Diffusion MRI workflow configuration

--min-dwi-length

Drop DWI runs with fewer orientations than this threshold.

Functional MRI workflow configuration

--min-bold-length

Drop BOLD runs with fewer time points than this threshold.

--fft-spikes-detector

Turn on FFT based spike detector (slow).

--fd_thres

Threshold on framewise displacement estimates to detect outliers.

--deoblique

Deoblique the functional scans during head motion correction preprocessing.

--despike

Despike the functional scans during head motion correction preprocessing.
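
For example, the following sketch restricts MRIQC to BOLD runs, applies despiking during head-motion correction, and tightens the framewise-displacement threshold (the threshold value is illustrative):

mriqc bids-root/ output-folder/ participant -m bold --despike --fd_thres 0.3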

--start-idx

DEPRECATED Initial volume in functional timeseries that should be considered for preprocessing.

--stop-idx

DEPRECATED Final volume in functional timeseries that should be considered for preprocessing.

Running MRIQC on HPC with Singularity/Apptainer
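
For reference, a typical HPC workflow with Apptainer builds the image from the Docker Hub registry and then runs it with the dataset bind-mounted; the following is a sketch (the image file name, version tag, and paths are placeholders for your system; see the NiPreps guidelines above for details):

apptainer build mriqc_latest.sif docker://nipreps/mriqc:latest
apptainer run --cleanenv -B /path/to/bids-root:/data:ro -B /path/to/output:/out mriqc_latest.sif /data /out participant --nprocs 8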

We have profiled cores and memory usages with the resource profiler tool of Nipype.

An MRIQC run of one subject from the ABIDE dataset, containing only one run of one BOLD task (resting-state), yielded the following results:

Using the MultiProc plugin of Nipype with nprocs=10, the workflow nodes ran across the available processors for 41.68 minutes. A memory peak of 8GB was reached by the end of the runtime, when the plotting nodes were fired up.

We also profiled MRIQC on a dataset with eight tasks (one run per task), using ds030 from OpenfMRI:

Again, we used n_procs=10. The software ran for roughly the same time (47.11 min). For most of the run time, memory usage stayed around a maximum of 10GB. Since we saw a memory consumption of 1-2GB during the 1-task example, a rule of thumb may be that each task takes around 1GB of memory.