Running mriqc


Try MRIQC online on OpenNeuro - without installation!

MRIQC is a BIDS-App [BIDSApps], and therefore it inherently understands the BIDS standard [BIDS] and follows the BIDS-Apps standard command line interface:

mriqc bids-root/ output-folder/ participant

That simple command runs MRIQC on all the T1w and BOLD images found under the BIDS-compliant folder bids-root/. The final participant keyword indicates that the first level analysis is run (i.e., extracting the IQMs from the images retrieved within bids-root/). The second (group) level is automatically run if no particular subject is selected for analysis.


If the --participant-label argument is not provided, then all subjects will be processed and the group level analysis will be executed automatically, without needing to run the group command separately.

To specify one particular subject, the --participant-label argument can be used:

mriqc bids-root/ output-folder/ participant --participant-label S01 S02 S03

That command will run MRIQC only on the subjects indicated: only bids-root/sub-S01, bids-root/sub-S02, and bids-root/sub-S03 will be processed. In this case, the group level will not be triggered automatically. We generate the group level results (the group level report and the CSV table of features) with:

mriqc bids-root/ output-folder/ group

Examples of the generated visual reports are found in The MRIQC Reports.


By default, MRIQC attempts to upload anonymized quality metrics to a publicly accessible web server. The uploaded data consist only of the calculated quality metrics and scanning parameters; all personal health information and participant identifiers are removed. We collect these data to build normal distributions for improved outlier detection, but if you do not wish to participate you can disable the submission with the --no-sub flag.
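For example, to run the participant level while keeping all metrics local, the --no-sub flag described above is simply appended to the usual invocation:

```shell
# Run the participant level without submitting anonymized metrics
# to the MRIQC metrics repository.
mriqc bids-root/ output-folder/ participant --no-sub
```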

BIDS data organization

The software automatically finds the data in the input folder if it follows the BIDS standard [BIDS]. A fast and easy way to check that your dataset fulfills the BIDS standard is the BIDS validator.
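As a sketch, the validator can be run from the command line before launching MRIQC (this assumes Node.js is available; bids-validator is a separate package, not part of MRIQC):

```shell
# Validate the dataset layout against the BIDS standard
# before handing it to MRIQC.
npx bids-validator bids-root/
```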

BIDS-App levels

In the participant level, all individual images to be processed are run through the pipeline, the MRIQC measures are extracted, and the individual reports (see The MRIQC Reports) are generated. In the group level, the IQMs extracted at the participant level are combined in a table and the group reports are generated.

Command line interface

MRIQC: MRI Quality Control

usage: mriqc [-h] [--version]
             [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
             [--session-id SESSION_ID [SESSION_ID ...]]
             [--run-id RUN_ID [RUN_ID ...]] [--task-id TASK_ID [TASK_ID ...]]
             [-m [{T1w,bold,T2w} [{T1w,bold,T2w} ...]]] [-w WORK_DIR]
             [--report-dir REPORT_DIR] [--verbose-reports] [--write-graph]
             [--dry-run] [--profile] [--use-plugin USE_PLUGIN] [--no-sub]
             [--email EMAIL] [-v] [--webapi-url WEBAPI_URL]
             [--webapi-port WEBAPI_PORT] [--upload-strict] [--n_procs N_PROCS]
             [--mem_gb MEM_GB] [--testing] [-f] [--ica] [--hmc-afni]
             [--hmc-fsl] [--fft-spikes-detector] [--fd_thres FD_THRES]
             [--ants-nthreads ANTS_NTHREADS] [--ants-float]
             [--ants-settings ANTS_SETTINGS] [--deoblique] [--despike]
             [--start-idx START_IDX] [--stop-idx STOP_IDX]
             bids_dir output_dir {participant,group} [{participant,group} ...]

Positional Arguments

bids_dir The directory with the input dataset formatted according to the BIDS standard.
output_dir The directory where the output files should be stored. If you are running group level analysis, this folder should be prepopulated with the results of the participant level analysis.
analysis_level

Possible choices: participant, group

Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.
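To illustrate the parallel use of the same output_dir, the participant level can be split across independent processes and the group level run once afterwards (subject labels are hypothetical):

```shell
# Two independent participant-level runs writing to the same
# output folder; the group level is run once at the end.
mriqc bids-root/ output-folder/ participant --participant-label S01 &
mriqc bids-root/ output-folder/ participant --participant-label S02 &
wait
mriqc bids-root/ output-folder/ group
```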

Named Arguments

--version show program's version number and exit

Options for filtering BIDS queries

--participant_label, --participant-label
 one or more participant identifiers (the sub- prefix can be removed)
--session-id select a specific session to be processed
--run-id select a specific run to be processed
--task-id select a specific task to be processed
-m, --modalities

Possible choices: T1w, bold, T2w

select one of the supported MRI types

Instrumental options

-w, --work-dir
--write-graph Write workflow graph.
--dry-run Do not run the workflow.
--profile hook up the resource profiler callback to nipype
--use-plugin nipype plugin configuration file
--no-sub Turn off submission of anonymized quality metrics to MRIQC's metrics repository.
--email Email address to include with quality metric submission.
-v, --verbose increases log verbosity for each occurrence, debug level is -vvv
--webapi-url IP address where the MRIQC WebAPI is listening
--webapi-port port where the MRIQC WebAPI is listening
--upload-strict upload will fail if it is strict

Options to handle performance

--n_procs, --nprocs, --n_cpus, --n-cpus
 number of threads
--mem_gb available total memory
--testing use testing settings for a minimal footprint
-f, --float32 Cast the input data to float32 if it is represented in higher precision (saves space and improves performance)
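Putting these performance options together, a run constrained to a given number of processes and amount of memory could be invoked as follows (the values are illustrative, not recommendations):

```shell
# Limit MRIQC to 8 threads and 16GB of memory, and cast
# higher-precision inputs to float32 to save space.
mriqc bids-root/ output-folder/ participant \
    --n_procs 8 --mem_gb 16 --float32
```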

Workflow configuration

--ica Run ICA on the raw data and include the components in the individual reports (slow but potentially very insightful)
--hmc-afni Use AFNI 3dvolreg for head motion correction (HMC) - default
--hmc-fsl Use FSL MCFLIRT instead of AFNI for head motion correction (HMC)
--fft-spikes-detector Turn on FFT based spike detector (slow).
--fd_thres motion threshold for FD computation
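As an illustration, a functional-only QC run with the optional detectors enabled might combine these flags as below (the FD threshold value is an assumption for the example, not a recommended setting):

```shell
# Restrict processing to BOLD images, enable ICA and the
# FFT-based spike detector, and set a custom FD threshold.
mriqc bids-root/ output-folder/ participant -m bold \
    --ica --fft-spikes-detector --fd_thres 0.2
```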

Specific settings for ANTs

--ants-nthreads number of threads that will be set in ANTs processes
--ants-float use float number precision for ANTs computations
--ants-settings path to JSON file with settings for ANTs

Specific settings for AFNI

--deoblique Deoblique the functional scans during head motion correction preprocessing
--despike Despike the functional scans during head motion correction preprocessing
--start-idx Initial volume in functional timeseries that should be considered for preprocessing
--stop-idx Final volume in functional timeseries that should be considered for preprocessing
--correct-slice-timing Perform slice timing correction
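These AFNI-specific options can be combined in a single functional run; in the sketch below, the volume indices are hypothetical (e.g., dropping the first four volumes as dummy scans):

```shell
# Deoblique and despike the functional scans, and restrict
# preprocessing to volumes 4 through 240 of the timeseries.
mriqc bids-root/ output-folder/ participant -m bold \
    --deoblique --despike --start-idx 4 --stop-idx 240
```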

Running MRIQC on HPC clusters

Singularity containers
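A common pattern on HPC clusters, where Docker is usually unavailable, is to build a Singularity image from a published Docker image and bind the dataset into the container. The image name and tag below are assumptions for the sketch; check the project's distribution channels for the current release:

```shell
# Build a Singularity image from the Docker release
# (image name and tag are assumptions, not prescriptions).
singularity build mriqc.sif docker://nipreps/mriqc:latest

# Run the participant level, binding the dataset (read-only)
# and the output folder into the container.
singularity run --cleanenv \
    -B /data/bids-root:/data:ro -B /data/output:/out \
    mriqc.sif /data /out participant
```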

Requesting resources

We have profiled core and memory usage with the resource profiler tool of nipype.

An MRIQC run of one subject (from the ABIDE dataset), containing only one run of one BOLD task (resting-state), yielded the following report:

Using the MultiProc plugin of nipype with nprocs=10, the workflow nodes ran across the available processors for 41.68 minutes. A memory peak of 8GB is reached near the end of the runtime, when the plotting nodes are fired up.

We also profiled MRIQC on a dataset with 8 tasks (one run per task), on ds030 of OpenfMRI:

Again, we used n_procs=10. The software ran for roughly the same time (47.11 min). For most of the run time, memory usage stays around a maximum of 10GB. Since we saw a memory consumption of 1-2GB during the 1-task example, a rule of thumb may be that each task takes around 1GB of memory.
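Following that rule of thumb, a scheduler request for the 8-task example might reserve on the order of 10-12GB. A hypothetical SLURM batch script (all resource values are illustrative and should be adjusted to your dataset and cluster) could look like:

```shell
#!/bin/bash
#SBATCH --job-name=mriqc
#SBATCH --cpus-per-task=10
#SBATCH --mem=12G
#SBATCH --time=02:00:00

# Keep MRIQC's own limits consistent with the scheduler request.
mriqc bids-root/ output-folder/ participant --n_procs 10 --mem_gb 12
```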