Running mriqc

Command line interface

MRIQC: MRI Quality Control

usage: mriqc [-h] [-v]
             [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
             [--session-id SESSION_ID [SESSION_ID ...]]
             [--run-id RUN_ID [RUN_ID ...]] [--task-id TASK_ID [TASK_ID ...]]
             [-m [{T1w,bold,T2w} [{T1w,bold,T2w} ...]]] [-w WORK_DIR]
             [--report-dir REPORT_DIR] [--verbose-reports] [--write-graph]
             [--dry-run] [--profile] [--use-plugin USE_PLUGIN] [--no-sub]
             [--email EMAIL] [--n_procs N_PROCS] [--mem_gb MEM_GB] [--testing]
             [-f] [--ica] [--hmc-afni] [--hmc-fsl] [--fft-spikes-detector]
             [--fd_thres FD_THRES] [--ants-nthreads ANTS_NTHREADS]
             [--ants-settings ANTS_SETTINGS] [--deoblique] [--despike]
             [--start-idx START_IDX] [--stop-idx STOP_IDX]
             [--correct-slice-timing]
             bids_dir output_dir {participant,group} [{participant,group} ...]

Positional Arguments

bids_dir The directory with the input dataset formatted according to the BIDS standard.
output_dir The directory where the output files should be stored. If you are running a group level analysis, this folder should be prepopulated with the results of the participant level analysis.
analysis_level

Possible choices: participant, group

Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.

Named Arguments

-v, --version show program's version number and exit

Options for filtering BIDS queries

--participant_label, --participant-label
 one or more participant identifiers (the sub- prefix can be removed)
--session-id select a specific session to be processed
--run-id select a specific run to be processed
--task-id select a specific task to be processed
-m, --modalities
 Possible choices: T1w, bold, T2w
 select one of the supported MRI types
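For instance, the filters can be combined to restrict a participant-level run to the BOLD scans of a single subject, session and task; the labels S001, 01 and rest below are illustrative placeholders only:

mriqc bids-dataset/ out/ participant --participant_label S001 --session-id 01 --task-id rest -m bold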

Instrumental options

-w, --work-dir path where intermediate results should be stored
--report-dir path where the reports will be written
--verbose-reports
 generate more detailed (verbose) reports
--write-graph Write workflow graph.
--dry-run Do not run the workflow.
--profile hook up the resource profiler callback to nipype
--use-plugin nipype plugin configuration file
--no-sub Turn off submission of anonymized quality metrics to MRIQC's metrics repository.
--email Email address to include with quality metric submission.
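As an example, the following call keeps the working directory on a local path and opts out of metrics submission; the work/ path is just a placeholder:

mriqc bids-dataset/ out/ participant -w work/ --no-sub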

Options to handle performance

--n_procs, --nprocs, --n_cpus
 number of threads
--mem_gb available total memory
--testing use testing settings for a minimal footprint
-f, --float32 Cast the input data to float32 if it's represented in higher precision (saves space and improves performance)
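As an illustration, the following invocation caps the run at 4 parallel processes and 8 GB of memory; the exact values are assumptions that should be adapted to your machine:

mriqc bids-dataset/ out/ participant --n_procs 4 --mem_gb 8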

Workflow configuration

--ica Run ICA on the raw data and include the components in the individual reports (slow but potentially very insightful)
--hmc-afni Use AFNI 3dvolreg for head motion correction (HMC) - default
--hmc-fsl Use FSL MCFLIRT instead of AFNI for head motion correction (HMC)
--fft-spikes-detector
 Turn on FFT based spike detector (slow).
--fd_thres motion threshold for FD computation
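For instance, a functional-only run that switches HMC to FSL MCFLIRT and enables the FFT spike detector could look as follows; the FD threshold of 0.2 mm is only an illustrative value, not a recommendation:

mriqc bids-dataset/ out/ participant -m bold --hmc-fsl --fft-spikes-detector --fd_thres 0.2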

Specific settings for ANTs

--ants-nthreads
 number of threads that will be set in ANTs processes
--ants-settings
 path to JSON file with settings for ANTs
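As a sketch, ANTs could be limited to 8 threads and pointed at a custom settings file; my_ants_settings.json is a hypothetical file name used only for illustration:

mriqc bids-dataset/ out/ participant --ants-nthreads 8 --ants-settings my_ants_settings.json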

Specific settings for AFNI

--deoblique Deoblique the functional scans during head motion correction preprocessing
--despike Despike the functional scans during head motion correction preprocessing
--start-idx Initial volume in functional timeseries that should be considered for preprocessing
--stop-idx Final volume in functional timeseries that should be considered for preprocessing
--correct-slice-timing
 Perform slice timing correction
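For example, to deoblique and despike the functional scans and ignore the first volumes (the index 4 below is only illustrative, e.g. to discard dummy scans):

mriqc bids-dataset/ out/ participant -m bold --deoblique --despike --start-idx 4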

“Bare-metal” installation (Python 2/3)

The software automatically finds the data in the input folder if it follows the BIDS standard [BIDS]. A fast and easy way to check that your dataset fulfills the BIDS standard is the BIDS validator.
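For example, assuming the bids-validator command-line tool is installed (e.g. with npm install -g bids-validator), the dataset can be checked with:

bids-validator bids-dataset/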

Since mriqc follows the [BIDSApps] specification, the execution is split into two consecutive steps: a first (or participant) level followed by a second (or group) level. In the participant level, all individual images to be processed are run through the pipeline, the MRIQC measures are extracted, and the individual reports (see The MRIQC Reports) are generated. In the group level, the IQMs extracted in the first step are combined into a table and the group reports are generated.

The first (participant) level is executed as follows:

mriqc bids-dataset/ out/ participant

Please note the keyword participant as the third positional argument. It is possible to run mriqc on specific subjects using

mriqc bids-dataset/ out/ participant --participant_label S001 S002

where S001 and S002 are subject identifiers, corresponding to the folders sub-S001 and sub-S002 in the BIDS tree. The sub- prefix is also accepted:

mriqc bids-dataset/ out/ participant --participant_label sub-S001 sub-S002

Note

If the argument --participant_label is not provided, then all subjects will be processed and the group level analysis will automatically be executed, without the need to run the group level command separately (shown below).

After running the participant level with the --participant_label argument, the group level will not be automatically triggered. To run the group level analysis:

mriqc bids-dataset/ out/ group

Examples of the generated visual reports are found at mriqc.org.

Depending on the input images, the resulting outputs will vary as described next.

Containerized versions

If you have Docker installed, the quickest way to get mriqc to work is to follow the running with docker guide.
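As a rough sketch of such a run (the image name and tag below are assumptions; check the running with docker guide for the exact image to pull), a participant-level analysis could be launched with:

docker run -it --rm -v /path/to/bids-dataset:/data:ro -v /path/to/out:/out poldracklab/mriqc:latest /data /out participant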

Running MRIQC on HPC clusters

Singularity containers
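A minimal sketch, assuming Singularity can pull and convert the Docker image directly (the image name and tag are assumptions, as above):

singularity build mriqc.simg docker://poldracklab/mriqc:latest
singularity run --cleanenv mriqc.simg bids-dataset/ out/ participant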

Requesting resources

We have profiled CPU and memory usage with the resource profiler tool of nipype.

An MRIQC run of one subject from the ABIDE dataset, containing only one run of one BOLD task (resting-state), yielded the following report:

Using the MultiProc plugin of nipype with nprocs=10, the workflow nodes ran across the available processors for 41.68 minutes. A memory peak of 8 GB is reached towards the end of the run, when the plotting nodes are fired up.

We also profiled MRIQC on a dataset with 8 tasks (one run per task), on ds030 of OpenfMRI:

Again, we used n_procs=10. The software ran for roughly the same time (47.11 min). For most of the run time, memory usage stays around a maximum of 10 GB. Since we saw a memory consumption of 1-2 GB during the 1-task example, a rule of thumb may be that each task requires around 1 GB of memory.
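Translating these observations into a scheduler request, a sketch of a SLURM submission script for one participant-level job could look as follows; the directive values are assumptions derived from the profiles above, not hard requirements, and should be adjusted to your dataset and cluster:

#!/bin/bash
#SBATCH --job-name=mriqc
#SBATCH --cpus-per-task=10
#SBATCH --mem=12G
#SBATCH --time=02:00:00

mriqc bids-dataset/ out/ participant --n_procs 10 --mem_gb 12 --ants-nthreads 10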