MRG: working on top of Eric's PARREC changes #261

Closed · wants to merge 55 commits
Changes from all commits · 55 commits
1270428
FIX: Support XYTZ order and multiple echos in PAR/REC files
larsoner Aug 10, 2014
2b0d540
ENH: Support for partial recordings, and overwrite checking
larsoner Aug 11, 2014
026b9ea
WIP: Add bvals/bvecs
larsoner Aug 18, 2014
3fb1785
FIX: Working for variable scale factors
larsoner Aug 19, 2014
8e66c61
ENH: Add dwell time
larsoner Aug 19, 2014
303542b
FIX: Verbose dwell output
larsoner Aug 19, 2014
3855e97
FIX: Fix dwell time
larsoner Aug 19, 2014
12ed949
FIX: Fix numpy
larsoner Aug 21, 2014
b4e5b69
FIX: Move most work to parrec
larsoner Aug 26, 2014
702ab15
FIX: Fix for truncated data
larsoner Aug 27, 2014
d27761c
FIX: Fix tests
larsoner Aug 27, 2014
4580262
FIX: Fix openers
larsoner Aug 27, 2014
30cab82
BF: fix the help string formatting
matthew-brett Sep 26, 2014
18a5976
DOC+TST: test array_to_file can use array-like
matthew-brett Oct 1, 2014
3052f66
RF: more idiomatic writing of array data in parrec
matthew-brett Oct 1, 2014
0961ded
NF: add nibabel-data and routines to use it
matthew-brett Oct 2, 2014
a53a54a
TST: add test of loading PAR images
matthew-brett Oct 2, 2014
d652525
TST: add tests for parrec2nii on balls data
matthew-brett Oct 3, 2014
68ffc14
RF: remove debug raise statement
matthew-brett Oct 3, 2014
b0ee791
RF: sort of deglobal the verbose flag
matthew-brett Oct 3, 2014
5058f14
RF: remove redundant set of extensions
matthew-brett Oct 3, 2014
7709ddd
RF: refactor mriutils; remove from top level
matthew-brett Oct 3, 2014
25b9e03
RF: require explicit field strength for dwell time
matthew-brett Oct 3, 2014
f4c2272
TST: test some options to parrec2nii
matthew-brett Oct 3, 2014
ca9430d
RF: revert fix allowing upper case file extensions
matthew-brett Oct 3, 2014
fe092d5
NF: function for upper/lower case file extensions
matthew-brett Oct 3, 2014
9209f98
RF: refactor parse_PARREC into smaller functions
matthew-brett Oct 4, 2014
6b23824
RF: parrec2nii uses one_line function from parrec
matthew-brett Oct 4, 2014
5e223ab
RF: refactor process of image lines
matthew-brett Oct 4, 2014
709cfcd
RF: remove unused calculated values
matthew-brett Oct 4, 2014
14a1728
NF: add nitest-balls1 PAR files for testing
matthew-brett Oct 5, 2014
45e91eb
RF: move permit_truncated to header init
matthew-brett Oct 6, 2014
fe8c8bc
NF: add helpers to find missing slices
matthew-brett Oct 6, 2014
5d36e58
TST: add tests for diffusion parameters
matthew-brett Oct 6, 2014
cb9f4fe
RF: refactor slice sorting using vol calculations
matthew-brett Oct 6, 2014
19624d1
BF+TST : fix and test selection of bvals, bvecs
matthew-brett Oct 6, 2014
0bb5b4e
RF: None from diffusion params if not diffusion
matthew-brett Oct 6, 2014
e5feaae
RF+TST: script uses new API to detect no diffusion
matthew-brett Oct 6, 2014
95c549e
RF: refactor getting number of slices and volumes
matthew-brett Oct 6, 2014
65b92d6
RF: use `dyn_scan` general info to detect dynamic
matthew-brett Oct 6, 2014
f496935
NF: add extra check for partial volumes
matthew-brett Oct 7, 2014
37ce2cf
RF+TST: refactor / rename / test truncation checks
matthew-brett Oct 7, 2014
aa50749
RF: rename init helper routines for clarity
matthew-brett Oct 7, 2014
f16c3bc
RF: refactor setting of header dtype
matthew-brett Oct 7, 2014
d8f541b
RF: generator to provide PAR files for tests
matthew-brett Oct 7, 2014
275b0eb
RF: deprecate PARREC get_voxel_size method
matthew-brett Oct 9, 2014
3e77153
BF: fix typo in PARREC general header field name
matthew-brett Oct 9, 2014
e68a98a
RF+TST: refactor / test ``_get_unique_image_prop``
matthew-brett Oct 10, 2014
83aa7e1
RF: fix futurewarning about None, None comparison
matthew-brett Oct 13, 2014
779f140
NF: add context manager to clear warnings registry
matthew-brett Oct 13, 2014
ff2a7f7
RF+TST: rename and test sorted_slice_indices
matthew-brett Oct 13, 2014
539c1c1
RF: do not cache slice_orientation
matthew-brett Oct 13, 2014
c3e7d97
RF: copy input parameters on header creation
matthew-brett Oct 13, 2014
5da5130
BF: fix voxel_size code for new unique props
matthew-brett Oct 13, 2014
efd215e
RF: refactor catch_warns calls for Eric's comments
matthew-brett Oct 13, 2014
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "nibabel-data/nitest-balls1"]
path = nibabel-data/nitest-balls1
url = https://github.com/yarikoptic/nitest-balls1
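To actually fetch the nitest-balls1 data after cloning, the standard git submodule commands apply: `git submodule update --init` (or clone with `--recursive`).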
2 changes: 2 additions & 0 deletions .travis.yml
@@ -27,6 +27,8 @@ before_install:
# e.g. pip install -r requirements.txt # --use-mirrors
install:
- python setup.py install
# Point to nibabel data directory
- export NIBABEL_DATA_DIR="$PWD/nibabel-data"
# command to run tests, e.g. python setup.py test
script:
# Change into an innocuous directory and find tests from installation
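The NIBABEL_DATA_DIR export above is how the test helpers added in this PR ("NF: add nibabel-data and routines to use it") can locate the optional data submodules. A minimal sketch of such a lookup; the function name and fallback path are illustrative, not necessarily nibabel's actual API:

import os

def get_nibabel_data():
    # Prefer the environment override; otherwise fall back to a
    # nibabel-data directory next to the source tree. Return '' when
    # nothing is found, so callers can skip data-dependent tests.
    nibabel_data = os.environ.get('NIBABEL_DATA_DIR')
    if nibabel_data is None:
        mod_dir = os.path.dirname(os.path.abspath(__file__))
        nibabel_data = os.path.join(mod_dir, 'nibabel-data')
    return nibabel_data if os.path.isdir(nibabel_data) else ''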
14 changes: 14 additions & 0 deletions COPYING
@@ -191,3 +191,17 @@ The files::

are from http://psydata.ovgu.de/philips_achieva_testfiles, and released under
the PDDL version 1.0 available at http://opendatacommons.org/licenses/pddl/1.0/

The files::

nibabel/nibabel/tests/data/DTI.PAR
nibabel/nibabel/tests/data/NA.PAR
nibabel/nibabel/tests/data/T1.PAR
nibabel/nibabel/tests/data/T2-interleaved.PAR
nibabel/nibabel/tests/data/T2.PAR
nibabel/nibabel/tests/data/T2_-interleaved.PAR
nibabel/nibabel/tests/data/T2_.PAR
nibabel/nibabel/tests/data/fieldmap.PAR

are from https://github.com/yarikoptic/nitest-balls1, also released under the
PDDL version 1.0 available at http://opendatacommons.org/licenses/pddl/1.0/
272 changes: 162 additions & 110 deletions bin/parrec2nii
@@ -4,105 +4,155 @@
from __future__ import division, print_function, absolute_import

from optparse import OptionParser, Option
import numpy as np
import sys
import os
import gzip
import nibabel
import nibabel.parrec as pr
from nibabel.parrec import one_line
from nibabel.mriutils import calculate_dwell_time, MRIError
import nibabel.nifti1 as nifti1
from nibabel.filename_parser import splitext_addext
from nibabel.volumeutils import seek_tell

# global verbosity switch
verbose_switch = False
from nibabel.volumeutils import array_to_file, fname_ext_ul_case
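The one_line helper imported above is used heavily in the rewritten option parser below to collapse triple-quoted help strings onto a single line. A plausible minimal implementation (a sketch; the exact nibabel code may differ):

def one_line(long_str):
    # Squeeze a multi-line (e.g. triple-quoted) message into one line,
    # collapsing runs of whitespace to single spaces.
    return ' '.join(long_str.split())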


def get_opt_parser():
# use module docstring for help output
p = OptionParser(
usage="%s [OPTIONS] <PAR files>\n\n" % sys.argv[0] + __doc__,
version="%prog " + nibabel.__version__)

usage="%s [OPTIONS] <PAR files>\n\n" % sys.argv[0] + __doc__,
version="%prog " + nibabel.__version__)
p.add_option(
Option("-v", "--verbose", action="store_true",
dest="verbose", default=False,
help="Make some noise."))
Option("-v", "--verbose", action="store_true", dest="verbose",
default=False,
help="""Make some noise."""))
p.add_option(
Option("-o", "--output-dir",
action="store", type="string", dest="outdir",
default=None,
help=\
"""Destination directory for NIfTI files. Default: current directory."""))
Option("-o", "--output-dir", action="store", type="string",
dest="outdir", default=None,
help=one_line("""Destination directory for NIfTI files.
Default: current directory.""")))
p.add_option(
Option("-c", "--compressed", action="store_true",
dest="compressed", default=False,
help="Whether to write compressed NIfTI files or not."))
p.add_option(
Option("--origin", action="store",
dest="origin", default="scanner",
help=\
"""Reference point of the q-form transformation of the NIfTI image. If 'scanner'
the (0,0,0) coordinates will refer to the scanner's iso center. If 'fov', this
coordinate will be the center of the recorded volume (field of view). Default:
'scanner'."""))
Option("-p", "--permit-truncated", action="store_true",
dest="permit_truncated", default=False,
help=one_line(
"""Permit conversion of truncated recordings. Support for
this is experimental, and results *must* be checked
afterward for validity.""")))
p.add_option(
Option("-b", "--bvs", action="store_true", dest="bvs", default=False,
help=one_line(
"""Output bvals/bvecs files in addition to NIFTI
image.""")))
p.add_option(
Option("-d", "--dwell-time", action="store_true", default=False,
dest="dwell_time",
help=one_line(
"""Calculate the scan dwell time. If supplied, the magnetic
field strength should also be supplied using
--field-strength (default 3). The field strength must be
supplied because it is not encoded in the PAR/REC
format.""")))
p.add_option(
Option("--field-strength", action="store", type="float",
dest="field_strength",
help=one_line(
"""The magnetic field strength of the recording, only needed
for --dwell-time. The field strength must be supplied
because it is not encoded in the PAR/REC format.""")))
p.add_option(
Option("--minmax", action="store", nargs=2,
dest="minmax", help=\
"""Mininum and maximum settings to be stored in the NIfTI header. If any of
them is set to 'parse', the scaled data is scanned for the actual minimum and
maximum. To bypass this potentially slow and memory intensive step (the data
has to be scaled and fully loaded into memory), fixed values can be provided as
space-separated pair, e.g. '5.4 120.4'. It is possible to set a fixed minimum
as scan for the actual maximum (and vice versa). Default: 'parse parse'."""))
Option("--origin", action="store", dest="origin", default="scanner",
help=one_line(
"""Reference point of the q-form transformation of the NIfTI
image. If 'scanner' the (0,0,0) coordinates will refer to
the scanner's iso center. If 'fov', this coordinate will be
the center of the recorded volume (field of view). Default:
'scanner'.""")))
p.add_option(
Option("--minmax", action="store", nargs=2, dest="minmax",
help=one_line(
"""Mininum and maximum settings to be stored in the NIfTI
header. If any of them is set to 'parse', the scaled data is
scanned for the actual minimum and maximum. To bypass this
potentially slow and memory intensive step (the data has to
be scaled and fully loaded into memory), fixed values can be
provided as space-separated pair, e.g. '5.4 120.4'. It is
possible to set a fixed minimum as scan for the actual
maximum (and vice versa). Default: 'parse parse'.""")))
p.set_defaults(minmax=('parse', 'parse'))
p.add_option(
Option("--store-header", action="store_true",
dest="store_header", default=False,
help=\
"""If set, all information from the PAR header is stored in an extension of
the NIfTI file header. Default: off"""))
Option("--store-header", action="store_true", dest="store_header",
default=False,
help=one_line(
"""If set, all information from the PAR header is stored in
an extension of the NIfTI file header. Default: off""")))
p.add_option(
Option("--scaling", action="store", dest="scaling", default='dv',
help=\
"""Choose data scaling setting. The PAR header defines two different data
scaling settings: 'dv' (values displayed on console) and 'fp' (floating point
values). Either one can be chosen, or scaling can be disabled completely
('off'). Note that neither method will actually scale the data, but just store
the corresponding settings in the NIfTI header. Default: 'dv'"""))
help=one_line(
"""Choose data scaling setting. The PAR header defines two
different data scaling settings: 'dv' (values displayed on
console) and 'fp' (floating point values). Either one can be
chosen, or scaling can be disabled completely ('off'). Note
that neither method will actually scale the data, but just
store the corresponding settings in the NIfTI header, unless
non-uniform scaling is used, in which case the data is
stored in the file in scaled form. Default: 'dv'""")))
p.add_option(
Option("--overwrite", action="store_true", dest="overwrite",
default=False,
help=one_line("""Overwrite file if it exists. Default:
False""")))
return p
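The "unless non-uniform scaling is used" caveat in the --scaling help corresponds to the single- versus multi-scalefactor branch in proc_file below. The test for a uniform scalefactor can be sketched as follows (assuming slope and intercept are numpy arrays with one entry per slice or volume):

import numpy as np

def is_uniform_scaling(slope, intercept):
    # A single (slope, intercept) pair fits losslessly in the NIfTI
    # header; variable per-slice factors do not, so in that case the
    # data must be written out already scaled (as float64).
    return (not np.any(np.diff(slope.ravel())) and
            not np.any(np.diff(intercept.ravel())))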


def verbose(msg, indent=0):
if verbose_switch:
if verbose.switch:
print("%s%s" % (' ' * indent, msg))


def error(msg, exit_code):
sys.stderr.write(msg + '\n')
sys.exit(exit_code)


def proc_file(infile, opts):
# load the PAR header
pr_img = pr.load(infile)
# figure out the output filename, and see if it exists
basefilename = splitext_addext(os.path.basename(infile))[0]
if opts.outdir is not None:
# set output path
basefilename = os.path.join(opts.outdir, basefilename)

# prep a file
if opts.compressed:
verbose('Using gzip compression')
outfilename = basefilename + '.nii.gz'
else:
outfilename = basefilename + '.nii'
if os.path.isfile(outfilename) and not opts.overwrite:
raise IOError('Output file "%s" exists, use --overwrite to '
'overwrite it' % outfilename)

# load the PAR header and data
scaling = None if opts.scaling == 'off' else opts.scaling
infile = fname_ext_ul_case(infile)
pr_img = pr.load(infile, opts.permit_truncated, scaling)
pr_hdr = pr_img.header
# get the raw unscaled data from the REC file
raw_data = pr_img.dataobj.get_unscaled()

# compute affine with desired origin
affine = pr_hdr.get_affine(origin=opts.origin)

# create an nifti image instance -- to get a matching header
nimg = nifti1.Nifti1Image(raw_data, affine)
nimg = nifti1.Nifti1Image(raw_data, affine, pr_hdr)
nhdr = nimg.header

if 'parse' in opts.minmax:
# need to get the scaled data
verbose('Load (and scale) the data to determine value range')
verbose('Loading (and scaling) the data to determine value range')
if opts.scaling == 'off':
scaled_data = raw_data
else:
slope, intercept = pr_hdr.get_data_scaling(method=opts.scaling)
scaled_data = slope * raw_data
scaled_data += intercept
scaled_data = slope * raw_data + intercept
if opts.minmax[0] == 'parse':
nhdr.structarr['cal_min'] = scaled_data.min()
else:
@@ -113,87 +163,89 @@ def proc_file(infile, opts):
nhdr.structarr['cal_max'] = float(opts.minmax[1])

# container for potential NIfTI1 header extensions
exts = nifti1.Nifti1Extensions()

if opts.store_header:
exts = nifti1.Nifti1Extensions()
# dump the full PAR header content into an extension
fobj = open(infile, 'r')
hdr_dump = fobj.read()
dump_ext = nifti1.Nifti1Extension('comment', hdr_dump)
fobj.close()
with open(infile, 'r') as fobj:
hdr_dump = fobj.read()
dump_ext = nifti1.Nifti1Extension('comment', hdr_dump)
exts.append(dump_ext)

# put any extensions into the image
nimg.extra['extensions'] = exts

# image description
descr = "%s;%s;%s;%s" % (
pr_hdr.general_info['exam_name'],
pr_hdr.general_info['patient_name'],
pr_hdr.general_info['exam_date'].replace(' ',''),
pr_hdr.general_info['protocol_name'])
nhdr.structarr['descrip'] = descr[:80]

if pr_hdr.general_info['max_dynamics'] > 1:
# fMRI
nhdr.structarr['pixdim'][4] = pr_hdr.general_info['repetition_time']
# store units -- always mm and msec
nhdr.set_xyzt_units('mm', 'msec')
else:
# anatomical or DTI
nhdr.set_xyzt_units('mm', 'unknown')

# get original scaling
if opts.scaling == 'off':
slope = 1.0
intercept = 0.0
else:
slope, intercept = pr_hdr.get_data_scaling(method=opts.scaling)
nhdr.set_slope_inter(slope, intercept)

# finalize the header: set proper data offset, pixdims, ...
nimg.update_header()

# figure out the output filename
outfilename = splitext_addext(os.path.basename(infile))[0]
if not opts.outdir is None:
# set output path
outfilename = os.path.join(opts.outdir, outfilename)

# prep a file
if opts.compressed:
verbose('Using gzip compression')
outfilename += '.nii.gz'
outfile = gzip.open(outfilename, 'wb')
else:
outfilename += '.nii'
outfile = open(outfilename, 'wb')

# get original scaling, and decide if we scale in-place or not
if opts.scaling == 'off':
slope = np.array([1.])
intercept = np.array([0.])
else:
verbose('Using data scaling "%s"' % opts.scaling)
slope, intercept = pr_hdr.get_data_scaling(method=opts.scaling)
verbose('Writing %s' % outfilename)
# first write the header
nhdr.set_slope_inter(slope, intercept)
if not np.any(np.diff(slope)) and not np.any(np.diff(intercept)):
# Single scalefactor case
nhdr.set_slope_inter(slope.ravel()[0], intercept.ravel()[0])
data_obj = raw_data
else:
# Multi scalefactor case
nhdr.set_slope_inter(1, 0)
nhdr.set_data_dtype(np.float64)
data_obj = pr_img.dataobj
nhdr.write_to(outfile)
# Seek to data offset position
offset = nhdr.get_data_offset()
seek_tell(outfile, offset, write0=True)
# now the data itself, but prevent any casting or scaling
nibabel.volumeutils.array_to_file(
raw_data,
outfile,
offset=offset)
array_to_file(data_obj, outfile, offset=offset)

# write out bvals/bvecs if requested
if opts.bvs:
bvals, bvecs = pr_hdr.get_bvals_bvecs()
if (bvals, bvecs) == (None, None):
verbose('No DTI volumes detected, bvals and bvecs not written')
else:
verbose('Writing .bvals and .bvecs files')
with open(basefilename + '.bvals', 'w') as fid:
# np.savetxt could do this, but it's just a loop anyway
for val in bvals:
fid.write('%s ' % val)
fid.write('\n')
with open(basefilename + '.bvecs', 'w') as fid:
for row in bvecs.T:
for val in row:
fid.write('%s ' % val)
fid.write('\n')
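# For reference, an equivalent using numpy (a sketch, not part of
# this PR; np.savetxt writes one row per line, space-separated):
#   np.savetxt(basefilename + '.bvals', np.atleast_2d(bvals), fmt='%s')
#   np.savetxt(basefilename + '.bvecs', bvecs.T, fmt='%s')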

# write out dwell time if requested
if opts.dwell_time:
try:
dwell_time = calculate_dwell_time(
pr_hdr.get_water_fat_shift(),
pr_hdr.get_echo_train_length(),
opts.field_strength)
except MRIError:
verbose('No EPI factors, dwell time not written')
else:
verbose('Writing dwell time (%r sec) calculated assuming %sT '
'magnet' % (dwell_time, opts.field_strength))
with open(basefilename + '.dwell_time', 'w') as fid:
fid.write('%r\n' % dwell_time)
# done
outfile.close()
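For context on the dwell-time block above: the dwell time (effective echo spacing) follows from the water-fat pixel shift and the EPI echo train length, which is why the field strength has to be supplied on the command line. A minimal sketch of the computation, assuming the usual 3.4 ppm water-fat chemical shift and the proton gyromagnetic ratio; the exact constants, formula, and error type in nibabel.mriutils may differ:

GYROMAGNETIC_RATIO = 42.576   # MHz / T for the hydrogen proton
WATER_FAT_SHIFT_PPM = 3.4     # water-fat chemical shift difference

def dwell_time(water_fat_shift, echo_train_length, field_strength):
    # water_fat_shift: shift in pixels, from the PAR header
    # echo_train_length: EPI factor, from the PAR header
    # field_strength: main field in Tesla (user-supplied; it is not
    # encoded in the PAR/REC format, hence --field-strength)
    if echo_train_length <= 0:
        raise ValueError('no EPI factor, dwell time undefined')
    # ppm * (MHz / T) * T -> Hz
    water_fat_hz = WATER_FAT_SHIFT_PPM * GYROMAGNETIC_RATIO * field_strength
    return water_fat_shift / (water_fat_hz * (echo_train_length + 1))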


def main():
parser = get_opt_parser()
(opts, infiles) = parser.parse_args()

global verbose_switch
verbose_switch = opts.verbose
verbose.switch = opts.verbose

if not opts.origin in ['scanner', 'fov']:
if opts.origin not in ['scanner', 'fov']:
error("Unrecognized value for --origin: '%s'." % opts.origin, 1)
if opts.dwell_time and opts.field_strength is None:
error('Need --field-strength for dwell time calculation', 1)

# store any exceptions
errs = []
@@ -206,7 +258,7 @@ def main():

if len(errs):
error('Caught %i exceptions. Dump follows:\n\n %s'
% (len(errs), '\n'.join(errs)), 1)
% (len(errs), '\n'.join(errs)), 1)
else:
verbose('Done')
