
NF: Functionality for reading AFNI BRIK/HEAD #561


Merged
merged 16 commits into from
Jun 1, 2018

Conversation

rmarkello
Contributor

In partial response to #310, this is a work-in-progress for providing functionality to read AFNI BRIK/HEAD files.

For example, the following should work with these changes:

>>> import nibabel as nib
>>> brik = nib.load('example_4d+orig.BRIK')
>>> brik.shape
(33, 41, 25, 3)

I tried to model the framework on other interfaces (e.g., copying functions from PARREC and MINC -- hopefully that's okay!), only modifying what was necessary to suit the specifics of the BRIK/HEAD format. As of now this is pretty bare bones, but it does seem to work on a variety of manual tests and a few automated ones adapted from nibabel/tests/test_minc1.py.

I'd appreciate any feedback or insight, including on what functionality/tests should be added.

@coveralls

Coverage Status

Coverage decreased (-0.09%) to 96.176% when pulling 8efba75 on rmarkello:brikhead into dbe74e1 on nipy:master.

@effigies
Member

effigies commented Oct 6, 2017

Hi, thanks for this. We've fixed some tests in master, if you want to rebase or merge.

@codecov-io

codecov-io commented Oct 9, 2017

Codecov Report

Merging #561 into master will increase coverage by 0.11%.
The diff coverage is 97.59%.


@@            Coverage Diff             @@
##           master     #561      +/-   ##
==========================================
+ Coverage   90.31%   90.42%   +0.11%     
==========================================
  Files          87       88       +1     
  Lines       10816    10982     +166     
  Branches     1794     1814      +20     
==========================================
+ Hits         9768     9930     +162     
- Misses        719      720       +1     
- Partials      329      332       +3
Impacted Files Coverage Δ
nibabel/imageclasses.py 100% <100%> (ø) ⬆️
nibabel/brikhead.py 97.57% <97.57%> (ø)

Continue to review full report at Codecov.

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 636d5d2...c5f0ecf.

@coveralls

coveralls commented Oct 9, 2017

Coverage Status

Coverage increased (+0.09%) to 93.444% when pulling c5f0ecf on rmarkello:brikhead into 636d5d2 on nipy:master.

@matthew-brett
Member

Sorry to get to this so slowly - I've just started a new job ...

I noticed the reference to https://github.com/florisvanvugt/afnipy/blob/master/afni.py - but that project is GPL licensed - at a superficial first glance, you haven't used that code - is that right? If you have, we have to get permission to use it without the GPL.

Just at a superficial glance, it looks like a very good job of a first pass, and pretty close to ready. For my taste (and my interpretation of PEP8) you have too many blank lines, and I'd like to see some doctests, and eventually, we could negotiate some Image API tests for this image type (tests/test_image_api.py), but generally - excellent - thanks.

@rmarkello
Contributor Author

No need to apologize -- I've been meaning to add some more tests to get it out of its current failing state. I assumed that was the reason behind any delays!

As for the reference to afni.py: I didn't use any of that code, but rather used it for some insight into how AFNI data was stored in the BRIK file. I left the reference there to indicate the source of that insight, but it can certainly be removed since the official AFNI documentation is just as valid.

I'll work on paring down some of the blank lines and adding some doctests, but do you have a format you'd like followed for the latter? I (sort of) based this on an interpretation of parrec.py and minc1.py, neither of which have doctests. I see some minimal ones in nifti1.py that I can follow, but even there most of the functions aren't tested. Thanks for the guidance!

@Shotgunosine

I emailed Floris and got him to update the license to MIT. I'm glad you had already found his code.

@Shotgunosine

Shotgunosine commented Jan 22, 2018

Hey, we'd love to see this PR merged. What needs to be done? Anything we can do to help?

@effigies
Member

Doctests that generally mirror nifti1.py (when appropriate) seem like a good start. And from a quick glance through, I wonder if we can subclass AFNIArrayProxy from ArrayProxy, to avoid rewriting more than absolutely necessary.

Nothing else really stuck out to me from a cursory read through. Merging master into this branch should make it easier for the coverage checks to identify which lines aren't being tested, and we can see about improving coverage to make sure everything's behaving as it should.

Examples of invalid BRIK/HEAD images would also be useful for checking corner cases.

@rmarkello
Contributor Author

Apologies for my neglect on this!

I was able to successfully subclass AFNIArrayProxy from ArrayProxy, getting rid of a few unnecessary functions in the process, and am currently working on some doc-tests. I'll take a look around for some invalid images, but might ask for some help from @Shotgunosine on that front.

Regarding coverage: would you prefer a merge or a rebase from master? I only ask given the explicit guidelines in the documentation.

@effigies
Member

Merges and rebases are equally acceptable. If you're comfortable with rebases, they're often cleaner to review on a commit-by-commit basis.

[ 0. , 0. , 3. , -52.3511],
[ 0. , 0. , 0. , 1. ]])
>>> head = load('example4d+orig.HEAD')
>>> (head.get_data() == brik.get_data()).all()
Member

Breaking on memmap for old numpy. The following should work, though you may need to add an import numpy as np at the top of the doctest.

>>> np.array_equal(head.get_data(), brik.get_data())
True

@rmarkello
Contributor Author

Pretty sure this is broken due to the new testing file I added. I'll reduce its size and hopefully get AppVeyor passing later today!

@effigies
Member

Seems to be specific to Python 3.5+, so I suspect a pretty small change will fix it.

@effigies
Member

Oh, I see. You meant the test BRIK file, not the Python test files. Carry on. :-)

@Shotgunosine

I've asked around here and it doesn't look like anyone's got any bad brik-head files handy that we could test on.

@matthew-brett
Member

Could you make some bad ones by setting bad values into the header? Maybe you could save space by building and testing the bad BRIK files during the tests?
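Building the bad files on the fly could be as simple as rewriting one attribute's values in an otherwise valid header. A rough sketch (`corrupt_head_attribute` is a hypothetical test helper; the attribute layout follows AFNI's README.attributes format):

```python
import re

def corrupt_head_attribute(head_text, name, bad_values):
    """Return a copy of `head_text` with the values of attribute `name`
    replaced by `bad_values` (a string), leaving everything else intact."""
    # HEAD attributes look like:
    #   type = integer-attribute
    #   name = DATASET_RANK
    #   count = 8
    #   3 1 0 0 0 0 0 0
    pattern = re.compile(
        r'(name\s*=\s*%s\s*\ncount\s*=\s*\d+\s*\n)[^\n]*' % re.escape(name))
    return pattern.sub(r'\g<1>%s' % bad_values, head_text)

head = (
    'type = integer-attribute\n'
    'name = DATASET_RANK\n'
    'count = 8\n'
    '3 1 0 0 0 0 0 0\n'
)
bad = corrupt_head_attribute(head, 'DATASET_RANK', '-1 0 0 0 0 0 0 0')
```

A test could then write `bad` out to a temporary `.HEAD` file and assert that loading it raises.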

@rmarkello
Contributor Author

I've added a few broken HEAD files where I manually changed the attributes, as suggested. Thankfully, since the HEAD file gets read in first, there's no need for an accompanying BRIK file since it will error out before getting there.

However, taking a closer look at the AppVeyor error, I'm a bit confused. From some digging, I think it's specific to the example4d+orig.BRIK.gz file (the doc-test that uses that file is failing, too). The only change made to that file prior to the AppVeyor failing was that I gzipped it (87873ee) in order to test capabilities for detecting compressed files. I'm unclear why that would have screwed things up (and only on Python 3.5+, as @effigies pointed out). I'll keep digging, but if you have any ideas they'd be appreciated!

@effigies
Member

From a quick search, it seems others have run up against this on AppVeyor when using mmaped files. It's resolved by explicitly closing the file handles.

This seems like an easy fix to attempt before we dig deeper.

@effigies effigies left a comment

Had a quick look through for unclosed mmaps. I may have found it, or may be declaring a premature victory.

hdr_fobj)
brik_fobj = file_map['image'].get_prepare_fileobj()
data = klass.ImageArrayProxy(brik_fobj, hdr,
mmap=mmap)
Member

I think the issue might be here. I don't think you need to get_prepare_fileobj in this method.

data = klass.ImageArrayProxy(file_map['image'], hdr, mmap=mmap)

I think this will help release the filehandles between reads. Please see analyze.py for probably the most exercised (as the Nifti1Image superclass) variant of from_file_map.

"""
file_map = klass.filespec_to_file_map(filename)
# only BRIK can be compressed, but `filespec_to_file_map` doesn't
# handle that case; remove potential compression suffixes from HEAD
Member

As a point of personal preference, I would rather improve filespec_to_filemap to handle the case where header may not be compressed. (But if that makes more sense as a separate PR, just to get this in, that's fine by me.)

Contributor Author

I think it'd be good to see this PR merged, first, if that's all right! Once this is in I'll be happy to raise an issue to discuss this change, since, as far as I can tell filespec_to_file_map is used by most image classes in nibabel.

Member

How about a little wrapper around filespec_to_filemap in this PR, to point the way, and keep that part of the logic separate?

@effigies effigies left a comment

Memmap issue cleared up with that last commit, and the test coverage is looking good. I've got no more comments at this time.

@effigies
Member

effigies commented Feb 9, 2018

How's everybody feeling about this PR?

@rmarkello @Shotgunosine are you using this regularly and it's operating as you'd hope? Any further additions you're thinking of making?

@matthew-brett Have you had a chance to look over this? A lot of this plumbing is stuff you're much more familiar with than I am.

@rmarkello
Contributor Author

I haven't run into any issues with my own use of this, but, admittedly, I've been utilizing it in some pretty pedestrian workflows (though I suppose that will be the normal use case!).

In terms of further additions: I think, as mentioned above, modifying the filespec_to_file_map will be the next step.

@matthew-brett matthew-brett left a comment

Some thoughts reading through.

Generally - thanks - you did a good job of working out the general structure in nibabel.

At some point, we will need some API tests in test_image_api.py.

'check .HEAD file and try again.')
# get data type and key; if error, bad attribute/HEAD file
try:
atype = TYPE_RE.findall(var)[0]
Member

Is OK to find more than one TYPE?

Member

Consider more specific error messages?

Contributor Author

There should never be more than one TYPE associated with an attribute in a HEAD file. If there is, the HEAD file is corrupted and an error should be raised. I'll add that check and make the error messages more explicit.
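That check could be sketched as follows (the regex here is a simplified stand-in for the `TYPE_RE` used in brikhead.py, and `attribute_type` is a hypothetical helper):

```python
import re

# Matches the declared type of a HEAD attribute, e.g. "type = integer-attribute"
TYPE_RE = re.compile(r'type\s*=\s*(string|integer|float)-attribute')

def attribute_type(var):
    """Return the single declared type of a HEAD attribute block.

    Raises ValueError if zero or multiple TYPE declarations are found,
    which would indicate a corrupted HEAD file.
    """
    found = TYPE_RE.findall(var)
    if len(found) != 1:
        raise ValueError('Invalid HEAD attribute: expected exactly one TYPE '
                         'declaration, found %d in %r' % (len(found), var))
    return found[0]
```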

See https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.attributes.html
for more information on required information to have a valid BRIK/HEAD dataset.

Examples
Member

This is going to appear as part of a main module docstring - I guess that's a bit ugly.

An alternative might be to define data_dir and use explicit os.path.join(datadir, 'my_file.HEAD') in the doctests, assuming the user can work out what datadir is.

(4, 'mni', 'MNI')), fields=('code', 'label', 'space'))


class AFNIError(Exception):
Member

You could also use HeaderDataError (and ImageDataError) from spatialimages.py, or inherit from these. I realize that the other image types don't do that, but I think it's more correct.

Parameters
----------
info : dict
As obtained by `parse_AFNI_header()`
Member

Nicer Sphinx output from

As obtained by :func:`parse_AFNI_header`


Returns
-------
(key, value)
Member

As for other docstrings, this line should have a name and type, for example:

Returns
-------
name : str
    Name of attribute
value : object
    Value of attribute.

shape = tuple(self.info['DATASET_DIMENSIONS'][:dset_rank[0]])
n_vols = dset_rank[1]

return shape + (n_vols,) if n_vols > 1 else shape
Member

Is that correct, that you can't have a shape (x, y, z, 1) AFNI dataset, it is forced back to (x, y, z) shape?

Contributor Author

Well, no. By default, AFNI datasets are actually always ndim=4 of shape (x, y, z, >=1), because of how the dimensions are stored in the HEAD file (see response below). You'll never get an (x, y, z) dataset because that would preclude you from subsetting along the "time" axis, and AFNI programs feature heavily in permitting you to do this. I'll make sure to remedy this in my next update.
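That fix might look something like this (a simplified sketch of the shape logic, assuming the `info` dict produced by the header parser; `afni_shape` is a hypothetical free function, not nibabel's actual code):

```python
def afni_shape(info):
    """Shape implied by a HEAD file's DATASET_RANK / DATASET_DIMENSIONS.

    DATASET_RANK[0] is always 3 (the spatial dimensions); DATASET_RANK[1]
    is the number of sub-bricks (volumes) and is always >= 1, so the
    returned shape is always 4D.
    """
    ndim, n_vols = info['DATASET_RANK'][:2]
    shape = tuple(info['DATASET_DIMENSIONS'][:ndim])
    return shape + (n_vols,)

shape = afni_shape({'DATASET_RANK': [3, 1, 0, 0],
                    'DATASET_DIMENSIONS': [33, 41, 25, 0]})
# shape == (33, 41, 25, 1), never (33, 41, 25)
```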

floatfacs = self.info.get('BRICK_FLOAT_FACS', None)
if floatfacs is None or not np.any(floatfacs):
return None
scale = np.ones(self.info['DATASET_RANK'][1])
Member

Is DATASET_RANK[1] the number of time points always? So when the AFNI docs say time axis, they mean DATASET_RANK[1] > 1?

Contributor Author

Correct. DATASET_RANK[0] must be set to 3, denoting a 3D image, and DATASET_RANK[1] will always determine how many "sub-bricks" (in AFNI parlance) / volumes there are along the fourth (traditionally, but not exclusively) time axis. As in my above response, DATASET_RANK[1] will (at least as far as I'm aware) always be >=1.

Member

Thanks - would you mind putting this in comments somewhere? Maybe in an extra extended docstring at the top? As in:

""" The current module docstring

Some comments about this module for public consumption
"""

""" More notes on the AFNI format

DATASET_RANK[0] must be set to 3, denoting a 3D image, and DATASET_RANK[1] will always determine how many "sub-bricks" (in AFNI parlance) / volumes there are along the fourth (traditionally, but not exclusively) time axis. As in my above response, DATASET_RANK[1] will (at least as far as I (RM) am aware) always be >=1.
"""


return scale

def get_slope_inter(self):
Member

Did you need this? Maybe NotImplemented if it doesn't make sense for this format? Or just remove it?

Contributor Author

Because AFNIArrayProxy inherits from ArrayProxy, this needs to be included. ArrayProxy demands that the provided "spec" (i.e., AFNIHeader in this case) implement this (see: arrayproxy.py:140). I added the inheritance in response to @effigies' suggestion, above.

Member

Maybe add comment to this effect?

"""
file_map = klass.filespec_to_file_map(filename)
# only BRIK can be compressed, but `filespec_to_file_map` doesn't
# handle that case; remove potential compression suffixes from HEAD
Member

How about a little wrapper around filespec_to_filemap in this PR, to point the way, and keep that part of the logic separate?

head_fname = file_map['header'].filename
if not os.path.exists(head_fname):
for ext in klass._compressed_suffixes:
head_fname = re.sub(ext, '', head_fname)
Member

I think this is going to remove gz or whatever inside filenames - for example - consider head_fname = 'foogz_bar.nii.gz'.

I think you want something like:

for ext in klass._compressed_suffixes:
     head_fname = head_fname[:-len(ext)] if head_fname.endswith(ext) else head_fname
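The difference is easy to demonstrate with that example filename (the suffixes here mirror the `_compressed_suffixes` tuple):

```python
import re

suffixes = ('.gz', '.bz2')  # mirrors klass._compressed_suffixes
fname = 'foogz_bar.nii.gz'

# re.sub treats '.gz' as a regex: '.' matches any character, so it also
# eats the 'ogz' in the middle of the name, not just the real suffix.
sub_result = fname
for ext in suffixes:
    sub_result = re.sub(ext, '', sub_result)
# sub_result == 'fo_bar.nii'

# Only stripping a true suffix leaves the rest of the name alone.
safe_result = fname
for ext in suffixes:
    if safe_result.endswith(ext):
        safe_result = safe_result[:-len(ext)]
# safe_result == 'foogz_bar.nii'
```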

rmarkello added a commit to rmarkello/nibabel that referenced this pull request Feb 25, 2018
Edited brikhead.py in response to @matthew-brett reviews in nipy#561. Cleaned up
doc-strings, added/clarified error messages to be more specific, and modified
how AFNI files are read in to ensure that they accurately represent underlying
data (i.e., are always 4D). Added `filespec_to_file_map` class method for
`AFNIImage` to separate logic of reading in files with compression; likely need
to edit `filename_parser.types_filenames` instead.
@rmarkello
Contributor Author

rmarkello commented Feb 26, 2018

To do:

  • Add relevant API tests to test_image_api.py
  • Wrap filename_parser.types_filenames to account for BRIK/HEAD idiosyncrasies currently being handled by AFNIImage.filespec_to_file_map

@rmarkello
Contributor Author

I think this might be ready for another look! Let me know if there's anything else I can do.

@matthew-brett matthew-brett left a comment

Some comments on the main file.


class AFNIImageError(ImageDataError):
"""Error when reading AFNI BRIK files"""
pass
Member

You actually don't need the pass, given you have the comment.

raise AFNIHeaderError('Invalid attribute name entry in HEAD file. '
'%s' % err_msg)
atype = _attr_dic.get(atype[0], str)
attr = ' '.join(var.strip().split('\n')[3:])
Member

Maybe attr = ' '.join(var.strip().splitlines()[3:]) - deals with Windows line endings too, just in case.

except ValueError:
raise AFNIHeaderError('Failed to read variable from HEAD file due '
'to improper type casting. %s' % err_msg)
if len(attr) == 1:
Member

Maybe return aname, attr[0] if len(attr) == 1 else attr . If you prefer your version, that's also fine.

>>> print(info['BRICK_TYPES'])
[1, 1, 1]
"""
from six import string_types
Member

Import better at top of file?

self._scaling = header.get_data_scaling()

@property
def scaling(self):
Member

Maybe note this new attribute in the class docstring.

fake_data = strided_scalar(self._shape)
_, scaling = np.broadcast_arrays(fake_data, scaling)
raw_data = raw_data * scaling[slicer]

Member

I have a general preference for avoiding blank lines, probably from contact with PEP8 "Use blank lines in functions, sparingly, to indicate logical sections.". Your call though.

Get image zooms from header data

Spatial axes are first three indices, time axis is last index. If
dataset is not a time series the last index will be zero.
Member

I think you mean that the last value will be zero, rather than the last index.


Examples
--------
>>> brik = load(os.path.join(datadir, 'example4d+orig.BRIK.gz'))
Member

Better to use nib.load in example (they might not realize where load comes from).

If `filespec` is not recognizable as being a filename for this
image type.
"""
# copied from filebasedimages.py
Member

Can you post-process instead? As in, inherit from FileBasedImage (sorry - why aren't you doing that? - I'm sure there's a good reason), and post-process the result?

file_map = super(AFNIImage, klass).filespec_to_file_map(filespec)

and then use file_map['image'].filename and file_map['header'].filename?

Do you agree that it seems cleaner to raise an error for my_image.HEAD.gz, even if my_image.BRIK.gz exists?

"""
Make `file_map` from filename `filespec`

Deals with idiosyncrasies of AFNI BRIK / HEAD formats
Member

Maybe add something like:

AFNI BRIK files can be compressed, but HEAD files cannot - see https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.compression.html. Let's say you have an AFNI file you want to load, with files my_image.HEAD and my_image.BRIK.gz. To identify the AFNI BRIK / HEAD pair, you can specify:

  • The HEAD filename - e.g. my_image.HEAD
  • The BRIK filename without compressed extension, e.g. my_image.BRIK
  • The full BRIK filename - e.g. my_image.BRIK.gz.

As a side note, I guess you're not proposing to support .Z compressed files. I bet they are rare.
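The three accepted specs could be handled by a small helper along these lines (a hypothetical sketch, not nibabel's actual `filespec_to_file_map`; the real code would also probe the filesystem for a compressed BRIK when given a bare `.HEAD` or `.BRIK` name):

```python
import os

COMPRESSED_SUFFIXES = ('.gz', '.bz2')  # assumption: mirrors nibabel's list

def brik_head_pair(filespec):
    """Map any of the three accepted specs to a (HEAD, BRIK) filename pair.

    Accepts 'my_image.HEAD', 'my_image.BRIK', or 'my_image.BRIK.gz'.
    The HEAD file is never compressed; the BRIK file may be.
    """
    fname, compression = filespec, ''
    for ext in COMPRESSED_SUFFIXES:
        if fname.endswith(ext):
            fname, compression = fname[:-len(ext)], ext
            break
    base, ext = os.path.splitext(fname)
    if ext not in ('.HEAD', '.BRIK'):
        raise ValueError('Not an AFNI BRIK/HEAD filespec: %s' % filespec)
    if ext == '.HEAD' and compression:
        # e.g. my_image.HEAD.gz -- cleaner to refuse this outright
        raise ValueError('HEAD files cannot be compressed: %s' % filespec)
    return base + '.HEAD', base + '.BRIK' + compression
```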

@matthew-brett matthew-brett left a comment

Sorry to be slow - was moving house, things got hectic.

Here a few small comments on brikhead.py. I also made some small edits, doc extensions in rmarkello#1

raw_data = super(AFNIArrayProxy, self).__getitem__(slicer)
# apply volume specific scaling (may change datatype!)
if self._scaling is not None:
scaling = self._scaling.copy()
Member

Why the copy?

Contributor Author

I don't remember anymore! It doesn't seem necessary so I'll fix it asap.
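A simplified NumPy analogue of that `__getitem__` scaling, using `np.broadcast_to` in place of nibabel's `strided_scalar`, suggests why no copy should be needed: the broadcast result is a read-only view that is only ever read through the slicer, never written to.

```python
import numpy as np

# Per-volume scale factors broadcast against the full data shape, then
# sliced with the same slicer as the data (as in AFNIArrayProxy.__getitem__).
data = np.arange(24, dtype=np.int16).reshape(2, 2, 2, 3)
scaling = np.array([1.0, 2.0, 0.5])          # one factor per volume
slicer = (slice(None), slice(None), slice(None), slice(1, 3))

raw = data[slicer]
full_scale = np.broadcast_to(scaling, data.shape)  # read-only view, no copy
scaled = raw * full_scale[slicer]            # may change dtype to float
```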

"""
bo = info['BYTEORDER_STRING']
bt = info['BRICK_TYPES']
if isinstance(bt, list):
Member

Is it ever not a list?

Contributor Author

Yes; for a single volume image, this would be an integer value.
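A sketch of how those two attributes might combine into a numpy dtype (the type codes are taken from AFNI's README.attributes; `brick_dtype` is illustrative, not nibabel's actual helper):

```python
import numpy as np

# AFNI brick type codes: 0 = byte, 1 = short, 3 = float
_dtype_dict = {0: 'B', 1: 'h', 3: 'f'}

def brick_dtype(info):
    """Infer the on-disk dtype from BYTEORDER_STRING and BRICK_TYPES.

    BRICK_TYPES is a bare int for single-volume datasets and a list
    otherwise, so normalize to a list first; mixed-type bricks are rejected.
    """
    bt = info['BRICK_TYPES']
    if not isinstance(bt, list):
        bt = [bt]
    if len(set(bt)) > 1:
        raise ValueError('Mixed brick types not supported: %s' % bt)
    order = '<' if info['BYTEORDER_STRING'].startswith('LSB') else '>'
    return np.dtype(order + _dtype_dict[bt[0]])

dt = brick_dtype({'BYTEORDER_STRING': 'LSB_FIRST', 'BRICK_TYPES': [1, 1, 1]})
# dt == np.dtype('<i2')
```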

rmarkello and others added 16 commits May 22, 2018 10:45
Includes new test file `example_4d+orig.BRIK/HEAD`
Changed AFNIArrayProxy to subclass from ArrayProxy. Removed whitespace.
Created new tests for brikhead.py and added/modified datafiles to support them.
Notably, added scaled+tlrc.BRIK/HEAD to test inclusion of data scaling factors
and gzipped example4d+orig.BRIK to accommodate loading gzipped files.
Added a few "broken" .HEAD files to the testing suite for brikhead.py and
implemented a few try-except blocks to catch the resultant errors. Maybe (?)
resolved the issues with the AppVeyor OSError on Python 3 for numpy memmap in
loading the .BRIK files. Fixed a doc-test in brikhead.py.
`brikhead.AFNIImage.from_file_map()` was needlessly opening the 'image' file
before passing it to ImageArrayProxy, potentially causing issues seen on
AppVeyor with mmaps. This commit modifies the call to be more in line with how
this is handled in analyze.py.
Edited brikhead.py in response to @matthew-brett reviews in nipy#561. Cleaned up
doc-strings, added/clarified error messages to be more specific, and modified
how AFNI files are read in to ensure that they accurately represent underlying
data (i.e., are always 4D). Added `filespec_to_file_map` class method for
`AFNIImage` to separate logic of reading in files with compression; likely need
to edit `filename_parser.types_filenames` instead.
Created minimal test_image_api test for brikhead, modeled on PARREC image tests.
Edited filespec_to_file_map in AFNIImage to handle BRIK/HEAD compression
business more smoothly.
Mostly edited ``AFNIImage.filespec_to_file_map``, but also made minor changes in
response to review comments. Added doc-strings describing relevant AFNI-specific
features.
Forgot to import nibabel in doctests...
Minor updates to tests to check error handling
@rmarkello
Contributor Author

This should be ready to go, @matthew-brett ! Thanks so much for the helpful feedback.

This was referenced May 29, 2018
@effigies
Copy link
Member

effigies commented Jun 1, 2018

@matthew-brett Sorry to keep bothering you, but is this ready to go? I see you made some contributions... not sure if that was final touch-ups or not.

@matthew-brett matthew-brett left a comment

Nice - thanks - and very sorry for the delay. Let's put this in, and see what happens :)

@matthew-brett
Member

Chris - if you're happy for this to go in the release, please merge away.

@effigies
Member

effigies commented Jun 1, 2018

So. Happy.

@effigies effigies merged commit 4c2bca7 into nipy:master Jun 1, 2018
@nibotmi nibotmi mentioned this pull request Jun 1, 2018