Streamlines API #243


Status: Closed · wants to merge 56 commits
e82c3ee
First draft
MarcCote Feb 21, 2014
fbfb504
A working prototype of the new streamlines API
MarcCote Jul 19, 2014
532c32c
Added a LazyStreamlines class and made sure streamlines are in voxel …
MarcCote Mar 4, 2015
0f7b2a6
Fixed bug in TRK count function
MarcCote Mar 4, 2015
66d5cd1
Removed unused function check_integrity
MarcCote Mar 5, 2015
656a664
Refactored code and tests
MarcCote Apr 17, 2015
3f43b05
Fixed import of OrderedDict and added support to apply transformation…
MarcCote Apr 17, 2015
418eb26
Added a Streamline class used when indexing/iterating a Streamlines o…
MarcCote Sep 22, 2015
12c49cc
Merge branch 'master' into streamlines
MarcCote Oct 28, 2015
b66bcc5
Finished merging with master
MarcCote Oct 28, 2015
c6d93e8
Fixed tests to use clear_and_catch_warnings
MarcCote Oct 28, 2015
fcd962f
Added the CompactList data structure to keep points and scalars
MarcCote Oct 29, 2015
b8dcee9
Old unit tests are all passing
MarcCote Oct 31, 2015
a37ff54
Moves some unit tests
MarcCote Oct 31, 2015
21f783a
Reduced memory usage.
MarcCote Nov 2, 2015
a17a8cf
Refactored streamlines->tractogram and points->streamlines
MarcCote Nov 2, 2015
9fc0f86
Refactoring StreamingFile
MarcCote Nov 2, 2015
9a66d04
NF: Tractogram is now more specific and supports multiple keys for sc…
Garyfallidis Nov 2, 2015
57ef141
Merge branch 'streamlines_scalars_props' of git://github.com/Garyfall…
MarcCote Nov 2, 2015
b6e9cf0
Fixed some unit tests for Tractogram and TractogramItem
MarcCote Nov 2, 2015
40bf350
Updated TractogramFile
Garyfallidis Nov 2, 2015
c84b70c
minor cleanup
Garyfallidis Nov 2, 2015
19268ff
Changed unit tests to reflect modifications made to Tractogram
MarcCote Nov 3, 2015
66a0ce7
Refactored LazyTractogram
MarcCote Nov 3, 2015
3b70569
Added CompactList to init
MarcCote Nov 8, 2015
f5f4f91
Added get_streamlines method to TractogramFile
MarcCote Nov 8, 2015
4480505
Added save and load support for compact_list
MarcCote Nov 8, 2015
0516732
DOC: load and save utils functions
MarcCote Nov 8, 2015
ba9e3f4
Refactored streamlines API and added unit tests for LazyTractogram
MarcCote Nov 11, 2015
5b65636
Save scalars and properties name when using the TRK file format
MarcCote Nov 17, 2015
6f7690e
BF: Extend on empty CompactList is now allowed
MarcCote Nov 18, 2015
999bba4
BF: Support creating TractogramHeader from dict
MarcCote Nov 18, 2015
396e02e
BF: Fixed creating TractogramHeader from another TractogramHeader whe…
MarcCote Nov 18, 2015
84112db
BF: Not all property and scalar name were save in the TRK header
MarcCote Nov 19, 2015
bfdeb6b
ENH: CompactList support advance indexing with ndarray of data type bool
MarcCote Nov 19, 2015
5857c43
ENH: Tractogram support advance indexing with ndarray of data type bo…
MarcCote Nov 19, 2015
b9b844d
Added support for voxel order other than RAS in TRK file format
MarcCote Nov 22, 2015
cf6de5a
Fixed LazyTractogram
MarcCote Nov 22, 2015
29c05a3
BF: Handle empty LazyTractogram
MarcCote Nov 22, 2015
765053a
TRK supports LazyTractogram
MarcCote Nov 22, 2015
a2957af
Fixed some unit tests
MarcCote Nov 23, 2015
03a6229
BF: limit property and scalar names to 18 characters
MarcCote Nov 24, 2015
438eeba
BF: Only store the nb of values in the property or scalar name if it …
MarcCote Nov 24, 2015
69e20f3
Added a script to generate standard test object.
MarcCote Nov 29, 2015
ed12031
Added the mask used by the standard test object
MarcCote Nov 29, 2015
6aa94a9
Added support for Python 3
MarcCote Nov 29, 2015
39ad83f
Revert changes made to __init__.py
MarcCote Nov 29, 2015
3f5e608
Removed streamlines benchmark for now.
MarcCote Nov 29, 2015
4e3c545
Python2.6 compatibility fix. Thanks @effigies
MarcCote Nov 29, 2015
4ebbdc2
Added more unit tests to increase coverage.
MarcCote Nov 29, 2015
35e5330
Changed function name 'isiterable' to 'check_iteration'
MarcCote Dec 1, 2015
753e181
Support upper case file extension
MarcCote Dec 1, 2015
2b9d57b
Fixed typo
MarcCote Dec 1, 2015
e180d3a
Use isinstance instead of type() whenever possible
MarcCote Dec 1, 2015
876a6f8
Added the module streamlines to nibabel
MarcCote Dec 2, 2015
8667fe9
BF: CompactList.extend with a sliced CompactList was not doing the ri…
MarcCote Dec 2, 2015
1 change: 1 addition & 0 deletions nibabel/__init__.py
@@ -64,6 +64,7 @@
from .imageclasses import class_map, ext_map, all_image_classes
from . import trackvis
from . import mriutils
from . import streamlines

# be friendly on systems with ancient numpy -- no tests, but at least
# importable
1 change: 1 addition & 0 deletions nibabel/externals/six.py
@@ -143,6 +143,7 @@ class _MovedItems(types.ModuleType):
    MovedAttribute("StringIO", "StringIO", "io"),
    MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
    MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),

    MovedModule("builtins", "__builtin__"),
    MovedModule("configparser", "ConfigParser"),
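The added `MovedAttribute` makes `zip_longest` importable through `six.moves` on both Python 2 (where it lives as `itertools.izip_longest`) and Python 3. A minimal illustration of the Python 3 function it resolves to:

```python
from itertools import zip_longest

# On Python 3, six.moves.zip_longest resolves to this function.
# Shorter iterables are padded with `fillvalue` instead of being truncated.
pairs = list(zip_longest([1, 2, 3], ["a", "b"], fillvalue=None))
print(pairs)  # [(1, 'a'), (2, 'b'), (3, None)]
```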
131 changes: 131 additions & 0 deletions nibabel/streamlines/__init__.py
@@ -0,0 +1,131 @@
import os
from ..externals.six import string_types

from .header import TractogramHeader
from .compact_list import CompactList
from .tractogram import Tractogram, LazyTractogram

from nibabel.streamlines.trk import TrkFile
#from nibabel.streamlines.tck import TckFile
#from nibabel.streamlines.vtk import VtkFile

# List of all supported formats
FORMATS = {".trk": TrkFile,
           #".tck": TckFile,
           #".vtk": VtkFile,
           }


def is_supported(fileobj):
    ''' Checks if the file-like object is supported by NiBabel.

    Parameters
    ----------
    fileobj : string or file-like object
        If string, a filename; otherwise an open file-like object pointing
        to a streamlines file (and ready to read from the beginning of the
        header).

    Returns
    -------
    is_supported : boolean
    '''
    return detect_format(fileobj) is not None


def detect_format(fileobj):
    ''' Returns the ``TractogramFile`` class guessed from the file-like object.

    Parameters
    ----------
    fileobj : string or file-like object
        If string, a filename; otherwise an open file-like object pointing
        to a tractogram file (and ready to read from the beginning of the
        header).

    Returns
    -------
    tractogram_file : ``TractogramFile`` class or None
        The ``TractogramFile`` class able to handle `fileobj`, or ``None``
        if no supported format was detected.
    '''
    for format in FORMATS.values():
        try:
            if format.is_correct_format(fileobj):
                return format
        except IOError:
            pass

    if isinstance(fileobj, string_types):
        _, ext = os.path.splitext(fileobj)
        return FORMATS.get(ext.lower())

    return None


def load(fileobj, lazy_load=False, ref=None):
    ''' Loads streamlines from a file-like object in voxel space.

    Parameters
    ----------
    fileobj : string or file-like object
        If string, a filename; otherwise an open file-like object
        pointing to a streamlines file (and ready to read from the beginning
        of the streamlines file's header).

    lazy_load : boolean (optional)
        If True, load streamlines lazily, i.e. they will not all be kept
        in memory at once.

    ref : filename | `Nifti1Image` object | 2D array (4,4) (optional)
        Reference space in which the streamlines of `fileobj` live.

    Returns
    -------
    tractogram_file : ``TractogramFile`` object
        An instance of a ``TractogramFile`` subclass containing the data and
        metadata of the tractogram loaded from `fileobj`.
    '''
    tractogram_file = detect_format(fileobj)

    if tractogram_file is None:
        raise TypeError("Unknown format for 'fileobj': {}".format(fileobj))

    return tractogram_file.load(fileobj, lazy_load=lazy_load)


def save(tractogram_file, filename):
    ''' Saves a tractogram to a file.

    Parameters
    ----------
    tractogram_file : ``TractogramFile`` object
        Tractogram to be saved on disk.

    filename : str
        Name of the file where the tractogram will be saved. The format will
        be guessed from `filename`.
    '''
    tractogram_file.save(filename)


def save_tractogram(tractogram, filename, **kwargs):
    ''' Saves a tractogram to a file.

    Parameters
    ----------
    tractogram : ``Tractogram`` object
        Tractogram to be saved.

    filename : str
        Name of the file where the tractogram will be saved. The format will
        be guessed from `filename`.
    '''
    tractogram_file_class = detect_format(filename)

    if tractogram_file_class is None:
        raise TypeError("Unknown tractogram file format: "
                        "'{}'".format(filename))

    tractogram_file = tractogram_file_class(tractogram, **kwargs)
    tractogram_file.save(filename)
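The fallback path in `detect_format` above (try each registered format's `is_correct_format`, then fall back to the lowercased file extension) can be sketched in isolation. `DummyTrkFile` and the local registry here are stand-ins for illustration, not the module's actual classes:

```python
import os

class DummyTrkFile:  # stand-in for nibabel.streamlines.trk.TrkFile
    pass

# Hypothetical registry mirroring the FORMATS dict in the module.
FORMATS = {".trk": DummyTrkFile}

def detect_format_by_extension(filename):
    """ Extension-based fallback: lowercasing the extension is what makes
    upper-case names like 'bundle.TRK' recognized as well. """
    _, ext = os.path.splitext(filename)
    return FORMATS.get(ext.lower())  # None when unrecognized

assert detect_format_by_extension("bundle.TRK") is DummyTrkFile
assert detect_format_by_extension("bundle.tck") is None
```

This mirrors why the "Support upper case file extension" commit only needed a `.lower()` call at the lookup site.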
221 changes: 221 additions & 0 deletions nibabel/streamlines/compact_list.py
@@ -0,0 +1,221 @@
import numpy as np


class CompactList(object):
    """ Class for compacting lists of ndarrays whose shapes match except for
    the first dimension.
    """

    BUFFER_SIZE = 10000000  # About 128 Mb if item shape is 3.

    def __init__(self, iterable=None):
        """
        Parameters
        ----------
        iterable : iterable (optional)
            If specified, create a ``CompactList`` object initialized from
            iterable's items. Otherwise, create an empty ``CompactList``.

        Notes
        -----
        If `iterable` is a ``CompactList`` object, a view is created and no
        memory is allocated. For an actual copy use the `.copy()` method.
        """
        # Create new empty `CompactList` object.
        self._data = None
        self._offsets = []
        self._lengths = []

        if isinstance(iterable, CompactList):
            # Create a view.
            self._data = iterable._data
            self._offsets = iterable._offsets
            self._lengths = iterable._lengths

        elif iterable is not None:
            # Initialize the `CompactList` object from iterable's items.
            offset = 0
            for i, e in enumerate(iterable):
                e = np.asarray(e)
                if i == 0:
                    new_shape = (CompactList.BUFFER_SIZE,) + e.shape[1:]
                    self._data = np.empty(new_shape, dtype=e.dtype)

                end = offset + len(e)
                if end >= len(self._data):
                    # Resize needed: add `len(e)` new items plus some buffer.
                    nb_points = len(self._data)
                    nb_points += len(e) + CompactList.BUFFER_SIZE
                    self._data.resize((nb_points,) + self.shape)

                self._offsets.append(offset)
                self._lengths.append(len(e))
                self._data[offset:offset+len(e)] = e
                offset += len(e)

            # Clear unused memory.
            if self._data is not None:
                self._data.resize((offset,) + self.shape)

    @property
    def shape(self):
        """ Returns the matching shape of the elements in this compact list. """
        if self._data is None:
            return None

        return self._data.shape[1:]

    def append(self, element):
        """ Appends `element` to this compact list.

        Parameters
        ----------
        element : ndarray
            Element to append. Its shape must match the shape of already
            inserted elements, except for the first dimension.

        Notes
        -----
        If you need to add multiple elements you should consider
        `CompactList.extend`.
        """
        if self._data is None:
            self._data = np.asarray(element).copy()
            self._offsets.append(0)
            self._lengths.append(len(element))
            return

        if element.shape[1:] != self.shape:
            raise ValueError("All dimensions, except the first one,"
                             " must match exactly")

        self._offsets.append(len(self._data))
        self._lengths.append(len(element))
        self._data = np.append(self._data, element, axis=0)

    def extend(self, elements):
        """ Appends all `elements` to this compact list.

        Parameters
        ----------
        elements : list of ndarrays or ``CompactList`` object
            Elements to append. Their shapes must match the shape of already
            inserted elements, except for the first dimension.
        """
        if self._data is None:
            elem = np.asarray(elements[0])
            self._data = np.zeros((0, elem.shape[1]), dtype=elem.dtype)

        next_offset = self._data.shape[0]

        if isinstance(elements, CompactList):
            self._data.resize((self._data.shape[0]+sum(elements._lengths),
                               self._data.shape[1]))

            for offset, length in zip(elements._offsets, elements._lengths):
                self._offsets.append(next_offset)
                self._lengths.append(length)
                self._data[next_offset:next_offset+length] = \
                    elements._data[offset:offset+length]
                next_offset += length

        else:
            self._data = np.concatenate([self._data] + list(elements), axis=0)
            lengths = list(map(len, elements))
            self._lengths.extend(lengths)
            self._offsets.extend(np.cumsum([next_offset] + lengths).tolist()[:-1])

    def copy(self):
        """ Creates a copy of this ``CompactList`` object. """
        # We do not simply deepcopy this object since we might have a chance
        # to use less memory. For example, if the compact list being copied
        # is the result of a slicing operation on a compact list.
        clist = CompactList()
        total_lengths = np.sum(self._lengths)
        clist._data = np.empty((total_lengths,) + self._data.shape[1:],
                               dtype=self._data.dtype)

        idx = 0
        for offset, length in zip(self._offsets, self._lengths):
            clist._offsets.append(idx)
            clist._lengths.append(length)
            clist._data[idx:idx+length] = self._data[offset:offset+length]
            idx += length

        return clist

    def __getitem__(self, idx):
        """ Gets element(s) through indexing.

        Parameters
        ----------
        idx : int, slice, list of int, or ndarray of bool
            Index of the element(s) to get.

        Returns
        -------
        ndarray or ``CompactList``
            When `idx` is an int, returns a single ndarray.
            When `idx` is a slice, a list or a boolean ndarray, returns a
            ``CompactList`` view of the selected elements.
        """
        if isinstance(idx, int) or isinstance(idx, np.integer):
            start = self._offsets[idx]
            return self._data[start:start+self._lengths[idx]]

        elif isinstance(idx, slice):
            clist = CompactList()
            clist._data = self._data
            clist._offsets = self._offsets[idx]
            clist._lengths = self._lengths[idx]
            return clist

        elif isinstance(idx, list):
            clist = CompactList()
            clist._data = self._data
            clist._offsets = [self._offsets[i] for i in idx]
            clist._lengths = [self._lengths[i] for i in idx]
            return clist

        elif isinstance(idx, np.ndarray) and idx.dtype == np.bool:
            clist = CompactList()
            clist._data = self._data
            clist._offsets = [self._offsets[i]
                              for i, take_it in enumerate(idx) if take_it]
            clist._lengths = [self._lengths[i]
                              for i, take_it in enumerate(idx) if take_it]
            return clist

        raise TypeError("Index must be either an int, a slice, a list of int"
                        " or an ndarray of bool! Not " + str(type(idx)))

    def __iter__(self):
        if len(self._lengths) != len(self._offsets):
            raise ValueError("CompactList object corrupted:"
                             " len(self._lengths) != len(self._offsets)")

        for offset, length in zip(self._offsets, self._lengths):
            yield self._data[offset: offset+length]

    def __len__(self):
        return len(self._offsets)

    def __repr__(self):
        return repr(list(self))


def save_compact_list(filename, clist):
    """ Saves a `CompactList` object to a .npz file. """
    np.savez(filename,
             data=clist._data,
             offsets=clist._offsets,
             lengths=clist._lengths)


def load_compact_list(filename):
    """ Loads a `CompactList` object from a .npz file. """
    content = np.load(filename)
    clist = CompactList()
    clist._data = content["data"]
    clist._offsets = content["offsets"].tolist()
    clist._lengths = content["lengths"].tolist()
    return clist
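The storage scheme behind `CompactList` — one contiguous 2D buffer plus per-item `(offset, length)` bookkeeping — can be demonstrated with plain NumPy. This is an illustrative sketch of the layout, not the class itself:

```python
import numpy as np

# Three "streamlines" of different point counts, each point having 3 coords.
items = [np.ones((5, 3)), np.zeros((3, 3)), 2 * np.ones((8, 3))]

# Pack them into a single buffer, remembering where each item starts.
data = np.concatenate(items, axis=0)
lengths = [len(item) for item in items]
offsets = np.cumsum([0] + lengths[:-1]).tolist()  # [0, 5, 8]

# Fetching item i is a cheap slice (a view) into the shared buffer, no copy.
i = 2
view = data[offsets[i]:offsets[i] + lengths[i]]
assert view.shape == (8, 3) and np.all(view == 2)
```

This also shows why slicing and boolean indexing in `__getitem__` can share `_data` and only rebuild `_offsets`/`_lengths`: the buffer never needs to be copied to select a subset.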