Conversation

Contributor

@dominikwelke commented Mar 26, 2025

Hi all,
as discussed in #13033, here is a draft PR to use the official Curry reader code.

In a first step I just use the reader as a module (I only fixed formatting to force it past pre-commit).
It has some drawbacks (e.g. data is always loaded) and I did not implement all possible data yet (e.g. HPI, or epoched recordings), but in general it already works pretty well. I tested it with all their example data, and with one of my own recordings that didn't work in MNE before.

It would be great to get some feedback on how you want me to proceed with this @drammock @larsoner:

  • do we want to stick with the module approach, leave their code untouched, and work with the output (would allow easier updating when they push changes)
  • or should I merge the code more thoroughly, making it easier to maintain and clearer?

BACKGROUND:
the Curry data reader currently can't handle all/newer Curry files.
The plan is to port code from the official Curry reader into MNE-Python.

For permission, see #12855.

closes #12795
closes #13033
closes #12855

@@ -0,0 +1,633 @@
# Authors: The MNE-Python contributors.
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
Member

is this file the official reader? is this file copied from somewhere?

Contributor Author

@dominikwelke commented Mar 28, 2025

Yes, it's copied from https://github.com/neuroscan/curry-python-reader

The false info you flagged was added by [autofix.ci].
Further formatting changes were necessary to pacify the pre-commit hook.

Contributor Author

@dominikwelke commented Mar 28, 2025

You discussed this topic with a Compumedics dev in #12855.

They said they won't supply a PyPI or conda version, but we are free to use the code.
The GitHub repo has a BSD-3 license applied, but they don't include any note in the file itself.

@drammock
Member

do we want to stick with the module approach, leave their code untouched and work with the output (would allow easier updating when they push changes), or should i merge the code more thoroughly. making it easier to maintain and in terms of clarity

Given that @CurryKaiser refused our offer to help them package up their reader for PyPI / conda-forge, I see two remaining options:

  1. "vendor" their code. To make it slightly future-proof, we could write a script (in tools/ I guess) that fetches the latest code from their repo, auto-formats it to make it compatible with MNE's pre-commit requirements, and puts the (formatted but otherwise unmodified) code in mne/io/curry/_vendored.py. (This is basically a manual version of git submodule update because I don't think we should invoke git submodule for this use case.) We then adapt our code in mne/io/curry.py to be a wrapper around their code that basically just gets things into proper MNE container objects; and know that we might need to tweak our wrappers any time the vendored code is updated.

  2. Fully incorporate their code. Re-write their reader to align better with our codebase, in terms of variable names, idioms like _check_option or _validate_type, features like preload=False, etc.

Personally I lean toward option 2. I say this because if we're going to try to support curry files, at a minimum we need to be able to fix bugs when they arise, and ideally we should be willing/able to incorporate new features that have high demand from our users (preload=False is an obvious first example). But if we're fixing bugs, do we open PRs to upstream (with no guarantee of responsiveness), or tweak our "thin" wrapper to handle more and more corner cases? Neither option is appealing, so at that point it starts to seem easier to me to just maintain the entire reader ourselves.

@agramfort
Member

agramfort commented Mar 28, 2025 via email

@drammock
Member

hum... my first reaction is to push a version 0.1 of their package on pypi and rely on this. Basically we maintain a fork and hope that the fork changes are accepted upstream... it feels less hacky and they also have a CI and some testing setup with test_data that I would not duplicate in mne-python...

that indeed is less hacky than my approach to vendoring. I'd be OK with that outcome, though curious what @larsoner will think.

@larsoner
Member

I'm fine with that idea but it would be good to get some blessing/permission from them to do this

@drammock
Member

@CurryKaiser

I'm fine with that idea but it would be good to get some blessing/permission from them to do this

xref to #12855 (comment) where I've asked for confirmation that Compumedics really doesn't want to be the packager and they're OK with us doing it.

@CurryKaiser

@CurryKaiser

I'm fine with that idea but it would be good to get some blessing/permission from them to do this

xref to #12855 (comment) where I've asked for confirmation that Compumedics really doesn't want to be the packager and they're OK with us doing it.

And nothing has changed, so all good from our side. Sorry we couldn't package it for you. And thank you for working on this!

@dominikwelke
Contributor Author

Thanks @CurryKaiser!

OK, sounds like a plan. I can start working on this again soon, if you give the go-ahead @agramfort @drammock @larsoner.

I guess the fork should live in the mne-tools org? I have the necessary rights to create it.

@dominikwelke dominikwelke changed the title [draft] new reader for curry files, using curry-pyhon-reader code [draft] new reader for curry files, using curry-python-reader code Apr 1, 2025
@larsoner
Member

larsoner commented Apr 1, 2025

Yeah I think so

@drammock
Member

drammock commented Apr 1, 2025

Yeah I think so

I already made the fork

@agramfort
Member

agramfort commented Apr 1, 2025 via email

@drammock
Member

drammock commented Apr 2, 2025

xref to mne-tools/curry-python-reader#1

@dominikwelke
Contributor Author

I could use some guidance on two things:

  1. channel locations:
    Curry files come with channel locations, and for EEG it was straightforward to build a montage and apply it.
    But for MEG it seems I need to use other functions. Any pointers would help!
    Do I need to populate info["dig"] directly?

  2. HPI/cHPI data:
    some MEG files seem to come with these data. How do I store them in the raw object?

A few other things to discuss:

  • preload
    Easiest would be to not offer preload=False and just load the data into memory.
    A single load_data() call would also be doable with the official reader, but a chunk reader not really (if we don't want to hack it, e.g. load all data and discard large parts). I'm not sure I'm deep enough in the MNE codebase to know what the implications are (e.g. computations, plots etc. with unloaded data).
    What are your thoughts?

  • epoched files
    The reader code looks as if there could be files with epoched recordings, but there are none among their sample files. Do any of you know more about this? Otherwise I'll ask the Curry devs.

@dominikwelke
Contributor Author

P.S. Could you remind me how to switch off the CIs when pushing these early commits?

@larsoner
Member

larsoner commented Apr 3, 2025

Push commits with [ci skip] in the commit message and the long / expensive CIs shouldn't run (a few quick ones still will, I think).
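For reference, the convention can be demonstrated in a throwaway repo (the commit message below is just an example):

```python
import subprocess
import tempfile

# Throwaway repo to demonstrate the [ci skip] commit-message convention.
repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the throwaway repo and return its stdout."""
    return subprocess.run(
        ["git", "-C", repo, *args], check=True, capture_output=True, text=True
    ).stdout

git("init", "-q")
git(
    "-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "--allow-empty", "-q", "-m", "WIP: early curry reader work [ci skip]",
)
msg = git("log", "-1", "--format=%B")
print("[ci skip]" in msg)  # True
```

CI providers that honor the convention match the marker anywhere in the message, so it can go at the start or end.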

@larsoner
Member

larsoner commented Apr 3, 2025

... for the cHPI stuff it's probably easiest to load a Neuromag raw FIF file with cHPI info and look at how the info is stored for example in info["hpi_subsystem"]. You can also look at the Info docs, especially the notes. It's not complete but it will help.

For preload, since preload=False is in main it would be a functionality regression to remove it. Once you know how the data are stored on disk and how to read an arbitrary time slice from it, it's really not bad to do the work to make preload=False work. So if you can figure this part out in some function, I can help you fit it into the _read_segment_file code. Since the .py file is only a few hundred lines (a lot of which seems like plotting etc. that we won't use), I'm cautiously optimistic we can figure it out and make it work. And then the curry-python-reader code can really be for reading metadata, annotations, sensor locs, etc. plus knowing where to read the data from disk. We can probably even keep the existing _read_segment_file, it should work in theory...
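The "read an arbitrary time slice" part can be sketched with plain NumPy; the dtype, interleaving, and absence of a header here are assumptions for illustration, not Curry's actual on-disk layout:

```python
import os
import tempfile

import numpy as np

# Assumed layout: headerless float32 samples, interleaved as (n_times, n_channels).
n_channels, n_times = 4, 100
data = np.arange(n_times * n_channels, dtype=np.float32).reshape(n_times, n_channels)
with tempfile.NamedTemporaryFile(delete=False, suffix=".dat") as f:
    data.tofile(f)
    fname = f.name

def read_slice(fname, n_channels, start, stop, dtype=np.float32):
    """Read samples [start, stop) without loading the whole file into memory."""
    mm = np.memmap(fname, dtype=dtype, mode="r").reshape(-1, n_channels)
    # Copy so the result doesn't keep the file mapped; MNE wants (n_channels, n_times).
    return np.array(mm[start:stop]).T

chunk = read_slice(fname, n_channels, 10, 20)
print(chunk.shape)  # (4, 10)
os.remove(fname)
```

If something along these lines works for the Curry data block, `_read_segment_file` mostly needs the byte offset of the data block plus the channel count and dtype.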

@dominikwelke
Contributor Author

dominikwelke commented Apr 14, 2025

OK, _read_segment_file does indeed work unchanged.
The reader should now be more or less functional.

  • I'd still need some guidance on handling/storing the channel locations, esp. for MEG data
  • HPI data - looks like I got it wrong - there might not be cHPI data after all, only HPI marker locations provided in different formats depending on the system

@dominikwelke
Contributor Author

@CurryKaiser
Thanks for the permission to use the code, also from my side!

In another place you said you might be able to provide us with test files - could we perhaps get a small one with epoched recordings in it (format version shouldn't matter)?
Your repository for the python reader contains some test files that the reader interprets as epoched, but they don't seem to really be (perhaps the files were truncated for size).

@CurryKaiser

Could be that they were truncated, let me check.

@CurryKaiser

Ok, try these:
EpochedData

@dominikwelke
Contributor Author

Thanks for the file @CurryKaiser!
FYI, we have now packaged and published curryreader on PyPI.
It can be installed via pip install curryreader.

@dominikwelke
Contributor Author

@drammock @larsoner @agramfort
It is on PyPI but not on conda-forge - how is this case dealt with in MNE? Should we also submit it to conda-forge?

Currently pip install mne[full] fetches it, but conda env create --file environment.yml doesn't.

Related question:
which pip dependency level in pyproject.toml should this go to? I treated curryreader like the antio package (for ANT Neuro files), but this makes it an optional requirement (in mne[full]). I believe this means it won't be automatically installed when calling pip install mne?

@larsoner
Member

it is on PyPI but not on conda-forge - how is this case dealt with in MNE? should we also submit it to conda forge?

Yeah use grayskull, it's not too painful, see for example conda-forge/staged-recipes#28279

@dominikwelke
Contributor Author

Yeah use grayskull,

See the conda-forge PR: conda-forge/staged-recipes#29754

@dominikwelke
Contributor Author

@larsoner @drammock this is still waiting for review

@larsoner
Member

larsoner commented Aug 8, 2025

@larsoner @drammock this is still waiting for review

Ahh okay the comment from 2 weeks ago I read as meaning you were going to add to test data, modify code, etc. and then I assumed you would ping for review in another comment. Looks like you modified the code but didn't ping for review until today. Most maintainers don't get notified when commits are made for PRs -- it's waaaay too noisy -- so I didn't realize you had pushed. I should be able to look soon!

Member

@larsoner left a comment

Before I look at the MEG data (hopefully Monday), I noticed a lot of red in the tests that I figured I'd ask about now.

pytest.param(curry8_bdf_file, id="curry 8"),
],
)
@pytest.mark.parametrize("mock_dev_head_t", [True, False])
Member

I was hoping existing tests would be touched/changed as little as possible... Can you explain why this test had to be removed? Or could it be put back and still work?

Contributor Author

curryreader cannot read the c,rfDC test files used in this and other tests.
They have more MEG sensor labels, positions, etc. in the header file than channels in the data file (164 vs 148), and this case is explicitly set to raise in curryreader.

So apparently this shouldn't happen -
I need to know more about these files. Can you give me some background?

  • were these files created by some 3rd-party software?
  • can I just use some other test file, or is there a specific reason to use these files here?
  • FWIW, it's possible to "fix" the files for reading by deleting the surplus channels from the header file. Again, I'd need more information on whether this makes sense

"fname",
[
pytest.param(curry_dir / "test_bdf_stim_channel Curry 7.cef", id="7"),
pytest.param(curry_dir / "test_bdf_stim_channel Curry 8.cdt.cef", id="8"),
Member

Same thing here (and below) ... why did this have to be removed?

Contributor Author

The annotations test was just replaced by another test that does the same.

There are a few minor tests that I actually forgot to put back, though.
I did that and will push the commit once my fixes to the curryreader package are online (mne-tools/curry-python-reader#6).

@larsoner
Member

@dominikwelke so this code, which uses sample's MRI as a surrogate (which isn't a great match but that's okay):

import mne

subjects_dir = mne.datasets.sample.data_path() / "subjects"
raw = mne.io.read_raw_curry("~/Desktop/HPI/HPI.cdt")
print(raw.info["dev_head_t"])
trans = mne.transforms.Transform("mri", "head")
mne.viz.plot_alignment(
    raw.info,
    coord_frame="meg",
    dig=True,
    meg=("helmet", "sensors"),
    subject="sample",
    subjects_dir=subjects_dir,
    surfaces=dict(head=0.2),
    trans=trans,
)

fails on main, but on this PR produces:

image

So a couple of things are noticeable here. First, the helmet doesn't match the sensors. This is because the sensor defs are wrong:

>>> raw.info["chs"][0]
{'loc': array([-0.04734   ,  0.1175    ,  0.01531   ,  0.        , -0.32730447,
        0.94491893, -0.93714016, -0.32973247, -0.11421394, -0.34895319,
        0.88552147,  0.30673016]), 'unit_mul': 0 (FIFF_UNITM_NONE), 'range': 1.0, 'cal': 1.0, 'kind': 1 (FIFFV_MEG_CH), 'coil_type': 3024 (FIFFV_COIL_VV_MAG_T3), 'unit': 112 (FIFF_UNIT_T), 'coord_frame': 1 (FIFFV_COORD_DEVICE), 'ch_name': 'M01', 'scanno': 1, 'logno': 1}

This does not appear to be a Neuromag system I think (184 channels), so the coil_type is wrong here.

Second, the dev_head_t is None, which shouldn't be the case for most datasets. I saw during reading:

found 2 cHPI samples for 10 coils

I haven't looked into what this means... but if there were 5 HPI coils, and they were defined in the head coordinate frame (5 points) and the MEG/device coordinate frame (same matched 5 points) then these can be used to get a proper info["dev_head_t"] using mne.transforms._fit_matched_points. I don't know the Curry format well enough to know if this is what's needed, but it's possible...
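The fit that `_fit_matched_points` performs can be illustrated with a plain-NumPy rigid (Kabsch/Horn-style) alignment of matched points; this is a from-scratch sketch of the underlying math, not MNE's implementation, and the points below are synthetic:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t such that dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "device" points, then the same points rotated/translated into "head" coords
rng = np.random.default_rng(0)
src = rng.standard_normal((5, 3)) * 0.1    # 5 coil positions, ~10 cm scale
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.01, -0.02, 0.03])

R, t = fit_rigid(src, dst)
resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
print(resid.max() < 1e-12)  # noise-free points are recovered essentially exactly
```

With real HPI data the per-point residual is what gets compared against the tolerance; a large residual means the two point sets are not related by a rigid transform (wrong matching, wrong units, or digitization error).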

@dominikwelke
Contributor Author

Ahh okay the comment from 2 weeks ago I read as meaning you were going to add to test data, modify code, etc.

Yeah, this was a misunderstanding. I had repeatedly asked for review, with specific questions. No progress could be made without you actually looking into the code, so I just bumped this up again before leaving for conference travel :)

Thanks for having a look @larsoner! I will comment on your points below.

@dominikwelke
Contributor Author

dominikwelke commented Sep 3, 2025

This does not appear to be a Neuromag system I think (184 channels), so the coil_type is wrong here.

Noted - that's the default coil type that create_info sets for channel type "mag"; I'll override it with FIFFV_COIL_CTF_GRAD, as in the legacy reader.

Second, the dev_head_t is None, which shouldn't be the case for most datasets. (..)

Thanks for the lead. I cannot push the code yet, as curryreader is outdated, but I do get a weird error in _fit_matched_points (within mne.io.ctf.trans._quaternion_align).

I call the following (hpi_u/hpi_c are as read from the HPI.cdt file):

import numpy as np
from mne.io.ctf.trans import _quaternion_align

hpi_u = np.array([[ 0.0503383, -0.0474806,  0.096383 ],
       [ 0.0154084,  0.0409653,  0.124915 ],
       [-0.0210227, -0.0238944,  0.128951 ],
       [-0.0702213,  0.0336918,  0.101976 ],
       [-0.0835989, -0.0297455,  0.0697126]])
hpi_c = np.array([[ 0.05053   , -0.06227   ,  0.1512    ],
       [ 0.02811   ,  0.05691   ,  0.14875999],
       [-0.02168   , -0.04361   ,  0.17547   ],
       [-0.05725   ,  0.02361   ,  0.1471    ],
       [-0.08078   , -0.03251   ,  0.12539   ]])
       
unknown_curry_t = _quaternion_align("unknown", "ctf_meg", hpi_u, hpi_c, 1e-2)

and get:

error - Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function dot>) found for signature:
 
 >>> dot(array(float64, 2d, F), array(float32, 2d, C))
 
There are 4 candidate implementations:
  - Of which 2 did not match due to:
  Overload in function 'dot_2': File: numba/np/linalg.py: Line 536.
    With argument(s): '(array(float64, 2d, F), array(float32, 2d, C))':
   Rejected as the implementation raised a specific error:
     TypingError: Failed in nopython mode pipeline (step: nopython frontend)
   No implementation of function Function(<intrinsic _impl>) found for signature:
    
    >>> _impl(array(float64, 2d, F), array(float32, 2d, C))
    
   There are 2 candidate implementations:
     - Of which 2 did not match due to:
     Intrinsic in function 'dot_2_impl.<locals>._impl': File: numba/np/linalg.py: Line 554.
       With argument(s): '(array(float64, 2d, F), array(float32, 2d, C))':
      Rejected as the implementation raised a specific error:
        TypingError: np.dot() arguments must all have the same dtype
     raised from /Users/phtn595/miniconda3/envs/mnedev/lib/python3.13/site-packages/numba/np/linalg.py:574
   
   During: resolving callee type: Function(<intrinsic _impl>)
   During: typing of call at /Users/phtn595/miniconda3/envs/mnedev/lib/python3.13/site-packages/numba/np/linalg.py (593)
   
   
   File "../../../miniconda3/envs/mnedev/lib/python3.13/site-packages/numba/np/linalg.py", line 593:
               def _dot2_codegen(context, builder, sig, args):
                   <source elided>
   
           return lambda left, right: _impl(left, right)
           ^
   
   During: Pass nopython_type_inference
  raised from /Users/phtn595/miniconda3/envs/mnedev/lib/python3.13/site-packages/numba/core/typeinfer.py:1074
  - Of which 2 did not match due to:
  Overload in function 'dot_3': File: numba/np/linalg.py: Line 795.
    With argument(s): '(array(float64, 2d, F), array(float32, 2d, C))':
   Rejected as the implementation raised a specific error:
     TypingError: missing a required argument: 'out'
  raised from /Users/phtn595/miniconda3/envs/mnedev/lib/python3.13/site-packages/numba/core/typing/templates.py:791

During: resolving callee type: Function(<built-in function dot>)
During: typing of call at /Users/phtn595/Work/git_contributions/mne-python/mne/transforms.py (1477)


File "mne/transforms.py", line 1477:
def _fit_matched_points(p, x, weights=None, scale=False):
    <source elided>
    mu_p = np.dot(weights_.T, p)[0]
    mu_x = np.dot(weights_.T, x)[0]
    ^

During: Pass nopython_type_inference

Any idea how to get around this, @larsoner?

@larsoner
Member

larsoner commented Sep 3, 2025

numba is very picky about dtypes; pass the function hpi_u.astype(np.float64) and hpi_c.astype(np.float64) and it should work. It's not happy about one array being float64 and the other float32.
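The mismatch is easy to reproduce with NumPy alone (the values below are made up); NumPy's own np.dot happily promotes mixed dtypes, which is why the problem only surfaces inside the numba-compiled path:

```python
import numpy as np

# One array parsed as float32 (e.g. straight from the file), one as float64.
hpi_u = np.array([[0.0503, -0.0475, 0.0964]], dtype=np.float64)
hpi_c = np.array([[0.0505, -0.0623, 0.1512]], dtype=np.float32)
assert hpi_u.dtype != hpi_c.dtype          # this is what numba rejects

# Plain NumPy silently promotes the product to float64...
assert np.dot(hpi_u, hpi_c.T).dtype == np.float64

# ...so the fix is simply to cast both inputs before the numba-backed call:
hpi_u64, hpi_c64 = hpi_u.astype(np.float64), hpi_c.astype(np.float64)
assert hpi_u64.dtype == hpi_c64.dtype == np.float64
```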

@dominikwelke
Contributor Author

dominikwelke commented Sep 5, 2025

pass the function hpi_u.astype(np.float64) and hpi_c.astype(np.float64)

This solved the issue.

Directly following from that, @larsoner:

In the example case of HPI.cdt the matching is not very close:

Quaternion matching (desired vs. transformed):
 50.53  -62.27  151.20 mm <->   52.73  -59.35  145.81 mm (orig :   50.34  -47.48   96.38 mm) diff =    6.512 mm
 28.11   56.91  148.76 mm <->   24.94   35.26  157.37 mm (orig :   15.41   40.97  124.92 mm) diff =   23.519 mm
-21.68  -43.61  175.47 mm <->  -14.44  -24.73  177.40 mm (orig :  -21.02  -23.89  128.95 mm) diff =   20.312 mm
-57.25   23.61  147.10 mm <->  -62.52   29.30  142.28 mm (orig :  -70.22   33.69  101.98 mm) diff =    9.133 mm
-80.78  -32.51  125.39 mm <->  -81.78  -38.35  125.06 mm (orig :  -83.60  -29.75   69.71 mm) diff =    5.931 mm

which is beyond both the default tol=1e-4 (0.1 mm) and the tol=1e-2 (1 cm) set in the legacy read_raw_curry.

This currently breaks the reader, as _quaternion_align simply raises a RuntimeError
(and not a very informative one: "Something is wrong: quaternion matching did not work (see above)").

How to proceed?

  • should I remove/crank up the tolerance and replace it with a warning if the matching is bad?
  • ..or should we keep the 10 mm tolerance and fall back to dev_head_t=None if the matching is too bad?

FWIW, the sensor alignment looks better with this transform, but probably not great yet? I don't know if this is due to the sample MRI.
image
image

@drammock
Member

drammock commented Sep 5, 2025

curryreader 0.1.2 release is up: https://pypi.org/project/curryreader/

@drammock
Member

drammock commented Sep 5, 2025

fwiw, the sensor alignment looks better with this transform, but probably not great yet? idk if this is due to the sample mri..

yikes, looks like the subject's nose is intersecting with the helmet surface!

@dominikwelke
Contributor Author

Yeah, the nasion landmark definitely doesn't align with the sample subject's nasion :)

@dominikwelke
Contributor Author

dominikwelke commented Sep 8, 2025

Just pushed the version with updated coil type and dev_head_t that produces the plots above.

For you to make a call, @larsoner:

  1. comment on the alignment plots above
  2. violated tolerance in HPI point matching:

this currently breaks the reader as _quaternion_align simply raises a RuntimeError

  • should I remove/crank up the tolerance and replace it with a warning if the matching is bad?
  • ..or keep the 10 mm tolerance and fall back to dev_head_t=None if the matching is too bad?

On another note:
the failing tests are expected, as explained in another response above.
There were questions, too, @larsoner - tl;dr: more info on the test files used (c,rfDC Curry 7.dat / 8.cdt) would help! They don't seem to be official Curry files?
I can "fix" the header files, and then all tests pass.

@larsoner
Member

should i remove/crank up the tolerance and replace by a warning if the matching is bad?

Let's add an on_bad_match="warn" option, using _on_missing internally.

fwiw, the sensor alignment looks better with this transform, but probably not great yet? idk if this is due to the sample mri..

I think there is a bug where the helmet surface is in the wrong place. If you look just at the MEG sensors (or plot with meg="sensors") it seems reasonable. The EEG electrodes (or dig points?) in pink look like they might be spun 180 degrees, or flipped forward-backward or something. The trans is also wrong which makes the MRI head surface poorly aligned with the head coordinate frame...

Can you paste a minimal example to reproduce the plot_alignment? Image? Don't need every angle, just one plot I can click around etc.

there were questions, too @larsoner - tl;dr: more info on the used testfiles (c,rfDC Curry 7.dat / 8.cdt) would help! they dont seem to be official curry files?

I am not sure about these... can you browse through git blame to find a PR where they're first used? Ideally there it would say where the files came from. If you can't figure it out I can help



def _read_dig_montage_curry(ch_names, ch_types, ch_pos, landmarks, landmarkslabels):
import re
Member

No need to nest this, should go at the top

rpa=landmark_dict["RPA"],
hsp=hsp_pos,
hpi=hpi_pos,
coord_frame="head",
Member

Unless you've transformed this to the head coord frame already (and it doesn't look like you have?) this should probably be

Suggested change
coord_frame="head",
coord_frame="unknown",

that way it'll transform everything to head coords using lpa/nasion/rpa

coord_frame="head",
)
else: # not recorded?
warn("No eeg sensor locations found in file.")
Member

Probably should be an error if you try to read dig and can't

_check_curry_filename,
_extract_curry_info,
)
from ._dig_montage_utils import _read_dig_montage_curry
Member

No need to nest this one

Comment on lines +120 to +121
# for fname in [curry7_rfDC_file, curry8_rfDC_file]:
# raw = read_raw_curry(fname, verbose=True)
Member

Suggested change
# for fname in [curry7_rfDC_file, curry8_rfDC_file]:
# raw = read_raw_curry(fname, verbose=True)

Comment on lines +316 to +318
raw1 = read_raw_curry(fname, preload=False)
raw1.load_data()
assert raw1 == read_raw_curry(fname, preload=True)
Member

if you use just _test_raw_reader it should automatically check preloading equivalence (that func has a test_preloading=True default)

events, _ = events_from_annotations(raw, event_id=EVENT_ID)
assert_allclose(events, REF_EVENTS)
assert raw.info["dev_head_t"] is None
assert not raw.info["dev_head_t"]
Member

This change seems less explicit?



# In the new version based on curryreader package, time_step is always prioritized, i.e.
# sfreq in the header file will be ignored and overridden by sampling interval
Member

Can we detect this mismatch and at least keep as a warning?

@drammock
Member

I think there is a bug where the helmet surface is in the wrong place. If you look just at the MEG sensors (or plot with meg="sensors") it seems reasonable.

Does that mean that https://mne.tools/dev/auto_examples/visualization/meg_sensors.html#ctf is incorrect? Locally interacting with that 3D plot it looks OK (though I've not used a CTF system so just using instinct about what looks "reasonable")

@larsoner
Member

larsoner commented Sep 10, 2025

Does that mean that https://mne.tools/dev/auto_examples/visualization/meg_sensors.html#ctf is incorrect? Locally interacting with that 3D plot it looks OK (though I've not used a CTF system so just using instinct about what looks "reasonable")

I think that one is okay... the sensors there look properly aligned with the helmet. I suspect the curry code modifies the MEG sensor coordinates or something -- at least compared to how we represent them when reading native files from the CTF system -- so then when we put the helmet in assuming the MEG sensors are in the same place, it's wrong. At least that's what I'm assuming is going on.

Maybe a dumb question but are those CTF sensors? In the code we have ch["coil_type"] = FIFF.FIFFV_COIL_CTF_GRAD hard-coded, but if it's actually from a different system (like KIT) then it'll pull the wrong helmet to show...

Successfully merging this pull request may close these issues:

  • Loading data file in CDT format
  • CURRY Data Format Reader only works in specific cases
  • MEG data problem consulting