groupby(pd.Grouper) ignores loffset #28302
Comments
These calls aren't really equivalent: the first returns a reindexed data frame, whereas the second maintains the original index values but buckets them according to a particular time grouping.
Hmm, actually not sure. I don't think we have a ton of test coverage on loffset with groupby, though. If you want to take a look, we would certainly accept PRs to clean it up.
@WillAyd I forgot to mention that not only is the index "wrong", but the aggregation does not take the offset into account either. I will update the issue shortly.
I was mixing things up. loffset should not have an effect on the aggregation, only on the "resampled time labels" (quote from the resample docs). For Grouper, the documentation does not really state what loffset is for. To me it seems natural that it should behave the same as in resample, and therefore I still think this is an issue.
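The "labels only" behaviour described above can be illustrated with the approach pandas later recommended as the loffset replacement, namely shifting the result index after aggregating. This is a hedged sketch on hypothetical data, not code from the issue:

```python
import pandas as pd

# Hypothetical series; loffset should only move the bin labels,
# not which rows land in which bin.
idx = pd.date_range("2019-09-01", periods=4, freq="min")
s = pd.Series([1, 2, 3, 4], index=idx)

res = s.resample("2min").sum()  # bins [1, 2] and [3, 4]

# Modern replacement for loffset='30s': shift the labels afterwards.
shifted = res.copy()
shifted.index = shifted.index + pd.Timedelta("30s")

# The aggregated values are unchanged; only the labels move.
assert (res.values == shifted.values).all()
```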
Yea, I think so too. That part of GroupBy I don't think we have a lot of coverage on, so would certainly welcome any investigation and PRs you have into it.
WORKAROUND: use base= instead of loffset= in pd.Grouper; the developer's implementation and documentation are still pending. Ideally there would be three parameters:

- loffset (label_offset, default=0): the starting boundary offset of every group label;
- gstart (group_start, default=loffset): the starting offset of every group's data;
- gspan (group_span, default=1): the span of every group's data.

For example, suppose we want to split time-series data into groups running from 9:00am each day to 9:00am the next day, while keeping each group's label at 0am (so the datetime object reduces to a date, i.e. 2020-05-26 00:00:00 => 2020-05-26). Suppose further that within each group we also want the previous two days of data (from 9am two days before until 9am on that day) in addition to that day's data; in other words, there would be duplicate entries among adjacent groups and the total number of data rows would be tripled. Then we would call pd.Grouper(freq='D', loffset=0, gstart=9.0/24-2, gspan=3). This will give rise to:
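For reference, the base= workaround mentioned above was itself deprecated in pandas 1.1 in favour of offset=. The 9am-to-9am daily grouping from the example can be sketched with it as follows, on hypothetical data (the gstart/gspan parameters proposed above do not exist in pandas):

```python
import pandas as pd

# Hypothetical hourly data spanning three days.
idx = pd.date_range("2020-05-25 00:00", "2020-05-27 23:00", freq="h")
df = pd.DataFrame({"x": 1.0}, index=idx)

# base= (as in the workaround above) was deprecated in pandas 1.1
# in favour of offset=; this groups each day from 09:00 to 09:00
# the next day, labelling each group with its 09:00 left edge.
daily = df.groupby(pd.Grouper(freq="D", offset="9h")).sum()
```

The overlapping three-day groups the comment proposes (gspan=3) have no direct pd.Grouper equivalent; they would need something like df.rolling("3D") or manual bucketing instead.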
Code Sample, a copy-pastable example if possible
Problem description
I thought the two calls should be equivalent. However, the output is:
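The original copy-pastable sample and its output did not survive in this thread. A minimal sketch of the kind of comparison the report describes, on hypothetical data:

```python
import pandas as pd

# Hypothetical minimal data; the issue's original sample is not preserved.
idx = pd.date_range("2019-09-01", periods=6, freq="min")
df = pd.DataFrame({"x": range(6)}, index=idx)

# Two ways of grouping by a time frequency that one might expect
# to be equivalent:
g1 = df.groupby(pd.Grouper(freq="2min")).sum()
g2 = df.resample("2min").sum()
assert g1.equals(g2)  # they agree when no loffset is involved

# The report is that in pandas 0.25, passing loffset=... to
# pd.Grouper was silently ignored by groupby, while resample
# honoured it, so the two paths diverged.
```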
Output of pd.show_versions()
INSTALLED VERSIONS
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.0-17134-Microsoft
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.1
pytz : 2019.2
dateutil : 2.8.0
pip : 19.2.3
setuptools : 41.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : 7.8.0
pandas_datareader: 0.7.4
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.14.1
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None