I have a large data set that includes a "category" column. I'm using the data to build several models, one for each category in the data (5 categories total). I'd like to use `groupby` to do this, and I wrote a custom function that can be passed to `GroupBy.apply`. It works fine, although the function is quite slow even when it's building just one model, let alone 5. Unfortunately, because of the way `GroupBy.apply` works, it's actually building 6 models instead of 5. As a result, my code using `GroupBy.apply` is about 20% slower than it would be if I'd just written a for loop. As the documentation for `GroupBy.apply` states:
> In the current implementation, `apply` calls `func` twice on the first group to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if `func` has side effects, as they will take effect twice for the first group.
Since I know that my function is going to be slow anyway, would it be possible to add an argument to `apply` that lets me opt into the "slow code path" from the start and skip the test run of the function on the first group? When my custom function takes 20 minutes each time it runs, being able to cut out the extra iteration would be quite nice, and I can't imagine that any time lost by taking the slow path over the fast path would make a difference. (Or am I completely wrong about that?)
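In the meantime, the workaround I can think of is to memoize on the group label, so the duplicate call on the first group is essentially free. A rough sketch, where `build_model` and the toy data stand in for my real (20-minute) model-building code:

```python
import pandas as pd

df = pd.DataFrame({'category': list('aabbc'), 'x': range(5)})

def build_model(g):
    # stand-in for the real, expensive model fit
    return g['x'].mean()

_models = {}

def build_model_once(g):
    # apply() evaluates the first group twice; caching on the group
    # label (g.name) means the second evaluation just hits the cache
    if g.name not in _models:
        _models[g.name] = build_model(g)
    return _models[g.name]

result = df.groupby('category').apply(build_model_once)
```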
Sure, I could just write a `for` loop and everything would work fine, but I really like the "neatness" of using `GroupBy`, if that makes sense. Additionally, it would be nice to be able to use `GroupBy.apply` in a way similar to how the `do` function in `dplyr` works after grouping.
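For reference, the plain loop I'd fall back to iterates over the `GroupBy` object directly, so each group is seen exactly once (same hypothetical `build_model` as above):

```python
models = {}
for name, group in df.groupby('category'):
    models[name] = build_model(group)  # called exactly once per group
```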
Code Samples
Current functionality
```python
In [1]: import pandas as pd

In [2]: df = pd.DataFrame({'A': list('aaabbbcccc'),
   ...:                    'B': [3, 4, 3, 6, 5, 2, 1, 9, 5, 4],
   ...:                    'C': [4, 0, 2, 2, 2, 7, 8, 6, 2, 8]})

In [3]: def print_name_and_describe(g):
   ...:     print(g.name)
   ...:     return g.describe()
   ...:

In [4]: df.groupby('A').apply(print_name_and_describe)
a
a
b
c
Out[4]:
                 B         C
A
a count   3.000000  3.000000
  mean    3.333333  2.000000
  std     0.577350  2.000000
  min     3.000000  0.000000
  25%     3.000000  1.000000
  50%     3.000000  2.000000
  75%     3.500000  3.000000
  max     4.000000  4.000000
b count   3.000000  3.000000
  mean    4.333333  3.666667
  std     2.081666  2.886751
  min     2.000000  2.000000
  25%     3.500000  2.000000
  50%     5.000000  2.000000
  75%     5.500000  4.500000
  max     6.000000  7.000000
c count   4.000000  4.000000
  mean    4.750000  6.000000
  std     3.304038  2.828427
  min     1.000000  2.000000
  25%     3.250000  5.000000
  50%     4.500000  7.000000
  75%     6.000000  8.000000
  max     9.000000  8.000000
```

Note that `a` is printed twice: `apply` evaluated the function on the first group two times.
Suggested functionality
```python
In [4]: df.groupby('A').apply(print_name_and_describe, use_slow_path=True)
a
b
c
Out[4]:
                 B         C
A
a count   3.000000  3.000000
  mean    3.333333  2.000000
  std     0.577350  2.000000
  min     3.000000  0.000000
  25%     3.000000  1.000000
  50%     3.000000  2.000000
  75%     3.500000  3.000000
  max     4.000000  4.000000
b count   3.000000  3.000000
  mean    4.333333  3.666667
  std     2.081666  2.886751
  min     2.000000  2.000000
  25%     3.500000  2.000000
  50%     5.000000  2.000000
  75%     5.500000  4.500000
  max     6.000000  7.000000
c count   4.000000  4.000000
  mean    4.750000  6.000000
  std     3.304038  2.828427
  min     1.000000  2.000000
  25%     3.250000  5.000000
  50%     4.500000  7.000000
  75%     6.000000  8.000000
  max     9.000000  8.000000
```

With the proposed argument, each group name would be printed exactly once.
```
INSTALLED VERSIONS
commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None

pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.5.0
Cython: None
numpy: 1.14.1
scipy: None
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0b10
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```