avoid runtime warnings from nan comparisons and divide by zero #429


Merged 14 commits into pvlib:master on Feb 21, 2018

Conversation


@wholmgren wholmgren commented Feb 14, 2018

  • Closes #428 (avoid and/or silence common warnings)
  • Fully tested. Added and/or modified tests to ensure correct behavior for all reasonable inputs. Tests must pass on the TravisCI and Appveyor testing services.
  • Code quality and style is sufficient. Passes git diff upstream/master -u -- "*.py" | flake8 --diff and/or landscape.io linting service.
  • New code is fully documented. Includes sphinx/numpydoc compliant docstrings and comments in the code where necessary.
  • Updates entries to docs/sphinx/source/api.rst for API changes.
  • Adds description and name entries in the appropriate docs/sphinx/source/whatsnew file for all changes.

I'll look into adding a few more of the items in #428.

@wholmgren wholmgren added this to the 0.5.2 milestone Feb 14, 2018
@wholmgren
Member Author

@adriesse pointed out that some of these changes have a negative impact on readability. I agree that the more conservative masking pattern is harder to read.

As of now, I'm only ignoring warnings using with np.errstate in situations where doing otherwise would seriously hurt readability. But that's just my opinion of what constitutes serious readability issues. We could stick with more of the older patterns and make more use of with np.errstate. Or maybe there's a better pattern that I don't know about.

For what it's worth, the process of hunting down these warnings has certainly improved the test suite. I think that says something about the importance of paying attention to the warnings rather than silencing them.

So, overall, I think the changes are a net positive, but I would revert some of the changes if the community feels differently.

The one thing that I do not want to do is advise users to set their global error state to ignore warnings.
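To make the trade-off concrete, here is a minimal, hypothetical sketch of the two patterns under discussion (the array values are invented for illustration, not taken from pvlib):

```python
import numpy as np

x = np.array([4.0, 0.0, 2.0])

# Pattern 1: compute everywhere, silence the divide-by-zero warning locally
with np.errstate(divide='ignore'):
    y1 = 1.0 / x  # the zero entry maps to inf, no RuntimeWarning emitted

# Pattern 2: mask first so the division never touches the zero entries
nonzero = x != 0
y2 = np.full_like(x, np.inf)  # preselect the value for the masked-out entries
np.divide(1.0, x, where=nonzero, out=y2)
```

Both produce [0.25, inf, 0.5]; the errstate version is shorter, while the masked version makes the special-case handling explicit at the cost of extra setup code.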


cwhanse commented Feb 15, 2018

Personally I like the changes, and don't see them as obscuring the calculations.

The one thing that I do not want to do is advise users to set their global error state to ignore warnings.

^ That.

It would be nice to better handle the np.exp() overflow warnings in v_from_i if you get that far.


adriesse commented Feb 15, 2018

Readability is subjective, and as long as you have it on your radar screen, that's fine with me.

At one point I was silencing warnings at the module, function, and/or line-number level. That seems pretty conservative to me, and I would have no hesitation about recommending it to anyone.

Here is an example:

import warnings

warnings.resetwarnings()
# ignore all DeprecationWarnings
warnings.simplefilter(action='ignore', category=DeprecationWarning)
# ignore RuntimeWarnings from one specific module
warnings.filterwarnings(action='ignore', category=RuntimeWarning,
                        module='pandas.formats.format')
# ignore RuntimeWarnings from one specific line of a module
warnings.filterwarnings(action='ignore', category=RuntimeWarning,
                        module='pvlib.clearsky', lineno=182)

@wholmgren
Member Author

@adriesse I do appreciate the push back on the readability issue. I've changed some of the mask array names to improve readability of the new pattern. The np.where statements now read as, for example, iam = np.where(aoi_gte_90, 0, iam) instead of the more generic looking iam = np.where(mask, 0, iam). The mask creation code is still a bit clunky, but I hope this helps.

I didn't know that you could silence a warning based on line number -- nice trick!

I think this PR is ready to merge if we can agree that the numpy version bump discussed in #430 is ok.

with np.errstate(over='ignore'):
    argW = (I0[idx_p] / (Gsh[idx_p]*a[idx_p]) *
            np.exp((-I[idx_p] + IL[idx_p] + I0[idx_p]) /
                   (Gsh[idx_p]*a[idx_p])))
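For reference, a stripped-down sketch of what the over='ignore' context does (the values are invented; float64 np.exp overflows for arguments above roughly 709):

```python
import numpy as np

z = np.array([1.0, 800.0])
with np.errstate(over='ignore'):
    # the 800.0 entry overflows to inf without emitting a RuntimeWarning
    w = np.exp(z)
```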

Member Author

@cwhanse (and maybe @thunderfish24): I added a with np.errstate(over='ignore'): context to v_from_i. Perhaps this will be made obsolete before too long.


pdc = p_dc / p_nom
vdc = v_dc / v_nom
with np.errstate(invalid='ignore', divide='ignore'):
@adriesse adriesse Feb 20, 2018
Comment to insert: zero voltage will lead to division by zero, but since power is set to the night-time value later, these errors can be safely ignored.

ac_power = np.where((ac_power < p_nt) | (vdc == 0), p_nt, ac_power)
ac_power = np.where(ac_power > pac_max, pac_max, ac_power)
ac_power = np.where(vdc == 0, p_nt, ac_power)
ac_power = np.maximum(ac_power, p_nt)
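As a rough sketch of how the clipping statements above relate, with hypothetical stand-in values for p_nt, pac_max, and the arrays:

```python
import numpy as np

p_nt, pac_max = 1.0, 250.0                # hypothetical night-time / max AC power
vdc = np.array([0.0, 0.5, 1.0])
ac_power = np.array([-5.0, 300.0, 100.0])

ac_power = np.where(ac_power > pac_max, pac_max, ac_power)  # clip to the maximum
ac_power = np.where(vdc == 0, p_nt, ac_power)               # night-time value at zero voltage
ac_power = np.maximum(ac_power, p_nt)                       # floor at night-time power
```

The final np.maximum line is the compact equivalent of a where-based floor; as the thread notes, whether np.maximum warns on NaN input has varied across numpy versions.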
Member

What do you think about using the out parameter here?

Member Author

Either way is fine with me. maximum and minimum ignore nan values, and I figured the maximum function was easier to read than the where/out pattern.

Member

On my computer np.maximum([1,2,3,np.nan], 2) gives a runtime warning...

Also, maximum has an out parameter too.

Member Author

Maybe a numpy version difference? I'm using 1.14.0 in my test environment.

Member

I appear to be using 1.11.3.

iam = np.where(aoi > 90, 0, iam)
aoi_gt_90 = np.full_like(aoi, False, dtype='bool')
np.greater(aoi, 90, where=notnan, out=aoi_gt_90)
iam = np.where(aoi_gt_90, 0, iam)
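A self-contained sketch of the two approaches being compared here (array values invented for illustration):

```python
import numpy as np

aoi = np.array([30.0, 120.0, np.nan])
iam = np.ones_like(aoi)

# Masked-comparison pattern: only compare where aoi is not NaN,
# so np.greater never operates on a NaN entry
notnan = ~np.isnan(aoi)
aoi_gt_90 = np.full_like(aoi, False, dtype='bool')
np.greater(aoi, 90, where=notnan, out=aoi_gt_90)
iam_masked = np.where(aoi_gt_90, 0, iam)

# errstate alternative: compare everywhere, silence any invalid-value warning
with np.errstate(invalid='ignore'):
    iam_errstate = np.where(aoi > 90, 0, iam)
```

Both yield [1, 0, 1] here, since NaN > 90 evaluates to False; in the full function the NaN inputs still propagate to the result through the surrounding arithmetic.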
@adriesse adriesse Feb 20, 2018

I would suggest a block-level ignore of invalid warnings around these (originally) three statements (lines 839-848). I think aoi nans propagate fine to iam, and then the logic is all correct the way it is.

Member Author

I agree that the errstate call is easier to read here, and I don't think we lose anything by using it. Updated accordingly.

@wholmgren
Member Author

Closes #430.

@wholmgren wholmgren merged commit 04bd753 into pvlib:master Feb 21, 2018
@adriesse
Member

I only really aspire to nudge things a little one way or another, and I guess I've succeeded at that. I learned a few new tricks too. Still, I would have preferred fewer masks and more commented ignores in the end result.

@wholmgren wholmgren deleted the nancomp branch February 22, 2018 21:47