Conversation

@alex-lew
Collaborator

This fixes two things about the current implementation of log-categorical:

  1. The scorer for the distribution had been returning a probability instead of a log probability.
  2. The scores-to-probabilities function had reverted to the version from my explorations branch instead of the fixed version from master. This goes back to the version from master, plus one extra fix for stability: when all scores passed to log-categorical are negative infinity, we won't error.
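The stability fix described in point 2 can be sketched in log space. Below is a minimal Python sketch of log-sum-exp normalization with the all-negative-infinity fallback the PR describes; the function name is hypothetical and not Metaprob's actual API.

```python
import math

def log_categorical_probs(scores):
    """Convert log-scores to normalized probabilities via log-sum-exp.

    If every score is -inf, fall back to a uniform distribution
    rather than erroring (the stability behavior described above).
    """
    m = max(scores)
    if m == -math.inf:
        # All scores are -inf: exp of every score is 0, so the normalizer
        # would be 0/0. Default to uniform instead of raising.
        return [1.0 / len(scores)] * len(scores)
    # Subtracting the max before exponentiating avoids overflow/underflow.
    total = sum(math.exp(s - m) for s in scores)
    return [math.exp(s - m) / total for s in scores]
```

Subtracting the maximum score before exponentiating is the standard trick that keeps the normalization numerically stable even when scores are large in magnitude.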

@fsaad
Collaborator

fsaad commented Feb 11, 2019

2. plus one extra fix for stability: when all scores passed to log-categorical are negative infinity, we won't error.

In the case that all scores are negative infinity, it seems more sensible to throw a human-readable error indicating that a vector of zeros is an invalid probability vector, as opposed to defaulting to the uniform distribution. When a user calls log-categorical on a vector of zeros, something has typically gone wrong in the user's code, and they are better off handling it explicitly themselves.

@alex-lew
Collaborator Author

alex-lew commented Feb 11, 2019

@fsaad

  1. Should the implementation of importance resampling use log-categorical?
  2. What should happen during importance resampling if all particles have zero weight?

My instinct is that (1) should be “yes,” since we’d like inference algorithms in Metaprob to be written like models, using the same random primitives. And I’m not sure I want importance resampling to error in (2), but could be convinced.

@joshuathayer
Contributor

Ah, the reversion from the master version to the one on your branch was my mistake during the "big merge" some weeks ago. Sorry about that.

Should we open an issue to change the implementation of importance resampling?

@fsaad
Collaborator

fsaad commented Feb 11, 2019

  1. Should the implementation of importance resampling use log-categorical?

I agree with you that the answer is yes.

However, the behavior of log-categorical should be reasoned about independently of how importance resampling will use it. It is still possible for log-categorical to throw an error on impossible weights while importance resampling handles this case as it pleases: either by failing (which is what happens in cgpm here, where we check for infinities before calling log_pflip and raise an error to the user), or by deciding that infinities mean uniform (which is currently baked into log-categorical in this pull request).
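A validation pass of the kind described above (checking weights before they ever reach log-categorical) might look like this. This is a hedged sketch mirroring the cgpm-style check, not cgpm's or Metaprob's actual code; the function name is hypothetical.

```python
import math

def check_importance_weights(log_weights):
    """Validate log-weights before resampling.

    Hypothetical helper: raises a human-readable error instead of
    letting a degenerate weight vector reach the sampler.
    """
    if not log_weights:
        raise ValueError("empty weight vector")
    if all(w == -math.inf for w in log_weights):
        raise ValueError(
            "all importance weights are zero; the constraints may be "
            "impossible under the model")
    if any(w == math.inf or math.isnan(w) for w in log_weights):
        raise ValueError("non-finite importance weight encountered")
    return log_weights
```

This keeps the policy decision (error vs. uniform) in the resampler, leaving the categorical primitive free to be strict.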

  2. What should happen during importance resampling if all particles have zero weight?

I'm interested in your reasoning why you would not prefer an error.

My reasoning is that a vector of either all negative infinities or a single positive infinity is an indication that something has gone wrong upstream. Possibilities include:

(i) the user may have provided impossible constraints, so that all possible target densities are zero (giving a weight of log(0) = -inf or a math domain error);
(ii) the user may have provided highly unlikely constraints, so that the target densities at the proposal points are zero (giving a weight of log(0) = -inf or a math domain error);
(iii) the user may have specified a proposal whose support does not contain the target, so that proposal densities are zero (giving a weight of log(c/0) = +inf);
(iv) the user may have specified both a narrow proposal and an impossible constraint, giving zeros for both the target and proposal density (and a weight of log(0/0) = log(nan)).

In all these cases my experience has been it is preferable for the user to know that something has gone wrong rather than for them to assume everything is OK and receive uniformly selected particles.
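The four failure modes above fall directly out of IEEE float arithmetic on log-weights. Here is a small illustration, where `log_weight` is a hypothetical helper (log target density minus log proposal density), not part of Metaprob:

```python
import math

def log_weight(log_target, log_proposal):
    # Importance weight in log space: log(p(x)/q(x)) = log p(x) - log q(x).
    return log_target - log_proposal

# Cases (i)/(ii): target density is zero at the proposal point -> log(0) = -inf
assert log_weight(-math.inf, -1.0) == -math.inf
# Case (iii): proposal density is zero where the target is positive -> +inf
assert log_weight(-1.0, -math.inf) == math.inf
# Case (iv): both densities are zero -> log(0/0), i.e. nan
assert math.isnan(log_weight(-math.inf, -math.inf))
```

Each pathological weight is thus detectable by a simple finiteness check before resampling.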

@zane
Collaborator

zane commented Feb 11, 2019

Oh, I didn't see the discussion here until after I approved this PR. Will wait to merge.
