BUG. LMNN loops when a bad update is calculated. #88
Comments
This is a major issue! I completely agree with all2187 on what he wrote above; thanks for reporting (although I do not understand why nothing, not even an unoptimized quick fix, has been done to this day). I did something like the below on my local version of this repository and it works just fine now. The thing is that the objective can get worse every now and then, but that is not an issue if the
One solution is to do a correct setup of the
Stores L and G in addition to what was stored before
I just proposed a quick fix in this PR: . It stores
* FIX: fixes #88; stores L and G in addition to what was stored before
* TST: non-regression test for this PR: test that LMNN converges on a simple example where it should converge, and test that the objective function never has the same value twice
* MAINT: invert the order of the algorithm: try forward updates rather than doing rollback after wrong updates
* MAINT: update code according to comments in #101 (review)
* FIX: update also test_convergence_simple_example
* FIX: remove \xc2 character
* FIX: use list to copy list for Python 2 compatibility
* MAINT: make code more readable with while/break (see #101 (comment))
* FIX: remove non-ASCII character
* FIX: remove keyring.deb
* STY: remove unused imports
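The "try forward updates rather than doing rollback" idea from the commit list above can be sketched as follows. This is an illustrative sketch only, not the PR's actual code: the variable names and the stand-in objective are assumptions.

```python
import numpy as np

def objective(L, G):
    # Hypothetical stand-in objective for illustration only; the real
    # LMNN objective also involves total_active and a regularizer.
    return float(np.sum(L * G))

# Sketch (assumed names) of the forward-update pattern: compute the
# candidate step first, and only commit it (keeping the previous L
# safe) if the objective improves.
L = np.eye(2)
G = np.full((2, 2), 0.1)
learn_rate = 1.0
obj = objective(L, G)

for it in range(50):
    while True:
        candidate = L - learn_rate * G     # candidate depends on learn_rate
        new_obj = objective(candidate, G)
        if new_obj < obj:
            break                          # improving step found
        learn_rate /= 2.0                  # shrink the step and retry
        if learn_rate < 1e-12:
            break                          # give up on this iteration
    if new_obj >= obj:
        break                              # no improving step exists
    L, obj = candidate, new_obj            # commit; old state never lost
```

Because the rejected candidate is never committed, there is nothing to roll back, which is what makes the while/break structure mentioned in the commits readable.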
There seems to be a relatively large bug in the code for LMNN. In lines 157-164 we have:
But in an iteration the objective is calculated as (lines 149-150):
Where none of total_active, reg, G, or L depend on learn_rate. Since learn_rate is the only thing changed when the above error occurs, the result of the iteration following the rollback will be the same as the one that caused the rollback. Hence, as soon as lines 157-164 are executed, the algorithm will just keep halving the learning rate and recalculating the same values until the maximum number of iterations is hit (as can be seen below):
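The reported control flow can be demonstrated in isolation. The snippet below is a minimal sketch, not the actual metric-learn code: the variable names are taken from the issue, and the objective function is a made-up stand-in whose only relevant property is that it does not depend on learn_rate.

```python
import numpy as np

def objective(L, G, reg, total_active):
    # Hypothetical stand-in for the quantities named in the issue: it
    # depends only on total_active, reg, G and L, never on learn_rate.
    return total_active + reg * float(np.sum(L * G))

L = np.eye(2)
G = np.ones((2, 2))
reg, total_active = 0.5, 3.0
learn_rate = 1e-7
prev_obj = objective(L, G, reg, total_active)

history = []
for it in range(5):
    new_obj = objective(L, G, reg, total_active)  # same inputs every time
    if new_obj >= prev_obj:   # "bad update" detected
        learn_rate /= 2.0     # only learn_rate changes...
    history.append(new_obj)   # ...so the objective repeats identically

assert len(set(history)) == 1  # halving the rate never makes progress
```

Every recomputed objective is identical, so the halving branch fires on every iteration until the iteration cap is reached, exactly as described above.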

The most likely solution is either to make the calculation of the new objective depend on the learning rate, or to also revert L to its previous value (that is, the value it had at the start of the last iteration that did not cause a rollback).
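The second option can be sketched like this. Again this is an illustrative sketch under assumed names, not the library's code: on a bad update, the next candidate is taken from the last good L, so a halved learning rate produces a genuinely different candidate objective.

```python
import numpy as np

def objective(L, G, reg, total_active):
    # hypothetical stand-in objective using the names from the issue
    return total_active + reg * float(np.sum(L * G))

L_good = np.eye(2)
G = np.full((2, 2), 0.1)
reg, total_active = 0.5, 3.0
obj_good = objective(L_good, G, reg, total_active)
learn_rate = 1.0

for it in range(100):
    L = L_good - learn_rate * G              # step from the good state
    new_obj = objective(L, G, reg, total_active)
    if new_obj < obj_good:
        L_good, obj_good = L, new_obj        # accept the update
    else:
        learn_rate /= 2.0                    # halving now matters: the
                                             # next candidate differs
```

Either fix breaks the cycle, because the quantity being compared finally changes when the learning rate does.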