@@ -152,7 +152,7 @@ neighbors (with same labels) of :math:`\mathbf{x}_{i}`, :math:`y_{ij}=0`
 indicates :math:`\mathbf{x}_{i}, \mathbf{x}_{j}` belong to different classes,
 :math:`[\cdot]_+=\max(0, \cdot)` is the Hinge loss.
 
-.. topic:: Example Code:
+.. rubric:: Example Code
 
 ::
 
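As a quick sanity check on the hinge loss :math:`[\cdot]_+=\max(0, \cdot)` referenced in the hunk above, a minimal NumPy sketch (the function name `hinge` is illustrative, not part of the library):

```python
import numpy as np

def hinge(z):
    # Elementwise hinge [z]_+ = max(0, z): non-positive margins contribute nothing.
    return np.maximum(0.0, z)

print(hinge(np.array([-1.5, 0.0, 2.0])))  # -> [0. 0. 2.]
```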
@@ -167,15 +167,15 @@ indicates :math:`\mathbf{x}_{i}, \mathbf{x}_{j}` belong to different classes,
     lmnn = LMNN(k=5, learn_rate=1e-6)
     lmnn.fit(X, Y, verbose=False)
 
-.. topic:: References:
+.. rubric:: References
 
-  .. [1] Weinberger et al. `Distance Metric Learning for Large Margin
-     Nearest Neighbor Classification
-     <http://jmlr.csail.mit.edu/papers/volume10/weinberger09a/weinberger09a.pdf>`_.
-     JMLR 2009
 
-  .. [2] `Wikipedia entry on Large Margin Nearest Neighbor <https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor>`_
-
+.. container:: hatnote hatnote-gray
+
+   [1]. Weinberger et al. `Distance Metric Learning for Large Margin Nearest Neighbor Classification <http://jmlr.csail.mit.edu/papers/volume10/weinberger09a/weinberger09a.pdf>`_. JMLR 2009.
+
+   [2]. `Wikipedia entry on Large Margin Nearest Neighbor <https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor>`_.
+
 
 .. _nca:
@@ -216,7 +216,7 @@ the sum of probability of being correctly classified:
 
       \mathbf{L} = \text{argmax}\sum_i p_i
 
-.. topic:: Example Code:
+.. rubric:: Example Code
 
 ::
@@ -231,13 +231,14 @@ the sum of probability of being correctly classified:
     nca = NCA(max_iter=1000)
     nca.fit(X, Y)
 
-.. topic:: References:
+.. rubric:: References
+
+
+.. container:: hatnote hatnote-gray
 
-  .. [1] Goldberger et al.
-     `Neighbourhood Components Analysis <https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf>`_.
-     NIPS 2005
+   [1]. Goldberger et al. `Neighbourhood Components Analysis <https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf>`_. NIPS 2005.
 
-  .. [2] `Wikipedia entry on Neighborhood Components Analysis <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_
+   [2]. `Wikipedia entry on Neighborhood Components Analysis <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_.
 
 
 .. _lfda:
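The NCA objective :math:`\sum_i p_i` shown in this hunk can be evaluated directly for a fixed linear map; a minimal NumPy sketch (function and variable names are illustrative, not the library's API):

```python
import numpy as np

def nca_objective(L, X, Y):
    # p_ij is the softmax probability that i picks j as its neighbor;
    # p_i sums p_ij over the same-class neighbors of i.
    Z = X @ L.T                                   # project the points
    d2 = ((Z[:, None] - Z[None]) ** 2).sum(-1)    # squared distances
    np.fill_diagonal(d2, np.inf)                  # a point never picks itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)
    same_class = Y[:, None] == Y[None]
    return float((P * same_class).sum())          # sum_i p_i, to be maximized

X = np.array([[0.0], [0.1], [5.0], [5.1]])
Y = np.array([0, 0, 1, 1])
print(nca_objective(np.eye(1), X, Y))  # close to 4: well-separated classes
```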
@@ -289,7 +290,7 @@ nearby data pairs in the same class are made close and the data pairs in
 different classes are separated from each other; far apart data pairs in the
 same class are not imposed to be close.
 
-.. topic:: Example Code:
+.. rubric:: Example Code
 
 ::
@@ -309,15 +310,14 @@ same class are not imposed to be close.
 
 To work around this, fit instances of this class to data once, then keep the instance around to do transformations.
 
-.. topic:: References:
+.. rubric:: References
 
-  .. [1] Sugiyama. `Dimensionality Reduction of Multimodal Labeled Data by Local
-     Fisher Discriminant Analysis <http://www.jmlr.org/papers/volume8/sugiyama07b/sugiyama07b.pdf>`_.
-     JMLR 2007
 
-  .. [2] Tang. `Local Fisher Discriminant Analysis on Beer Style Clustering
-     <https://gastrograph.com/resources/whitepapers/local-fisher
-     -discriminant-analysis-on-beer-style-clustering.html#>`_.
+.. container:: hatnote hatnote-gray
+
+   [1]. Sugiyama. `Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis <http://www.jmlr.org/papers/volume8/sugiyama07b/sugiyama07b.pdf>`_. JMLR 2007.
+
+   [2]. Tang. `Local Fisher Discriminant Analysis on Beer Style Clustering <https://gastrograph.com/resources/whitepapers/local-fisher-discriminant-analysis-on-beer-style-clustering.html#>`_.
 
 .. _mlkr:
@@ -363,7 +363,7 @@ calculating a weighted average of all the training samples:
 
       \hat{y}_i = \frac{\sum_{j\neq i}y_jk_{ij}}{\sum_{j\neq i}k_{ij}}
 
-.. topic:: Example Code:
+.. rubric:: Example Code
 
 ::
@@ -377,10 +377,12 @@ calculating a weighted average of all the training samples:
     mlkr = MLKR()
     mlkr.fit(X, Y)
 
-.. topic:: References:
+.. rubric:: References
+
+
+.. container:: hatnote hatnote-gray
 
-  .. [1] Weinberger et al. `Metric Learning for Kernel Regression <http://proceedings.mlr.
-     press/v2/weinberger07a/weinberger07a.pdf>`_. AISTATS 2007
+   [1]. Weinberger et al. `Metric Learning for Kernel Regression <http://proceedings.mlr.press/v2/weinberger07a/weinberger07a.pdf>`_. AISTATS 2007.
 
 
 .. _supervised_version:
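The leave-one-out prediction :math:`\hat{y}_i` from the MLKR hunk above can be computed directly with a Gaussian kernel over the transformed points; a minimal NumPy sketch (names are illustrative, not the library's API):

```python
import numpy as np

def mlkr_predict(X, y, A):
    # yhat_i = sum_{j != i} y_j k_ij / sum_{j != i} k_ij,
    # with k_ij = exp(-||A x_i - A x_j||^2), a Gaussian kernel.
    Z = X @ A.T
    d2 = ((Z[:, None] - Z[None]) ** 2).sum(-1)
    K = np.exp(-d2)
    np.fill_diagonal(K, 0.0)   # exclude j == i from the average
    return K @ y / K.sum(axis=1)

X = np.zeros((3, 1))           # identical inputs -> uniform weights
y = np.array([1.0, 2.0, 3.0])
print(mlkr_predict(X, y, np.eye(1)))  # -> [2.5 2.  1.5]
```

With identical inputs every off-diagonal kernel weight is 1, so each prediction is just the mean of the other targets.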
@@ -417,7 +419,7 @@ quadruplets, where for each quadruplet the two first points are from the same
 class, and the two last points are from a different class (so indeed the two
 last points should be less similar than the two first points).
 
-.. topic:: Example Code:
+.. rubric:: Example Code
 
 ::