Commit 71c2b029 authored by Davis King

Clarified docs.

parent a033cd99
...@@ -557,38 +557,39 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
   </p>
   <p>
      In the above setting, all the training data consists of labeled samples.
      However, it would be nice to be able to benefit from unlabeled data.
      The idea of manifold regularization is to extract useful information from
      unlabeled data by defining which data samples are "close" to each other
      (perhaps by using their 3 <a href="#find_k_nearest_neighbors">nearest neighbors</a>)
      and then adding a term to the loss function that penalizes any decision
      rule which produces different outputs on data samples which we have
      designated as being close.
   </p>
   <p>
      It turns out that it is possible to transform these manifold regularized
      loss functions into the normal form shown above by applying a certain
      kind of preprocessing to all our data samples.  Once this is done we can
      use a normal learning algorithm, such as the
      <a href="#svm_c_linear_trainer">svm_c_linear_trainer</a>, on just the
      labeled data samples and obtain the same output as the manifold
      regularized learner would have produced.
   </p>
   <p>
      The linear_manifold_regularizer is a tool for creating this preprocessing
      transformation.  In particular, the transformation is linear.  That is,
      it is just a matrix you multiply with all your samples.  For a more
      detailed discussion of this topic you should consult the following paper.
      In particular, see section 4.2.  This object computes the inverse T
      matrix described in that section.
      <blockquote>
         Linear Manifold Regularization for Large Scale Semi-supervised Learning
         by Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin
      </blockquote>
   </p>
</description>
<examples>
   <example>linear_manifold_regularizer_ex.cpp.html</example>
...
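The equivalence the docs describe can be sketched numerically. The snippet below is a NumPy illustration, not dlib's API, and every name in it is a made-up assumption: for a linear rule f(x) = w'x, the manifold penalty built from a neighbor graph reduces to a quadratic w'Mw, and multiplying every sample by the inverse matrix square root T = M^(-1/2) lets a plain ridge solver on the transformed samples reproduce the manifold-regularized solution. (For simplicity every sample gets a label here; in the real semi-supervised setting only the labeled samples would enter the loss.)

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 8                      # feature dimension, number of samples
X = rng.normal(size=(d, n))      # columns are samples
y = rng.normal(size=n)           # labels

# Neighbor graph: connect each sample to the next one (a tiny stand-in for
# "3 nearest neighbors").  W is the adjacency matrix, L the graph Laplacian.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# For a linear rule f(x) = w'x the "penalize different outputs on close
# samples" term, sum_ij W_ij (w'x_i - w'x_j)^2, is proportional to
# w' X L X' w, so the combined regularizer is w' M w with:
gamma = 0.5
M = np.eye(d) + gamma * X @ L @ X.T

# T = M^{-1/2}: multiplying every sample by T absorbs the manifold term
# into the ordinary norm regularizer.
evals, evecs = np.linalg.eigh(M)
T = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Manifold-regularized least squares, solved directly:
#   min_w ||X'w - y||^2 + w' M w   =>   (X X' + M) w = X y
w_manifold = np.linalg.solve(X @ X.T + M, X @ y)

# Same problem after the preprocessing step: run a *standard* ridge
# solver on the transformed samples Z = T X.
Z = T @ X
v_std = np.linalg.solve(Z @ Z.T + np.eye(d), Z @ y)

# Mapping the standard solution back recovers the manifold-regularized one.
assert np.allclose(T @ v_std, w_manifold)
```

In dlib the analogous steps are building the neighbor graph, asking linear_manifold_regularizer for the transformation matrix, and multiplying it into each sample before handing the labeled ones to svm_c_linear_trainer; see the linked example program for the actual API.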