Commit 1f508170 authored by Davis King

Made docs more clear

--HG--
extra : convert_revision : svn%3Afdd8eb12-d10e-0410-9acb-85c331704f74/trunk%403646
parent fda75654
@@ -542,10 +542,10 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
    <spec_file link="true">dlib/manifold_regularization/linear_manifold_regularizer_abstract.h</spec_file>
    <description>
       <p>
-         Many learning algorithms attempt to minimize a loss function that,
-         at a high level, looks like this:
+         Many learning algorithms attempt to minimize a function that, at a high
+         level, looks like this:
<pre>
-loss(w) == complexity + training_set_error
+f(w) == complexity + training_set_error
</pre>
       </p>
@@ -563,13 +563,13 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
          unlabeled data by first defining which data samples are "close" to each other
          (perhaps by using their 3 <a href="#find_k_nearest_neighbors">nearest neighbors</a>)
          and then adding a term to
-         the loss function that penalizes any decision rule which produces
+         the above function that penalizes any decision rule which produces
          different outputs on data samples which we have designated as being close.
       </p>
       <p>
-         It turns out that it is possible to transform these manifold regularized loss
-         functions into the normal form shown above by applying a certain kind of
+         It turns out that it is possible to transform these manifold regularized learning
+         problems into the normal form shown above by applying a certain kind of
          preprocessing to all our data samples. Once this is done we can use a
          normal learning algorithm, such as the <a href="#svm_c_linear_trainer">svm_c_linear_trainer</a>,
          on just the
...
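The preprocessing step the reworded docs refer to can be sketched numerically. The following is a hedged illustration of the underlying math only, not dlib's code (dlib's actual tool is `linear_manifold_regularizer`, in C++): for a linear rule `f(x) = w.x`, the graph penalty over "close" pairs equals `w' X'LX w` for the graph Laplacian `L`, so substituting `w = T u` with `T = (I + lam*X'LX)^(-1/2)` and transforming every sample by `T` reduces the problem to the plain `complexity + training_set_error` form. All names below (`X`, `edges`, `lam`, `T`) are illustrative, not from the source.

```python
# Sketch (assumed setup, not dlib's implementation): reduce a manifold
# regularized linear problem to the plain "complexity + error" form by
# linearly preprocessing the samples.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))            # 20 samples (rows), 5 features

# Toy neighbor graph: consecutive samples are declared "close".
edges = [(i, i + 1) for i in range(19)]

# Graph Laplacian L: for a linear rule f(x) = w.x, the manifold penalty
# sum over edges of (w.x_i - w.x_j)^2 equals w' X'LX w.
n = X.shape[0]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

lam = 0.5  # strength of the manifold term (arbitrary here)

# Combined complexity term: w'(I + lam*X'LX)w.  With w = T u, where
# T = (I + lam*X'LX)^(-1/2), this becomes the ordinary u'u complexity.
M = np.eye(5) + lam * X.T @ L @ X
vals, vecs = np.linalg.eigh(M)          # M is symmetric positive definite
T = vecs @ np.diag(vals ** -0.5) @ vecs.T   # inverse matrix square root

# Preprocessed samples: since T is symmetric, x -> T'x is just X @ T.
# A normal trainer run on X_new with plain u'u complexity now solves the
# manifold regularized problem.
X_new = X @ T

# Check: plain complexity of u equals regularized complexity of w = T u.
u = rng.normal(size=5)
w = T @ u
assert np.isclose(u @ u, w @ M @ w)
```

The decision rules agree as well: `w.x == u.(T x)` for every sample, which is why transforming the data and training normally recovers the same classifier.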