Commit 184652c2 authored by Davis King

Updated docs

--HG--
extra : convert_revision : svn%3Afdd8eb12-d10e-0410-9acb-85c331704f74/trunk%404116
parent 24e0df65
@@ -83,6 +83,7 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
<item>mlp</item>
<item>krls</item>
<item>krr_trainer</item>
<item>rr_trainer</item>
<item>svr_trainer</item>
<item>rvm_regression_trainer</item>
<item>rbf_network_trainer</item>
@@ -952,9 +953,10 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
represents the learned function.
</p>
The implementation is done using the <a href="#empirical_kernel_map">empirical_kernel_map</a> and
<a href="#linearly_independent_subset_finder">linearly_independent_subset_finder</a>
and thus allows you to run the algorithm on large datasets and obtain sparse outputs. It is also
capable of automatically estimating its regularization parameter using leave-one-out cross-validation.
<a href="#linearly_independent_subset_finder">linearly_independent_subset_finder</a> to kernelize
the <a href="#rr_trainer">rr_trainer</a> object. Thus it allows you to run the algorithm on large
datasets and obtain sparse outputs. It is also capable of automatically estimating its
regularization parameter using leave-one-out cross-validation.
</description>
<examples>
<example>krr_regression_ex.cpp.html</example>
@@ -963,6 +965,27 @@ Davis E. King. <a href="http://www.jmlr.org/papers/volume10/king09a/king09a.pdf"
</component>
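The kernel ridge regression idea behind krr_trainer can be sketched in a few lines. This is an illustrative numpy sketch, not dlib's actual implementation (which additionally uses the empirical_kernel_map and linearly_independent_subset_finder to keep the solution sparse): fit dual coefficients alpha = (K + lambda*I)^-1 y, then predict with f(x) = sum_i alpha_i k(x_i, x). An RBF kernel is assumed here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then the Gaussian/RBF kernel.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam, gamma=1.0):
    # Solve (K + lam*I) alpha = y for the dual coefficients.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i * k(x_i, x)
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage: fit a noiseless sine curve.
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X).ravel()
alpha = krr_fit(X, y, lam=1e-3)
pred = krr_predict(X, alpha, X)
```

Note the linear solve is in the number of *samples* here; dlib's linearly_independent_subset_finder exists precisely to keep that dimension manageable on large datasets.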
<!-- ************************************************************************* -->
<component>
<name>rr_trainer</name>
<file>dlib/svm.h</file>
<spec_file link="true">dlib/svm/rr_trainer_abstract.h</spec_file>
<description>
<p>
Performs linear ridge regression and outputs a <a href="#decision_function">decision_function</a> that
represents the learned function. In particular, this object can only be used with
the <a href="#linear_kernel">linear_kernel</a>. It is optimized for the linear case where
		the number of features in each sample vector is small (i.e. on the order of 1000 or less, since the
		algorithm is cubic in the number of features).
If you want to use a nonlinear kernel then you should use the <a href="#krr_trainer">krr_trainer</a>.
</p>
This object is capable of automatically estimating its regularization parameter using
leave-one-out cross-validation.
</description>
</component>
<!-- ************************************************************************* -->
<component>
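The rr_trainer description above makes two concrete claims: the cost is cubic in the number of features, and the regularization parameter can be picked by leave-one-out cross-validation. Both can be illustrated with a short numpy sketch (not dlib code). For ridge regression the leave-one-out residual has a closed form, e_i / (1 - H_ii), where H = X (X^T X + lam*I)^-1 X^T is the hat matrix, so LOOCV needs no refitting.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Solving the d x d normal equations is O(d^3): cubic in the number
    # of features, which is why rr_trainer targets small feature counts.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loocv_error(X, y, lam):
    d = X.shape[1]
    A = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    H = X @ A                          # hat matrix
    resid = y - H @ y                  # ordinary residuals
    loo = resid / (1.0 - np.diag(H))   # leave-one-out residuals (PRESS)
    return float(np.mean(loo ** 2))

# Toy usage: pick lambda by LOOCV, then fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

lams = [1e-4, 1e-2, 1.0, 100.0]
best = min(lams, key=lambda lam: loocv_error(X, y, lam))
w = ridge_fit(X, y, best)
```

The candidate lambda grid here is an arbitrary choice for illustration; dlib's actual search strategy is whatever rr_trainer implements internally.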
@@ -180,6 +180,7 @@
<term file="ml.html" name="svm_c_ekm_trainer"/>
<term file="ml.html" name="rvm_trainer"/>
<term file="ml.html" name="krr_trainer"/>
<term file="ml.html" name="rr_trainer"/>
<term file="ml.html" name="svr_trainer"/>
<term file="ml.html" name="rvm_regression_trainer"/>
<term file="ml.html" name="rbf_network_trainer"/>
@@ -199,6 +200,7 @@
<term link="ml.html#svm_nu_trainer" name="support vector machine"/>
<term link="ml.html#rvm_trainer" name="relevance vector machine"/>
<term link="ml.html#krr_trainer" name="kernel ridge regression"/>
<term link="ml.html#rr_trainer" name="ridge regression"/>
<term link="ml.html#svr_trainer" name="support vector regression"/>
<term link="ml.html#krr_trainer" name="regularized least squares"/>
<term link="ml.html#krr_trainer" name="least squares SVM"/>