Commit 233a2393, authored Dec 17, 2017 by Davis King
updated docs
parent 43c802c9
Showing 7 changed files with 393 additions and 23 deletions
docs/docs/algorithms.xml     +15 −0
docs/docs/imaging.xml        +18 −0
docs/docs/main_menu.xml      +24 −0
docs/docs/ml.xml             +23 −2
docs/docs/optimization.xml   +173 −19
docs/docs/release_notes.xml  +128 −1
docs/docs/term_index.xml     +12 −1
docs/docs/algorithms.xml

@@ -25,6 +25,7 @@
      <name>Tools</name>
      <item>bigint</item>
      <item>disjoint_subsets</item>
      <item>disjoint_subsets_sized</item>
      <item nolink="true">
         <name>Quantum Computing</name>
         <sub>

@@ -463,6 +464,20 @@
   </component>

   <!-- ************************************************************************* -->

   <component>
      <name>disjoint_subsets_sized</name>
      <file>dlib/disjoint_subsets.h</file>
      <spec_file link="true">dlib/disjoint_subsets/disjoint_subsets_sized_abstract.h</spec_file>
      <description>
         This object is just like <a href="#disjoint_subsets">disjoint_subsets</a> except that it
         also keeps track of the size of each set.
      </description>
   </component>

   <!-- ************************************************************************* -->
   <component>
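The new disjoint_subsets_sized object described above is a union-find structure that also tracks the size of each set. A minimal Python sketch of that behavior (method names chosen to mirror the docs; this is not dlib's exact C++ API):

```python
class DisjointSubsetsSized:
    """Union-find with path halving and union by size, also reporting
    the size of the set containing any given element."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find_set(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def merge_sets(self, a, b):
        ra, rb = self.find_set(a), self.find_set(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # union by size
        self.size[ra] += self.size[rb]
        return ra

    def get_size_of_set(self, i):
        return self.size[self.find_set(i)]

s = DisjointSubsetsSized(5)
s.merge_sets(0, 1)
s.merge_sets(1, 2)
# get_size_of_set(0) is now 3; elements 3 and 4 remain singletons
```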
docs/docs/imaging.xml

@@ -222,6 +222,7 @@
      <item>rotate_image_dataset</item>
      <item>extract_image_chips</item>
      <item>random_cropper</item>
      <item>jitter_image</item>
      <item>sub_image</item>
   </section>

@@ -480,6 +481,7 @@
         <example>train_shape_predictor.py.html</example>
         <example>face_landmark_detection.py.html</example>
         <example>face_alignment.py.html</example>
      </examples>
   </component>

@@ -1740,6 +1742,22 @@
      </examples>
   </component>

   <!-- ************************************************************************* -->

   <component>
      <name>jitter_image</name>
      <file>dlib/image_transforms.h</file>
      <spec_file link="true">dlib/image_transforms/interpolation_abstract.h</spec_file>
      <description>
         Randomly jitters an image by slightly rotating, scaling, and translating it.
         There is also a 50% chance it will be mirrored left to right.
      </description>
      <examples>
         <example>dnn_metric_learning_on_images_ex.cpp.html</example>
         <example>dnn_face_recognition_ex.cpp.html</example>
      </examples>
   </component>

   <!-- ************************************************************************* -->
   <component>
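The jitter_image description above boils down to sampling a small random affine transform plus a coin flip for mirroring. A sketch of the parameter sampling (the specific ranges here are illustrative guesses, not dlib's actual values):

```python
import random

def sample_jitter(rng):
    """Sample one random jitter: slight rotation, scaling, and translation,
    plus a 50% chance of a left-right mirror.  Ranges are illustrative."""
    return {
        "angle_deg": rng.uniform(-10, 10),    # slight rotation
        "scale": rng.uniform(0.9, 1.1),       # slight zoom in/out
        "shift_x": rng.uniform(-0.02, 0.02),  # translation, fraction of width
        "shift_y": rng.uniform(-0.02, 0.02),
        "mirror": rng.random() < 0.5,         # 50% left-right flip
    }

rng = random.Random(0)
jitters = [sample_jitter(rng) for _ in range(1000)]
```

Each sampled dict would then be turned into an affine warp applied to the image; dlib does the warp and the parameter sampling internally in one call.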
docs/docs/main_menu.xml

@@ -143,14 +143,30 @@
   <item nolink="true">
      <name>Examples: Python</name>
      <sub>
         <item>
            <name>Global Optimization</name>
            <link>global_optimization.py.html</link>
         </item>
         <item>
            <name>Face Clustering</name>
            <link>face_clustering.py.html</link>
         </item>
         <item>
            <name>Face Jittering/Augmentation</name>
            <link>face_jitter.py.html</link>
         </item>
         <item>
            <name>Face Alignment</name>
            <link>face_alignment.py.html</link>
         </item>
         <item>
            <name>Video Object Tracking</name>
            <link>correlation_tracker.py.html</link>
         </item>
         <item>
            <name>Binary Classification</name>
            <link>svm_binary_classifier.py.html</link>
         </item>
         <item>
            <name>Face Landmark Detection</name>
            <link>face_landmark_detection.py.html</link>

@@ -232,6 +248,14 @@
         <item>
            <name>Deep Face Recognition</name>
            <link>dnn_face_recognition_ex.cpp.html</link>
         </item>
         <item>
            <name>Deep Learning Semantic Segmentation Trainer</name>
            <link>dnn_semantic_segmentation_train_ex.cpp.html</link>
         </item>
         <item>
            <name>Deep Learning Semantic Segmentation</name>
            <link>dnn_semantic_segmentation_ex.cpp.html</link>
         </item>
         <item>
            <name>Deep Learning Vehicle Detection</name>
            <link>dnn_mmod_find_cars_ex.cpp.html</link>
docs/docs/ml.xml

@@ -209,6 +209,10 @@
         <item>
            <name>htan</name>
            <link>dlib/dnn/layers_abstract.h.html#htan_</link>
         </item>
         <item>
            <name>softmax_all</name>
            <link>dlib/dnn/layers_abstract.h.html#softmax_all_</link>
         </item>
         <item>
            <name>softmax</name>
            <link>dlib/dnn/layers_abstract.h.html#softmax_</link>

@@ -230,6 +234,18 @@
         <item>
            <name>EXAMPLE_LOSS_LAYER</name>
            <link>dlib/dnn/loss_abstract.h.html#EXAMPLE_LOSS_LAYER_</link>
         </item>
         <item>
            <name>loss_dot</name>
            <link>dlib/dnn/loss_abstract.h.html#loss_dot_</link>
         </item>
         <item>
            <name>loss_epsilon_insensitive</name>
            <link>dlib/dnn/loss_abstract.h.html#loss_epsilon_insensitive_</link>
         </item>
         <item>
            <name>loss_ranking</name>
            <link>dlib/dnn/loss_abstract.h.html#loss_ranking_</link>
         </item>
         <item>
            <name>loss_binary_hinge</name>
            <link>dlib/dnn/loss_abstract.h.html#loss_binary_hinge_</link>

@@ -264,10 +280,10 @@
         </item>
         <item>
            <name>loss_mean_squared_per_pixel</name>
            <link>#loss_mean_squared_per_pixel_</link>
            <link>dlib/dnn/loss_abstract.h.html#loss_mean_squared_per_pixel_</link>
         </item>
         <item>
            <name>loss_mean_squared_multioutput_</name>
            <name>loss_mean_squared_multioutput</name>
            <link>dlib/dnn/loss_abstract.h.html#loss_mean_squared_multioutput_</link>
         </item>
      </sub>

@@ -498,6 +514,8 @@
         <example>dnn_metric_learning_ex.cpp.html</example>
         <example>dnn_metric_learning_on_images_ex.cpp.html</example>
         <example>dnn_face_recognition_ex.cpp.html</example>
         <example>dnn_semantic_segmentation_ex.cpp.html</example>
         <example>dnn_semantic_segmentation_train_ex.cpp.html</example>
      </examples>
   </component>

@@ -525,6 +543,7 @@
         <example>dnn_mmod_train_find_cars_ex.cpp.html</example>
         <example>dnn_metric_learning_ex.cpp.html</example>
         <example>dnn_metric_learning_on_images_ex.cpp.html</example>
         <example>dnn_semantic_segmentation_train_ex.cpp.html</example>
      </examples>
   </component>

@@ -552,6 +571,7 @@
         <example>dnn_face_recognition_ex.cpp.html</example>
         <example>dnn_mmod_face_detection_ex.cpp.html</example>
         <example>dnn_mmod_dog_hipsterizer.cpp.html</example>
         <example>dnn_semantic_segmentation_train_ex.cpp.html</example>
      </examples>
   </component>

@@ -1204,6 +1224,7 @@
      <examples>
         <example>svm_pegasos_ex.cpp.html</example>
         <example>svm_sparse_ex.cpp.html</example>
         <example>svm_binary_classifier.py.html</example>
      </examples>
   </component>
docs/docs/optimization.xml

<?xml version="1.0" encoding="ISO-8859-1"?>
<?xml version="1.0" encoding="utf8"?>
<?xml-stylesheet type="text/xsl" href="stylesheet.xsl"?>
<doc>

@@ -30,14 +30,16 @@
      <item>find_min_single_variable</item>
      <item>find_min_using_approximate_derivatives</item>
      <item>find_min_bobyqa</item>
      <item>find_min_global</item>
      <item>find_max</item>
      <item>find_max_box_constrained</item>
      <item>find_max_single_variable</item>
      <item>find_max_using_approximate_derivatives</item>
      <item>find_max_bobyqa</item>
      <item>find_max_global</item>
      <item>global_function_search</item>
      <item>find_max_trust_region</item>
      <item>find_min_trust_region</item>
      <item>find_optimal_parameters</item>
   </section>
   <section>

@@ -54,6 +56,7 @@
      <item>solve_least_squares</item>
      <item>solve_least_squares_lm</item>
      <item>solve_trust_region_subproblem</item>
      <item>solve_trust_region_subproblem_bounded</item>
      <item>max_cost_assignment</item>
      <item>max_sum_submatrix</item>
      <item>find_max_factor_graph_nmplp</item>

@@ -88,6 +91,7 @@
      <item>potts_model_score</item>
      <item>parse_tree_to_string</item>
      <item>find_trees_not_rooted_with_tag</item>
      <item>upper_bound_function</item>
   </section>
</top>

@@ -398,6 +402,25 @@
   <!-- ************************************************************************* -->

   <component>
      <name>solve_trust_region_subproblem_bounded</name>
      <file>dlib/optimization.h</file>
      <spec_file link="true">dlib/optimization/optimization_trust_region_abstract.h</spec_file>
      <description>
         This function solves the following optimization problem:
         <pre>
            Minimize: f(p) == 0.5*trans(p)*B*p + trans(g)*p
            subject to the following constraints:
               length(p) <= radius
               lower(i) <= p(i) <= upper(i), for all i
         </pre>
      </description>
   </component>
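The quadratic program above (minimize 0.5*trans(p)*B*p + trans(g)*p subject to a norm-ball and box constraints) can be made concrete with a crude projected-gradient sketch. dlib's actual routine is a proper trust-region solver; this toy version, which alternates projections onto the box and the ball (only approximate for their intersection), is purely illustrative:

```python
def solve_tr_bounded(B, g, radius, lower, upper, steps=2000, lr=0.05):
    """Minimize 0.5*p'Bp + g'p  s.t.  ||p|| <= radius and lower <= p <= upper,
    by projected gradient descent (illustration only, not dlib's algorithm)."""
    n = len(g)
    p = [0.0] * n
    for _ in range(steps):
        # gradient of the quadratic model: B*p + g
        grad = [sum(B[i][j] * p[j] for j in range(n)) + g[i] for i in range(n)]
        p = [p[i] - lr * grad[i] for i in range(n)]
        # project onto the box constraints
        p = [min(max(p[i], lower[i]), upper[i]) for i in range(n)]
        # project onto the norm ball
        norm = sum(x * x for x in p) ** 0.5
        if norm > radius:
            p = [x * radius / norm for x in p]
    return p

# minimize 0.5*(p0^2 + p1^2) - p0 - p1; unconstrained minimum is (1, 1),
# but the box [0, 0.5]^2 clips the solution to (0.5, 0.5)
p = solve_tr_bounded([[1, 0], [0, 1]], [-1, -1], 10.0, [0, 0], [0.5, 0.5])
```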
   <!-- ************************************************************************* -->

   <component>
      <name>solve_trust_region_subproblem</name>
      <file>dlib/optimization.h</file>

@@ -898,7 +921,6 @@
      </description>
      <examples>
         <example>optimization_ex.cpp.html</example>
         <example>model_selection_ex.cpp.html</example>
      </examples>
   </component>

@@ -923,26 +945,10 @@
      </description>
      <examples>
         <example>optimization_ex.cpp.html</example>
         <example>model_selection_ex.cpp.html</example>
      </examples>
   </component>

   <!-- ************************************************************************* -->
   <component>
      <name>find_optimal_parameters</name>
      <file>dlib/optimization/find_optimal_parameters.h</file>
      <spec_file link="true">dlib/optimization/find_optimal_parameters_abstract.h</spec_file>
      <description>
         Performs a constrained minimization of a function and doesn't require derivatives from the user.
         This function is similar to <a href="#find_min_bobyqa">find_min_bobyqa</a> and
         <a href="#find_min_single_variable">find_min_single_variable</a> except that it
         allows any number of variables and never throws exceptions when the max iteration
         limit is reached (even if it didn't converge).
      </description>
   </component>

   <!-- ************************************************************************* -->
   <component>

@@ -1128,6 +1134,154 @@
   </component>

   <!-- ************************************************************************* -->

   <component>
      <name>global_function_search</name>
      <file>dlib/global_optimization.h</file>
      <spec_file link="true">dlib/global_optimization/global_function_search_abstract.h</spec_file>
      <description>
         This object performs global optimization of a set of user supplied
         functions. That is, given a set of functions, each of which could take a different
         number of arguments, this object allows you to find which function and which arguments
         produce the maximal output.
         <p>
            Importantly, the global_function_search object does not require the user to
            supply derivatives. Moreover, the functions being optimized may contain discontinuities,
            behave stochastically, and have many local maxima. The global_function_search
            object will attempt to find the global optimum in the face of these challenges.
            It is also designed to use as few function evaluations as possible, making
            it suitable for optimizing functions that are very expensive to evaluate.
            It does this by alternating between two modes: a global exploration mode
            and a local optima refinement mode. This is accomplished by building and
            maintaining two models of the objective function:
         </p>
         <ol>
            <li>A global model that <a href="#upper_bound_function">upper bounds our objective
            function</a>. This is a non-parametric piecewise linear model derived from all
            function evaluations ever seen by the global_function_search object. This is based
            on the method described in <i>Global Optimization of Lipschitz Functions</i> by
            Cédric Malherbe and Nicolas Vayatis in the 2017 International Conference on
            Machine Learning.</li>
            <li>A local quadratic model fit around the best point seen so far. This uses
            a trust region method similar to what is proposed in: <i>The NEWUOA software for
            unconstrained optimization without derivatives</i> by M.J.D. Powell, 40th Workshop
            on Large Scale Nonlinear Optimization (Erice, Italy, 2004).</li>
         </ol>
         The behavior of the algorithm is illustrated in the following video, which shows the
         solver in action. In the video, the red line is the function to be optimized and we
         are looking for the maximum point. Every time the global_function_search samples a
         point from the function we note it with a little box. The state of the solver is
         determined by the two models discussed above. Therefore, we draw the upper bounding
         model as well as the current local quadratic model so you can see how they evolve as
         the optimization proceeds. We also note the location of the best point seen so far
         by a little vertical line.
         <p>
            You can see that the optimizer is alternating between picking the maximum upper
            bounding point and the maximum point according to the quadratic model. As the
            optimization progresses, the upper bound becomes progressively more accurate,
            helping to find the best peak to investigate, while the quadratic model quickly
            finds a high precision maximizer on whatever peak it currently rests. These two
            things together allow the optimizer to find the true global maximizer to high
            precision (within 1e-9 in this case) by the time the video concludes.
         </p>
         <center><video src="find_max_global_example.webm">Video of optimizer running</video></center>
         <p>
            Finally, note that the <a href="#find_max_global">find_max_global</a> routine is
            essentially a simple wrapper around the global_function_search object and exists to
            provide a convenient interface. Most users will therefore want to call find_max_global
            rather than global_function_search. However, the API of global_function_search is
            more general and allows for a wider set of usage patterns, for example, executing
            objective function evaluations in parallel. So more advanced users may want to use
            global_function_search directly rather than find_max_global. But try to use
            find_max_global() first.
         </p>
      </description>
   </component>
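The alternation between a global upper-bound model and local refinement described above can be sketched in miniature. This toy 1-D version uses a LIPO-style piecewise-linear upper bound for exploration and a random perturbation of the best point for refinement; dlib's real implementation instead fits a trust-region quadratic model and estimates the Lipschitz constants automatically:

```python
import random

def lipo_upper_bound(x, samples, k):
    # piecewise-linear upper bound built from all evaluations seen so far
    return min(fx + k * abs(x - xi) for xi, fx in samples)

def toy_global_search(f, lo, hi, k=10.0, iters=30, seed=0):
    rng = random.Random(seed)
    samples = [(lo, f(lo)), (hi, f(hi))]
    for i in range(iters):
        if i % 2 == 0:
            # global exploration: candidate maximizing the upper bound
            cands = [rng.uniform(lo, hi) for _ in range(200)]
            x = max(cands, key=lambda c: lipo_upper_bound(c, samples, k))
        else:
            # local refinement: small perturbation around best point so far
            best_x, _ = max(samples, key=lambda s: s[1])
            x = min(hi, max(lo, best_x + rng.gauss(0, (hi - lo) * 0.01)))
        samples.append((x, f(x)))
    return max(samples, key=lambda s: s[1])

# maximize 1 - (x - 0.3)^2 on [0, 1]; true maximizer is x = 0.3
x, fx = toy_global_search(lambda x: 1.0 - (x - 0.3) ** 2, 0.0, 1.0)
```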
   <!-- ************************************************************************* -->

   <component>
      <name>find_max_global</name>
      <file>dlib/global_optimization.h</file>
      <spec_file link="true">dlib/global_optimization/find_max_global_abstract.h</spec_file>
      <description>
         This function performs global optimization of a function, subject
         to bounds constraints. This means it attempts to find the global
         maximizer, not just a local maximizer. The search is performed using the
         <a href="#global_function_search">global_function_search</a> object.
         See global_function_search's documentation for details of the algorithm. Importantly,
         find_max_global() does not require the user to specify derivatives
         or starting guesses, all while taking efforts to use as few calls to
         the objective function as possible. It is therefore appropriate for tasks
         where evaluating the objective function is time consuming or
         expensive, such as in hyperparameter optimization of machine
         learning models.
      </description>
      <examples>
         <example>optimization_ex.cpp.html</example>
         <example>model_selection_ex.cpp.html</example>
         <example>global_optimization.py.html</example>
      </examples>
   </component>

   <!-- ************************************************************************* -->

   <component>
      <name>find_min_global</name>
      <file>dlib/global_optimization.h</file>
      <spec_file link="true">dlib/global_optimization/find_max_global_abstract.h</spec_file>
      <description>
         This function is identical to the <a href="#find_max_global">find_max_global</a> routine
         except it negates the objective function before performing optimization.
         Thus this function will attempt to find the minimizer of the objective rather than
         the maximizer.
      </description>
      <examples>
         <example>optimization_ex.cpp.html</example>
         <example>model_selection_ex.cpp.html</example>
         <example>global_optimization.py.html</example>
      </examples>
   </component>

   <!-- ************************************************************************* -->

   <component>
      <name>upper_bound_function</name>
      <file>dlib/global_optimization.h</file>
      <spec_file link="true">dlib/global_optimization/upper_bound_function_abstract.h</spec_file>
      <description>
         This object represents a piecewise linear non-parametric function that can
         be used to define an upper bound on some more complex and unknown function.
         <p>
            This is based on the method described in <i>Global Optimization of Lipschitz
            Functions</i> by Cédric Malherbe and Nicolas Vayatis in the 2017 International
            Conference on Machine Learning. Here we have extended it to support modeling of
            stochastic or discontinuous functions by adding a noise term. We also model separate
            Lipschitz parameters for each dimension, allowing the model to handle functions with
            widely varying sensitivities to each input variable.
         </p>
      </description>
   </component>
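A 1-D sketch of such a piecewise linear upper bound, U(x) = min over i of (f(x_i) + noise + k*|x - x_i|). dlib's version fits the per-dimension Lipschitz constant k and the noise term automatically; both are fixed by hand here for illustration:

```python
def upper_bound(x, evals, k, noise=0.0):
    """LIPO-style piecewise-linear upper bound from observed (x_i, f(x_i))
    pairs (1-D sketch; k and noise are hand-picked here, not fitted)."""
    return min(fx + noise + k * abs(x - xi) for xi, fx in evals)

evals = [(0.0, 0.0), (1.0, 1.0)]
# with Lipschitz constant k=2, the bound at x=0.5 is min(0 + 2*0.5, 1 + 2*0.5) = 1.0
u = upper_bound(0.5, evals, k=2.0)
```

At the sample points themselves the bound touches the observed values (e.g. U(0.0) = 0.0), which is what makes it a tight upper envelope for exploration.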
   <!-- ************************************************************************* -->
</components>
docs/docs/release_notes.xml

@@ -11,6 +11,133 @@
<!-- ************************************************************************************** -->

<current>
New Features and Improvements:
 - Added disjoint_subsets_sized.
 - Added semantic segmentation examples: dnn_semantic_segmentation_ex.cpp and
   dnn_semantic_segmentation_train_ex.cpp.
 - Optimization:
    - Added the upper_bound_function object.
    - Added solve_trust_region_subproblem_bounded().
    - Added tools for doing global optimization. The main new tools here are
      find_max_global() and global_function_search.
    - Updated model_selection_ex.cpp and optimization_ex.cpp to use these new tools.
 - Added loss_ranking_ layer.
 - Added loss_epsilon_insensitive_ layer.
 - Added softmax_all layer.
 - Added loss_dot layer.
 - Made log loss layers more numerically stable.
 - Upgraded the con_ layer so that you can set nr or nc to 0 in the layer
   specification, which means "make the filter cover the whole input image
   dimension". So it's just an easy way to make a filter sized exactly so that
   it will have one output along that dimension.
 - Added support for non-scale-invariant MMOD.
 - Added a Python API for get_face_chips(), which allows you to extract aligned
   face images.
 - Also added Python bindings for count_steps_without_decrease() and
   count_steps_without_decrease_robust().
 - Added jitter_image().
 - Various improvements to CMake scripts.
 - Added the USE_NEON_INSTRUCTIONS CMake option.
 - Added get_integer() and get_integer_in_range() to dlib::rand.
 - Added a get_net parameter that allows calling the function without forcing a
   flush to disk.
 - Sometimes the loss_mmod_ layer could experience excessively long runtimes
   during early iterations since the model might produce a huge number of false
   alarms while the detector is still bad. Processing all these detections could
   cause it to run slowly until the model was good enough to avoid really
   excessive amounts of false alarms. This change puts more of a limit on the
   number of false alarms processed during those early iterations and avoids
   the slowdown.
 - chol() will now use a banded Cholesky algorithm for banded matrices, making
   it much faster in these cases.
 - Changed tensor so that, when reallocating memory, it frees any existing
   memory *before* allocating new memory. It used to be the other way around,
   which caused momentary spikes of increased memory usage. This could put you
   over the total memory available in some cases, which is obviously less than
   ideal behavior.
 - Made resizable_tensor objects not perform a reallocation if they are
   resized to be smaller. Instead, they now behave like std::vector in that
   they just change their nominal size but keep the same memory, only
   reallocating if they are resized to something larger than their underlying
   memory block. This change makes some uses of dlib faster, in particular,
   running networks on a large set of images of differing sizes will now run
   faster since there won't be any GPU reallocations, which are notoriously
   slow.
 - Upgraded the input layer so you can give input<std::array<matrix<T>,K>>
   types as input layer specifications. This will create input tensors with
   K channels.
 - Changed the timing code to use the C++11 high resolution clock and
   atomics. This makes the timing code a lot more precise.
 - Made the loss dumping between learning rate changes a little more relaxed.
   In particular, rather than just dumping exactly 400 of the last loss values,
   it now dumps 400 + 10% of the loss buffer. This way, the amount of the dump
   is proportional to the steps-without-progress threshold. This is better
   because when the user sets the steps without progress to something larger it
   probably means you need to look at more loss values to determine that we
   should stop, so dumping more in that case ought to be better.

Non-Backwards Compatible Changes:
 - Changed the random_cropper's set_min_object_size() routine to take min box
   dimensions in the same format as the mmod_options object (i.e. two lengths
   measured in pixels). This should make defining random_cropping strategies
   that are consistent with MMOD settings much more straightforward, since you
   can just take the mmod_options settings and give them to the random_cropper
   and it will do the right thing.
 - Changed the mean squared loss layers to return a loss that's the MSE, not
   0.5*MSE. The only thing this affects is the logging messages that print
   during training, which were confusing since the reported loss was half the
   size you would expect.
 - Changed test_regression_function() and cross_validate_regression_trainer()
   to output 2 more statistics, which are the mean absolute error and the
   standard deviation of the absolute error. This means these functions now
   return 4D rather than 2D vectors. I also made test_regression_function()
   take a non-const reference to the regression function so that DNN objects
   can be tested.
 - Fixed shape_predictor_trainer padding so that it behaves as it used to. In
   dlib 19.7 the padding code was changed and accidentally doubled the size of
   the applied padding when in the older (and still default) landmark_relative
   padding mode. It's not a huge deal either way, but this change reverts back
   to the intended behavior.
 - Changed test_regression_function() and cross_validate_regression_trainer()
   to output correlation rather than squared correlation. Also added two more
   outputs: average absolute error and the standard deviation of the absolute
   error.

Bug fixes:
 - Fixed DLIB_ISO_CPP_ONLY not building.
 - Fixed toMat() not compiling in some cases.
 - Significantly reduced the compile time of the DNN example programs in
   Visual Studio.
 - Fixed a few image processing functions that weren't using the generic image
   interface.
 - Fixed a bug in the random_cropper where it might crash due to division by 0
   if small images were given as input.
 - Fixed a bug in how the mmod_options automatically determines detection
   window sizes. It would pick a bad size in some cases.
 - Fixed load_image_dataset()'s skip_empty_images() option. It wasn't
   skipping images that only have ignore boxes when you load into mmod_rects
   like it should have been.
 - Changed graph construction for chinese_whispers() so that each face is
   always included in the edge graph. If it isn't then the output labels from
   chinese_whispers would be missing faces in this degenerate case. So
   basically this fixes a bug where chinese_whispers(), when called from
   Python, would sometimes return a labels array that doesn't include labels
   for all the inputs.
 - Fixed a bug in dlib's MS Windows GUI code that was introduced a little
   while back when we switched everything to std::shared_ptr. Turns out
   std::shared_ptr has some surprising limitations. This change fixes a bug
   where the program crashes or hangs sometimes during program shutdown.
 - Fixed an error in TIME_THIS(). It was still printing in seconds when it said
   minutes in the output.
 - Added the missing implementation of tabbed_display::selected_tab.
 - Changed the Windows signaler and mutex code to use the C++11 thread
   library instead of the old win32 functions. I did this to work around how
   Windows unloads DLLs. In particular, during DLL unload Windows will kill all
   threads, THEN it will destruct global objects. So this leads to problems
   where a global object that owns threads tries to tell them to shutdown and
   everything goes wrong. The specific problem this code change fixes is when
   signaler::broadcast() is called on a signaler that was being waited on by
   one of these abruptly killed threads. In that case, the old code would
   deadlock inside signaler::broadcast(). This new code doesn't seem to have
   that problem, thereby mitigating the Windows DLL unload behavior in some
   situations.
 - Fixed the DLIB_STACK_TRACE macro.
</current>
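The resizable_tensor change in the notes above (shrinking only changes the nominal size; memory is reallocated only when growing past the current capacity, like std::vector) can be sketched with a toy buffer. Names here are illustrative, not dlib's API:

```python
class ResizableBuffer:
    """Toy model of shrink-without-reallocation resizing."""
    def __init__(self):
        self._storage = []
        self.size = 0
        self.reallocations = 0

    def resize(self, n):
        if n > len(self._storage):
            self._storage = [0.0] * n  # grow past capacity: reallocate
            self.reallocations += 1
        self.size = n                  # shrink or fit: just change nominal size

buf = ResizableBuffer()
buf.resize(100)  # allocates
buf.resize(10)   # no reallocation, capacity stays at 100
buf.resize(50)   # still within capacity, no reallocation
buf.resize(200)  # grows past capacity, reallocates
```

This is why running a network over images of varying sizes no longer triggers a (slow) GPU reallocation on every size change, only when a new maximum is hit.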
<!-- ************************************************************************************** -->

<old name="19.7" date="Sep 17, 2017">
New Features and Improvements:
 - Deep Learning:
    - The CNN+MMOD detector is now a multi-class detector. In particular,

@@ -50,7 +177,7 @@
   exactly right when boxes were auto-ignored. There weren't any practical
   user facing problems due to this, but it has nevertheless been fixed.
</old>

<!-- ************************************************************************************** -->
docs/docs/term_index.xml
View file @
233a2393
...
@@ -122,6 +122,7 @@
...
@@ -122,6 +122,7 @@
<term file="ml.html" name="dnn_trainer" include="dlib/dnn.h"/>
<term file="dlib/dnn/trainer_abstract.h.html" name="force_flush_to_disk" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="EXAMPLE_LOSS_LAYER_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_binary_hinge_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_binary_log_" include="dlib/dnn.h"/>
...
@@ -129,6 +130,9 @@
<term file="dlib/dnn/loss_abstract.h.html" name="loss_multiclass_log_per_pixel_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_multiclass_log_per_pixel_weighted_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_mean_squared_per_pixel_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_ranking_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_dot_" include="dlib/dnn.h"/>
<term file="dlib/dnn/loss_abstract.h.html" name="loss_epsilon_insensitive_" include="dlib/dnn.h"/>
<term file="ml.html" name="loss_metric_" include="dlib/dnn.h"/>
<term file="ml.html" name="loss_mean_squared_" include="dlib/dnn.h"/>
<term file="ml.html" name="loss_mean_squared_multioutput_" include="dlib/dnn.h"/>
...
@@ -160,6 +164,7 @@
<term file="dlib/dnn/layers_abstract.h.html" name="sig_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="htan_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="softmax_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="softmax_all_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="add_prev_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="concat_" include="dlib/dnn.h"/>
<term file="dlib/dnn/layers_abstract.h.html" name="inception" include="dlib/dnn.h"/>
...
@@ -257,6 +262,10 @@
<term file="optimization.html" name="find_min" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_min_box_constrained" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_max_box_constrained" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_max_global" include="dlib/global_optimization.h"/>
<term file="optimization.html" name="find_min_global" include="dlib/global_optimization.h"/>
<term file="optimization.html" name="global_function_search" include="dlib/global_optimization.h"/>
<term file="optimization.html" name="upper_bound_function" include="dlib/global_optimization.h"/>
<term file="optimization.html" name="max_cost_assignment" include="dlib/optimization.h"/>
<term link="optimization.html#max_cost_assignment" name="Hungarian Algorithm" include="dlib/optimization.h"/>
<term file="optimization.html" name="max_sum_submatrix" include="dlib/optimization.h"/>
...
@@ -290,10 +299,11 @@
<term link="dlib/graph_cuts/min_cut_abstract.h.html#node_label" name="FREE_NODE" include="dlib/graph_cuts.h"/>
<term file="optimization.html" name="solve_trust_region_subproblem" include="dlib/optimization.h"/>
<term file="optimization.html" name="solve_trust_region_subproblem_bounded" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_min_single_variable" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_min_using_approximate_derivatives" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_min_bobyqa" include="dlib/optimization.h"/>
<term file="optimization.html" name="find_optimal_parameters" include="dlib/optimization/find_optimal_parameters.h"/>
<!-- DEPRECATED <term file="optimization.html" name="find_optimal_parameters" include="dlib/optimization/find_optimal_parameters.h"/> -->
<term file="optimization.html" name="elastic_net" include="dlib/optimization/elastic_net.h"/>
<term file="optimization.html" name="solve_qp_box_constrained" include="dlib/optimization.h"/>
<term file="optimization.html" name="solve_qp_box_constrained_blockdiag" include="dlib/optimization.h"/>
...
@@ -348,6 +358,7 @@
<term file="algorithms.html" name="create_max_margin_projection_hash" include="dlib/lsh.h"/>
<term file="imaging.html" name="randomly_sample_image_features" include="dlib/statistics.h"/>
<term file="algorithms.html" name="disjoint_subsets" include="dlib/disjoint_subsets.h"/>
<term file="algorithms.html" name="disjoint_subsets_sized" include="dlib/disjoint_subsets.h"/>
<term link="algorithms.html#disjoint_subsets" name="union-find" include="dlib/disjoint_subsets.h"/>
<term file="linear_algebra.html" name="rectangle" include="dlib/geometry.h"/>
<term file="linear_algebra.html" name="drectangle" include="dlib/geometry.h"/>
...