Commit a5121e57 in dlib, authored Dec 18, 2017 by Davis King
updated docs
parent 4149ade9
Showing 2 changed files with 81 additions and 103 deletions:
docs/docs/optimization.xml (+1, -1)
docs/docs/release_notes.xml (+80, -102)
docs/docs/optimization.xml (view file @ a5121e57)
...
...
@@ -1225,7 +1225,7 @@ Or it can use the elastic net regularizer:
using the <a href="#global_function_search">global_function_search</a> object.
See global_function_search's documentation for details of the algorithm. Importantly,
find_max_global() does not require the user to specify derivatives
- or starting guesses, all while taking efforts to use as few calls to
+ or starting guesses, all while attempting to use as few calls to
the objective function as possible. It is therefore appropriate for tasks
where evaluating the objective function is time consuming or
expensive, such as in hyper parameter optimization of machine
...
...
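As a quick illustration of the behavior described in the documentation text above, here is a minimal, hypothetical sketch of how find_max_global() is typically called. The objective function, the bounds, and the call budget are made up purely for the example and are not part of this commit.

#include <dlib/global_optimization.h>
#include <cmath>
#include <iostream>

int main()
{
    using namespace dlib;

    // Maximize a toy objective.  No derivatives and no starting guess are
    // supplied; only box bounds and a budget of objective-function calls.
    auto result = find_max_global(
        [](double x, double y)
        {
            return -(std::pow(x - 0.3, 2) + std::pow(y + 1.2, 2));
        },
        {-10, -10},              // lower bounds on x and y
        { 10,  10},              // upper bounds on x and y
        max_function_calls(60)); // how many objective evaluations are allowed

    std::cout << "best inputs:\n" << result.x << "\n";
    std::cout << "best objective value: " << result.y << "\n";
}

Note how no derivatives or starting point are supplied, which is exactly the property the documentation change above highlights.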
docs/docs/release_notes.xml (view file @ a5121e57)
...
...
@@ -12,127 +12,105 @@
<current>
New Features and Improvements:
- - added disjoint_subsets_sized
- - added semantic segmentation example: dnn_semantic_segmentation_ex.cpp and dnn_semantic_segmentation_train_ex.cpp
- - optimization:
-    - Added upper_bound_function object.
-    - Added solve_trust_region_subproblem_bounded()
-    - Added tools for doing global optimization. The main new tools here are find_max_global() and global_function_search.
-    - Updated model_selection_ex.cpp and optimization_ex.cpp to use these new tools.
- - added call_function_and_expand_args()
- - Added loss_ranking_ layer
- - Added loss_epsilon_insensitive_ layer
- - Added softmax_all layer.
- - Added loss_dot layer
- - made log loss layers more numerically stable.
- - Upgraded the con_ layer so that you can set the nr or nc to 0 in the layer specification and this means "make the filter cover the whole input image dimension". So it's just an easy way to make a filter sized exactly so that it will have one output along that dimension.
- - Add support for non-scale-invariant MMOD
- - Python API for get_face_chips() which allows you to extract aligned face images.
- - Also added python binding for count_steps_without_decrease() and count_steps_without_decrease_robust().
- - Added jitter_image()
- - Various improvements to cmake scripts.
- - Added USE_NEON_INSTRUCTIONS cmake option.
- - Added get_integer() and get_integer_in_range() to dlib::rand.
- - Add get_net parameter that allows to call the function without forcing flush to disk.
- - Sometimes the loss_mmod_ layer could experience excessively long runtime during early iterations since the model might produce a huge number of false alarms while the detector is still bad. Processing all these detections can cause it to run slowly until the model is good enough to avoid really excessive amounts of false alarms. This change puts more of a limit on the number of false alarms processed during those early iterations and avoids the slowdown.
- - chol() will use a banded cholesky algorithm for banded matrices, making it much faster in these cases.
- - Changed tensor so that, when reallocating memory, it frees any existing memory *before* allocating new memory. It used to be the other way around which caused momentary spikes of increased memory usage. This could put you over the total memory available in some cases which is obviously less than ideal behavior.
- - Made resizable_tensor objects not perform a reallocation if they are resized to be smaller. Instead, they now behave like std::vector in that they just change their nominal size but keep the same memory, only reallocating if they are resized to something larger than their underlying memory block. This change makes some uses of dlib faster, in particular, running networks on a large set of images of differing sizes will now run faster since there won't be any GPU reallocations, which are notoriously slow.
- - Upgraded the input layer so you can give input<std::array<matrix<T>,K>> types as input layer specifications. This will create input tensors with K channels.
+ - Added a global optimizer, find_max_global(), which is suitable for optimizing expensive functions with many local optima. For example, you can use it for hyperparameter optimization. See model_selection_ex.cpp for an example.
+ - Updates to the deep learning tooling:
+    - Added semantic segmentation examples: dnn_semantic_segmentation_ex.cpp and dnn_semantic_segmentation_train_ex.cpp
+    - New layers: loss_ranking, loss_epsilon_insensitive, softmax_all, and loss_dot.
+    - Made log loss layers more numerically stable.
+    - Upgraded the con layer so you can set the number of rows or columns to 0 in the layer specification. Doing this means "make the filter cover the whole input image dimension". So it's an easy way to make a filter sized exactly so that it will have one output along that dimension, effectively making it like a fully connected layer operating on a row or column.
+    - Added support for non-scale-invariant MMOD.
+    - Added an optional parameter to dnn_trainer::get_net() that allows you to call the function without forcing a state flush to disk.
+    - Sometimes the loss_mmod layer could experience excessively long runtime during early training iterations. This has been optimized and is now much faster.
+    - Optimized the tensor's management of GPU memory. It now uses less memory in some cases. It will also not perform a reallocation if resized to a smaller size. Instead, tensors now behave like std::vector in that they just change their nominal size but keep the same memory, only reallocating if they are resized to something larger than their underlying memory block. This change makes some uses of dlib faster, in particular, running networks on a large set of images of differing sizes will now run faster since there won't be any GPU reallocations, which are notoriously slow.
+    - Upgraded the input layer so you can give input<std::array<matrix<T>,K>> types as input. Doing this will create input tensors with K channels.
+ - Added disjoint_subsets_sized
+ - Added Python APIs: get_face_chips(), count_steps_without_decrease(), count_steps_without_decrease_robust(), and jitter_image().
+ - Various improvements to cmake scripts: e.g. improved warning and error messages, added USE_NEON_INSTRUCTIONS option.
+ - chol() will use a banded cholesky algorithm for banded matrices, making it much faster in these cases.
+ - Changed the timing code to use the C++11 high resolution clock and atomics. This makes the timing code a lot more precise.
+ - Made the loss dumping between learning rate changes a little more relaxed. In particular, rather than just dumping exactly 400 of the last loss values, it now dumps 400 + 10% of the loss buffer. This way, the amount of the dump is proportional to the steps without progress threshold. This is better because when the user sets the steps without progress to something larger it probably means you need to look at more loss values to determine that we should stop, so dumping more in that case ought to be better.
Non-Backwards Compatible Changes:
- - Changed the random_cropper's set_min_object_size() routine to take min box dimensions in the same format as the mmod_options object (i.e. two lengths measured in pixels). This should make defining random_cropping strategies that are consistent with MMOD settings much more straightforward since you can just take the mmod_options settings and give them to the random_cropper and it will do the right thing.
+ - Changed the random_cropper's set_min_object_size() routine to take min box dimensions in the same format as the mmod_options object (i.e. two lengths measured in pixels). This should make defining random_cropping strategies that are consistent with MMOD settings more straightforward since you can simply take the mmod_options settings and give them to the random_cropper and it will do the right thing.
- - Changed the mean squared loss layers to return a loss that's the MSE, not 0.5*MSE. The only thing this effects is the logging messages that print during training, which were confusing since the reported loss was half the size you would expect.
+ - Changed the mean squared loss layers to return a loss that's the MSE, not 0.5*MSE. The only thing this effects is the logging messages that print during training, which were confusing since the reported loss was half the size you might naively expect.
- - Changed test_regression_function() and cross_validate_regression_trainer() to output 2 more statistics, which are the mean absolute error and the standard deviation of the absolute error. This means these functions now return 4D rather than 2D vectors. I also made test_regression_function() take a non-const reference to the regression function so that DNN objects can be tested.
- - Fixed shape_predictor_trainer padding so that it behaves as it used to. In dlib 19.7 the padding code was changed and accidentally doubled the size of the applied padding when in the older (and still default) landmark_relative padding mode. It's not a huge deal either way, but this change reverts back to the intended behavior.
- - Changed test_regression_function() and cross_validate_regression_trainer() to output correlation rather than squared correlation. Also added two more outputs: average absolute error and the standard deviation of the absolute error.
+ - Changed the outputs of test_regression_function() and cross_validate_regression_trainer(). These functions now output 4D rather than 2D vectors. The new output is: mean squared error, correlation, mean absolute error, and standard deviation of absolute error. I also made test_regression_function() take a non-const reference to the regression function so that DNN objects can be tested.
+ - Fixed shape_predictor_trainer padding so it behaves as it used to. In dlib 19.7 the padding code was changed and accidentally doubled the size of the applied padding in some cases. It's not a huge deal either way, but this change reverts back to the previous behavior.
Bug fixes:
- Fixed DLIB_ISO_CPP_ONLY not building.
- Fixed toMat() not compiling in some cases.
- - Significantly reduced the compile time of the DNN example programs in visual studio.
- - Fixed a few image processing functions that weren't using the generic image interface.
- - Fixed a bug in the random_cropper where it might crash due to division by 0 if small images are given as input.
- - Fixed a bug in how the mmod_options automatically determines detection window sizes. It would pick a bad size in some cases.
+ - Significantly reduced the compile time of the DNN example programs in visual studio.
+ - Fixed a few image processing functions that weren't using the generic image interface.
+ - Fixed a bug in the random_cropper where it might crash due to division by 0 if small images were given as input.
+ - Fixed a bug in how the mmod_options automatically determines detection window sizes. It would pick a bad size in some cases.
- - Fixed load_image_dataset()'s skip_empty_images() option. It wasn't skipping images that only have ignore boxes when you load into mmod_rects like it should have been.
- - Changed graph construction for chinese_whispers() so that each face is always included in the edge graph. If it isn't then the output labels from chinese_whispers would be missing faces in this degenerate case. So basically this fixes a bug where chinese_whispers(), when called from python, would sometimes return a labels array that doesn't include labels for all the inputs.
+ - Fixed load_image_dataset()'s skip_empty_images() option. It wasn't skipping images that only have ignore boxes when you load into mmod_rect objects.
+ - Fixed a bug where chinese_whispers(), when called from python, would sometimes return a labels array that didn't include labels for all the inputs.
- - Fixed a bug in dlib's MS Windows GUI code that was introduced a little while back when we switched everything to std::shared_ptr. Turns out std::shared_ptr has some surprising limitations. This change fixes a bug where the program crashes or hangs sometimes during program shutdown.
- - Fixed error in TIME_THIS(). It was still printing in seconds when it said minutes in the output.
- - Adding missing implementation of tabbed_display::selected_tab
+ - Fixed a bug in dlib's MS Windows GUI code that was introduced a little while back when we switched everything to std::shared_ptr. This change fixes a bug where the program crashes or hangs sometimes during program shutdown.
+ - Fixed error in TIME_THIS() introduced in dlib 19.7. It was printing seconds when it said minutes in the output.
+ - Adding missing implementation of tabbed_display::selected_tab.
- - Changed the windows signaler and mutex code to use the C++11 thread library instead of the old win32 functions. I did this to work around how windows unloads dlls. In particular, during dll unload windows will kill all threads, THEN it will destruct global objects. So this leads to problems where a global object that owns threads tries to tell them to shutdown and everything goes wrong. The specific problem this code change fixes is when signaler::broadcast() is called on a signaler that was being waited on by one of these abruptly killed threads. In that case, the old code would deadlock inside signaler::broadcast(). This new code doesn't seem to have that problem, thereby mitigating the windows dll unload behavior in some situations.
+ - Changed the windows signaler and mutex code to use the C++11 thread library instead of the old win32 functions. I did this to work around how windows unloads dlls. In particular, during dll unload windows will kill all threads, THEN it will destruct global objects. So this can lead to problems when a global object that owns threads tries to tell them to shutdown, since the threads have already vanished. The new code mitigates some of these problems, in particular, there were some cases where unloading dlib's python extension would deadlock. This should now be fixed.
- - Fixed DLIB_STACK_TRACE macro.
+ - Fixed DLIB_STACK_TRACE macro not compiling.
</current>
...
...
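To make a couple of the deep-learning items above concrete, here is a small, hypothetical network sketch. The layer sizes, filter count, and the two-channel input are invented for illustration; only the general pattern (an input<std::array<matrix<T>,K>> input layer, a con layer with nr and nc set to 0, and the optional force_flush_to_disk argument to dnn_trainer::get_net()) reflects the features described in the release notes.

#include <dlib/dnn.h>
#include <array>
#include <vector>

using namespace dlib;

// Toy network: each sample is a std::array of two equally sized images, so the
// input tensor has 2 channels.  The con layer uses nr=0 and nc=0, which sizes
// the filter to cover the whole input plane, so each of its 8 filters produces
// exactly one output.  All sizes here are arbitrary example values.
using toy_net_type = loss_mean_squared<
                        fc<1,
                        relu<con<8, 0, 0, 1, 1,
                        input<std::array<matrix<float>, 2>>>>>>;

int main()
{
    toy_net_type net;

    std::vector<std::array<matrix<float>, 2>> samples;  // two channels per sample
    std::vector<float> labels;
    // ... fill samples/labels and call trainer.train(samples, labels) as usual ...

    dnn_trainer<toy_net_type> trainer(net);
    // The new optional argument lets you look at the current network without
    // forcing the trainer to flush its synchronization state to disk first.
    const toy_net_type& current = trainer.get_net(force_flush_to_disk::no);
    (void)current;
}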
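Similarly, the dlib::rand additions mentioned in the old notes above, get_integer() and get_integer_in_range(), can be exercised with a couple of lines. The bounds below are arbitrary example values, and the exact range convention (inclusive versus half-open) should be checked against dlib's rand documentation rather than taken from this sketch.

#include <dlib/rand.h>
#include <iostream>

int main()
{
    dlib::rand rnd;

    auto a = rnd.get_integer(100);            // a random non-negative integer bounded by 100
    auto b = rnd.get_integer_in_range(5, 10); // a random integer drawn from the given range

    std::cout << a << " " << b << "\n";
}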