钟尚武 / dlib

Commit a68b1f7f, authored Dec 17, 2016 by Davis King

    Added docs and cleaned up code slightly.

parent 124e0ff4
Showing 2 changed files with 99 additions and 2 deletions:

- dlib/dnn/loss.h (+4, -2)
- dlib/dnn/loss_abstract.h (+95, -0)
dlib/dnn/loss.h — View file @ a68b1f7f

```diff
@@ -900,6 +900,8 @@ namespace dlib
             }
         }

+        float get_margin() const { return 0.1; }
+        float get_distance_threshold() const { return 0.75; }

         template <
             typename const_label_iterator,
@@ -925,8 +927,8 @@ namespace dlib
                 grad.nc() == 1);

-            const float margin = 0.1;
-            const float dist_thresh = 0.75;
+            const float margin = get_margin();
+            const float dist_thresh = get_distance_threshold();

             temp.set_size(output_tensor.num_samples(), output_tensor.num_samples());
             grad_mul.copy_size(temp);
```
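The change above replaces the hard-coded constants with calls to the new `get_margin()` and `get_distance_threshold()` accessors. As a rough illustration of the pairwise loss these two parameters control, here is a standalone sketch in plain C++. It is not dlib's actual implementation (it omits the gradient computation and dlib's tensor types, and the `Sample` struct and `pairwise_hinge_loss` function are illustrative names), but it computes the per-pair hinge terms documented in loss_abstract.h with the same defaults (margin = 0.1, distance threshold = 0.75):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative stand-in for one mini-batch sample: an embedding vector
// plus its class label.
struct Sample { std::vector<float> v; unsigned long label; };

// Euclidean distance between two embedding vectors, i.e. length(A-B).
static float length(const std::vector<float>& a, const std::vector<float>& b)
{
    float sum = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i]-b[i])*(a[i]-b[i]);
    return std::sqrt(sum);
}

// Sums the documented loss over all pairs in the mini-batch.
float pairwise_hinge_loss(const std::vector<Sample>& batch,
                          float margin = 0.1f, float dist_thresh = 0.75f)
{
    float loss = 0;
    for (std::size_t i = 0; i < batch.size(); ++i)
    for (std::size_t j = i+1; j < batch.size(); ++j)
    {
        const float d = length(batch[i].v, batch[j].v);
        if (batch[i].label == batch[j].label)
            // same class: penalize pairs farther apart than dist_thresh-margin
            loss += std::max(0.0f, d - dist_thresh + margin);
        else
            // different class: penalize pairs closer than dist_thresh+margin
            loss += std::max(0.0f, dist_thresh - d + margin);
    }
    return loss;
}
```

Note that a same-class pair only stops contributing once it is comfortably inside the threshold (by at least the margin), and a different-class pair only stops contributing once it is comfortably outside it.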
dlib/dnn/loss_abstract.h — View file @ a68b1f7f

```diff
@@ -525,6 +525,99 @@ namespace dlib
     template <typename SUBNET>
     using loss_mmod = add_loss_layer<loss_mmod_, SUBNET>;

+// ----------------------------------------------------------------------------------------
+
+    class loss_metric_
+    {
+        /*!
+            WHAT THIS OBJECT REPRESENTS
+                This object implements the loss layer interface defined above by
+                EXAMPLE_LOSS_LAYER_.  In particular, it allows you to learn to map objects
+                into a vector space where objects sharing the same class label are close to
+                each other while objects with different labels are far apart.
+
+                To be specific, it optimizes the following loss function which considers
+                all pairs of objects in a mini-batch and computes a different loss depending
+                on their respective class labels.  So if objects A1 and A2 in a mini-batch
+                share the same class label then their contribution to the loss is:
+                    max(0, length(A1-A2) - get_distance_threshold() + get_margin())
+
+                While if A1 and B1 have different class labels then their contribution to
+                the loss function is:
+                    max(0, get_distance_threshold() - length(A1-B1) + get_margin())
+
+                Therefore, this loss layer optimizes a version of the hinge loss.
+                Moreover, the loss is trying to make sure that all objects with the same
+                label are within get_distance_threshold() distance of each other.
+                Conversely, if two objects have different labels then they should be more
+                than get_distance_threshold() distance from each other in the learned
+                embedding.  So this loss function gives you a natural decision boundary for
+                deciding if two objects are from the same class.
+        !*/
+
+    public:
+
+        typedef unsigned long training_label_type;
+        typedef matrix<float,0,1> output_label_type;
+
+        template <
+            typename SUB_TYPE,
+            typename label_iterator
+            >
+        void to_label (
+            const tensor& input_tensor,
+            const SUB_TYPE& sub,
+            label_iterator iter
+        ) const;
+        /*!
+            This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
+            it has the additional calling requirements that:
+                - sub.get_output().nr() == 1
+                - sub.get_output().nc() == 1
+                - sub.get_output().num_samples() == input_tensor.num_samples()
+                - sub.sample_expansion_factor() == 1
+            This loss expects the network to produce a single vector (per sample) as
+            output.  This vector is the learned embedding.  Therefore, to_label() just
+            copies these output vectors from the network into the output label_iterators
+            given to this function, one for each sample in the input_tensor.
+        !*/
+
+        float get_margin() const { return 0.1; }
+        /*!
+            ensures
+                - returns the margin value used by the loss function.  See the discussion
+                  in WHAT THIS OBJECT REPRESENTS for details.
+        !*/
+
+        float get_distance_threshold() const { return 0.75; }
+        /*!
+            ensures
+                - returns the distance threshold value used by the loss function.  See the
+                  discussion in WHAT THIS OBJECT REPRESENTS for details.
+        !*/
+
+        template <
+            typename const_label_iterator,
+            typename SUBNET
+            >
+        double compute_loss_value_and_gradient (
+            const tensor& input_tensor,
+            const_label_iterator truth,
+            SUBNET& sub
+        ) const;
+        /*!
+            This function has the same interface as
+            EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient() except it has the
+            additional calling requirements that:
+                - sub.get_output().nr() == 1
+                - sub.get_output().nc() == 1
+                - sub.get_output().num_samples() == input_tensor.num_samples()
+                - sub.sample_expansion_factor() == 1
+        !*/
+    };
+
+    template <typename SUBNET>
+    using loss_metric = add_loss_layer<loss_metric_, SUBNET>;
+
+// ----------------------------------------------------------------------------------------

// ----------------------------------------------------------------------------------------

     class loss_mean_squared_
@@ -584,6 +677,8 @@ namespace dlib
     template <typename SUBNET>
     using loss_mean_squared = add_loss_layer<loss_mean_squared_, SUBNET>;

// ----------------------------------------------------------------------------------------

}

#endif // DLIB_DNn_LOSS_ABSTRACT_H_
```
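The new documentation points out that get_distance_threshold() gives a natural decision boundary: two embeddings are judged to come from the same class exactly when their distance falls below the threshold. A minimal standalone sketch of that decision rule (plain C++, not using dlib's types; `same_class` is an illustrative name, and 0.75 is the default threshold this commit documents):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Decide whether two learned embeddings belong to the same class by
// comparing their Euclidean distance against the distance threshold.
bool same_class(const std::vector<float>& a, const std::vector<float>& b,
                float dist_thresh = 0.75f)
{
    float sum = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i]-b[i])*(a[i]-b[i]);
    return std::sqrt(sum) < dist_thresh;
}
```

Because the loss also applies a margin on both sides of the threshold during training, embeddings of well-trained networks should rarely land close to this boundary.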