Commit 2092e303 authored May 15, 2016 by Davis King
Renamed compute_loss() to compute_loss_value_and_gradient() in the loss interface.

parent ee2a0070
Showing 3 changed files with 23 additions and 21 deletions (+23 -21)

dlib/dnn/core.h           +4  -4
dlib/dnn/loss.h           +3  -3
dlib/dnn/loss_abstract.h  +16 -14
dlib/dnn/core.h

@@ -2171,7 +2171,7 @@ namespace dlib
         {
             subnetwork.forward(x);
             dimpl::subnet_wrapper<subnet_type> wsub(subnetwork);
-            return loss.compute_loss(x, lbegin, wsub);
+            return loss.compute_loss_value_and_gradient(x, lbegin, wsub);
         }

         template <typename input_iterator, typename label_iterator>
@@ -2191,7 +2191,7 @@ namespace dlib
         {
             subnetwork.forward(x);
             dimpl::subnet_wrapper<subnet_type> wsub(subnetwork);
-            return loss.compute_loss(x, wsub);
+            return loss.compute_loss_value_and_gradient(x, wsub);
         }

         template <typename input_iterator>
@@ -2212,7 +2212,7 @@ namespace dlib
         {
             subnetwork.forward(x);
             dimpl::subnet_wrapper<subnet_type> wsub(subnetwork);
-            double l = loss.compute_loss(x, lbegin, wsub);
+            double l = loss.compute_loss_value_and_gradient(x, lbegin, wsub);
             subnetwork.back_propagate_error(x);
             return l;
         }
@@ -2232,7 +2232,7 @@ namespace dlib
         {
             subnetwork.forward(x);
             dimpl::subnet_wrapper<subnet_type> wsub(subnetwork);
-            double l = loss.compute_loss(x, wsub);
+            double l = loss.compute_loss_value_and_gradient(x, wsub);
             subnetwork.back_propagate_error(x);
             return l;
         }
dlib/dnn/loss.h

@@ -47,7 +47,7 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
@@ -148,7 +148,7 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
@@ -259,7 +259,7 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
dlib/dnn/loss_abstract.h

@@ -33,7 +33,8 @@ namespace dlib
                 Finally, note that there are two broad flavors of loss layer, supervised
                 and unsupervised.  The EXAMPLE_LOSS_LAYER_ as shown here is a supervised
                 layer.  To make an unsupervised loss you simply leave out the label_type
-                typedef, to_label(), and the truth iterator argument to compute_loss().
+                typedef, to_label(), and the truth iterator argument to
+                compute_loss_value_and_gradient().
         !*/

     public:
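As a point of reference for the supervised/unsupervised split described in the hunk above, an unsupervised loss under the renamed interface would look roughly like the declaration below. This is a hedged sketch, not part of this commit; the class name example_unsupervised_loss_ is hypothetical.

    #include <dlib/dnn.h>

    // Hypothetical unsupervised loss: no label_type typedef, no to_label(), and
    // no truth iterator argument, as the documentation above describes.
    class example_unsupervised_loss_
    {
    public:
        template <typename SUBNET>
        double compute_loss_value_and_gradient (
            const dlib::tensor& input_tensor,
            SUBNET& sub
        ) const;  // body omitted; it follows the same gradient contract as the
                  // supervised version, just without labels.
    };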
@@ -90,7 +91,7 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
@@ -116,9 +117,10 @@ namespace dlib
             - This function computes a loss function that describes how well the output
               of sub matches the expected labels given by truth.  Let's write the loss
               function as L(input_tensor, truth, sub).
-            - Then compute_loss() computes the gradient of L() with respect to the
-              outputs in sub.  Specifically, compute_loss() assigns the gradients into
-              sub by performing the following tensor assignments, for all valid i:
+            - Then compute_loss_value_and_gradient() computes the gradient of L() with
+              respect to the outputs in sub.  Specifically, compute_loss_value_and_gradient()
+              assigns the gradients into sub by performing the following tensor
+              assignments, for all valid i:
                 - layer<i>(sub).get_gradient_input() = the gradient of
                   L(input_tensor,truth,sub) with respect to layer<i>(sub).get_output().
             - returns L(input_tensor,truth,sub)
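To make the renamed contract concrete, here is a minimal sketch of a supervised loss layer written against the interface documented above. It is an illustration only, not part of this commit: the class name loss_mse_sketch_ is hypothetical, mean squared error is used purely as an example, and it assumes each sample produces a single scalar output. The other pieces EXAMPLE_LOSS_LAYER_ requires (to_label(), serialization) are omitted.

    #include <dlib/dnn.h>

    // Hypothetical example, not from this commit.
    class loss_mse_sketch_
    {
    public:
        typedef float label_type;

        template <typename const_label_iterator, typename SUBNET>
        double compute_loss_value_and_gradient (
            const dlib::tensor& /*input_tensor*/,
            const_label_iterator truth,
            SUBNET& sub
        ) const
        {
            const dlib::tensor& output_tensor = sub.get_output();
            dlib::tensor& grad = sub.get_gradient_input();

            // Average the squared error over the batch and write the gradient of
            // the loss with respect to each output into get_gradient_input(), as
            // the EXAMPLE_LOSS_LAYER_ documentation above requires.
            const double scale = 1.0/output_tensor.num_samples();
            double loss = 0;
            const float* out = output_tensor.host();
            float* g = grad.host();
            for (long i = 0; i < output_tensor.num_samples(); ++i)
            {
                const float err = out[i] - *truth++;
                loss += scale*err*err;
                g[i] = static_cast<float>(2*scale*err);
            }
            return loss;
        }

        // A real loss layer also provides to_label() and serialize()/deserialize()
        // per the rest of loss_abstract.h; they are left out of this sketch.
    };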
@@ -188,14 +190,14 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
         ) const;
         /*!
-            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss() except
-            it has the additional calling requirements that:
+            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
+            except it has the additional calling requirements that:
                 - sub.get_output().nr() == 1
                 - sub.get_output().nc() == 1
                 - sub.get_output().k() == 1
@@ -254,14 +256,14 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
         ) const;
         /*!
-            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss() except
-            it has the additional calling requirements that:
+            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
+            except it has the additional calling requirements that:
                 - sub.get_output().nr() == 1
                 - sub.get_output().nc() == 1
                 - sub.get_output().k() == 1
@@ -323,14 +325,14 @@ namespace dlib
             typename const_label_iterator,
             typename SUBNET
             >
-        double compute_loss (
+        double compute_loss_value_and_gradient (
             const tensor& input_tensor,
             const_label_iterator truth,
             SUBNET& sub
         ) const;
         /*!
-            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss() except
-            it has the additional calling requirements that:
+            This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
+            except it has the additional calling requirements that:
                 - sub.get_output().nr() == 1
                 - sub.get_output().nc() == 1
                 - sub.get_output().num_samples() == input_tensor.num_samples()
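For context on the nr()/nc() requirements repeated in the three hunks above: they are satisfied when the layer feeding the loss emits one scalar per sample, for example a single-output fully connected layer. The sketch below is an assumption for illustration, written against the layer names of the released dlib 19.x DNN API (loss_binary_hinge, fc, relu, input), which may differ slightly at this exact revision.

    #include <dlib/dnn.h>
    using namespace dlib;

    // A final fc<1,...> layer gives sub.get_output() nr()==1, nc()==1, k()==1,
    // which is what the loss layers documented above require.
    using net_type = loss_binary_hinge<fc<1, relu<fc<20, input<matrix<float>>>>>>;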