dlib / Commits / e679d66a

Commit e679d66a, authored Oct 14, 2015 by Davis King

    filled out the trainer spec

parent 0ad2cb71

Showing 1 changed file with 125 additions and 4 deletions:
dlib/dnn/trainer_abstract.h (+125, -4)
...
@@ -28,6 +28,10 @@ namespace dlib
                  in solvers_abstract.h

            WHAT THIS OBJECT REPRESENTS
                This object is a tool for training a deep neural network.  To use
                it you supply a neural network type and a solver, then you call
                train() with your training data and it will output a new network
                instance that has hopefully learned something useful from your
                training data.
    !*/
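The contract described above (construct a trainer from a network and a solver, call train() with data and labels, get back the trained network) can be sketched with a self-contained toy stand-in. None of these types come from dlib: toy_net, sgd_solver, and toy_trainer are hypothetical names, the "network" is a single weight w in the model y = w*x, and the "solver" is plain per-sample SGD.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy "network": one parameter w, model y = w*x.  Not a dlib type.
struct toy_net { double w = 0; double predict(double x) const { return w * x; } };

// Toy "solver": plain SGD with a fixed learning rate.  Not a dlib type.
struct sgd_solver { double learning_rate = 0.05; };

// Mirrors the dnn_trainer usage pattern from the spec: constructed from a net
// and a solver, train() runs the optimization and returns the trained net.
class toy_trainer {
public:
    toy_trainer(const toy_net& net, const sgd_solver& solver)
        : net_(net), solver_(solver) {}

    const toy_net& train(const std::vector<double>& data,
                         const std::vector<double>& labels) {
        assert(data.size() == labels.size());
        // One pass per epoch over the (data, label) pairs, minimizing squared error.
        for (unsigned long e = 0; e < num_epochs_; ++e)
            for (std::size_t i = 0; i < data.size(); ++i) {
                const double err = net_.predict(data[i]) - labels[i];
                net_.w -= solver_.learning_rate * err * data[i];
            }
        return net_;
    }

    const toy_net& get_net() const { return net_; }

private:
    toy_net net_;
    sgd_solver solver_;
    unsigned long num_epochs_ = 300;
};
```

Training on data generated by y = 2x drives w to 2, and the same trained net is afterwards available through get_net(), matching the "returns #get_net()" clause below.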
...
@@ -38,46 +42,129 @@ namespace dlib

        dnn_trainer(
        );
        /*!
            ensures
                - #get_net() == a default initialized net_type object.
                - #get_solvers() == a set of default initialized solvers.
        !*/

        explicit dnn_trainer(
            const net_type& net
        );
        /*!
            ensures
                - #get_net() == net
                - #get_solvers() == a set of default initialized solvers.
        !*/

        dnn_trainer(
            const net_type& net,
            const solver_type& solver
        );
        /*!
            ensures
                - #get_net() == net
                - #get_solvers() == a set of solvers that are all initialized with
                  the provided solver instance.
        !*/

        const net_type& get_net(
        ) const;
        /*!
            ensures
                - returns the neural network object in this trainer.  This is the
                  network that is optimized when you call train().
        !*/

        void set_net(
            const net_type& net
        );
        /*!
            ensures
                - #get_net() == net
        !*/

        void set_solver(
            const solver_type& solver_
        );
        /*!
            ensures
                - assigns solver_ to all the solvers in this object.  I.e. solver_
                  will be assigned to each element in get_solvers().
        !*/

        const sstack<solver_type, net_type::num_layers>& get_solvers(
        ) const;
        /*!
            ensures
                - returns the solvers used to optimize each layer of the neural
                  network get_net().  In particular, the first layer's solver is
                  get_solvers().top(), the second layer's solver is
                  get_solvers().pop().top(), and so on.
        !*/

        sstack<solver_type, net_type::num_layers>& get_solvers(
        );
        /*!
            ensures
                - returns the solvers used to optimize each layer of the neural
                  network get_net().  In particular, the first layer's solver is
                  get_solvers().top(), the second layer's solver is
                  get_solvers().pop().top(), and so on.
        !*/

        unsigned long get_mini_batch_size(
        ) const;
        /*!
            ensures
                - During training, we call the network's update() routine over and
                  over with training data.  The number of training samples we give
                  to each call to update() is the "mini-batch size", which is
                  defined by get_mini_batch_size().
        !*/

        void set_mini_batch_size(
            unsigned long batch_size
        );
        /*!
            requires
                - batch_size > 0
            ensures
                - #get_mini_batch_size() == batch_size
        !*/

        unsigned long get_num_epochs(
        ) const;
        /*!
            ensures
                - returns the number of passes over the training data we will
                  execute when train() is called.
        !*/

        void set_num_epochs(
            unsigned long num
        );
        /*!
            requires
                - num > 0
            ensures
                - #get_num_epochs() == num
        !*/

        void be_verbose(
        );
        /*!
            ensures
                - This object will print status messages to standard out so that a
                  user can observe the progress of the algorithm.
        !*/

        void be_quiet(
        );
        /*!
            ensures
                - This object will not print anything to standard out.
        !*/

        const net_type& train(
            const std::vector<input_type>& data,
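The solver-access pattern the get_solvers() spec describes (top() is the first layer's element, pop() yields the stack with that element removed, so pop().top() is the second layer's element) can be sketched with a minimal stand-in. mini_sstack is a hypothetical name mirroring only the documented interface; it is not dlib's actual sstack implementation.

```cpp
#include <cstddef>

// Minimal sketch of the documented sstack access pattern: a non-owning view
// over N items where top() is the current first element and pop() returns a
// view that starts one element further in.  Not dlib's sstack.
template <typename T, std::size_t N>
class mini_sstack {
public:
    explicit mini_sstack(T* items, std::size_t offset = 0)
        : items_(items), offset_(offset) {}

    // The first layer's element.
    T& top() { return items_[offset_]; }

    // A view of the stack with the top element removed.
    mini_sstack pop() { return mini_sstack(items_, offset_ + 1); }

private:
    T* items_;
    std::size_t offset_;
};
```

With three per-layer elements, top() is the first layer's, pop().top() the second's, and pop().pop().top() the third's, exactly as the ensures clause above orders them.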
...
@@ -86,7 +173,25 @@ namespace dlib
        /*!
            requires
                - data.size() == labels.size()
-               - TODO: the net has a supervised loss layer.
+               - net_type uses a supervised loss.
+                 i.e. net_type::label_type != no_label_type.
            ensures
                - Trains a supervised neural network based on the given training
                  data.  The goal of training is to find the network parameters
                  that minimize
                  get_net().compute_loss(data.begin(), data.end(), labels.begin()).
                - The optimizer will run for get_num_epochs() epochs and each layer
                  in the network will be optimized by its corresponding solver in
                  get_solvers().
                - returns #get_net()
                  (i.e. the trained network can also be accessed by calling
                  get_net() after train() finishes executing)
                - Each call to train() DOES NOT reinitialize the state of get_net()
                  or get_solvers().  That is, the state of the solvers and network
                  contained inside this trainer is the starting point for the
                  optimization each time train() is called.  For example, calling
                  train() 1 time and having it execute 100 epochs of training is
                  equivalent to calling train() 10 times and having it execute 10
                  epochs of training during each call.  This also means you can
                  serialize a trainer to disk and then, at a later date, deserialize
                  it and resume training your network where you left off.
        !*/
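The "train() DOES NOT reinitialize state" contract can be demonstrated with a self-contained toy (stateful_trainer is a hypothetical name, not a dlib type): because the optimized parameter persists between calls, one call running 100 epochs ends at exactly the same value as ten calls of 10 epochs each.

```cpp
#include <cassert>

// Toy illustration of the state-persistence contract: train() resumes from
// wherever the previous call left the parameter, rather than resetting it.
class stateful_trainer {
public:
    void set_num_epochs(unsigned long n) { num_epochs_ = n; }

    // Minimizes (w - 5)^2 by gradient descent; w persists across calls.
    double train() {
        for (unsigned long e = 0; e < num_epochs_; ++e)
            w_ -= 0.1 * 2.0 * (w_ - 5.0);
        return w_;
    }

private:
    double w_ = 0.0;
    unsigned long num_epochs_ = 1;
};
```

Since both schedules perform the identical sequence of 100 updates, the results agree bit-for-bit; this is the same equivalence the spec states for 1x100 versus 10x10 epochs.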
        const net_type& train(
...
@@ -94,9 +199,25 @@ namespace dlib
        );
        /*!
            requires
-               - TODO: the net has an unsupervised loss layer.
+               - net_type uses an unsupervised loss.
+                 i.e. net_type::label_type == no_label_type.
            ensures
-               - trains an auto-encoder
+               - Trains an unsupervised neural network based on the given training
                  data.  The goal of training is to find the network parameters
                  that minimize
                  get_net().compute_loss(data.begin(), data.end()).
                - The optimizer will run for get_num_epochs() epochs and each layer
                  in the network will be optimized by its corresponding solver in
                  get_solvers().
                - returns #get_net()
                  (i.e. the trained network can also be accessed by calling
                  get_net() after train() finishes executing)
                - Each call to train() DOES NOT reinitialize the state of get_net()
                  or get_solvers().  That is, the state of the solvers and network
                  contained inside this trainer is the starting point for the
                  optimization each time train() is called.  For example, calling
                  train() 1 time and having it execute 100 epochs of training is
                  equivalent to calling train() 10 times and having it execute 10
                  epochs of training during each call.  This also means you can
                  serialize a trainer to disk and then, at a later date, deserialize
                  it and resume training your network where you left off.
        !*/
    };
...