钟尚武 / dlib · Commits

Commit af82bc40
authored Dec 11, 2014 by Patrick Snape

Sort out PEP8 issues in the examples

parent 32ad0ffa

Showing 7 changed files with 446 additions and 418 deletions (+446 −418)
python_examples/LICENSE_FOR_EXAMPLE_PROGRAMS.txt   +0   −2
python_examples/face_detector.py                   +16  −12
python_examples/max_cost_assignment.py             +29  −31
python_examples/sequence_segmenter.py              +104 −102
python_examples/svm_rank.py                        +21  −26
python_examples/svm_struct.py                      +228 −202
python_examples/train_object_detector.py           +48  −43
python_examples/LICENSE_FOR_EXAMPLE_PROGRAMS.txt

@@ -14,9 +14,7 @@ letter to
 San Francisco, California, 94105, USA.

 Public domain dedications are not recognized by some countries.  So
 if you live in an area where the above dedication isn't valid then
 you can consider the example programs to be licensed under the Boost
 Software License.
python_examples/face_detector.py

@@ -7,7 +7,8 @@
 # face.
 #
 # The examples/faces folder contains some jpg images of people.  You can run
-# this program on them and see the detections by executing the following command:
+# this program on them and see the detections by executing the
+# following command:
 #     ./face_detector.py ../examples/faces/*.jpg
 #
 # This face detector is made using the now classic Histogram of Oriented
@@ -20,14 +21,17 @@
 #
 #
 # COMPILING THE DLIB PYTHON INTERFACE
 # Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
 # you are using another python version or operating system then you need to
 # compile the dlib python interface before you can use this file.  To do this,
-# run compile_dlib_python_module.bat.  This should work on any operating system
-# so long as you have CMake and boost-python installed.  On Ubuntu, this can be
-# done easily by running the command:  sudo apt-get install libboost-python-dev cmake
+# run compile_dlib_python_module.bat.  This should work on any operating
+# system so long as you have CMake and boost-python installed.
+# On Ubuntu, this can be done easily by running the command:
+#     sudo apt-get install libboost-python-dev cmake

-import dlib, sys
+import sys
+
+import dlib
 from skimage import io
@@ -35,18 +39,18 @@ detector = dlib.get_frontal_face_detector()
 win = dlib.image_window()

 for f in sys.argv[1:]:
-    print("processing file: ", f)
+    print("Processing file: {}".format(f))
     img = io.imread(f)
     # The 1 in the second argument indicates that we should upsample the image
     # 1 time.  This will make everything bigger and allow us to detect more
     # faces.
     dets = detector(img, 1)
-    print("number of faces detected: ", len(dets))
-    for d in dets:
-        print("  detection position left,top,right,bottom:", d.left(), d.top(), d.right(), d.bottom())
+    print("Number of faces detected: {}".format(len(dets)))
+    for k, d in enumerate(dets):
+        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
+            k, d.left(), d.top(), d.right(), d.bottom()))
     win.clear_overlay()
     win.set_image(img)
     win.add_overlay(dets)
     raw_input("Hit enter to continue")
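As an aside, the raw_input() call at the end of this example is Python 2 only; Python 3 renamed it to input(). A minimal version-agnostic sketch of the same pause (the input_fn name is ours, for illustration, not part of the commit):

    # Pick whichever pause function this interpreter provides;
    # raw_input() was renamed to input() in Python 3.
    try:
        input_fn = raw_input  # Python 2
    except NameError:
        input_fn = input      # Python 3

    input_fn("Hit enter to continue")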
python_examples/max_cost_assignment.py

 #!/usr/bin/python
 # The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
 #
-# This simple example shows how to call dlib's optimal linear assignment problem solver.
-# It is an implementation of the famous Hungarian algorithm and is quite fast, operating in
-# O(N^3) time.
+# This simple example shows how to call dlib's optimal linear assignment
+# problem solver.
+# It is an implementation of the famous Hungarian algorithm and is quite fast,
+# operating in O(N^3) time.
 #
 # COMPILING THE DLIB PYTHON INTERFACE
 # Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
 # you are using another python version or operating system then you need to
 # compile the dlib python interface before you can use this file.  To do this,
-# run compile_dlib_python_module.bat.  This should work on any operating system
-# so long as you have CMake and boost-python installed.  On Ubuntu, this can be
-# done easily by running the command:  sudo apt-get install libboost-python-dev cmake
+# run compile_dlib_python_module.bat.  This should work on any operating
+# system so long as you have CMake and boost-python installed.
+# On Ubuntu, this can be done easily by running the command:
+#     sudo apt-get install libboost-python-dev cmake

 import dlib

-# Let's imagine you need to assign N people to N jobs.  Additionally, each person will make
-# your company a certain amount of money at each job, but each person has different skills
-# so they are better at some jobs and worse at others.  You would like to find the best way
-# to assign people to these jobs.  In particular, you would like to maximize the amount of
-# money the group makes as a whole.  This is an example of an assignment problem and is
-# what is solved by the dlib.max_cost_assignment() routine.
-# So in this example, let's imagine we have 3 people and 3 jobs.  We represent the amount of
-# money each person will produce at each job with a cost matrix.  Each row corresponds to a
-# person and each column corresponds to a job.  So for example, below we are saying that
-# person 0 will make $1 at job 0, $2 at job 1, and $6 at job 2.
+# Let's imagine you need to assign N people to N jobs. Additionally, each
+# person will make your company a certain amount of money at each job, but each
+# person has different skills so they are better at some jobs and worse at
+# others. You would like to find the best way to assign people to these jobs.
+# In particular, you would like to maximize the amount of money the group makes
+# as a whole. This is an example of an assignment problem and is what is solved
+# by the dlib.max_cost_assignment() routine.
+
+# So in this example, let's imagine we have 3 people and 3 jobs. We represent
+# the amount of money each person will produce at each job with a cost matrix.
+# Each row corresponds to a person and each column corresponds to a job. So for
+# example, below we are saying that person 0 will make $1 at job 0, $2 at job 1,
+# and $6 at job 2.
 cost = dlib.matrix([[1, 2, 6],
                     [5, 3, 6],
                     [4, 5, 0]])

-# To find out the best assignment of people to jobs we just need to call this function.
+# To find out the best assignment of people to jobs we just need to call this
+# function.
 assignment = dlib.max_cost_assignment(cost)

 # This prints optimal assignments:  [2, 0, 1]
-# which indicates that we should assign the person from the first row of the cost matrix to
-# job 2, the middle row person to job 0, and the bottom row person to job 1.
-print("optimal assignments: ", assignment)
+# which indicates that we should assign the person from the first row of the
+# cost matrix to job 2, the middle row person to job 0, and the bottom row
+# person to job 1.
+print("Optimal assignments: {}".format(assignment))

 # This prints optimal cost:  16.0
 # which is correct since our optimal assignment is 6+5+5.
-print("optimal cost: ", dlib.assignment_cost(cost, assignment))
+print("Optimal cost: {}".format(dlib.assignment_cost(cost, assignment)))
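As an aside, the [2, 0, 1] assignment and the cost of 16 printed above can be checked by brute force for a matrix this small; a minimal pure-Python sketch over the same 3x3 cost values (itertools stands in here purely for verification, dlib's O(N^3) Hungarian solver is what you would use at real sizes):

    from itertools import permutations

    cost = [[1, 2, 6],
            [5, 3, 6],
            [4, 5, 0]]

    # Try all 3! ways of assigning people (rows) to jobs (columns).
    best = max(permutations(range(3)),
               key=lambda p: sum(cost[i][p[i]] for i in range(3)))
    print(list(best))                               # [2, 0, 1]
    print(sum(cost[i][best[i]] for i in range(3)))  # 16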
python_examples/sequence_segmenter.py

 #!/usr/bin/python
 # The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
 #
-# This example shows how to use dlib to learn to do sequence segmentation.  In a sequence
-# segmentation task we are given a sequence of objects (e.g. words in a sentence) and we
-# are supposed to detect certain subsequences (e.g. the names of people).  Therefore, in
-# the code below we create some very simple training sequences and use them to learn a
-# sequence segmentation model.  In particular, our sequences will be sentences represented
-# as arrays of words and our task will be to learn to identify person names.  Once we have
-# our segmentation model we can use it to find names in new sentences, as we will show.
+# This example shows how to use dlib to learn to do sequence segmentation.  In
+# a sequence segmentation task we are given a sequence of objects (e.g. words in
+# a sentence) and we are supposed to detect certain subsequences (e.g. the names
+# of people).  Therefore, in the code below we create some very simple training
+# sequences and use them to learn a sequence segmentation model.  In particular,
+# our sequences will be sentences represented as arrays of words and our task
+# will be to learn to identify person names.  Once we have our segmentation
+# model we can use it to find names in new sentences, as we will show.
 #
 # COMPILING THE DLIB PYTHON INTERFACE
 # Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
 # you are using another python version or operating system then you need to
 # compile the dlib python interface before you can use this file.  To do this,
-# run compile_dlib_python_module.bat.  This should work on any operating system
-# so long as you have CMake and boost-python installed.  On Ubuntu, this can be
-# done easily by running the command:  sudo apt-get install libboost-python-dev cmake
+# run compile_dlib_python_module.bat.  This should work on any operating
+# system so long as you have CMake and boost-python installed.
+# On Ubuntu, this can be done easily by running the command:
+#     sudo apt-get install libboost-python-dev cmake

-import dlib
-import sys
+import sys
+import dlib

-# The sequence segmentation models we work with in this example are chain structured
-# conditional random field style models.  Therefore, central to a sequence segmentation
-# model is some method for converting the elements of a sequence into feature vectors.
-# That is, while you might start out representing your sequence as an array of strings, the
-# dlib interface works in terms of arrays of feature vectors.  Each feature vector should
-# capture important information about its corresponding element in the original raw
-# sequence.  So in this example, since we work with sequences of words and want to identify
-# names, we will create feature vectors that tell us if the word is capitalized or not.  In
-# our simple data, this will be enough to identify names.  Therefore, we define
-# sentence_to_vectors() which takes a sentence represented as a string and converts it into
-# an array of words and then associates a feature vector with each word.
+# The sequence segmentation models we work with in this example are chain
+# structured conditional random field style models.  Therefore, central to a
+# sequence segmentation model is some method for converting the elements of a
+# sequence into feature vectors. That is, while you might start out representing
+# your sequence as an array of strings, the dlib interface works in terms of
+# arrays of feature vectors.  Each feature vector should capture important
+# information about its corresponding element in the original raw sequence.  So
+# in this example, since we work with sequences of words and want to identify
+# names, we will create feature vectors that tell us if the word is capitalized
+# or not.  In our simple data, this will be enough to identify names.
+# Therefore, we define sentence_to_vectors() which takes a sentence represented
+# as a string and converts it into an array of words and then associates a
+# feature vector with each word.
 def sentence_to_vectors(sentence):
     # Create an empty array of vectors
     vects = dlib.vectors()
     for word in sentence.split():
-        # Our vectors are very simple 1-dimensional vectors.  The value of the single
-        # feature is 1 if the first letter of the word is capitalized and 0 otherwise.
-        if (word[0].isupper()):
+        # Our vectors are very simple 1-dimensional vectors.  The value of the
+        # single feature is 1 if the first letter of the word is capitalized and
+        # 0 otherwise.
+        if word[0].isupper():
             vects.append(dlib.vector([1]))
         else:
             vects.append(dlib.vector([0]))
     return vects

-# Dlib also supports the use of a sparse vector representation.  This is more efficient
-# than the above form when you have very high dimensional vectors that are mostly full of
-# zeros.  In dlib, each sparse vector is represented as an array of pair objects.  Each
-# pair contains an index and value.  Any index not listed in the vector is implicitly
-# associated with a value of zero.  Additionally, when using sparse vectors with
-# dlib.train_sequence_segmenter() you can use "unsorted" sparse vectors.  This means you
-# can add the index/value pairs into your sparse vectors in any order you want and don't
-# need to worry about them being in sorted order.
+# Dlib also supports the use of a sparse vector representation.  This is more
+# efficient than the above form when you have very high dimensional vectors that
+# are mostly full of zeros.  In dlib, each sparse vector is represented as an
+# array of pair objects.  Each pair contains an index and value.  Any index not
+# listed in the vector is implicitly associated with a value of zero.
+# Additionally, when using sparse vectors with dlib.train_sequence_segmenter()
+# you can use "unsorted" sparse vectors.  This means you can add the index/value
+# pairs into your sparse vectors in any order you want and don't need to worry
+# about them being in sorted order.
 def sentence_to_sparse_vectors(sentence):
     vects = dlib.sparse_vectors()
     has_cap = dlib.sparse_vector()
     no_cap = dlib.sparse_vector()
     # make has_cap equivalent to dlib.vector([1])
     has_cap.append(dlib.pair(0, 1))

-    # Since we didn't add anything to no_cap it is equivalent to dlib.vector([0])
+    # Since we didn't add anything to no_cap it is equivalent to
+    # dlib.vector([0])
     for word in sentence.split():
-        if (word[0].isupper()):
+        if word[0].isupper():
             vects.append(has_cap)
         else:
             vects.append(no_cap)
@@ -77,57 +83,50 @@ def print_segment(sentence, names):
     sys.stdout.write("\n")

-# Now let's make some training data.  Each example is a sentence as well as a set of ranges
-# which indicate the locations of any names.
+# Now let's make some training data.  Each example is a sentence as well as a
+# set of ranges which indicate the locations of any names.
 names = dlib.ranges()  # make an array of dlib.range objects.
 segments = dlib.rangess()  # make an array of arrays of dlib.range objects.
-sentences = ["The other day I saw a man named Jim Smith",
-             "Davis King is the main author of the dlib Library",
-             "Bob Jones is a name and so is George Clinton",
-             "My dog is named Bob Barker",
-             "ABC is an acronym but John James Smith is a name",
-             "No names in this sentence at all"]
+sentences = []

-# We want to detect person names.  So we note that the name is located within the
-# range [8, 10).  Note that we use half open ranges to identify segments.  So in
-# this case, the segment identifies the string "Jim Smith".
+sentences.append("The other day I saw a man named Jim Smith")
+# We want to detect person names.  So we note that the name is located within
+# the range [8, 10).  Note that we use half open ranges to identify segments.
+# So in this case, the segment identifies the string "Jim Smith".
 names.append(dlib.range(8, 10))
 segments.append(names)
-names.clear() # make names empty for use again below
+names.clear()  # make names empty for use again below

+sentences.append("Davis King is the main author of the dlib Library")
 names.append(dlib.range(0, 2))
 segments.append(names)
 names.clear()

+sentences.append("Bob Jones is a name and so is George Clinton")
 names.append(dlib.range(0, 2))
 names.append(dlib.range(8, 10))
 segments.append(names)
 names.clear()

+sentences.append("My dog is named Bob Barker")
 names.append(dlib.range(4, 6))
 segments.append(names)
 names.clear()

+sentences.append("ABC is an acronym but John James Smith is a name")
 names.append(dlib.range(5, 8))
 segments.append(names)
 names.clear()

+sentences.append("No names in this sentence at all")
 segments.append(names)
 names.clear()

-# Now before we can pass these training sentences to the dlib tools we need to convert them
-# into arrays of vectors as discussed above.  We can use either a sparse or dense
-# representation depending on our needs.  In this example, we show how to do it both ways.
+# Now before we can pass these training sentences to the dlib tools we need to
+# convert them into arrays of vectors as discussed above.  We can use either a
+# sparse or dense representation depending on our needs.  In this example, we
+# show how to do it both ways.
 use_sparse_vects = False
 if use_sparse_vects:
     # Make an array of arrays of dlib.sparse_vector objects.
     training_sequences = dlib.sparse_vectorss()
@@ -139,46 +138,49 @@ else:
     for s in sentences:
         training_sequences.append(sentence_to_vectors(s))

-# Now that we have a simple training set we can train a sequence segmenter.  However, the
-# sequence segmentation trainer has some optional parameters we can set.  These parameters
-# determine properties of the segmentation model we will learn.  See the dlib documentation
-# for the sequence_segmenter object for a full discussion of their meanings.
+# Now that we have a simple training set we can train a sequence segmenter.
+# However, the sequence segmentation trainer has some optional parameters we can
+# set.  These parameters determine properties of the segmentation model we will
+# learn.  See the dlib documentation for the sequence_segmenter object for a
+# full discussion of their meanings.
 params = dlib.segmenter_params()
 params.window_size = 3
 params.use_high_order_features = True
 params.use_BIO_model = True
-# This is the common SVM C parameter.  Larger values encourage the trainer to attempt to
-# fit the data exactly but might overfit.  In general, you determine this parameter by
-# cross-validation.
+# This is the common SVM C parameter.  Larger values encourage the trainer to
+# attempt to fit the data exactly but might overfit.  In general, you determine
+# this parameter by cross-validation.
 params.C = 10

-# Train a model.  The model object is responsible for predicting the locations of names in
-# new sentences.
+# Train a model.  The model object is responsible for predicting the locations
+# of names in new sentences.
 model = dlib.train_sequence_segmenter(training_sequences, segments, params)

-# Let's print out the things the model thinks are names.  The output is a set of ranges
-# which are predicted to contain names.  If you run this example program you will see that
-# it gets them all correct.
-for i in range(len(sentences)):
-    print_segment(sentences[i], model(training_sequences[i]))
+# Let's print out the things the model thinks are names.  The output is a set
+# of ranges which are predicted to contain names.  If you run this example
+# program you will see that it gets them all correct.
+for i, s in enumerate(sentences):
+    print_segment(s, model(training_sequences[i]))

-# Let's also try segmenting a new sentence.  This will print out "Bob Bucket".  Note that we
-# need to remember to use the same vector representation as we used during training.
-test_sentence = "There once was a man from Nantucket whose name rhymed with Bob Bucket"
+# Let's also try segmenting a new sentence.  This will print out "Bob Bucket".
+# Note that we need to remember to use the same vector representation as we used
+# during training.
+test_sentence = "There once was a man from Nantucket " \
+                "whose name rhymed with Bob Bucket"
 if use_sparse_vects:
     print_segment(test_sentence, model(sentence_to_sparse_vectors(test_sentence)))
 else:
     print_segment(test_sentence, model(sentence_to_vectors(test_sentence)))

-# We can also measure the accuracy of a model relative to some labeled data.  This
-# statement prints the precision, recall, and F1-score of the model relative to the data in
-# training_sequences/segments.
-print("Test on training data:", dlib.test_sequence_segmenter(model, training_sequences, segments))
+# We can also measure the accuracy of a model relative to some labeled data.
+# This statement prints the precision, recall, and F1-score of the model
+# relative to the data in training_sequences/segments.
+print("Test on training data: {}".format(
+    dlib.test_sequence_segmenter(model, training_sequences, segments)))

-# We can also do 5-fold cross-validation and print the resulting precision, recall, and F1-score.
-print("cross validation:", dlib.cross_validate_sequence_segmenter(training_sequences, segments, 5, params))
+# We can also do 5-fold cross-validation and print the resulting precision,
+# recall, and F1-score.
+print("Cross validation: {}".format(
+    dlib.cross_validate_sequence_segmenter(training_sequences, segments, 5,
+                                           params)))
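As an aside, the dlib.range(8, 10) annotations above are half-open intervals over word indices, the same convention as Python slicing; a minimal sketch against the first training sentence, assuming the whitespace tokenization (sentence.split()) the example itself uses:

    # words[8:10] selects indices 8 and 9, matching dlib.range(8, 10).
    words = "The other day I saw a man named Jim Smith".split()
    print(" ".join(words[8:10]))  # Jim Smith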
python_examples/svm_rank.py

@@ -14,23 +14,21 @@
 # come to the top of the ranked list.
 #
 # COMPILING THE DLIB PYTHON INTERFACE
 # Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
 # you are using another python version or operating system then you need to
 # compile the dlib python interface before you can use this file.  To do this,
-# run compile_dlib_python_module.bat.  This should work on any operating system
-# so long as you have CMake and boost-python installed.  On Ubuntu, this can be
-# done easily by running the command:  sudo apt-get install libboost-python-dev cmake
+# run compile_dlib_python_module.bat.  This should work on any operating
+# system so long as you have CMake and boost-python installed.
+# On Ubuntu, this can be done easily by running the command:
+#     sudo apt-get install libboost-python-dev cmake

 import dlib

-# Now let's make some testing data.  To make it really simple, let's suppose that
-# we are ranking 2D vectors and that vectors with positive values in the first
-# dimension should rank higher than other vectors.  So what we do is make
+# Now let's make some testing data.  To make it really simple, let's suppose
+# that we are ranking 2D vectors and that vectors with positive values in the
+# first dimension should rank higher than other vectors.  So what we do is make
 # examples of relevant (i.e. high ranking) and non-relevant (i.e. low ranking)
 # vectors and store them into a ranking_pair object like so:
 data = dlib.ranking_pair()
 # Here we add two examples.  In real applications, you would want lots of
 # examples of relevant and non-relevant vectors.
@@ -53,8 +51,10 @@ rank = trainer.train(data)
 # Now if you call rank on a vector it will output a ranking score.  In
 # particular, the ranking score for relevant vectors should be larger than the
 # score for non-relevant vectors.
-print("ranking score for a relevant vector: ", rank(data.relevant[0]))
-print("ranking score for a non-relevant vector: ", rank(data.nonrelevant[0]))
+print("Ranking score for a relevant vector: {}".format(
+    rank(data.relevant[0])))
+print("Ranking score for a non-relevant vector: {}".format(
+    rank(data.nonrelevant[0])))
 # The output is the following:
 # ranking score for a relevant vector: 0.5
 # ranking score for a non-relevant vector: -0.5
@@ -70,14 +70,11 @@ print(dlib.test_ranking_function(rank, data))
 # The ranking scores are computed by taking the dot product between a learned
 # weight vector and a data vector.  If you want to see the learned weight vector
 # you can display it like so:
-print("weights: \n", rank.weights)
+print("Weights: {}".format(rank.weights))
 # In this case the weights are:
 #  0.5
 # -0.5

 # In the above example, our data contains just two sets of objects.  The
 # relevant set and non-relevant set.  The trainer is attempting to find a
 # ranking function that gives every relevant vector a higher score than every
@@ -94,7 +91,6 @@ print("weights: \n", rank.weights)
 # to the trainer.  Therefore, each ranking_pair would represent the
 # relevant/non-relevant sets for a particular query.  An example is shown below
 # (for simplicity, we reuse our data from above to make 4 identical "queries").
-
 queries = dlib.ranking_pairs()
 queries.append(data)
 queries.append(data)
@@ -104,7 +100,6 @@ queries.append(data)
 # We can train just as before.
 rank = trainer.train(queries)
-
 # Now that we have multiple ranking_pair instances, we can also use
 # cross_validate_ranking_trainer().  This performs cross-validation by splitting
 # the queries up into folds.  That is, it lets the trainer train on a subset of
@@ -112,9 +107,8 @@ rank = trainer.train(queries)
 # splits and returns the overall ranking accuracy based on the held out data.
 # Just like test_ranking_function(), it reports both the ordering accuracy and
 # mean average precision.
-print("cross validation results: ", dlib.cross_validate_ranking_trainer(trainer, queries, 4))
+print("Cross validation results: {}".format(
+    dlib.cross_validate_ranking_trainer(trainer, queries, 4)))

 # Finally, note that the ranking tools also support the use of sparse vectors in
 # addition to dense vectors (which we used above).  So if we wanted to do
@@ -131,19 +125,20 @@ samp = dlib.sparse_vector()
 # increasing order and no index value shows up more than once.  If necessary,
 # you can use the dlib.make_sparse_vector() routine to make a sparse vector
 # object properly sorted and contain unique indices.
 samp.append(dlib.pair(0, 1))
 data.relevant.append(samp)

 # Now make samp represent the same vector as dlib.vector([0, 1])
 samp.clear()
 samp.append(dlib.pair(1, 1))
 data.nonrelevant.append(samp)

 trainer = dlib.svm_rank_trainer_sparse()
 rank = trainer.train(data)
-print("ranking score for a relevant vector: ", rank(data.relevant[0]))
-print("ranking score for a non-relevant vector: ", rank(data.nonrelevant[0]))
+print("Ranking score for a relevant vector: {}".format(
+    rank(data.relevant[0])))
+print("Ranking score for a non-relevant vector: {}".format(
+    rank(data.nonrelevant[0])))
 # Just as before, the output is the following:
 # ranking score for a relevant vector: 0.5
 # ranking score for a non-relevant vector: -0.5
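As an aside, the 0.5 and -0.5 scores above follow directly from the dot product the file describes: score(x) = dot(rank.weights, x). A minimal pure-Python sketch, assuming the learned weights [0.5, -0.5] printed earlier and the two sparse examples written densely as [1, 0] (relevant) and [0, 1] (non-relevant):

    weights = [0.5, -0.5]

    def score(x):
        # The ranking score is just the dot product with the learned weights.
        return sum(w * xi for w, xi in zip(weights, x))

    print(score([1, 0]))  # 0.5, the relevant example
    print(score([0, 1]))  # -0.5, the non-relevant example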
python_examples/svm_struct.py

 #!/usr/bin/python
 # The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
 #
-# This is an example illustrating the use of the structural SVM solver from the dlib C++
-# Library.  Therefore, this example teaches you the central ideas needed to setup a
-# structural SVM model for your machine learning problems.  To illustrate the process, we
-# use dlib's structural SVM solver to learn the parameters of a simple multi-class
-# classifier.  We first discuss the multi-class classifier model and then walk through
-# using the structural SVM tools to find the parameters of this classification model.
-#
-# As an aside, dlib's C++ interface to the structural SVM solver is threaded.  So on a
-# multi-core computer it is significantly faster than using the python interface.  So
-# consider using the C++ interface instead if you find that running it in python is slow.
+# This is an example illustrating the use of the structural SVM solver from
+# the dlib C++ Library.  Therefore, this example teaches you the central ideas
+# needed to setup a structural SVM model for your machine learning problems.  To
+# illustrate the process, we use dlib's structural SVM solver to learn the
+# parameters of a simple multi-class classifier.  We first discuss the
+# multi-class classifier model and then walk through using the structural SVM
+# tools to find the parameters of this classification model.  As an aside,
+# dlib's C++ interface to the structural SVM solver is threaded.  So on a
+# multi-core computer it is significantly faster than using the python
+# interface.  So consider using the C++ interface instead if you find that
+# running it in python is slow.
 #
 # COMPILING THE DLIB PYTHON INTERFACE
 # Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
 # you are using another python version or operating system then you need to
 # compile the dlib python interface before you can use this file.  To do this,
-# run compile_dlib_python_module.bat.  This should work on any operating system
-# so long as you have CMake and boost-python installed.  On Ubuntu, this can be
-# done easily by running the command:  sudo apt-get install libboost-python-dev cmake
+# run compile_dlib_python_module.bat.  This should work on any operating
+# system so long as you have CMake and boost-python installed.
+# On Ubuntu, this can be done easily by running the command:
+#     sudo apt-get install libboost-python-dev cmake

 import dlib


 def main():
-    # In this example, we have three types of samples: class 0, 1, or 2.  That is, each of
-    # our sample vectors falls into one of three classes.  To keep this example very
-    # simple, each sample vector is zero everywhere except at one place.  The non-zero
-    # dimension of each vector determines the class of the vector.  So for example, the
-    # first element of samples has a class of 1 because samples[0][1] is the only non-zero
-    # element of samples[0].
-    samples = [[0, 2, 0], [1, 0, 0], [0, 4, 0], [0, 0, 3]];
+    # In this example, we have three types of samples: class 0, 1, or 2.  That
+    # is, each of our sample vectors falls into one of three classes.  To keep
+    # this example very simple, each sample vector is zero everywhere except at
+    # one place.  The non-zero dimension of each vector determines the class of
+    # the vector.  So for example, the first element of samples has a class of 1
+    # because samples[0][1] is the only non-zero element of samples[0].
+    samples = [[0, 2, 0], [1, 0, 0], [0, 4, 0], [0, 0, 3]]

-    # Since we want to use a machine learning method to learn a 3-class classifier we need
-    # to record the labels of our samples.  Here samples[i] has a class label of labels[i].
+    # Since we want to use a machine learning method to learn a 3-class
+    # classifier we need to record the labels of our samples.  Here samples[i]
+    # has a class label of labels[i].
     labels = [1, 0, 1, 2]

-    # Now that we have some training data we can tell the structural SVM to learn the
-    # parameters of our 3-class classifier model.  The details of this will be explained
-    # later.  For now, just note that it finds the weights (i.e. a vector of real valued
-    # parameters) such that predict_label(weights, sample) always returns the correct label
-    # for a sample vector.
-    problem = three_class_classifier_problem(samples, labels)
+    # Now that we have some training data we can tell the structural SVM to
+    # learn the parameters of our 3-class classifier model.  The details of this
+    # will be explained later.  For now, just note that it finds the weights
+    # (i.e. a vector of real valued parameters) such that predict_label(weights,
+    # sample) always returns the correct label for a sample vector.
+    problem = ThreeClassClassifierProblem(samples, labels)
     weights = dlib.solve_structural_svm_problem(problem)

-    # Print the weights and then evaluate predict_label() on each of our training samples.
-    # Note that the correct label is predicted for each sample.
+    # Print the weights and then evaluate predict_label() on each of our
+    # training samples.  Note that the correct label is predicted for each
+    # sample.
     print(weights)
-    for i in range(len(samples)):
-        print("predicted label for sample[{0}]: {1}".format(i, predict_label(weights, samples[i])))
+    for k, s in enumerate(samples):
+        print("Predicted label for sample[{0}]: {1}".format(
+            k, predict_label(weights, s)))


 def predict_label(weights, sample):
-    """Given the 9-dimensional weight vector which defines a 3 class classifier, predict the
-    class of the given 3-dimensional sample vector.  Therefore, the output of this
-    function is either 0, 1, or 2 (i.e. one of the three possible labels)."""
-    # Our 3-class classifier model can be thought of as containing 3 separate linear
-    # classifiers.  So to predict the class of a sample vector we evaluate each of these
-    # three classifiers and then whatever classifier has the largest output "wins" and
-    # predicts the label of the sample.  This is the popular one-vs-all multi-class
-    # classifier model.
-    #
-    # Keeping this in mind, the code below simply pulls the three separate weight vectors
-    # out of weights and then evaluates each against sample.  The individual classifier
-    # scores are stored in scores and the highest scoring index is returned as the label.
+    """Given the 9-dimensional weight vector which defines a 3 class classifier,
+    predict the class of the given 3-dimensional sample vector.  Therefore, the
+    output of this function is either 0, 1, or 2 (i.e. one of the three possible
+    labels)."""
+    # Our 3-class classifier model can be thought of as containing 3 separate
+    # linear classifiers.  So to predict the class of a sample vector we
+    # evaluate each of these three classifiers and then whatever classifier has
+    # the largest output "wins" and predicts the label of the sample.  This is
+    # the popular one-vs-all multi-class classifier model.
+    #
+    # Keeping this in mind, the code below simply pulls the three separate
+    # weight vectors out of weights and then evaluates each against sample. The
+    # individual classifier scores are stored in scores and the highest scoring
+    # index is returned as the label.
     w0 = weights[0:3]
     w1 = weights[3:6]
     w2 = weights[6:9]
     scores = [dot(w0, sample), dot(w1, sample), dot(w2, sample)]
     max_scoring_label = scores.index(max(scores))
     return max_scoring_label


 def dot(a, b):
-    "Compute the dot product between the two vectors a and b."
+    """Compute the dot product between the two vectors a and b."""
     return sum(i*j for i, j in zip(a, b))


-###########################################################################################
+################################################################################
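As an aside, predict_label() above is a plain one-vs-all argmax that can be traced by hand. A minimal sketch with made-up weights (these values are illustrative, not ones the solver learned; each 3-element slice scores one class, so the sample's non-zero dimension decides which slice wins):

    weights = [1, 0, 0,   # scores for class 0
               0, 1, 0,   # scores for class 1
               0, 0, 1]   # scores for class 2
    sample = [0, 2, 0]    # samples[0] from main(); its true label is 1

    # dot each per-class weight slice with the sample, then take the argmax.
    scores = [sum(w * x for w, x in zip(weights[3*k:3*k + 3], sample))
              for k in range(3)]
    print(scores.index(max(scores)))  # 1, the correct label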
class
three_class_classifier_p
roblem
:
class
ThreeClassClassifierP
roblem
:
# Now we arrive at the meat of this example program. To use the
# Now we arrive at the meat of this example program. To use the
# dlib.solve_structural_svm_problem() routine you need to define an object which tells
# dlib.solve_structural_svm_problem() routine you need to define an object
# the structural SVM solver what to do for your problem. In this example, this is done
# which tells the structural SVM solver what to do for your problem. In
# by defining the three_class_classifier_problem object. Before we get into the
# this example, this is done by defining the ThreeClassClassifierProblem
# details, we first discuss some background information on structural SVMs.
# object. Before we get into the details, we first discuss some background
#
# information on structural SVMs.
# A structural SVM is a supervised machine learning method for learning to predict
# complex outputs. This is contrasted with a binary classifier which makes only simple
# yes/no predictions. A structural SVM, on the other hand, can learn to predict
# complex outputs such as entire parse trees or DNA sequence alignments. To do this,
# it learns a function F(x,y) which measures how well a particular data sample x
# matches a label y, where a label is potentially a complex thing like a parse tree.
# However, to keep this example program simple we use only a 3 category label output.
#
#
# At test time, the best label for a new x is given by the y which maximizes F(x,y).
# A structural SVM is a supervised machine learning method for learning to
# To put this into the context of the current example, F(x,y) computes the score for a
# predict complex outputs. This is contrasted with a binary classifier
# given sample and class label. The predicted class label is therefore whatever value
# which makes only simple yes/no predictions. A structural SVM, on the
# of y which makes F(x,y) the biggest. This is exactly what predict_label() does.
# other hand, can learn to predict complex outputs such as entire parse
# That is, it computes F(x,0), F(x,1), and F(x,2) and then reports which label has the
# trees or DNA sequence alignments. To do this, it learns a function F(x,y)
# which measures how well a particular data sample x matches a label y,
# where a label is potentially a complex thing like a parse tree. However,
# to keep this example program simple we use only a 3 category label output.
#
# At test time, the best label for a new x is given by the y which
# maximizes F(x,y). To put this into the context of the current example,
# F(x,y) computes the score for a given sample and class label. The
# predicted class label is therefore whatever value of y which makes F(x,y)
# the biggest. This is exactly what predict_label() does. That is, it
# computes F(x,0), F(x,1), and F(x,2) and then reports which label has the
# biggest value.
# biggest value.
#
#
# At a high level, a structural SVM can be thought of as searching the
parameter space
# At a high level, a structural SVM can be thought of as searching the
#
of F(x,y) for the set of parameters that make the following inequality true as often
#
parameter space of F(x,y) for the set of parameters that make the
# as possible:
#
following inequality true as often
as possible:
# F(x_i,y_i) > max{over all incorrect labels of x_i} F(x_i, y_incorrect)
# F(x_i,y_i) > max{over all incorrect labels of x_i} F(x_i, y_incorrect)
# That is, it seeks to find the parameter vector such that F(x,y) always gives the
# That is, it seeks to find the parameter vector such that F(x,y) always
# highest score to the correct output. To define the structural SVM optimization
# gives the highest score to the correct output. To define the structural
# problem precisely, we first introduce some notation:
# SVM optimization problem precisely, we first introduce some notation:
# - let PSI(x,y) == the joint feature vector for input x and a label y.
# - let PSI(x,y) == the joint feature vector for input x and a label y
# - let F(x,y|w) == dot(w,PSI(x,y)).
# - let F(x,y|w) == dot(w,PSI(x,y)).
# (we use the | notation to emphasize that F() has the parameter vector of
# (we use the | notation to emphasize that F() has the parameter vector
# weights called w)
# of weights called w)
# - let LOSS(idx,y) == the loss incurred for predicting that the idx-th training
# - let LOSS(idx,y) == the loss incurred for predicting that the
# sample has a label of y. Note that LOSS() should always be >= 0 and should
# idx-th training sample has a label of y. Note that LOSS()
# become exactly 0 when y is the correct label for the idx-th sample. Moreover,
# should always be >= 0 and should become exactly 0 when y is the
# it should notionally indicate how bad it is to predict y for the idx'th sample.
# correct label for the idx-th sample. Moreover, it should notionally
# - let x_i == the i-th training sample.
# indicate how bad it is to predict y for the idx'th sample.
# - let y_i == the correct label for the i-th training sample.
# - let x_i == the i-th training sample.
# - The number of data samples is N.
# - let y_i == the correct label for the i-th training sample.
# - The number of data samples is N.
#
#
# Then the optimization problem solved by a structural SVM using
# Then the optimization problem solved by a structural SVM using
# dlib.solve_structural_svm_problem() is the following:
# dlib.solve_structural_svm_problem() is the following:
# Minimize: h(w) == 0.5*dot(w,w) + C*R(w)
# Minimize: h(w) == 0.5*dot(w,w) + C*R(w)
#
#
# Where R(w) == sum from i=1 to N: 1/N * sample_risk(i,w)
# Where R(w) == sum from i=1 to N: 1/N * sample_risk(i,w)
and
#
and sample_risk(i,w) == max over all Y: LOSS(i,Y) + F(x_i,Y|w) - F(x_i,y_i|w)
#
sample_risk(i,w) == max over all
# and C > 0
#
Y: LOSS(i,Y) + F(x_i,Y|w) - F(x_i,y_i|w)
and C > 0
#
#
# You can think of the sample_risk(i,w) as measuring the degree of error you would make
# You can think of the sample_risk(i,w) as measuring the degree of error
# when predicting the label of the i-th sample using parameters w. That is, it is zero
# you would make when predicting the label of the i-th sample using
# only when the correct label would be predicted and grows larger the more "wrong" the
# parameters w. That is, it is zero only when the correct label would be
# predicted output becomes. Therefore, the objective function is minimizing a balance
# predicted and grows larger the more "wrong" the predicted output becomes.
# between making the weights small (typically this reduces overfitting) and fitting the
# Therefore, the objective function is minimizing a balance between making
# training data. The degree to which you try to fit the data is controlled by the C
# the weights small (typically this reduces overfitting) and fitting the
# parameter.
# training data. The degree to which you try to fit the data is controlled
# by the C parameter.
#
#
# For a more detailed introduction to structured support vector machines
you should
# For a more detailed introduction to structured support vector machines
#
consult the following paper:
#
you should consult the following paper:
# Predicting Structured Objects with Support Vector Machines by
# Predicting Structured Objects with Support Vector Machines by
# Thorsten Joachims, Thomas Hofmann, Yisong Yue, and Chun-nam Yu
# Thorsten Joachims, Thomas Hofmann, Yisong Yue, and Chun-nam Yu
#
#
# Finally, we come back to the code. To use dlib.solve_structural_svm_problem() you
# Finally, we come back to the code. To use
# need to provide the things discussed above. This is the value of C, the number of
# dlib.solve_structural_svm_problem() you need to provide the things
# training samples, the dimensionality of PSI(), as well as methods for calculating the
# discussed above. This is the value of C, the number of training samples,
# loss values and PSI() vectors. You will also need to write code that can compute:
# the dimensionality of PSI(), as well as methods for calculating the loss
# values and PSI() vectors. You will also need to write code that can
# compute:
# max over all Y: LOSS(i,Y) + F(x_i,Y|w). To summarize, the
# max over all Y: LOSS(i,Y) + F(x_i,Y|w). To summarize, the
# three_class_classifier_problem class is required to have the following fields:
# ThreeClassClassifierProblem class is required to have the following
# fields:
# - C
# - C
# - num_samples
# - num_samples
# - num_dimensions
# - num_dimensions
...
@@ -155,152 +171,162 @@ class three_class_classifier_problem:
...
@@ -155,152 +171,162 @@ class three_class_classifier_problem:
C
=
1
C
=
1
# There are also a number of optional arguments:
# There are also a number of optional arguments:
# epsilon is the stopping tolerance. The optimizer will run until R(w) is within
# epsilon is the stopping tolerance. The optimizer will run until R(w) is
# epsilon of its optimal value. If you don't set this then it defaults to 0.001.
# within epsilon of its optimal value. If you don't set this then it
#epsilon = 1e-13
# defaults to 0.001.
# epsilon = 1e-13
# Uncomment this and the optimizer will print its progress to standard out. You will
# be able to see things like the current risk gap. The optimizer continues until the
# Uncomment this and the optimizer will print its progress to standard
# out. You will be able to see things like the current risk gap. The
# optimizer continues until the
# risk gap is below epsilon.
# risk gap is below epsilon.
#be_verbose = True
#
be_verbose = True
# If you want to require that the learned weights are all non-negative
then set this
# If you want to require that the learned weights are all non-negative
# field to True.
#
then set this
field to True.
#learns_nonnegative_weights = True
#
learns_nonnegative_weights = True
# The optimizer uses an internal cache to avoid unnecessary calls to your
# The optimizer uses an internal cache to avoid unnecessary calls to your
# separation_oracle() routine. This parameter controls the size of that
cache. Bigger
# separation_oracle() routine. This parameter controls the size of that
#
values use more RAM and might make the optimizer run faster. You can also disable it
#
cache. Bigger values use more RAM and might make the optimizer run
#
by setting it to 0 which is good to do when your separation_oracle is very fast. If
#
faster. You can also disable it by setting it to 0 which is good to do
#
If you don't call this function it defaults to a value of 5.
#
when your separation_oracle is very fast. If If you don't call this
#
max_cache_size = 20
#
function it defaults to a value of 5.
# max_cache_size = 20
def
__init__
(
self
,
samples
,
labels
):
def
__init__
(
self
,
samples
,
labels
):
# dlib.solve_structural_svm_problem() expects the class to have num_samples and
# dlib.solve_structural_svm_problem() expects the class to have
# num_dimensions fields. These fields should contain the number of training
# num_samples and num_dimensions fields. These fields should contain
# samples and the dimensionality of the PSI feature vector respectively.
# the number of training samples and the dimensionality of the PSI
# feature vector respectively.
self
.
num_samples
=
len
(
samples
)
self
.
num_samples
=
len
(
samples
)
self
.
num_dimensions
=
len
(
samples
[
0
])
*
3
self
.
num_dimensions
=
len
(
samples
[
0
])
*
3
self
.
samples
=
samples
self
.
samples
=
samples
self
.
labels
=
labels
self
.
labels
=
labels
    def make_psi(self, x, label):
        """Compute PSI(x,label)."""
        # All we are doing here is taking x, which is a 3 dimensional sample
        # vector in this example program, and putting it into one of 3 places
        # in a 9 dimensional PSI vector, which we then return. So this
        # function returns PSI(x,label). To see why we set up PSI like this,
        # recall how predict_label() works. It takes in a 9 dimensional
        # weight vector and breaks the vector into 3 pieces. Each piece then
        # defines a different classifier and we use them in a one-vs-all
        # manner to predict the label. So now that we are in the structural
        # SVM code we have to define the PSI vector to correspond to this
        # usage. That is, we need to set up PSI so that
        # argmax_y dot(weights,PSI(x,y)) == predict_label(weights,x). This is
        # how we tell the structural SVM solver what kind of problem we are
        # trying to solve.
        #
        # It's worth emphasizing that the single biggest step in using a
        # structural SVM is deciding how you want to represent PSI(x,label).
        # It is always a vector, but deciding what to put into it to solve
        # your problem is often not a trivial task. Part of the difficulty is
        # that you need an efficient method for finding the label that makes
        # dot(w,PSI(x,label)) the biggest. Sometimes this is easy, but often
        # finding the max scoring label turns into a difficult combinatorial
        # optimization problem. So you need to pick a PSI that doesn't make
        # the label maximization step intractable but also still models your
        # problem well.
        #
        # Create a dense vector object (note that you can also use unsorted
        # sparse vectors (i.e. dlib.sparse_vector objects) to represent your
        # PSI vector. This is useful if you have very high dimensional PSI
        # vectors that are mostly zeros. In the context of this example, you
        # would simply return a dlib.sparse_vector at the end of make_psi()
        # and the rest of the example would still work properly.)
        psi = dlib.vector()
        # Set it to have 9 dimensions. Note that the elements of the vector
        # are 0 initialized.
        psi.resize(self.num_dimensions)
        dims = len(x)
        if label == 0:
            for i in range(0, dims):
                psi[i] = x[i]
        elif label == 1:
            for i in range(dims, 2*dims):
                psi[i] = x[i-dims]
        else:  # the label must be 2
            for i in range(2*dims, 3*dims):
                psi[i] = x[i-2*dims]
        return psi
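    # An illustrative aside (not part of the original file): you can sanity
    # check this layout with plain Python lists and made-up numbers:
    #
    #   w = [0.5, -1, 2,   1, 1, 0,   -2, 0, 1]    # 9-D weight vector
    #   x = [1, 0, 3]                              # 3-D sample
    #   scores = [sum(w[3*y + i]*x[i] for i in range(3)) for y in range(3)]
    #
    # scores comes out as [6.5, 1.0, 1.0], and each entry equals
    # dot(w, PSI(x, y)) because PSI zeroes out all but the y-th third of the
    # 9-D space. The argmax (label 0) is exactly the one-vs-all prediction
    # that predict_label(w, x) would make.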
    # Now we get to the two member functions that are directly called by
    # dlib.solve_structural_svm_problem().
    #
    # In get_truth_joint_feature_vector(), all you have to do is return the
    # PSI() vector for the idx-th training sample when it has its true label.
    # So here it returns PSI(self.samples[idx], self.labels[idx]).
    def get_truth_joint_feature_vector(self, idx):
        return self.make_psi(self.samples[idx], self.labels[idx])
    # separation_oracle() is more interesting.
    # dlib.solve_structural_svm_problem() will call separation_oracle() many
    # times during the optimization. Each time it will give it the current
    # value of the parameter weights and the separation_oracle() is supposed
    # to find the label that most violates the structural SVM objective
    # function for the idx-th sample. Then the separation oracle reports the
    # corresponding PSI vector and loss value. To state this more precisely,
    # the separation_oracle() member function has the following contract:
    #   requires
    #       - 0 <= idx < self.num_samples
    #       - len(current_solution) == self.num_dimensions
    #   ensures
    #       - runs the separation oracle on the idx-th sample. We define
    #         this as follows:
    #           - let X == the idx-th training sample.
    #           - let PSI(X,y) == the joint feature vector for input X and
    #             an arbitrary label y.
    #           - let F(X,y) == dot(current_solution,PSI(X,y)).
    #           - let LOSS(idx,y) == the loss incurred for predicting that
    #             the idx-th sample has a label of y. Note that LOSS()
    #             should always be >= 0 and should become exactly 0 when y
    #             is the correct label for the idx-th sample.
    #
    #         Then the separation oracle finds a Y such that:
    #             Y = argmax over all y: LOSS(idx,y) + F(X,y)
    #             (i.e. it finds the label which maximizes the above
    #             expression.)
    #
    #         Finally, separation_oracle() returns LOSS(idx,Y),PSI(X,Y)
    def separation_oracle(self, idx, current_solution):
        samp = self.samples[idx]
        dims = len(samp)
        scores = [0, 0, 0]
        # compute scores for each of the three classifiers
        scores[0] = dot(current_solution[0:dims], samp)
        scores[1] = dot(current_solution[dims:2*dims], samp)
        scores[2] = dot(current_solution[2*dims:3*dims], samp)

        # Add in the loss-augmentation. Recall that we maximize
        # LOSS(idx,y) + F(X,y) in the separation oracle, not just F(X,y) as
        # we normally would in predict_label(). Therefore, we must add in
        # this extra amount to account for the loss-augmentation. For our
        # simple multi-class classifier, we incur a loss of 1 if we don't
        # predict the correct label and a loss of 0 if we get the right
        # label.
        if self.labels[idx] != 0:
            scores[0] += 1
        if self.labels[idx] != 1:
            scores[1] += 1
        if self.labels[idx] != 2:
            scores[2] += 1

        # Now figure out which classifier has the largest loss-augmented
        # score.
        max_scoring_label = scores.index(max(scores))
        # And finally record the loss that was associated with that
        # predicted label. Again, the loss is 1 if the label is incorrect
        # and 0 otherwise.
        if max_scoring_label == self.labels[idx]:
            loss = 0
        else:
            loss = 1

        # Finally, return the loss and PSI vector corresponding to the label
        # we just found.
        psi = self.make_psi(samp, max_scoring_label)
        return loss, psi
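    # An illustrative aside (not part of the original file), with made-up
    # numbers: suppose the current weights give plain scores
    # F = [2.0, 1.9, 0.5] and the true label is 0. Adding the 0/1 loss to
    # every wrong label gives [2.0, 2.9, 1.5], so label 1 now wins even
    # though the plain scores favored the correct label. The oracle then
    # returns loss 1 together with PSI(X, 1), which is exactly the "most
    # violating" label/feature pair the solver needs in order to tighten its
    # approximation of the objective.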

if __name__ == "__main__":
    main()
python_examples/train_object_detector.py
View file @
af82bc40
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
# This example program shows how you can use dlib to make an object
# detector for things like faces, pedestrians, and any other semi-rigid
# object. In particular, we go through the steps to train the kind of
# sliding window object detector first published by Dalal and Triggs in 2005
# in the paper Histograms of Oriented Gradients for Human Detection.
#
#
# COMPILING THE DLIB PYTHON INTERFACE
# Dlib comes with a compiled python interface for python 2.7 on MS Windows.
# If you are using another python version or operating system then you need
# to compile the dlib python interface before you can use this file. To do
# this, run compile_dlib_python_module.bat. This should work on any
# operating system so long as you have CMake and boost-python installed.
# On Ubuntu, this can be done easily by running the command:
#     sudo apt-get install libboost-python-dev cmake
import os
import sys
import glob

import dlib
from skimage import io
# In this example we are going to train a face detector based on the small
# faces dataset in the examples/faces directory. This means you need to
# supply the path to this faces folder as a command line argument so we will
# know where it is.
if len(sys.argv) != 2:
    print(
        "Give the path to the examples/faces directory as the argument to "
        "this program. For example, if you are in the python_examples "
        "folder then execute this program by running:\n"
        "    ./train_object_detector.py ../examples/faces")
    exit()
faces_folder = sys.argv[1]
# Now let's do the training. The train_simple_object_detector() function has
# a bunch of options, all of which come with reasonable default values. The
# next few lines go over some of these options.
...
@@ -46,10 +50,10 @@ options.add_left_right_image_flips = True
# empirically by checking how well the trained detector works on a test set
# of images you haven't trained on. Don't just leave the value set at 5. Try
# a few different C values and see what works best for your data.
options.C = 5
# Tell the code how many CPU cores your computer has for the fastest training.
options.num_threads = 4
options.be_verbose = True
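# An illustrative aside (not part of the diff): the creation of the options
# object is elided above. Based on the hunk header and dlib's documented
# training-options API, it is created roughly like this:
#
#   options = dlib.simple_object_detector_training_options()
#   options.add_left_right_image_flips = True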
# This function does the actual training. It will save the final detector to
# detector.svm. The input is an XML file that lists the images in the training
...
@@ -59,20 +63,22 @@ options.be_verbose = True
# images with boxes. To see how to use it read the tools/imglab/README.txt
# file. But for this example, we just use the training.xml file included with
# dlib.
training_xml_path = os.path.join(faces_folder, "training.xml")
testing_xml_path = os.path.join(faces_folder, "testing.xml")
dlib.train_simple_object_detector(training_xml_path, "detector.svm", options)
# Now that we have a face detector we can test it. The first statement tests
# it on the training data. It will print the precision, recall, and then the
# average precision.
print("")  # Print blank line to create gap from previous output
print("Training accuracy: {}".format(
    dlib.test_simple_object_detector(training_xml_path, "detector.svm")))
# However, to get an idea if it really worked without overfitting we need to
# run it on images it wasn't trained on. The next line does this. Happily, we
# see that the object detector works perfectly on the testing images.
print("Testing accuracy: {}".format(
    dlib.test_simple_object_detector(testing_xml_path, "detector.svm")))
# Now let's use the detector as you would in a normal application. First we
# will load it from disk.
...
@@ -84,39 +90,37 @@ win_det.set_image(detector)
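# An illustrative aside (not part of the diff): the detector loading and
# HOG-filter display code is elided above. With dlib's documented API it
# looks roughly like this:
#
#   detector = dlib.simple_object_detector("detector.svm")
#   win_det = dlib.image_window()
#   win_det.set_image(detector)   # visualizes the learned HOG filter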
# Now let's run the detector over the images in the faces folder and display
# the results.
print("Showing detections on the images in the faces folder...")
win = dlib.image_window()
for f in glob.glob(faces_folder + "/*.jpg"):
    print("Processing file: {}".format(f))
    img = io.imread(f)
    dets = detector(img)
    print("Number of faces detected: {}".format(len(dets)))
    for k, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            k, d.left(), d.top(), d.right(), d.bottom()))
    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    raw_input("Hit enter to continue")
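# An aside (not part of the diff): raw_input() is the Python 2 name; under
# Python 3 the equivalent call is input().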
# Finally, note that you don't have to use the XML based input to
# train_simple_object_detector(). If you have already loaded your training
# images and bounding boxes for the objects then you can call it as shown
# below.

# You just need to put your images into a list.
images = [io.imread(faces_folder + '/2008_002506.jpg'),
          io.imread(faces_folder + '/2009_004587.jpg')]
# Then for each image you make a list of rectangles which give the pixel
# locations of the edges of the boxes.
boxes_img1 = ([dlib.rectangle(left=329, top=78, right=437, bottom=186),
               dlib.rectangle(left=224, top=95, right=314, bottom=185),
               dlib.rectangle(left=125, top=65, right=214, bottom=155)])
boxes_img2 = ([dlib.rectangle(left=154, top=46, right=228, bottom=121),
               dlib.rectangle(left=266, top=280, right=328, bottom=342)])
# And then you aggregate those lists of boxes into one big list and then call
# train_simple_object_detector().
boxes = [boxes_img1, boxes_img2]
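# An illustrative aside (not part of the diff): the actual call is elided
# ("...") below, but with dlib's documented overload that takes images and
# boxes directly it looks like this (the name detector2 is assumed):
#
#   detector2 = dlib.train_simple_object_detector(images, boxes, options)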
...
@@ -132,4 +136,5 @@ raw_input("Hit enter to continue")
# test_simple_object_detector(). If you have already loaded your training
# images and bounding boxes for the objects then you can call it as shown
# below.
print("Training accuracy: {}".format(
    dlib.test_simple_object_detector(images, boxes, "detector.svm")))