Commit 99e948db authored Oct 21, 2015 by Davis King

Added tensor spec.
parent 37d493e2
Showing 3 changed files with 10 additions and 9 deletions:

    dlib/dnn/gpu_data_abstract.h    +9  -9
    dlib/dnn/tensor.h               +1  -0
    dlib/dnn/tensor_abstract.h      +0  -0
dlib/dnn/gpu_data_abstract.h

@@ -16,23 +16,23 @@ namespace dlib
     /*!
         WHAT THIS OBJECT REPRESENTS
             This object is a block of size() floats, all stored contiguously in memory.
-            In particular, it keeps two copies of the floats, one on the host CPU side
+            Importantly, it keeps two copies of the floats, one on the host CPU side
             and another on the GPU device side.  It automatically performs the necessary
             host/device transfers to keep these two copies of the data in sync.

-            All transfers to the device happen asynchronously so that CUDA kernel
-            computations can overlap with data transfers.  However, any transfers from
-            the device to the host happen synchronously in the default CUDA stream.
-            Therefore, you should perform all your CUDA kernel launches on the default
-            stream so that transfers back to the host do not happen before the
-            computations have completed.
+            All transfers to the device happen asynchronously with respect to the
+            default CUDA stream so that CUDA kernel computations can overlap with data
+            transfers.  However, any transfers from the device to the host happen
+            synchronously in the default CUDA stream.  Therefore, you should perform
+            all your CUDA kernel launches on the default stream so that transfers back
+            to the host do not happen before the relevant computations have completed.

             If DLIB_USE_CUDA is not #defined then this object will not use CUDA at all.
             Instead, it will simply store one host side memory block of floats.

         THREAD SAFETY
-            This object is not thread-safe.  Don't touch it from multiple threads at
-            the same time.
+            Instances of this object are not thread-safe.  So don't touch one from
+            multiple threads at the same time.
     !*/

     public:
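
To make the stream semantics described above concrete, here is a minimal usage sketch. It is not part of the commit: the include path dlib/dnn/gpu_data.h is only inferred from the abstract header's name, and scale_kernel is a hypothetical CUDA kernel added purely for illustration.

    // Sketch only: the include path and scale_kernel are assumptions, not part of this commit.
    #include <dlib/dnn/gpu_data.h>

    #ifdef DLIB_USE_CUDA
    // Hypothetical kernel: multiplies every element by alpha.
    __global__ void scale_kernel(float* data, size_t n, float alpha)
    {
        for (size_t i = blockIdx.x*blockDim.x + threadIdx.x; i < n; i += (size_t)gridDim.x*blockDim.x)
            data[i] *= alpha;
    }
    #endif

    int main()
    {
        dlib::gpu_data buf;
        buf.set_size(1024);

        // Writing through host() makes the host copy the up-to-date one.
        float* h = buf.host();
        for (size_t i = 0; i < buf.size(); ++i)
            h[i] = (float)i;

    #ifdef DLIB_USE_CUDA
        // device() starts an asynchronous host->device copy with respect to the default
        // stream, so a kernel launched on that same stream runs after the copy finishes.
        scale_kernel<<<128,128>>>(buf.device(), buf.size(), 2.0f);
    #endif

        // Reading host() again performs the device->host transfer synchronously in the
        // default stream, so the kernel above has completed before this value is used.
        float first = buf.host()[0];
        (void)first;
        return 0;
    }

Without DLIB_USE_CUDA the same program still runs; gpu_data then simply holds the single host-side block of floats described above.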
dlib/dnn/tensor.h

@@ -367,6 +367,7 @@ namespace dlib
         const tensor& b
     )
     {
+        // TODO, do on GPU?
         DLIB_CASSERT(a.size() == b.size(), "");
         const float* da = a.host();
         const float* db = b.host();
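
The hunk above sits inside a host-side routine over two tensors, which is what the new TODO about moving it to the GPU refers to. Below is a hedged sketch of that pattern, not the commit's code: the free function host_dot and its dot-product body are assumptions for illustration; only tensor::size(), tensor::host(), and DLIB_CASSERT are taken from dlib's interface.

    // Sketch only: host_dot is a hypothetical name, not a dlib function.
    #include <dlib/dnn/tensor.h>
    #include <dlib/assert.h>

    inline float host_dot(const dlib::tensor& a, const dlib::tensor& b)
    {
        DLIB_CASSERT(a.size() == b.size(), "");
        // host() synchronizes any pending device->host transfer, so the loop sees
        // up-to-date values even if the tensors were last written on the GPU.
        const float* da = a.host();
        const float* db = b.host();
        float sum = 0;
        for (size_t i = 0; i < a.size(); ++i)
            sum += da[i]*db[i];
        return sum;
    }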
dlib/dnn/tensor_abstract.h  (new file, 0 → 100644)

This diff is collapsed.