ML / ffm-baseline / Commits / 0a4607e1

Commit 0a4607e1 authored Jun 11, 2019 by Your Name

    test

Parent: 57b4a3cc

Showing 1 changed file with 13 additions and 13 deletions:
eda/esmm/Model_pipline/dist_predict.py (+13, -13)
@@ -162,14 +162,14 @@ def main(te_file):
     preds = Estimator.predict(input_fn=lambda: input_fn(te_file, num_epochs=1, batch_size=10000), predict_keys=["pctcvr", "pctr", "pcvr"])
-    with open("/home/gmuser/esmm/nearby/pred.txt", "w") as fo:
-        for prob in preds:
-            fo.write("%f\t%f\t%f\n" % (prob['pctr'], prob['pcvr'], prob['pctcvr']))
-    # indices = []
-    # for prob in preds:
-    #     indices.append([prob['pctr'], prob['pcvr'], prob['pctcvr']])
-    # return indices
+    # with open("/home/gmuser/esmm/nearby/pred.txt", "w") as fo:
+    #     for prob in preds:
+    #         fo.write("%f\t%f\t%f\n" % (prob['pctr'], prob['pcvr'], prob['pctcvr']))
+    indices = []
+    for prob in preds:
+        indices.append([prob['pctr'], prob['pcvr'], prob['pctcvr']])
+    return indices
 
 def test_map(x):
     return x * x

@@ -198,12 +198,12 @@ if __name__ == "__main__":
     tf.logging.set_verbosity(tf.logging.INFO)
-    te_files = [path + "nearby/part-r-00000"]
-    main(te_files)
-    # te_files = [[path+"nearby/part-r-00000"],[path+"native/part-r-00000"]]
-    # rdd_te_files = spark.sparkContext.parallelize(te_files)
-    # indices = rdd_te_files.repartition(2).map(lambda x: main(x))
-    # print(indices.collect())
+    # te_files = [path + "nearby/part-r-00000"]
+    # main(te_files)
+    te_files = [[path+"nearby/part-r-00000"],[path+"native/part-r-00000"]]
+    rdd_te_files = spark.sparkContext.parallelize(te_files)
+    indices = rdd_te_files.repartition(2).map(lambda x: main(x))
+    print(indices.collect())
     b = time.time()
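The commit moves dist_predict.py from writing pred.txt in a single process to having each Spark task run main() over its own list of part files and return [pctr, pcvr, pctcvr] triples, which the driver then collects. A minimal local sketch of that fan-out/collect pattern, assuming a hypothetical stub_predict in place of Estimator.predict and a thread pool in place of rdd_te_files.repartition(2).map(...):

```python
from concurrent.futures import ThreadPoolExecutor

def stub_predict(te_file):
    # Hypothetical stand-in for Estimator.predict: yields one dict of
    # probabilities per example, like the ESMM head does.
    for i in range(3):
        yield {"pctr": 0.1 * i, "pcvr": 0.2 * i, "pctcvr": 0.02 * i}

def main(te_file):
    # Mirrors the new main(): collect [pctr, pcvr, pctcvr] triples and
    # return them instead of writing pred.txt on the worker.
    indices = []
    for prob in stub_predict(te_file):
        indices.append([prob['pctr'], prob['pcvr'], prob['pctcvr']])
    return indices

# Each element is one task's file list, as in
# te_files = [[path+"nearby/part-r-00000"],[path+"native/part-r-00000"]].
te_files = [["nearby/part-r-00000"], ["native/part-r-00000"]]

# A two-worker pool plays the role of repartition(2).map(main); collect()
# becomes gathering the mapped results on the driver.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(main, te_files))
print(results)
```

Returning results rather than writing to a worker-local path matters here: with the old version, each executor would write its own /home/gmuser/esmm/nearby/pred.txt on whatever machine it ran on, so the driver could never see a single combined output.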