ML / ffm-baseline · Commits · 3c51e7b1

Commit 3c51e7b1 authored 6 years ago by Your Name

dist predict test

parent fff075d7
Showing 1 changed file with 8 additions and 6 deletions

eda/esmm/Model_pipline/dist_predict.py (+8 −6)
...
@@ -55,7 +55,6 @@ def input_fn(filenames, batch_size=32, num_epochs=1, perform_shuffle=False):
         return batch_features, batch_labels
 
 def model_fn(features, labels, mode, params):
     """Bulid Model function f(x) for Estimator."""
     #------hyperparameters----
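The hunk header above shows the contract this file relies on: `input_fn(filenames, batch_size=32, num_epochs=1, perform_shuffle=False)` must hand the Estimator `(batch_features, batch_labels)` pairs, which `model_fn(features, labels, mode, params)` then consumes. A TF-free sketch of that batching contract, with the data files faked as an in-memory list of `(features, label)` pairs so the logic is runnable on its own (the real function presumably reads the data files and builds a `tf.data` pipeline):

```python
import random

def input_fn(examples, batch_size=32, num_epochs=1, perform_shuffle=False):
    """Yield (batch_features, batch_labels) tuples, mirroring the
    Estimator input_fn signature shown in the diff hunk header."""
    for _ in range(num_epochs):
        data = list(examples)
        if perform_shuffle:
            random.shuffle(data)  # shuffle once per epoch
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            batch_features = [f for f, _ in batch]
            batch_labels = [l for _, l in batch]
            yield batch_features, batch_labels

# Made-up examples: two features per row, binary label.
examples = [([0.1, 0.2], 1), ([0.3, 0.4], 0), ([0.5, 0.6], 1)]
for feats, labels in input_fn(examples, batch_size=2):
    print(len(feats), labels)  # -> 2 [1, 0] then 1 [1]
```

The last batch is allowed to be smaller than `batch_size`, which is also how the real pipeline behaves at the end of an epoch.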
...
@@ -268,14 +267,17 @@ if __name__ == "__main__":
     rdd_te_files = spark.sparkContext.parallelize(te_files)
     print("-" * 100)
-    indices = rdd_te_files.repartition(40).map(lambda x: main(x))
-    print(indices.take(1))
+    # indices = rdd_te_files.repartition(40).map(lambda x: main(x))
+    # print(indices.take(1))
     print("dist predict nearby")
-    te_result_dataframe = spark.createDataFrame(indices.flatMap(lambda x: x.split(";")).map(
-        lambda l: Row(uid=l.split(":")[0], city=l.split(":")[1], cid_id=l.split(":")[2], ctcvr=l.split(":")[3]))
-    )
-    te_result_dataframe.show()
+    test = main(te_files[0])
+    print(test[:50])
+    # te_result_dataframe = spark.createDataFrame(indices.flatMap(lambda x: x.split(";")).map(
+    #     lambda l: Row(uid=l.split(":")[0],city=l.split(":")[1],cid_id=l.split(":")[2],ctcvr=l.split(":")[3])))
+    #
+    # te_result_dataframe.show()
     # print("nearby rdd data")
     # te_result_dataframe.show()
...
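The commit swaps the distributed path (parallelize the test files, `repartition(40)`, run `main` per file on the executors) for a single local call, `test = main(te_files[0])`, presumably to debug one file at a time. The commented-out `createDataFrame` call documents the output format it expects back from each `main(x)`: one `";"`-separated string of `uid:city:cid_id:ctcvr` records. A minimal, Spark-free sketch of that parsing step (the sample record values below are made up):

```python
def parse_partition(result):
    """Split one worker's ";"-separated output string into row dicts,
    one field per ":"-separated position: uid, city, cid_id, ctcvr."""
    rows = []
    for record in result.split(";"):
        uid, city, cid_id, ctcvr = record.split(":")
        rows.append({"uid": uid, "city": city, "cid_id": cid_id, "ctcvr": ctcvr})
    return rows

# Hypothetical output of one main(x) call, two records in one string.
sample = "u1:beijing:c9:0.031;u2:shanghai:c4:0.007"
rows = parse_partition(sample)
print(rows[0]["uid"], rows[1]["ctcvr"])  # -> u1 0.007
```

In the Spark version this same split logic runs inside `flatMap`/`map`, producing `Row` objects that `spark.createDataFrame` turns into the result DataFrame; note this encoding breaks if any field ever contains `:` or `;`.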