ML / ffm-baseline / Commits / 67921e0f

Commit 67921e0f, authored Oct 08, 2019 by 张彦钊

change

parent 7cf96872

Showing 1 changed file with 8 additions and 8 deletions

local/meigou.py (+8, -8)
@@ -127,7 +127,7 @@ if __name__ == '__main__':
     spark = SparkSession.builder.config(conf=sparkConf).enableHiveSupport().getOrCreate()
     clicks = []
     cpcs = []
-    for i in range(1, 3):
+    for i in range(1, 26):
         clicks.append(all_click(i))
         cpcs.append(cpc_click(i))
     print("clicks")
@@ -135,13 +135,13 @@ if __name__ == '__main__':
     print("cpcs")
     print(cpcs)
-    rdd = spark.sparkContext.parallelize(cpcs)
-    df = spark.createDataFrame(rdd).toDF.toPandas()
-    df.to_csv('/home/gmuser/cpc.csv', index=False)
+    # rdd = spark.sparkContext.parallelize(cpcs)
+    # df = spark.createDataFrame(rdd).toDF.toPandas()
+    # df.to_csv('/home/gmuser/cpc.csv',index=False)
     #
-    rdd = spark.sparkContext.parallelize(clicks)
-    df = spark.createDataFrame(rdd).toDF.toPandas()
-    df.to_csv('/home/gmuser/clicks.csv', index=False)
+    # rdd = spark.sparkContext.parallelize(clicks)
+    # df = spark.createDataFrame(rdd).toDF.toPandas()
+    # df.to_csv('/home/gmuser/clicks.csv', index=False)
     spark.stop()
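The substantive change in this commit is the loop bound: `range(1, 3)` becomes `range(1, 26)`, so `all_click` and `cpc_click` are invoked for 25 values of `i` instead of 2. A minimal sketch of that loop, with hypothetical stand-ins for `all_click` and `cpc_click` (the real helpers in `meigou.py` presumably collect click data through the Spark session):

```python
# Sketch of the loop this commit widens. all_click / cpc_click are
# hypothetical placeholders for the real Spark-backed helpers.
def all_click(i):
    return ("all_click", i)   # placeholder result for index i

def cpc_click(i):
    return ("cpc_click", i)   # placeholder result for index i

clicks = []
cpcs = []
# Before the commit: range(1, 3)  -> i = 1, 2   (2 iterations)
# After the commit:  range(1, 26) -> i = 1 .. 25 (25 iterations)
for i in range(1, 26):
    clicks.append(all_click(i))
    cpcs.append(cpc_click(i))

print(len(clicks), len(cpcs))  # 25 25
```

Incidentally, the save path that this commit comments out, `spark.createDataFrame(rdd).toDF.toPandas()`, would fail even when uncommented: `toDF` is a method of `pyspark.sql.DataFrame`, so referencing it without parentheses yields a bound method, and accessing `.toPandas` on that raises `AttributeError`. It would need to be called, e.g. `spark.createDataFrame(rdd).toDF().toPandas()`.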