ML / ffm-baseline · Commits

Commit b3da204d, authored Nov 05, 2019 by 高雅喆
Commit message: 删除部分轻医美标签词 ("Remove some light medical-aesthetics tag keywords")
parent b404a648
Showing 1 changed file with 21 additions and 0 deletions (+21 / -0)
eda/smart_rank/dist_update_user_history_order_tags.py
```diff
@@ -17,6 +17,7 @@ import numpy as np
 import pandas as pd
 from pyspark.sql.functions import lit
 from pyspark.sql.functions import concat_ws
+from tool import *

 def send_email(app, id, e):
```
```diff
@@ -52,6 +53,25 @@ def send_email(app,id,e):
         print('error')

+def get_hot_search_words_tag():
+    try:
+        hot_search = """
+            SELECT a.keywords,
+                   b.id,
+                   b.tag_type
+            FROM api_hot_search_words a
+            LEFT JOIN api_tag b ON a.keywords=b.name
+            WHERE a.is_delete=0
+              AND b.tag_type+0<'4'+0
+              AND b.is_online=1
+            ORDER BY a.sorted DESC LIMIT 10
+        """
+        mysql_results = get_data_by_mysql('172.16.30.141', 3306, 'work', 'BJQaT9VzDcuPBqkd', 'zhengxing', hot_search)
+        return mysql_results
+    except Exception as e:
+        print(e)
+        return []
+
 def get_user_history_order_service_tag(user_id):
     try:
         if user_id:
```
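The helper `get_data_by_mysql` comes from `tool.py`, which is not shown in this diff, so its exact behavior is an assumption. The sketch below models the same query shape against an in-memory sqlite3 database (standing in for MySQL, with invented table contents), to illustrate what the new `get_hot_search_words_tag` function returns: rows of `(keywords, id, tag_type)` for non-deleted, online hot-search words whose tag type is numerically below 4.

```python
import sqlite3

def get_data_by_sqlite(conn, sql):
    # Hypothetical stand-in for tool.get_data_by_mysql: execute a query
    # and return all rows as a list of tuples (matching the diff's [] fallback).
    cur = conn.cursor()
    cur.execute(sql)
    rows = cur.fetchall()
    cur.close()
    return rows

# In-memory database mimicking the two tables the commit queries;
# the sample rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE api_hot_search_words (keywords TEXT, is_delete INTEGER, sorted INTEGER);
    CREATE TABLE api_tag (id INTEGER, name TEXT, tag_type TEXT, is_online INTEGER);
    INSERT INTO api_hot_search_words VALUES ('双眼皮', 0, 9), ('热玛吉', 0, 8), ('停用词', 1, 7);
    INSERT INTO api_tag VALUES (1, '双眼皮', '2', 1), (2, '热玛吉', '3', 1), (3, '停用词', '1', 1);
""")

# Same shape as the commit's query. MySQL's `b.tag_type+0 < '4'+0` forces a
# numeric comparison on string columns; CAST expresses the same intent here.
hot_search = """
    SELECT a.keywords, b.id, b.tag_type
    FROM api_hot_search_words a
    LEFT JOIN api_tag b ON a.keywords = b.name
    WHERE a.is_delete = 0
      AND CAST(b.tag_type AS INTEGER) < 4
      AND b.is_online = 1
    ORDER BY a.sorted DESC LIMIT 10
"""
results = get_data_by_sqlite(conn, hot_search)
print(results)  # deleted rows (is_delete=1) are filtered out
```

Note that the `+0` trick relies on implicit string-to-number coercion; it keeps string-typed `tag_type` values comparing as numbers rather than lexicographically (so `'10' < '4'` holds numerically but would fail as a string comparison).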
```diff
@@ -139,6 +159,7 @@ if __name__ == '__main__':
         redis_client.hmset(hot_search_words_portrait_portrait_key3, hot_search_words_portrait3_dict)

     # rdd
+    spark.sparkContext.addPyFile("/srv/apps/ffm-baseline_git/eda/smart_rank/tool.py")
     sparkConf = SparkConf().set("spark.hive.mapred.supports.subdirectories", "true") \
         .set("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", "true") \
         .set("spark.tispark.plan.allow_index_double_read", "false") \
```
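The `hmset` call above writes a field-to-value mapping into a single Redis hash key. The dict contents built by `dist_update_user_history_order_tags.py` are not visible in this diff, so the payload below is hypothetical; the sketch uses a plain dict-of-dicts as a stand-in for Redis to show what the hash looks like after the write.

```python
class FakeRedisHashes:
    # Minimal stand-in for a Redis client's hash commands: a Redis hash is a
    # field -> value mapping stored under one key.
    def __init__(self):
        self.store = {}

    def hmset(self, key, mapping):
        # hmset merges the mapping into the hash at `key` (redis-py has since
        # deprecated hmset in favour of hset(key, mapping=...), same semantics).
        self.store.setdefault(key, {}).update(mapping)
        return True

    def hgetall(self, key):
        return dict(self.store.get(key, {}))

redis_client = FakeRedisHashes()
# Hypothetical portrait payload; the real field layout is not shown in the diff.
hot_search_words_portrait3_dict = {"双眼皮": "9", "热玛吉": "8"}
key = "hot_search_words_portrait_portrait_key3"  # key name taken from the diff
redis_client.hmset(key, hot_search_words_portrait3_dict)
print(redis_client.hgetall(key))
```

The `addPyFile` line added in the same hunk ships `tool.py` to every Spark executor, which is what lets the driver-side `from tool import *` (and functions like `get_data_by_mysql`) resolve inside distributed tasks as well.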