1. First, bundle the redis package with the job itself, so the redis client does not need to be installed on every machine in the cluster.
$pip install redis
$cd /usr/local/lib/python3.6/dist-packages/  # use the site-packages path of your own environment
$zip -r redis.zip redis/*
$hdfs dfs -put redis.zip /user/data/
2. In the job code, load redis.zip with addPyFile:
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf()
sc = SparkContext(conf=conf)
sc.addPyFile("hdfs:///user/data/redis.zip")
import redis  # importable only after addPyFile has shipped redis.zip

# Write a list of (key, value) pairs to Redis
def DataToRedis(data):
    r = redis.StrictRedis(host='IP', port=6379, password='passwd')
    for i in data:
        r.set(str(i[0]), str(i[1]))

# Read data from Hive
sqlContext = HiveContext(sc)
read_hive_score = sqlContext.sql("select id, item from recom.result limit 10")
hiveRDD_score = read_hive_score.rdd
result_dataSet = hiveRDD_score.map(lambda x: (x['id'], x['item'])).collect()

# Call the write function on the driver
DataToRedis(result_dataSet)
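Note that collect() above pulls every row back to the driver and writes them over a single connection, which is fine for a 10-row result but does not scale. A common alternative is to write from the executors with foreachPartition, opening one Redis connection per partition and batching the SETs in a pipeline. A minimal sketch of that approach (host, port, and password are placeholders, as in the code above):

```python
def chunked(iterable, size):
    """Yield lists of at most `size` items from any iterable."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def write_partition(rows, batch_size=500):
    """Runs on each executor: one connection per partition, pipelined SETs."""
    import redis  # resolved from redis.zip on the executor
    r = redis.StrictRedis(host='IP', port=6379, password='passwd')
    for batch in chunked(rows, batch_size):
        pipe = r.pipeline()
        for key, value in batch:
            pipe.set(str(key), str(value))
        pipe.execute()

# Instead of collect() + DataToRedis(...):
# hiveRDD_score.map(lambda x: (x['id'], x['item'])).foreachPartition(write_partition)
```

Batching through a pipeline keeps the number of round-trips to Redis bounded, and deferring the redis import into the function means it is only resolved on the executors, where addPyFile has made redis.zip available.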
Author: levy_cui