I need to calculate the average rating of films in PySpark
I have a movie ratings dataset (data/ratings) and I need to compute the average rating per movie, much like AVG(rating) with GROUP BY movieId in SQL.
I have tried aggregateByKey, but I don't understand how to write the seqOp and combOp functions. I'm new to PySpark.
Here is a sample of my RDD, with fields [movieId, userId, rating, film]:
[('1', '1', 4.0, 'Toy Story (1995)'),
('1', '5', 4.0, 'Toy Story (1995)'),
('1', '7', 4.5, 'Toy Story (1995)'),
('1', '15', 2.5, 'Toy Story (1995)'),
('1', '17', 4.5, 'Toy Story (1995)'),
('1', '18', 3.5, 'Toy Story (1995)'),
('1', '19', 4.0, 'Toy Story (1995)'),
('1', '21', 3.5, 'Toy Story (1995)'),
('1', '27', 3.0, 'Toy Story (1995)'),
('1', '31', 5.0, 'Toy Story (1995)'),
('1', '32', 3.0, 'Toy Story (1995)'),
('1', '33', 3.0, 'Toy Story (1995)'),
('1', '40', 5.0, 'Toy Story (1995)'),
('1', '43', 5.0, 'Toy Story (1995)'),
('1', '44', 3.0, 'Toy Story (1995)'),
('1', '45', 4.0, 'Toy Story (1995)'),
('1', '46', 5.0, 'Toy Story (1995)'),
('1', '50', 3.0, 'Toy Story (1995)'),
('1', '54', 3.0, 'Toy Story (1995)'),
('1', '57', 5.0, 'Toy Story (1995)')]
I need the average rating for each movie, for example:
[('1', average_ratings_of_film_1, film_name_1),
('2', average_ratings_of_film_2, film_name_2)]
Thanks in advance for any help.
You can convert the list to a DataFrame and then use groupby().avg():
data = spark.sparkContext.parallelize(
[('1', '1', 4.0, 'Toy Story (1995)'),
('1', '5', 4.0, 'Toy Story (1995)'),
('1', '7', 4.5, 'Toy Story (1995)'),
('1', '15', 2.5, 'Toy Story (1995)'),
('1', '17', 4.5, 'Toy Story (1995)'),
('1', '18', 3.5, 'Toy Story (1995)'),
('1', '19', 4.0, 'Toy Story (1995)'),
('1', '21', 3.5, 'Toy Story (1995)'),
('1', '27', 3.0, 'Toy Story (1995)'),
('1', '31', 5.0, 'Toy Story (1995)'),
('1', '32', 3.0, 'Toy Story (1995)'),
('1', '33', 3.0, 'Toy Story (1995)'),
('1', '40', 5.0, 'Toy Story (1995)'),
('1', '43', 5.0, 'Toy Story (1995)'),
('1', '44', 3.0, 'Toy Story (1995)'),
('1', '45', 4.0, 'Toy Story (1995)'),
('1', '46', 5.0, 'Toy Story (1995)'),
('1', '50', 3.0, 'Toy Story (1995)'),
('1', '54', 3.0, 'Toy Story (1995)'),
('1', '57', 5.0, 'Toy Story (1995)')])
df = data.toDF(schema=["movie_id", "user_id", "rating", "movie"])
# Average the ratings per movie title
group = df.groupby("movie").avg("rating")
group.show()
#+----------------+-----------+
#| movie|avg(rating)|
#+----------------+-----------+
#|Toy Story (1995)| 3.875|
#+----------------+-----------+