PySpark SQL with HAVING COUNT

I am trying to select all warehouse codes from the tables Warehouses and Boxes such that each warehouse's capacity is less than the number of boxes it holds.

The following SQL query works in PostgreSQL:

select w.code
from Warehouses w
join Boxes b
on w.code = b.warehouse
group by w.code
having count(b.code) > w.capacity

But the same query does not work in PySpark:

spark.sql("""
select w.code
from Warehouses w
join Boxes b
on w.code = b.warehouse
group by w.code
having count(b.code) > w.capacity

""").show()

How can I fix the code?

Setup

import numpy as np
import pandas as pd

# pyspark
import pyspark
from pyspark.sql import functions as F
from pyspark.sql.types import *
from pyspark import SparkConf, SparkContext, SQLContext

spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc)
sc.setLogLevel("INFO")


# warehouse
dfw = pd.DataFrame({'code': [1, 2, 3, 4, 5],
                    'location': ['Chicago', 'Chicago', 'New York', 'Los Angeles', 'San Francisco'],
                    'capacity': [3, 4, 7, 2, 8]})

schema = StructType([
    StructField('code', IntegerType(), True),
    StructField('location', StringType(), True),
    StructField('capacity', IntegerType(), True),
])

sdfw = sqlContext.createDataFrame(dfw, schema)
sdfw.createOrReplaceTempView("Warehouses")


# box
dfb = pd.DataFrame({'code': ['0MN7', '4H8P', '4RT3', '7G3H', '8JN6', '8Y6U', '9J6F', 'LL08', 'P0H6', 'P2T6', 'TU55'],
                    'contents': ['Rocks', 'Rocks', 'Scissors', 'Rocks', 'Papers', 'Papers', 'Papers', 'Rocks', 'Scissors', 'Scissors', 'Papers'],
                    'value': [180.0, 250.0, 190.0, 200.0, 75.0, 50.0, 175.0, 140.0, 125.0, 150.0, 90.0],
                    'warehouse': [3, 1, 4, 1, 1, 3, 2, 4, 1, 2, 5]})

schema = StructType([
    StructField('code', StringType(), True),
    StructField('contents', StringType(), True),
    StructField('value', FloatType(), True),
    StructField('warehouse', IntegerType(), True),
])

sdfb = sqlContext.createDataFrame(dfb, schema)
sdfb.createOrReplaceTempView("Boxes")

spark.sql("""
select w.code
from Warehouses w
join Boxes b
on w.code = b.warehouse
group by w.code
having count(b.code) > w.capacity

""").show()

Try this. The error occurs because Spark cannot resolve w.capacity: it appears in the HAVING clause without being in the GROUP BY or inside an aggregate function. (PostgreSQL accepts the original query, presumably because w.code is the table's primary key, which makes w.capacity functionally dependent on the grouping column; Spark SQL does not perform that analysis.) Wrapping it in first should take care of it:

spark.sql("""
select w.code
from Warehouses w
join Boxes b
on w.code = b.warehouse
group by w.code
having count(b.code) > first(w.capacity)

""").show()

How can I fix the code?

The problem may not be in your code at all.

Check which version of the Java JDK you are using. As far as I know, spark.sql().show() is not compatible with Java JDK 11. If that is the version you are running, simply downgrade to version 8 (and configure the JDK 8 environment variables accordingly).
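If you want to confirm which Java version your Spark session is actually running on, one quick check is to ask the JVM through Spark's py4j gateway (note that _jvm is an internal attribute, so this is a diagnostic trick rather than a public API):

print(spark.sparkContext._jvm.java.lang.System.getProperty("java.version"))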