Need to find the latest records for a composite key in Spark SQL
I need to find the latest records for each full_national_number based on the date. Can someone suggest a solution?
My data is:
+--------------------+-----------------------+----------+
|full_national_number|derived_sequence_number|        ts|
+--------------------+-----------------------+----------+
|           A00000001|                   0000|1111-11-11|
|           A00000001|                   0001|1111-11-11|
|           A00000001|                   0002|1111-11-11|
|           A00000002|                   0000|1111-11-11|
|           A00000002|                   0001|1111-11-11|
|           A00000002|                   0002|1111-11-11|
|           A00000003|                   0000|1111-11-11|
|           A00000003|                   0001|1111-11-11|
|           A00000004|                   0000|1111-11-11|
|          A000000010|                   0000|1111-11-11|
|          A000000011|                   0000|1111-11-11|
|           A00000008|                   0000|2018-11-16|
|           A00000008|                   0001|2018-11-16|
|           A00000008|                   0002|2018-11-16|
|           A00000002|                   0000|2018-11-16|
|           A00000003|                   0000|2018-11-16|
|           A00000004|                   0000|2018-11-16|
|           A00000005|                   0000|2018-11-16|
+--------------------+-----------------------+----------+
My expected output should be:
+--------------------+-----------------------+----------+
|full_national_number|derived_sequence_number|        ts|
+--------------------+-----------------------+----------+
|           A00000001|                   0000|1111-11-11|
|           A00000001|                   0001|1111-11-11|
|           A00000001|                   0002|1111-11-11|
|           A00000002|                   0000|2018-11-16|
|           A00000003|                   0000|2018-11-16|
|           A00000004|                   0000|2018-11-16|
|           A00000005|                   0000|2018-11-16|
|           A00000008|                   0000|2018-11-16|
|           A00000008|                   0001|2018-11-16|
|           A00000008|                   0002|2018-11-16|
|          A000000010|                   0000|1111-11-11|
|          A000000011|                   0000|1111-11-11|
+--------------------+-----------------------+----------+
I tried the following option but got an error.
sqlContext.sql("select full_national_number, derived_sequence_number,
max(ts) from (select *,to_date('1111-11-11') as ts from t1 union all
select *,current_date from t2) unioned group by
full_national_number").show()
The error I get is:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/cloudera/parcels/CDH-5.14.4-1.cdh5.14.4.p0.3/lib/spark/python/pyspark/sql/context.py", line 580, in sql
    return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
  File "/opt/cloudera/parcels/CDH-5.14.4-1.cdh5.14.4.p0.3/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/opt/cloudera/parcels/CDH-5.14.4-1.cdh5.14.4.p0.3/lib/spark/python/pyspark/sql/utils.py", line 51, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"expression 'derived_sequence_number' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;"
Please suggest a solution.
I think this will get you the desired result. The AnalysisException is raised because derived_sequence_number appears in your SELECT list but not in the GROUP BY clause. Instead of aggregating, you can rank the rows within each full_national_number by ts and keep everything tied at rank 1, which also preserves all derived_sequence_number values for the latest date. Just paste this SQL query:
SELECT full_national_number, derived_sequence_number, ts
FROM
(
    SELECT full_national_number, derived_sequence_number, ts,
           RANK() OVER (PARTITION BY full_national_number ORDER BY ts DESC) AS rnk
    FROM table
) a
WHERE a.rnk = 1;
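To run it end to end, you can feed the union from your own attempt into the ranked query. A sketch, assuming the same t1 and t2 tables from the question and that your sqlContext supports window functions (on Spark 1.x this means a Hive-enabled context):

sqlContext.sql("""
    SELECT full_national_number, derived_sequence_number, ts
    FROM (
        SELECT *,
               RANK() OVER (PARTITION BY full_national_number ORDER BY ts DESC) AS rnk
        FROM (
            -- the union from your attempt, tagging each source with a date
            SELECT *, to_date('1111-11-11') AS ts FROM t1
            UNION ALL
            SELECT *, current_date AS ts FROM t2
        ) unioned
    ) ranked
    WHERE rnk = 1
""").show()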
Let me know if this helps.
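For completeness, here is roughly the same logic with the DataFrame API; a minimal sketch assuming the unioned rows are already in a DataFrame named df with the three columns shown above:

from pyspark.sql import Window
from pyspark.sql import functions as F

# Rank rows within each full_national_number, newest ts first.
# rank() assigns 1 to every row tied at the latest ts, so all
# derived_sequence_number values for that date are retained.
w = Window.partitionBy("full_national_number").orderBy(F.col("ts").desc())

latest = (df
          .withColumn("rnk", F.rank().over(w))
          .where(F.col("rnk") == 1)
          .drop("rnk"))

latest.show()

Note that rank() (rather than row_number()) is what keeps every derived_sequence_number sharing the latest ts for a key, which matches your expected output.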