Pyspark: Convert pyspark.sql.row into Dataframe

I have the following Row in pyspark, and I basically want to merge it into a pandas DataFrame.

Row(Banked_Date_Calc__c=0   NaN
Name: Banked_Date_Calc__c, dtype: float64, CloseDate=0    2018-06-13T00:00:00.000Z
Name: CloseDate, dtype: object, CourseGEV__c=0    2990
Name: CourseGEV__c, dtype: int64, Id=0    0060h0000169NWLAA2
Name: Id, dtype: object, OwnerId=0    0050L000008Z30mQAC
Name: OwnerId, dtype: object, timestamp=0   2018-06-13 17:02:30.017566
Name: timestamp, dtype: datetime64[ns])

Now, when I pass the Row above to pd.DataFrame(msg), I get an error saying the DataFrame constructor was not properly called:
msg = Row(.....)  # the Row shown above
pd.DataFrame(msg)

You can't pass a pyspark Row directly to the pandas DataFrame constructor. You can, however, go through an intermediate dictionary:

from pyspark.sql import Row
import pandas as pd

row_d = Row(...).asDict()              # Row.asDict() yields a plain {field: value} dict
pd_df = pd.DataFrame.from_dict(row_d)  # works here because each value is a pandas Series
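
For completeness, here is a minimal self-contained sketch of the round trip; the field names and values are illustrative, not the asker's actual schema. One caveat: when the Row holds plain scalars rather than Series (as in the question), pd.DataFrame.from_dict raises "If using all scalar values, you must pass an index", so wrapping the dict in a list is the simplest route:

from pyspark.sql import Row
import pandas as pd

# Illustrative Row with scalar fields (building a Row needs no SparkSession)
msg = Row(Id="0060h0000169NWLAA2", CourseGEV__c=2990)

row_d = msg.asDict()           # {'Id': '0060h0000169NWLAA2', 'CourseGEV__c': 2990}
pd_df = pd.DataFrame([row_d])  # one-row DataFrame; a bare dict of scalars would need an index
print(pd_df)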