json4s scala.MatchError (of class scala.Tuple2)
I have a custom class that I want to convert to JSON, but I am running into a strange error:
Exception in thread "main" scala.MatchError: (23,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@5f275ae4) (of class scala.Tuple2)
Here is the code:
import org.json4s.jackson.Serialization

implicit val formats = org.json4s.DefaultFormats
val A = Serialization.write(resultsMap)
println(A)
Now, if I do a foreach instead:
resultsMap.foreach(x => println(Serialization.write(x)))
I get some output, but it doesn't look right:
{"_1":23,"_2":{}}
{"_1":32,"_2":{}}
The tuples are missing the core information. I assume the custom class we are using is causing some kind of problem? Is there a way around this?
If I pull the second element out of the map and convert it to JSON on its own, it looks like this:
{"errorCode":null,"id":null,"fieldType":"STRING","fieldIndex":0,"datasetFieldName":"RECORD_ID","datasetFieldSum":0.0,"datasetFieldMin":0.0,"datasetFieldMax":0.0,"datasetFieldMean":0.0,"datasetFieldSigma":0.0,"datasetFieldNullCount":0.0,"datasetFieldObsCount":0.0,"datasetFieldKurtosis":0.0,"datasetFieldSkewness":0.0,"frequencyDistribution":"(D,4488)","runStatusId":null,"lakeHdfsPath":"/user/jvy234/20140817_011500_zoot_kohls_offer_init.dat"}
Also, as a side note, the class is written in Java, in case that might be the culprit.
Full stack trace:
Exception in thread "main" scala.MatchError: (0,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@315a29f4) (of class scala.Tuple2)
at org.json4s.Extraction$.internalDecomposeWithBuilder(Extraction.scala:132)
at org.json4s.Extraction$.decomposeWithBuilder(Extraction.scala:67)
at org.json4s.Extraction$.decompose(Extraction.scala:194)
at org.json4s.jackson.Serialization$.write(Serialization.scala:22)
at com.xxx.dts.toolset.jsonWrite$.jsonClob(jsonWrite.scala:16)
at com.xxx.dts.dq.profiling.DQProfilingEngine.profile(DQProfilingEngine.scala:255)
at com.xxx.dts.dq.profiling.Profiler$.main(DQProfilingEngine.scala:64)
at com.xxx.dts.dq.profiling.Profiler.main(DQProfilingEngine.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I think you only have two options:
write a serializer for Tuple2,
or
convert it to a list of maps, e.g.: resultsMap.map(Map(_)).foreach(...)
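The map-of-maps route can be sketched like this. The `ProfileStatus` case class is a hypothetical stand-in for the Java status class in the question, and the keys are stringified because json4s only serializes maps with String keys:

```scala
import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization

// Hypothetical stand-in for the Java DQOpsStoreProfileStatus class
case class ProfileStatus(datasetFieldName: String, fieldIndex: Int)

implicit val formats: DefaultFormats.type = DefaultFormats

val resultsMap = Map(23 -> ProfileStatus("RECORD_ID", 0))

// Turn each (key, value) pair into a one-entry Map so json4s sees a
// plain JSON object instead of a Tuple2
val jsonLines = resultsMap.toList.map { case (k, v) =>
  Serialization.write(Map(k.toString -> v))
}
jsonLines.foreach(println)
```

Each line then comes out as a nested object keyed by the original tuple's first element, rather than the empty `{"_1":23,"_2":{}}` shape.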
Update:
For the serializer, you can use something like this:
import org.json4s._

class Tuple2Serializer extends CustomSerializer[(String, Int)](format => (
  {
    // deserialize a one-field object such as {"key": 1} back to ("key", 1)
    case JObject(JField(k, JInt(v)) :: Nil) => (k, v.toInt)
  },
  {
    // serialize ("key", 1) to {"key": 1}
    case (s: String, t: Int) => JObject(JField(s, JInt(t)) :: Nil)
  }
))
implicit val formats = org.json4s.DefaultFormats + new Tuple2Serializer
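Put together, a minimal self-contained sketch (the serializer is repeated here so the snippet compiles on its own; the (String, Int) element type is just this example's choice, adapt it to your own tuple type):

```scala
import org.json4s._
import org.json4s.jackson.Serialization

class Tuple2Serializer extends CustomSerializer[(String, Int)](format => (
  { case JObject(JField(k, JInt(v)) :: Nil) => (k, v.toInt) },
  { case (s: String, t: Int) => JObject(JField(s, JInt(t)) :: Nil) }
))

implicit val formats: Formats = DefaultFormats + new Tuple2Serializer

// A tuple now round-trips through JSON instead of raising MatchError
val json = Serialization.write(("answer", 42))
println(json)

val back = Serialization.read[(String, Int)]("""{"answer":42}""")
println(back)
```

The custom serializer is consulted during `Extraction.decompose`, which is exactly the spot in the stack trace (`internalDecomposeWithBuilder`) where the MatchError was thrown before.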