spark-user mailing list archives

From "鹰" <980548...@qq.com>
Subject Re: Nullpointer when saving as table with a timestamp column type
Date Fri, 17 Jul 2015 09:31:56 GMT
Your schema reads df: [name:String, Place:String, time: time:timestamp]
Why is it not df: [name:String, Place:String, time:timestamp]?

------------------ Original Message ------------------
From: "Brandon White" <bwwinthehouse@gmail.com>
Date: Friday, July 17, 2015, 2:18 PM
To: "user" <user@spark.apache.org>
Subject: Nullpointer when saving as table with a timestamp column type

So I have a very simple dataframe that looks like

df: [name:String, Place:String, time: time:timestamp]


I build this java.sql.Timestamp from a string and it works really well, except when I call
saveAsTable("tableName") on this df. Without the timestamp column it saves fine, but with the
timestamp it throws:


java.lang.NullPointerException
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1230)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1219)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1218)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1218)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:719)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:719)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:719)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1419)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1380)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Any ideas how I can get around this?
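One thing worth checking (a sketch only, and not confirmed as the cause in this thread): if the string-to-Timestamp conversion ever yields a null, that null can surface later as a NullPointerException when Spark serializes the column during saveAsTable. A defensive parse such as the hypothetical helper below (parseTimestamp is not from the original mail) makes bad input visible up front:

```scala
import java.sql.Timestamp

// Hypothetical helper: parse a string into a java.sql.Timestamp, returning
// None for null or unparseable input instead of letting a null slip into the
// DataFrame. Timestamp.valueOf expects the JDBC escape format
// "yyyy-mm-dd hh:mm:ss[.f...]" and throws IllegalArgumentException otherwise.
def parseTimestamp(s: String): Option[Timestamp] =
  Option(s).flatMap { str =>
    try Some(Timestamp.valueOf(str))
    catch { case _: IllegalArgumentException => None }
  }
```

Rows where the helper returns None can then be filtered out or logged before calling saveAsTable, rather than failing deep inside the DAGScheduler.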