spark-dev mailing list archives

From Erik LaBianca <erik.labia...@gmail.com>
Subject ability to provide custom serializers
Date Sat, 03 Dec 2016 00:03:48 GMT
Hi All,

Apologies in advance for any confusing terminology, I’m still pretty new to Spark.

I’ve got a bunch of Scala case class “domain objects” from an existing application.
Many of them contain simple but unsupported-by-Spark types, such as case class Foo(timestamp:
java.time.Instant). I’d like to be able to use these case classes directly in a Dataset,
but can’t, since there’s no Encoder available for java.time.Instant. I’d like to resolve
that.
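
To make the pain concrete, here is a minimal sketch of the workaround I’m stuck with today: mapping each unsupported field to a Spark-supported type by hand at the boundary. The names Foo and FooRow are illustrative, not from any real codebase.

    import java.sql.Timestamp
    import java.time.Instant
    import org.apache.spark.sql.SparkSession

    // Domain object from the existing application (illustrative).
    case class Foo(id: Long, timestamp: Instant)

    // Workaround today: a parallel case class using a type Spark already encodes.
    case class FooRow(id: Long, timestamp: Timestamp)

    object InstantWorkaround {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("instant-workaround").getOrCreate()
        import spark.implicits._

        val domain = Seq(Foo(1L, Instant.now()))

        // Seq(Foo(...)).toDS() does not compile: no Encoder[Foo] can be derived,
        // because java.time.Instant is not handled. Convert by hand instead:
        val ds = domain.map(f => FooRow(f.id, Timestamp.from(f.timestamp))).toDS()
        ds.show()

        spark.stop()
      }
    }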

I asked around on the Gitter channel and was pointed to the ScalaReflection class, which
handles creating Encoder[T] for a variety of things, including case classes and their members.
Barring a better solution, what I’d like is to be able to add some additional case statements
to the serializerFor and deserializerFor methods, dispatching to something along the lines
of Slick’s MappedColumnType [1]. In an ideal scenario, I could provide these mappings via
implicit search, but I’d be happy to settle for a registry of some sort too.
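
For illustration, something like the sketch below is what I have in mind. Nothing here exists in Spark today; MappedEncoderType and the implicit lookup are purely hypothetical, just an analogue of Slick’s MappedColumnType to show the shape of the user-facing API.

    import java.sql.Timestamp
    import java.time.Instant

    // Hypothetical: a user-supplied mapping from an unsupported type A to a
    // type B that Spark already knows how to encode.
    trait MappedEncoderType[A, B] {
      def toCatalyst(a: A): B
      def fromCatalyst(b: B): A
    }

    object DomainMappings {
      // One implicit per unsupported type; serializerFor / deserializerFor
      // (or an explicit registry) would pick this up when deriving Encoder[Foo].
      implicit val instantMapping: MappedEncoderType[Instant, Timestamp] =
        new MappedEncoderType[Instant, Timestamp] {
          def toCatalyst(a: Instant): Timestamp = Timestamp.from(a)
          def fromCatalyst(b: Timestamp): Instant = b.toInstant
        }
    }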

Does this idea make sense, in general? I’m interested in taking a stab at the implementation,
but Jakob recommended I surface it here first to see if there were any plans around this sort
of functionality already.

Thanks!

—erik

1. http://slick.lightbend.com/doc/3.0.0/userdefined.html#using-custom-scalar-types-in-queries

