spark-user mailing list archives

From "Sun, Rui" <>
Subject RE: unserialize error in sparkR
Date Mon, 27 Jul 2015 09:07:48 GMT

Do you mean you are running the script with the SparkR-pkg and Spark 1.2? I am afraid that there is currently no development effort or support for the SparkR-pkg, since it has been integrated into Spark as of Spark 1.4.

Unfortunately, the RDD API and the RDD-like API of DataFrame in SparkR are not exposed in Spark 1.4, for some considerations. Although not exposed, some RDD-like DataFrame APIs are actually implemented, which you can find in the SparkR source code, including lapply/lapplyPartition/flatMap/foreach/foreachPartition. Though not recommended, if you really want to use them, you can use SparkR::: to access them as a temporary workaround.
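For example, a minimal sketch of that workaround, assuming Spark 1.4's SparkR with a local Spark context (function names are the unexported internals mentioned above; exact signatures may differ between versions):

```r
# Sketch only: relies on unexported SparkR internals in Spark 1.4,
# accessed via the ::: operator as a temporary workaround.
library(SparkR)

sc <- sparkR.init(master = "local")       # start a local SparkContext
sqlContext <- sparkRSQL.init(sc)

df <- createDataFrame(sqlContext, faithful)

# lapply on a DataFrame is implemented but not exported,
# so it must be reached with SparkR:::
rdd <- SparkR:::lapply(df, function(row) { row$eruptions * 2 })

SparkR:::collect(rdd)                      # collect the RDD result locally

sparkR.stop()
```

Because these functions are private, they may change or disappear in later releases without deprecation warnings, which is why this is only a stopgap until the API discussion below is settled.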

There is ongoing investigation and discussion on whether to expose a subset of the RDD API; you can follow that discussion if you are interested.

-----Original Message-----
From: Jennifer15 [] 
Sent: Monday, July 27, 2015 1:47 PM
Subject: unserialize error in sparkR

I have a newbie question: I get the following error when increasing the number of samples in my sample script samplescript.R <>, which is written for Spark 1.2 (no error for a small number of samples):

Error in unserialize(obj) : 
ReadItem: unknown type 0, perhaps written by later version of R
Calls: assetForecast ... convertJListToRList -> lapply -> lapply -> FUN   ->
Execution halted

I tried using Spark 1.4, though I could not find lapply or any similar function for DataFrames. I am not sure whether this error is caused by using Spark 1.2; if it is, what is the equivalent of lapply/map for working on DataFrames?
