spark-user mailing list archives

From Michael Armbrust <>
Subject Re: advantages of SparkSQL?
Date Mon, 24 Nov 2014 21:19:32 GMT
Akshat is correct about the benefits of Parquet as a columnar format, but
I'll add that some of this is lost if you just use a lambda function to
process the data.  Since your lambda function is a black box, Spark SQL does
not know which columns it is going to use and thus will do a full
table scan.  I'd suggest writing a very simple SQL query that pulls out just
the columns you need and does any filtering before dropping back into
standard Spark operations.  The result of a SQL query is an RDD of Rows, so
you can do any normal Spark processing you want on them.
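A minimal PySpark sketch of that approach (Spark 1.x `SQLContext` API as used in this thread); it assumes a running SparkContext `sc`, and the table name, column names (`field`, `year`), and `applyfunc` are all hypothetical placeholders:

```python
# Sketch only: assumes a running SparkContext `sc`; the path and the
# columns `field`/`year`, plus `applyfunc`, are hypothetical.
from pyspark.sql import SQLContext

sqc = SQLContext(sc)
data = sqc.parquetFile(path)
data.registerTempTable("data")

# Select just the needed column and filter up front, so Spark SQL can
# prune columns and push the predicate into the Parquet reader.
needed = sqc.sql("SELECT field FROM data WHERE year = 2014")

# The result is an RDD of Rows; drop back into plain Spark from here.
results = needed.map(lambda row: applyfunc(row.field))
```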

Either way, though, it will often be faster than a text file due to better encoding.
On Mon, Nov 24, 2014 at 8:54 AM, Akshat Aranya <> wrote:

> Parquet is a column-oriented format, which means that you need to read in
> less data from the file system if you're only interested in a subset of
> your columns.  Also, Parquet pushes down selection predicates, which can
> eliminate needless deserialization of rows that don't match a selection
> criterion.  Other than that, you would also get compression, and likely
> save processor cycles when parsing lines from text files.
> On Mon, Nov 24, 2014 at 8:20 AM, mrm <> wrote:
>> Hi,
>> Is there any advantage to storing data as a parquet format, loading it
>> using
>> the sparkSQL context, but never registering as a table/using sql on it?
>> Something like:
>> data = sqc.parquetFile(path)
>> results = data.map(lambda x: applyfunc(x.field))
>> Is this faster/more optimised than having the data stored as a text file
>> and
>> using Spark (non-SQL) to process it?
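To make Akshat's columnar point concrete, here is a toy pure-Python illustration (not Spark or Parquet code): with a column-oriented layout, reading one field touches only that column's values, and a filter can be evaluated against a single column before any full rows are materialized.

```python
# Toy illustration of columnar storage and predicate pushdown.
# All names and data here are made up for demonstration.

# Row-oriented: to get all ages we still deserialize every field
# of every row.
rows = [
    {"name": "a", "age": 30, "city": "NYC"},
    {"name": "b", "age": 25, "city": "SF"},
    {"name": "c", "age": 41, "city": "LA"},
]
ages_row = [r["age"] for r in rows]  # touches all 9 stored values

# Column-oriented: the "age" column is contiguous; read it alone.
columns = {
    "name": ["a", "b", "c"],
    "age":  [30, 25, 41],
    "city": ["NYC", "SF", "LA"],
}
ages_col = columns["age"]  # touches only 3 stored values

# Predicate-pushdown analogue: evaluate the filter on one column,
# then materialize full rows only for the matching indices.
matching = [i for i, age in enumerate(columns["age"]) if age > 28]
selected = [{k: columns[k][i] for k in columns} for i in matching]
```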
