drill-user mailing list archives

From Paul Rogers <par0...@gmail.com>
Subject Re: What is the most memory-efficient technique for selecting several million records from a CSV file
Date Fri, 23 Oct 2020 07:08:02 GMT
Hi Gareth,

As it turns out, SELECT * by itself should use a fixed amount of memory
regardless of table size. (With two caveats.) Drill, as with most query
engines, reads data in batches, then returns each batch to the client. So,
if you run SELECT * FROM yourfile.csv, the execution engine will use only
enough memory for one batch of data (likely tens of megabytes in size).

The first caveat is if you do a "buffering" operation, such as a sort:
SELECT * FROM yourfile.csv ORDER BY someCol needs to materialize all of the
data. But Drill spills to disk to relieve memory pressure.

The other caveat is if you use the REST API to fetch data. Drill's REST API
is not scalable: it buffers all of the data in memory in an extremely
inefficient manner. If you use the JDBC, ODBC, or native APIs, you won't have
this problem. (There is a pending fix we could apply in a future release.)
Are you using the REST API?
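
If JDBC is an option for you, here is a minimal sketch of what I mean. It
assumes the Apache Drill JDBC driver is on the classpath, a drillbit is
reachable on localhost:31010, and the dfs path below is just a placeholder
for your Azure Storage workspace and file name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamCsvExample {
  public static void main(String[] args) throws Exception {
    // Connect directly to a drillbit; adjust the host/port, or use
    // "jdbc:drill:zk=<zk-hosts>" to go through ZooKeeper instead.
    try (Connection conn =
             DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement();
         // Placeholder path: substitute your storage plugin/workspace and file.
         ResultSet rs = stmt.executeQuery(
             "SELECT * FROM dfs.`/path/to/yourfile.csv`")) {
      long rows = 0;
      while (rs.next()) {
        // Process each row as it arrives; don't collect all rows into a list,
        // or the client ends up buffering the whole result set anyway.
        rows++;
      }
      System.out.println("Read " + rows + " rows");
    }
  }
}

The same pattern applies with ODBC or the native client: the key is to
consume rows as they stream in, batch by batch.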

Note that the above is just as true of Parquet as it is of CSV. However, as
Nitin notes, Parquet is more efficient to read.
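
If you do go the Parquet route, the conversion can be done in Drill itself
with a CTAS run over the same JDBC connection as the sketch above. A rough
outline, where the column names, types, and the dfs.tmp target table are
placeholders (it assumes the target workspace is writable):

try (Statement stmt = conn.createStatement()) {
  // CTAS writes in the format given by store.format (parquet is the default).
  stmt.execute("ALTER SESSION SET `store.format` = 'parquet'");
  // Cast each column to its proper type while converting, as Nitin suggests.
  // Headerless CSV exposes fields through the columns[] array.
  stmt.execute(
      "CREATE TABLE dfs.tmp.`yourfile_parquet` AS " +
      "SELECT CAST(columns[0] AS INT)          AS id, " +
      "       CAST(columns[1] AS VARCHAR(100)) AS name, " +
      "       CAST(columns[2] AS DOUBLE)       AS amount " +
      "FROM dfs.`/path/to/yourfile.csv`");
}

Subsequent queries then read dfs.tmp.`yourfile_parquet` instead of scanning
the CSV.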

Thanks,

- Paul


On Thu, Oct 22, 2020 at 11:30 PM Nitin Pawar <nitinpawar432@gmail.com>
wrote:

> Please convert the CSV to Parquet first, and while doing so make sure you
> cast each column to the correct datatype.
>
> Once you have it in Parquet, your queries should be a bit faster.
>
> On Fri, Oct 23, 2020, 11:57 AM Gareth Western <gareth@garethwestern.com>
> wrote:
>
> > I have a very large CSV file (nearly 13 million records) stored in Azure
> > Storage and read via the Azure Storage plugin. The drillbit configuration
> > has a modest 4GB heap size. Is there an effective way to select all the
> > records from the file without running out of resources in Drill?
> >
> > SELECT * … is too big
> >
> > SELECT * with OFFSET and LIMIT sounds like the right approach, but OFFSET
> > still requires scanning through the offset records, and this seems to hit
> > the same memory issues even with small LIMITs once the offset is large
> > enough.
> >
> > Would it help to switch the format to something other than CSV? Or move it
> > to a different storage mechanism? Or something else?
> >
>
