drill-user mailing list archives

From Boaz Ben-Zvi <bben-...@mapr.com>
Subject Re: CTAS memory leak
Date Thu, 30 Aug 2018 00:10:46 GMT
  Hi Scott,

1.  "swaps and then crashes" - do you mean an Out-Of-Memory error ?

2. Version 1.14 is available now, with several memory-control 
improvements (e.g., Hash Join spilling, output batch sizing).

3. Direct memory is only 10G - why not go higher? This is where most of 
Drill's in-memory data is held (not so much the stack and heap); see the 
drill-env.sh sketch after this list.

4. You may want to increase the memory available to each query on each node; 
the default (2GB) is too conservative (i.e., low).

     E.g., to go to 8GB, do

       alter session set `planner.memory.max_query_memory_per_node` = 8589934592;
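
     For point 3: direct memory is raised per drillbit via the standard 
variables in conf/drill-env.sh (the 20G below is only an illustration; pick 
a value your nodes can actually spare):

       export DRILL_HEAP="8G"
       export DRILL_MAX_DIRECT_MEMORY="20G"

     These map to the JVM's -Xms/-Xmx and -XX:MaxDirectMemorySize flags; 
the drillbits need a restart for the change to take effect.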

   Thanks,

        Boaz

On 8/29/18 4:09 PM, scott wrote:
> Hi all,
> I've got a problem using the Create Table As (CTAS) option that I was
> hoping someone could help with. I am trying to create Parquet files from
> existing JSON files using this method. It works on smaller datasets, but
> when I try this on a large dataset, Drill takes up all the memory on my
> servers until it swaps and then crashes. I'm running version 1.12 on
> CentOS 7. I've got my drillbits set to -Xmx8G, which seems to work for
> most queries, and Drill does not exceed that limit by much, but when I do
> the CTAS, memory usage just keeps growing without bounds.
> I run 4 drillbits on each server with these settings: -Xms8G -Xmx8G
> -XX:MaxDirectMemorySize=10G on a server that has 48G RAM.
> Has anyone else experienced this? Are there any workarounds you can suggest?
>
> Thanks for your time,
> Scott
>
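
(For reference, a CTAS along the lines described above would look roughly 
like this; the workspace and path are illustrative, not Scott's actual setup:

   alter session set `store.format` = 'parquet';
   create table dfs.tmp.`my_table` as
     select * from dfs.`/path/to/json_dir`;

The select can of course project and cast specific columns instead of 
selecting everything.)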

