drill-user mailing list archives

From Ashish Pancholi <apanch...@chambal.com>
Subject Re: How to resolve error- "One or more nodes ran out of memory while executing the query"?
Date Fri, 26 Oct 2018 11:30:22 GMT
Hi Gautam,

> Can you please share the query profile?


Yes, but I don't know how to share it, so I am attaching the whole
JSON profile. I hope that works for you.

> did you start with hash_join disabled or did it for troubleshooting?

Yes, I also tried disabling hash join, but the issue remains the same.

On Fri, Oct 26, 2018 at 3:17 AM Gautam Parai <gparai@mapr.com> wrote:

> Hi Ashish,
>
>  I am curious - did you start with
> hash_join disabled or did it for troubleshooting?
>
> Gautam
>
> On Thu, Oct 25, 2018 at 12:01 AM Ashish Pancholi <apancholi@chambal.com>
> wrote:
>
> > ---------- Forwarded message ---------
> > From: Ashish Pancholi <apancholi@chambal.com>
> > Date: Thu, Oct 25, 2018 at 12:21 PM
> > Subject: How to resolve error- "One or more nodes ran out of memory while
> > executing the query"?
> > To: <user-subscribe@drill.apache.org>, <user@drill.apache.org>
> >
> >
> >
> > I am using Apache Drill version `1.14` on a Windows system and running
> > Drill using the command:
> >
> >     sqlline.bat -u "jdbc:drill:zk=local"
> >
> >
> > I am trying to execute a join query on two compressed and archived
> > CSV files.
> >
> > Query:
> >
> >     SELECT * FROM
> >     dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA-20180924181406.tar.gz` AS Table0
> >     INNER JOIN
> >     dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA1-20180924181406.tar.gz` AS Table1
> >     ON Table0.columns[0] = Table1.columns[0]
> >
> >
> > But an out-of-memory error occurred:
> >
> >     org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR:
> >     One or more nodes ran out of memory while executing the query. Unable
> >     to allocate buffer of size 131072 (rounded from 86104) due to memory
> >     limit (630194176). Current allocation: 630108434 Fragment 0:0
> >     [Error Id: 585c0644-5fd5-446e-b9b3-d48e0771eb2a on DESKTOP-SM3E3KM:31010]
> >
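> > (For reference: the limit of 630,194,176 bytes is exactly 601 MiB, and
> > the current allocation of 630,108,434 bytes plus the requested
> > 131,072-byte buffer comes to 630,239,506 bytes, just over that limit,
> > which is why the allocation fails.)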
> >
> > To resolve the issue, I tried to update the `config\drill-env.sh` file,
> > but the issue remains the same. It also looks like the changes to the
> > script are not being picked up: even though I set the direct memory
> > beyond the system memory (RAM), Drill starts up peacefully every time
> > and never complains that the memory has been exceeded, so the changes
> > do not seem to take effect.
> >
> >     export DRILLBIT_MAX_PROC_MEM=12G
> >     export DRILL_HEAP=2G
> >     export DRILL_MAX_DIRECT_MEMORY=10G
> >
> > whereas my system's main memory is only 8 GB.
> >
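> > To check whether these limits are actually being picked up, a minimal
> > check like the following might help; this is only a sketch, assuming the
> > `sys.memory` system table (which reports each drillbit's heap and direct
> > memory limits) is available in this Drill version:
> >
> >     -- shows the heap / direct memory limits each drillbit is running with;
> >     -- if the direct memory maximum still reflects the old value, the
> >     -- edits to drill-env.sh are not being read at startup
> >     SELECT * FROM sys.memory;
> >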
> > *Please help me to resolve the out-of-memory error*. I even ran the
> > queries below to follow the troubleshooting instructions (a sketch for
> > verifying the resulting option values follows the list), but the issue
> > remains the same.
> >
> >    -     alter session set `planner.enable_hashagg` = false;
> >    -     alter session set `planner.enable_hashjoin` = false;
> >    -     alter session set `planner.width.max_per_node` = 3;
> >    -     alter system set `planner.width.max_per_query` = 100;
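> >
> > A sketch for double-checking that these options actually took effect,
> > assuming the standard `sys.options` system table is available:
> >
> >     -- list the planner options together with their current values and scope
> >     SELECT * FROM sys.options WHERE name LIKE 'planner.%';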
> >
> > --
> > Thanks
> > Ashish Pancholi
> >
>


-- 
Thanks
Ashish Pancholi
