drill-user mailing list archives

From Kunal Khatua <kkha...@mapr.com>
Subject Re: User client timeout with results > 2M rows
Date Wed, 20 Sep 2017 18:40:54 GMT
Do you know how long it takes for this timeout to occur? Some tuning may be needed to increase
a timeout setting. Also, I think this (S3 specifically) has been seen before, so you might
find a solution in the mailing list archives. Did you try searching there?
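For reference, since pydrill talks to the same REST endpoint, here is a minimal sketch of submitting a query to Drill's REST API with an explicit client-side read timeout (the host, port, and timeout values are assumptions; adjust them to your cluster):

```python
import json
import urllib.request

# Hypothetical drillbit address; Drill's REST API listens on 8047 by default.
DRILL_URL = "http://localhost:8047/query.json"

def build_request(sql):
    """Build the POST request for Drill's REST query endpoint."""
    payload = json.dumps({"queryType": "SQL", "query": sql}).encode("utf-8")
    return urllib.request.Request(
        DRILL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def run_query(sql, timeout=600):
    """Submit the query with a generous read timeout.

    A large read timeout gives big result sets time to stream back
    before the client side gives up on the connection.
    """
    req = build_request(sql)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

If the client library in use (pydrill here) does not expose a timeout knob directly, dropping down to a raw HTTP call like this can at least confirm whether the disconnect is a client-side read timeout or a server-side one.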



From: Alan Höng
Sent: Wednesday, September 20, 8:46 AM
Subject: User client timeout with results > 2M rows
To: user@drill.apache.org


Hello,

I'm getting errors when trying to fetch results from Drill for queries that return large
tables. Surprisingly, it works like a charm when the returned table has fewer than 2M rows. It
also appears that the query itself executes and finishes successfully.

I'm querying GZIP-compressed Parquet files on S3. I'm running Drill in distributed mode
with ZooKeeper, using version 1.9 from the "harisekhon/apache-drill:1.9" image on Docker Hub.
I'm using the pydrill package, which uses the REST API to submit queries and gather results.

I get the following error message from the client:

TransportError(500, '{\n  "errorMessage" : "CONNECTION ERROR: Connection /172.19.0.3:52382
<--> ef53daab0ef8/172.19.0.6:31010 (user client) closed
unexpectedly. Drillbit down?\\n\\n\\n[Error Id: 6a19835b-2325-431e-9bad-dde8f1d3c192 ]"\n}'
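As a stopgap while the disconnect is investigated, one generic workaround is to page the result set with LIMIT/OFFSET so that no single REST response crosses the ~2M-row threshold. A sketch, assuming the result has a stable ordering column (here hypothetically called `id`) to keep the pages non-overlapping:

```python
def paged_queries(base_query, order_col, page_size=1_000_000):
    """Yield LIMIT/OFFSET variants of base_query, one page at a time.

    Each page runs as an independent query, so ORDER BY on a stable
    column is required for consistent, non-overlapping pages. The
    caller should stop iterating once a page returns fewer than
    page_size rows.
    """
    offset = 0
    while True:
        yield (f"SELECT * FROM ({base_query}) t "
               f"ORDER BY {order_col} LIMIT {page_size} OFFSET {offset}")
        offset += page_size

# Example: the first two page queries for a hypothetical S3 source.
pages = paged_queries("SELECT * FROM s3.`data/*.parquet`", "id")
first_page = next(pages)
second_page = next(pages)
```

This trades one large transfer for several smaller ones, at the cost of re-sorting the inner query per page, so it is only a workaround, not a fix.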

I would appreciate any help with this.

Best
Alan Höng


