drill-dev mailing list archives

From jinfengni <...@git.apache.org>
Subject [GitHub] drill pull request #597: DRILL-4905: Push down the LIMIT to the parquet read...
Date Tue, 27 Sep 2016 21:10:36 GMT
Github user jinfengni commented on a diff in the pull request:

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java
    @@ -115,6 +115,8 @@
       private List<RowGroupInfo> rowGroupInfos;
       private Metadata.ParquetTableMetadataBase parquetTableMetadata = null;
       private String cacheFileRoot = null;
    +  private int batchSize;
    +  private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
    --- End diff ---
    Are you referring to code here:
        // Pick the minimum of recordsPerBatch calculated above, batchSize we got from rowGroupScan
(based on limit)
        // and user configured batchSize value.
        recordsPerBatch = (int) Math.min(Math.min(recordsPerBatch, batchSize),
    If I understand correctly, batchSize in ParquetRecordReader comes from ParquetRowGroupScan,
which comes from ParquetGroupScan, which is set to DEFAULT_BATCH_LENGTH. If I have a RG with
512K rows, and I set "store.parquet.record_batch_size" to 512K, will your code honor this
512K batch size, or will it use DEFAULT_BATCH_LENGTH since it is the smallest?
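    The interaction being questioned can be sketched as follows. This is an illustrative reproduction of the min() logic quoted above, not the exact Drill code; the method and class names here are made up for the example:

```java
// Hedged sketch of the batch-size selection logic quoted in the diff.
// Only DEFAULT_BATCH_LENGTH comes from the patch; everything else is illustrative.
public class BatchSizeMin {
    // From the diff: private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
    static final int DEFAULT_BATCH_LENGTH = 256 * 1024;

    // Mirrors: recordsPerBatch = (int) Math.min(Math.min(recordsPerBatch, batchSize), userConfigured)
    static int pick(int recordsPerBatch, int batchSize, int userConfigured) {
        return Math.min(Math.min(recordsPerBatch, batchSize), userConfigured);
    }

    public static void main(String[] args) {
        int rowGroupRows = 512 * 1024;   // a row group with 512K rows
        int userSetting  = 512 * 1024;   // store.parquet.record_batch_size set to 512K
        int chosen = pick(rowGroupRows, DEFAULT_BATCH_LENGTH, userSetting);
        // The 256K default wins even though the user asked for 512K.
        System.out.println(chosen == DEFAULT_BATCH_LENGTH);
    }
}
```

    If the sketch reflects the patch, the user's larger setting would be silently clamped to the 256K default, which is the concern raised here.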
    Also, if "store.parquet.record_batch_size" is set to a value different from DEFAULT_BATCH_LENGTH,
why would we still use DEFAULT_BATCH_LENGTH in ParquetGroupScan / ParquetRowGroupScan? People
might be confused if they look at the serialized physical plan, which shows "batchSize = DEFAULT_BATCH_LENGTH".

