trafodion-codereview mailing list archives

From selvaganesang <...@git.apache.org>
Subject [GitHub] trafodion pull request #1557: Changes for [TRAFODION-3065] and [TRAFODION-29...
Date Wed, 09 May 2018 18:57:45 GMT
GitHub user selvaganesang opened a pull request:

    https://github.com/apache/trafodion/pull/1557

    Changes for [TRAFODION-3065] and [TRAFODION-2982]

    [TRAFODION-3065] Trafodion to support compressed Hive Text formatted tables
    
    Compressed text files are now supported via the new implementation using
    HDFS Java APIs. When Hadoop is not configured to support a particular type
    of compression, an error is thrown.
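
    As an illustration of the compression path just described, here is a minimal,
    hypothetical sketch (not the actual Trafodion code) of how Hadoop's
    CompressionCodecFactory can be used through the HDFS Java APIs to detect a
    codec from the file suffix and decompress a text file. The class name and the
    read loop are illustrative assumptions only.

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.compress.CompressionCodec;
        import org.apache.hadoop.io.compress.CompressionCodecFactory;

        public class CompressedHiveTextReadSketch {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Path path = new Path(args[0]);          // e.g. a .gz Hive text file
                FileSystem fs = path.getFileSystem(conf);

                // Ask Hadoop which codec, if any, matches the file suffix.
                CompressionCodecFactory factory = new CompressionCodecFactory(conf);
                CompressionCodec codec = factory.getCodec(path);

                InputStream in = fs.open(path);
                if (codec != null) {
                    // A codec is registered: wrap the raw stream so reads return
                    // decompressed bytes. If the cluster is not set up for this
                    // codec, the failure surfaces here -- the analogue of the
                    // error mentioned above.
                    in = codec.createInputStream(in);
                }
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
                    String line;
                    while ((line = reader.readLine()) != null)
                        System.out.println(line);
                }
            }
        }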
    
    [TRAFODION-2982] JNI HDFS interface should support varied sized large buffers for read/write
    A new CQD, HDFS_IO_INTERIM_BYTEARRAY_SIZE_IN_KB, is introduced to chunk
    reads and writes when a byte array is involved.

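    The CQD above sizes the interim byte array through which reads and writes are
    chunked. The following is a minimal sketch of that chunking idea, not the
    Trafodion JNI code itself: the readInChunks helper, the class name, and the
    512 KB chunk size are assumptions chosen only for illustration.

        import java.io.ByteArrayOutputStream;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class ChunkedHdfsReadSketch {
            // chunkKb plays the role of the CQD value: the interim buffer size in KB.
            static byte[] readInChunks(FileSystem fs, Path path, int chunkKb) throws Exception {
                byte[] chunk = new byte[chunkKb * 1024];
                ByteArrayOutputStream result = new ByteArrayOutputStream();
                try (FSDataInputStream in = fs.open(path)) {
                    int n;
                    // Stream the file through a fixed-size interim byte array
                    // instead of allocating one array as large as the whole request.
                    while ((n = in.read(chunk)) != -1)
                        result.write(chunk, 0, n);
                }
                return result.toByteArray();
            }

            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Path path = new Path(args[0]);
                byte[] data = readInChunks(path.getFileSystem(conf), path, 512); // 512 KB chunks
                System.out.println("read " + data.length + " bytes");
            }
        }
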
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/selvaganesang/trafodion hdfs_compression_support

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/trafodion/pull/1557.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1557
    
----
commit f216cdb31b89cb6a8b80717c113a6dcdfd3a24e1
Author: selvaganesang <selva.govindarajan@...>
Date:   2018-05-09T00:13:42Z

    [TRAFODION-2917] Refactor Trafodion implementation of HDFS scan for text formatted Hive tables
    
    Fix for compGeneral/TEST045
    When a Hive scan was prepared and executed many times, an incorrect array containing
    runtime ranges was deallocated, leading to memory corruption. Also fixed a memory leak
    in the JNI layer.

commit 96cab4ddd086a59ebc0eab8ac4a93ee3cf315aac
Author: selvaganesang <selva.govindarajan@...>
Date:   2018-05-09T00:36:04Z

    [TRAFODION-3065] Trafodion to support compressed Hive Text formatted tables
    
    Compressed text files are now supported via the new implementation using
    HDFS Java APIs. When Hadoop is not configured to support a particular type
    of compression, an error is thrown.
    
    [TRAFODION-2982] JNI HDFS interface should support varied sized large buffers for read/write
    A new CQD, HDFS_IO_INTERIM_BYTEARRAY_SIZE_IN_KB, is introduced to chunk
    reads and writes when a byte array is involved.

----


---
