Hi,
+1.
It's great to see 2.3.0 released. Actually, I've been doing some development
on zero-copy tuple processing based on that version. It isn't finished yet,
but I can create a ticket for it. Anyway, Hadoop currently has only two
zero-copy decompressors: the default (zlib) one and Snappy. Have you decided
whether to support LZO or any other compression types?
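
For reference, here's a minimal sketch of how a caller could check which
codecs expose the new zero-copy path (purely illustrative; only the codec
classes and DirectDecompressionCodec come from Hadoop itself):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.io.compress.DirectDecompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class ZeroCopySupport {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        CompressionCodec[] codecs = {
            ReflectionUtils.newInstance(DefaultCodec.class, conf),
            ReflectionUtils.newInstance(SnappyCodec.class, conf) };
        for (CompressionCodec codec : codecs) {
          // Only codecs implementing DirectDecompressionCodec can
          // decompress between direct ByteBuffers without an extra
          // copy through a byte[] on the Java heap.
          System.out.println(codec.getClass().getSimpleName()
              + " supports zero-copy decompression: "
              + (codec instanceof DirectDecompressionCodec));
        }
      }
    }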
Regards,
Min
On Mon, Feb 24, 2014 at 5:23 PM, JaeHwa Jung <jhjung@gruter.com> wrote:
> +1.
>
> Thanks Hyunsik, I also agree with you.
> We need to bump Hadoop up to 2.3.0.
>
> Cheers
>
>
> 2014-02-25 10:19 GMT+09:00 Hyunsik Choi <hyunsik@apache.org>:
>
> > Hi folks,
> >
> > As you already know, Hadoop 2.3.0 has been released. While reading the
> > changes, I noted some new features that Tajo should consider.
> >
> > Centralized cache management in HDFS
> > - https://issues.apache.org/jira/browse/HDFS-4949
> >
> > Earlier, Min mentioned cached tables. I discussed HDFS-4949 with him
> > offline. It may be a candidate feature for our goal.
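> >
> > To make that concrete, here is a minimal sketch of pinning a table
> > directory into the HDFS cache through the client API added by
> > HDFS-4949 (the pool and path names are made up):
> >
> >     import org.apache.hadoop.conf.Configuration;
> >     import org.apache.hadoop.fs.FileSystem;
> >     import org.apache.hadoop.fs.Path;
> >     import org.apache.hadoop.hdfs.DistributedFileSystem;
> >     import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
> >     import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
> >
> >     public class CacheTableDir {
> >       public static void main(String[] args) throws Exception {
> >         Configuration conf = new Configuration();
> >         DistributedFileSystem dfs =
> >             (DistributedFileSystem) FileSystem.get(conf);
> >         // Create a cache pool, then ask the NameNode to keep the
> >         // table directory's blocks in DataNode memory.
> >         dfs.addCachePool(new CachePoolInfo("tajo-pool"));
> >         dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
> >             .setPath(new Path("/tajo/warehouse/lineitem"))
> >             .setPool("tajo-pool")
> >             .build());
> >       }
> >     }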
> >
> > Enable support for heterogeneous storages in HDFS - DN as a collection of
> > storages
> > - https://issues.apache.org/jira/browse/HDFS-2832
> >
> > It's for different storage media such as SSD and HDD.
> >
> > Add a directbuffer Decompressor API to hadoop
> > - https://issues.apache.org/jira/browse/HADOOP-10047
> >
> > We already use compression/decompression for text files. We should also
> > apply it to other file formats. For that, HADOOP-10047 may be a nice
> > candidate feature to use.
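> >
> > For example, a minimal sketch of the direct-buffer path (the buffer
> > size is illustrative, and the native snappy library must be loaded):
> >
> >     import java.nio.ByteBuffer;
> >     import org.apache.hadoop.conf.Configuration;
> >     import org.apache.hadoop.io.compress.DirectDecompressor;
> >     import org.apache.hadoop.io.compress.SnappyCodec;
> >
> >     public class DirectDecompressExample {
> >       // Decompresses one snappy-compressed block off-heap.
> >       public static ByteBuffer decompress(byte[] compressed,
> >           int maxUncompressedLen) throws Exception {
> >         SnappyCodec codec = new SnappyCodec();
> >         codec.setConf(new Configuration());
> >         // Needs the native snappy library; returns null without it.
> >         DirectDecompressor decompressor =
> >             codec.createDirectDecompressor();
> >         ByteBuffer src = ByteBuffer.allocateDirect(compressed.length);
> >         src.put(compressed);
> >         src.flip();
> >         ByteBuffer dst = ByteBuffer.allocateDirect(maxUncompressedLen);
> >         // Decompresses directly between the two off-heap buffers; no
> >         // intermediate copy into a byte[] on the Java heap.
> >         decompressor.decompress(src, dst);
> >         return dst; // dst now holds the uncompressed bytes
> >       }
> >     }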
> >
> > - hyunsik
> >
>
>
>
> --
> Thanks,
> Jaehwa Jung
> Bigdata Platform Team
> Gruter
>
--
My research interests are distributed systems, parallel computing, and
bytecode-based virtual machines.
My profile:
http://www.linkedin.com/in/coderplay
My blog:
http://coderplay.javaeye.com