jclouds-user mailing list archives

From Andrew Gaul <g...@apache.org>
Subject Re: Content Length with compression with Jclouds PutBlob
Date Thu, 13 Oct 2016 04:32:44 GMT
What you desire is not possible with single-part uploads.
Content-Encoding is not magic; adding it to an HTTP PUT is just an
instruction for a subsequent HTTP GET to decode the body with that
filter.  You still need to calculate the Content-Length before issuing
the HTTP PUT for most providers, e.g., AWS S3.
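One way around this is to compress each slice fully into memory before the PUT, so the exact compressed length is known up front.  A minimal sketch using only the JDK (class and method names here are illustrative, not part of jclouds):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressedLength {
    // Compress a slice into an in-memory buffer so that the exact
    // Content-Length is known before issuing the HTTP PUT.
    static byte[] gzip(byte[] slice) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(slice);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] compressed = gzip("hello hello hello".getBytes("UTF-8"));
        // compressed.length is the value to set as the payload's
        // Content-Length before calling putBlob
        System.out.println(compressed.length);
    }
}
```

The trade-off is buffering: you pay one slice's worth of memory to learn the length, which is exactly why slicing (rather than compressing the whole stream) matters for large files.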

You may accomplish what you want via multi-part uploads, if your
provider's MPU restrictions allow it.  For example, AWS S3 requires that
all parts except the final one be at least 5 MB.  You must slice your
data and ensure that compression yields at least this part size.  Note
that you can gzip each part individually; the format allows
concatenation of multiple streams, as the following shell code
demonstrates:
$ (echo aaaa | gzip -c ; echo bbbb | gzip -c) | gunzip -c

You must use the new multipart API introduced in jclouds 2.0 for this.
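Putting the two together, the upload loop might look roughly like the sketch below.  This assumes the jclouds 2.0 BlobStore multipart methods (initiateMultipartUpload, uploadMultipartPart, completeMultipartUpload) and a hypothetical gzip(...) helper and slices() source; check the BlobStore javadoc for the exact signatures before relying on this:

```java
// Sketch only -- not a drop-in implementation.
BlobStore blobStore = context.getBlobStore();
BlobMetadata metadata =
        blobStore.blobBuilder("object.gz").build().getMetadata();
MultipartUpload mpu = blobStore.initiateMultipartUpload(
        "container", metadata, new PutOptions());

List<MultipartPart> parts = new ArrayList<>();
int partNumber = 1;
for (byte[] slice : slices()) {          // hypothetical slice source
    byte[] compressed = gzip(slice);     // gzip each part individually
    // on AWS S3, every non-final part must be >= 5 MB *after* compression
    Payload payload = Payloads.newByteArrayPayload(compressed);
    payload.getContentMetadata().setContentLength((long) compressed.length);
    parts.add(blobStore.uploadMultipartPart(mpu, partNumber++, payload));
}
blobStore.completeMultipartUpload(mpu, parts);
```

Because each part is a complete gzip stream, a later GET of the whole object decompresses cleanly, exactly as in the shell example above.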

On Thu, Oct 06, 2016 at 08:29:23PM +0530, Dileep Dixith wrote:
> Hi,
> We are planning to enable compression before we send data to the cloud as 
> part of putBlob. We have written a Payload and a ByteSource.
> For larger files, once we have opened a stream with the cloud, we read a 
> slice from the local file and send it to the cloud via the putBlob method.
> During this, we have a compression module which compresses each slice of 
> data rather than the complete stream in one shot, and we want to change 
> the content length after the compression module performs its operation.
> But it looks like once we open a stream we cannot change the content 
> length of the blob.
> I want to know whether there is a way to change the content length of the 
> metadata after compression of all the slices completes.
> Regards,
> Dileep

Andrew Gaul
