libcloud-notifications mailing list archives

From "Tomaz Muraus (JIRA)" <>
Subject [jira] [Commented] (LIBCLOUD-490) Zero-byte uploads to S3 fail
Date Mon, 06 Jan 2014 01:44:50 GMT


Tomaz Muraus commented on LIBCLOUD-490:

I just checked and I think we could make it work for objects which are <= MIN_PART_SIZE.

The S3 multipart upload code already has to buffer 5 MB in memory to fill a whole
chunk, so this change wouldn't require us to buffer any additional data.

We could do something along these lines:

- Read up to 5 MB from the iterator, then:
 - if the iterator is exhausted (the returned data is <= 5 MB) - perform a regular upload
 - if the iterator is not exhausted yet - perform a multipart upload, using the buffered data as the first part
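The steps above can be sketched roughly as follows. This is not the actual
libcloud implementation; `read_chunk` and `choose_upload_strategy` are
hypothetical helper names used only for illustration:

```python
# Minimum S3 multipart part size (5 MB); same constant the comment calls
# MIN_PART_SIZE.
MIN_PART_SIZE = 5 * 1024 * 1024


def read_chunk(iterator, size):
    """Accumulate at least `size` bytes from an iterator of byte chunks.

    Returns (data, exhausted), where `exhausted` is True once the
    iterator ran out of data before `size` bytes were collected.
    """
    buf = bytearray()
    exhausted = False
    while len(buf) < size:
        try:
            buf += next(iterator)
        except StopIteration:
            exhausted = True
            break
    return bytes(buf), exhausted


def choose_upload_strategy(iterator):
    """Decide between a regular and a multipart upload (sketch)."""
    data, exhausted = read_chunk(iterator, MIN_PART_SIZE)
    if exhausted:
        # The whole object (possibly zero bytes) fit in the buffer:
        # a single regular PUT is enough.
        return 'regular', data
    # More data remains: start a multipart upload and use the buffered
    # data as the first part.
    return 'multipart', data
```

Note that the zero-byte case from this ticket falls naturally into the
regular-upload branch: reading from `iter((b'',))` exhausts the iterator
with an empty buffer.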

> Zero-byte uploads to S3 fail
> ----------------------------
>                 Key: LIBCLOUD-490
>                 URL:
>             Project: Libcloud
>          Issue Type: Bug
>          Components: Storage
>    Affects Versions: 0.13.3
>            Reporter: Noah Kantrowitz
> Calling storage.upload_object_via_stream(iter(('',)), path) fails with:
> {{libcloud.common.types.LibcloudError: <LibcloudError in <
> object at 0x10b786610> 'Error in multipart commit'>}}
> A workaround is to temporarily monkeypatch {{S3StorageDriver.supports_s3_multipart_upload
> = False}}. It would be nice if I could just call put_object directly in some useful way, for
> data that is small enough to fit in RAM (which in the case of an empty file is a bit of a

This message was sent by Atlassian JIRA
