beam-commits mailing list archives

From "Vikas Kedigehalli (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BEAM-991) DatastoreIO Write should flush early for large batches
Date Sat, 19 Nov 2016 17:51:58 GMT

    [ https://issues.apache.org/jira/browse/BEAM-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15679617#comment-15679617
] 

Vikas Kedigehalli commented on BEAM-991:
----------------------------------------

Joshua, those are all good solutions.

I would prefer the 3rd one: use 'getSerializedSize' to measure the approximate byte size and
flush when it reaches ~10MB (https://github.com/apache/incubator-beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1.java#L863)

Computing getSerializedSize shouldn't be a problem because that value is memoized by protobuf,
and protobuf will compute it later for serialization anyway, so we shouldn't incur any additional
performance penalty.
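A minimal sketch of the idea (hypothetical names; the real logic lives in DatastoreV1's writer, and `sizeOf` here is a stand-in for the protobuf `Entity.getSerializedSize()` call):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed flush-early batching: in addition to flushing when
// 500 entities accumulate, flush when the running serialized byte count
// approaches the 10MB Datastore request limit. All names are illustrative.
public class FlushEarlySketch {
  static final int MAX_BATCH_ENTITIES = 500;           // Datastore per-request entity limit
  static final long MAX_BATCH_BYTES = 9 * 1024 * 1024; // stay safely under the 10MB limit

  final List<String> batch = new ArrayList<>();
  long batchBytes = 0;
  int flushCount = 0; // for demonstration only

  // In the real writer 'entity' would be a protobuf Entity, and sizeOf()
  // would be entity.getSerializedSize(), which protobuf memoizes.
  void write(String entity) {
    batch.add(entity);
    batchBytes += sizeOf(entity);
    if (batch.size() >= MAX_BATCH_ENTITIES || batchBytes >= MAX_BATCH_BYTES) {
      flushBatch();
    }
  }

  void flushBatch() {
    // The real implementation issues a Datastore commit RPC here.
    batch.clear();
    batchBytes = 0;
    flushCount++;
  }

  // Stand-in for Entity.getSerializedSize().
  long sizeOf(String entity) {
    return entity.length();
  }
}
```

With small entities the count limit triggers first; with ~1MB entities the byte limit flushes after roughly 9 entities, well before 500.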

PS: You are more than welcome to submit a Pull Request to Apache Beam if you are interested
in contributing. :)

> DatastoreIO Write should flush early for large batches
> ------------------------------------------------------
>
>                 Key: BEAM-991
>                 URL: https://issues.apache.org/jira/browse/BEAM-991
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-gcp
>            Reporter: Vikas Kedigehalli
>            Assignee: Vikas Kedigehalli
>
> If entities are large (avg size > 20KB), then a single batched write (500 entities)
would exceed the Datastore size limit for a single request (10MB); see https://cloud.google.com/datastore/docs/concepts/limits.
> First reported in: http://stackoverflow.com/questions/40156400/why-does-dataflow-erratically-fail-in-datastore-access



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
