beam-commits mailing list archives

From "Eugene Kirpichov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BEAM-2879) Implement and use an Avro coder rather than the JSON one for intermediary files to be loaded in BigQuery
Date Fri, 15 Sep 2017 23:24:00 GMT

    [ https://issues.apache.org/jira/browse/BEAM-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168669#comment-16168669 ]

Eugene Kirpichov commented on BEAM-2879:
----------------------------------------

Actually, we already go through a GenericRecord -> TableRow translation when reading from
BigQuery (BigQuerySourceBase.createSources() uses BigQueryAvroUtils.convertGenericRecordToTableRow()),
even though we incur the associated performance cost there. Perhaps we could do the same
translation in reverse when writing, if that is actually faster.
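
For illustration only, here is a minimal sketch of that reverse translation: copy a TableRow's
fields into an Avro GenericRecord and append it to an Avro container file, the kind of file
that could replace the JSON temp files. The convertTableRowToGenericRecord helper, the
two-field schema, and the file name are made up for this example; only
BigQueryAvroUtils.convertGenericRecordToTableRow() above is an existing Beam utility.

    import java.io.File;
    import java.io.IOException;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.generic.GenericRecordBuilder;

    public class TableRowToAvroSketch {

      // Hypothetical two-field schema standing in for the destination table's
      // real schema, which BigQueryIO would derive from the TableSchema.
      private static final Schema ROW_SCHEMA =
          SchemaBuilder.record("Row").fields()
              .requiredString("name")
              .requiredLong("count")
              .endRecord();

      // Hypothetical reverse of BigQueryAvroUtils.convertGenericRecordToTableRow():
      // copy TableRow fields into an Avro GenericRecord.
      static GenericRecord convertTableRowToGenericRecord(TableRow row) {
        return new GenericRecordBuilder(ROW_SCHEMA)
            .set("name", (String) row.get("name"))
            .set("count", ((Number) row.get("count")).longValue())
            .build();
      }

      public static void main(String[] args) throws IOException {
        TableRow row = new TableRow().set("name", "foo").set("count", 42);

        // Write the converted record to an Avro container file, i.e. the kind
        // of intermediary file a BigQuery load job could consume instead of JSON.
        try (DataFileWriter<GenericRecord> writer =
            new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(ROW_SCHEMA))) {
          writer.create(ROW_SCHEMA, new File("temp-load-file.avro"));
          writer.append(convertTableRowToGenericRecord(row));
        }
      }
    }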

> Implement and use an Avro coder rather than the JSON one for intermediary files to be loaded in BigQuery
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: BEAM-2879
>                 URL: https://issues.apache.org/jira/browse/BEAM-2879
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-java-gcp
>            Reporter: Black Phoenix
>            Priority: Minor
>              Labels: starter
>
> Before being loaded into BigQuery, temporary files are created and encoded in JSON, which is a costly solution compared to an Avro alternative.
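
For comparison with the sketch above, here is a minimal sketch of the JSON side using Beam's
TableRowJsonCoder, which encodes a TableRow as a JSON object; whether BigQueryIO's temp-file
writer uses this exact coder is an assumption here, and the file name is illustrative.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;

    public class TableRowJsonSketch {
      public static void main(String[] args) throws IOException {
        TableRow row = new TableRow().set("name", "foo").set("count", 42);

        // Encode the row as one JSON object per line, roughly the shape of the
        // newline-delimited JSON temp files that BigQuery load jobs accept.
        try (OutputStream out = new FileOutputStream("temp-load-file.json")) {
          TableRowJsonCoder.of().encode(row, out);
          out.write('\n');
        }
      }
    }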



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
