beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Work logged] (BEAM-5434) Issue with BigQueryIO in Template
Date Thu, 20 Sep 2018 23:26:00 GMT


ASF GitHub Bot logged work on BEAM-5434:

                Author: ASF GitHub Bot
            Created on: 20/Sep/18 23:25
            Start Date: 20/Sep/18 23:25
    Worklog Time Spent: 10m 
      Work Description: axelmagn commented on issue #6457: [BEAM-5434] Improve error handling
in the artifact staging service
   @angoenka PTAL

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 146170)
    Time Spent: 20m  (was: 10m)

> Issue with BigQueryIO in Template
> ---------------------------------
>                 Key: BEAM-5434
>                 URL:
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-core
>    Affects Versions: 2.5.0
>            Reporter: Amarendra Kumar
>            Assignee: Kenneth Knowles
>            Priority: Blocker
>          Time Spent: 20m
>  Remaining Estimate: 0h
> I am trying to build a Google Dataflow template to be run from a Cloud Function.
> The issue is with BigQueryIO trying to execute a SQL query.
> The opening step of my Dataflow template is
> {code:java}
> BigQueryIO.readTableRows().withQueryLocation("US").withoutValidation().fromQuery(options.getSql()).usingStandardSql()
> {code}
> When the template is triggered for the first time, it runs fine.
> But when it is triggered a second time, it fails with the following error:
> {code}
> // Some comments here
> No files matched spec: gs://test-notification/temp/Notification/BigQueryExtractTemp/34d42a122600416c9ea748a6e325f87a/000000000000.avro
> 	[... stack trace frames garbled in the archive; only java.util.concurrent.ThreadPoolExecutor frames are partially legible ...]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(
> {code}
> Why is the process expecting a file at the GCS location on the second run?
> This file does get created while the job is running on the first run, but it is also
deleted after the job completes.
> How are the two jobs related?
> Could you please let me know if I am missing something, or whether this is a bug?
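The behavior described above matches a known limitation of BigQueryIO's default export-based read inside templates: the temporary export paths are bound when the template graph is constructed, so a second invocation of the same template looks for export files that the first run already cleaned up. Beam's Java SDK provides `BigQueryIO.TypedRead#withTemplateCompatibility()` for exactly this repeated-invocation case. A minimal sketch follows; the `SqlOptions` interface and `TemplateCompatibleRead` class names are hypothetical, modeled on the `options.getSql()` accessor in the report:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider;

public class TemplateCompatibleRead {

  // Hypothetical options interface; ValueProvider lets the SQL string be
  // supplied at template execution time rather than construction time.
  public interface SqlOptions extends PipelineOptions {
    ValueProvider<String> getSql();
    void setSql(ValueProvider<String> value);
  }

  public static void main(String[] args) {
    SqlOptions options = PipelineOptionsFactory.fromArgs(args).as(SqlOptions.class);
    Pipeline p = Pipeline.create(options);

    p.apply("ReadFromBigQuery",
        BigQueryIO.readTableRows()
            .fromQuery(options.getSql())
            .usingStandardSql()
            .withoutValidation()
            // Re-create the export source on every template invocation,
            // instead of reusing paths fixed when the template was built.
            .withTemplateCompatibility());

    p.run();
  }
}
```

Note that `withTemplateCompatibility()` is documented as a slower but template-safe source implementation; whether it resolves this particular report would still need to be confirmed on the affected SDK version.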

This message was sent by Atlassian JIRA
