beam-builds mailing list archives

From: Apache Jenkins Server <jenk...@builds.apache.org>
Subject: Build failed in Jenkins: beam_LoadTests_Go_GBK_Flink_Batch #64
Date: Wed, 27 Jan 2021 11:12:21 GMT
See <https://ci-beam.apache.org/job/beam_LoadTests_Go_GBK_Flink_Batch/64/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-10925] Load SQL UDFs from jar.

[Kyle Weaver] Move JavaUdfLoader from zetasql -> sql.

[Kyle Weaver] Make JavaUdfDefinitions a subclass of JavaUdfLoader.

[Kyle Weaver] [BEAM-9541] Push all docker images from RC instead of hard-coding them.

[randomstep] [BEAM-9369] bump mockito-core to 3.7.7

[Fokko Driesprong] BEAM-4986: Bump to Apache Parquet 1.11.1

[Kyle Weaver] Don't set context classloader.

[Pablo Estrada] Revert "Revert "Merge pull request #12647 from [BEAM-10378]

[Pablo Estrada] Fixing checker framework checks

[Kyle Weaver] [BEAM-9541] Update Python SDK's Flink version list.

[Kyle Weaver] [BEAM-9541] Create a Gradle task to push all docker images, and use it

[Kyle Weaver] [BEAM-11689] Use public.nexus.pentaho.org for pentaho dependencies

[noreply] [BEAM-11272] Remove combiner label constructor arg (#13355)

[noreply] [BEAM-11691] Skip JavaUdfLoaderTest instead of failing when jar path

[noreply] Merge pull request #13757: [BEAM-11640] Linkage Checker version upgrade


------------------------------------------
[...truncated 116.55 KB...]
  >
  coders: <
    key: "c3"
    value: <
      spec: <
        urn: "beam:coder:iterable:v1"
      >
      component_coder_ids: "c0"
    >
  >
  coders: <
    key: "c4"
    value: <
      spec: <
        urn: "beam:coder:kv:v1"
      >
      component_coder_ids: "c0"
      component_coder_ids: "c3"
    >
  >
  coders: <
    key: "c5"
    value: <
      spec: <
        urn: "beam:go:coder:custom:v1"
        payload: "CgRqc29uEosCCBUaKAoLTnVtRWxlbWVudHMaAggCIhJqc29uOiJudW1fcmVjb3JkcyIyAQAaLwoNSW5pdGlhbFNwbGl0cxoCCAIiFWpzb246ImluaXRpYWxfc3BsaXRzIigIMgEBGiMKB0tleVNpemUaAggCIg9qc29uOiJrZXlfc2l6ZSIoEDIBAhonCglWYWx1ZVNpemUaAggCIhFqc29uOiJ2YWx1ZV9zaXplIigYMgEDGioKCk51bUhvdEtleXMaAggCIhNqc29uOiJudW1faG90X2tleXMiKCAyAQQaMgoOSG90S2V5RnJhY3Rpb24aAggOIhdqc29uOiJob3Rfa2V5X2ZyYWN0aW9uIigoMgEFGnkKX2dpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby90ZXN0L2xvYWQvdmVuZG9yL2dpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby9wa2cvYmVhbS5qc29uRW5jEhYIFiIECBlADyoGCBQSAggIKgQIGUABIn8KX2dpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby90ZXN0L2xvYWQvdmVuZG9yL2dpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby9wa2cvYmVhbS5qc29uRGVjEhwIFiIECBlAAyIGCBQSAggIKgQIGUAPKgQIGUAB"
      >
    >
  >
  coders: <
    key: "c6"
    value: <
      spec: <
        urn: "beam:coder:length_prefix:v1"
      >
      component_coder_ids: "c5"
    >
  >
  coders: <
    key: "c7"
    value: <
      spec: <
        urn: "beam:go:coder:custom:v1"
        payload: "ChdvZmZzZXRyYW5nZS5SZXN0cmljdGlvbhKAAQgaSnxnaXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vdGVzdC9sb2FkL3ZlbmRvci9naXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vcGtnL2JlYW0vaW8vcnRyYWNrZXJzL29mZnNldHJhbmdlLlJlc3RyaWN0aW9uGpACCnhnaXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vdGVzdC9sb2FkL3ZlbmRvci9naXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vcGtnL2JlYW0vaW8vcnRyYWNrZXJzL29mZnNldHJhbmdlLnJlc3RFbmMSkwEIFiKAAQgaSnxnaXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vdGVzdC9sb2FkL3ZlbmRvci9naXRodWIuY29tL2FwYWNoZS9iZWFtL3Nka3MvZ28vcGtnL2JlYW0vaW8vcnRyYWNrZXJzL29mZnNldHJhbmdlLlJlc3RyaWN0aW9uKgYIFBICCAgqBAgZQAEikAIKeGdpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby90ZXN0L2xvYWQvdmVuZG9yL2dpdGh1Yi5jb20vYXBhY2hlL2JlYW0vc2Rrcy9nby9wa2cvYmVhbS9pby9ydHJhY2tlcnMvb2Zmc2V0cmFuZ2UucmVzdERlYxKTAQgWIgYIFBICCAgqgAEIGkp8Z2l0aHViLmNvbS9hcGFjaGUvYmVhbS9zZGtzL2dvL3Rlc3QvbG9hZC92ZW5kb3IvZ2l0aHViLmNvbS9hcGFjaGUvYmVhbS9zZGtzL2dvL3BrZy9iZWFtL2lvL3J0cmFja2Vycy9vZmZzZXRyYW5nZS5SZXN0cmljdGlvbioECBlAAQ=="
      >
    >
  >
  coders: <
    key: "c8"
    value: <
      spec: <
        urn: "beam:coder:length_prefix:v1"
      >
      component_coder_ids: "c7"
    >
  >
  environments: <
    key: "go"
    value: <
      urn: "beam:env:docker:v1"
      payload: "\n>gcr.io/apache-beam-testing/beam_portability/beam_go_sdk:latest"
      capabilities: "beam:protocol:progress_reporting:v0"
      capabilities: "beam:protocol:multi_core_bundle_processing:v1"
      capabilities: "beam:version:sdk_base:go"
      capabilities: "beam:coder:bytes:v1"
      capabilities: "beam:coder:bool:v1"
      capabilities: "beam:coder:varint:v1"
      capabilities: "beam:coder:double:v1"
      capabilities: "beam:coder:string_utf8:v1"
      capabilities: "beam:coder:length_prefix:v1"
      capabilities: "beam:coder:kv:v1"
      capabilities: "beam:coder:iterable:v1"
      capabilities: "beam:coder:state_backed_iterable:v1"
      capabilities: "beam:coder:windowed_value:v1"
      capabilities: "beam:coder:global_window:v1"
      capabilities: "beam:coder:interval_window:v1"
      dependencies: <
        type_urn: "beam:artifact:type:go_****_binary:v1"
        role_urn: "beam:artifact:role:staging_to:v1"
        role_payload: "\n\006****"
      >
    >
  >
>
root_transform_ids: "s1"
root_transform_ids: "e4"
root_transform_ids: "e5"
root_transform_ids: "e6"
root_transform_ids: "e7"
requirements: "beam:requirement:pardo:splittable_dofn:v1"
2021/01/27 11:11:40 Prepared job with id: load-tests-go-flink-batch-gbk-3-0127065404_a7aa3ca2-8667-4297-9aef-8f5d8ea4d1c6
and staging token: load-tests-go-flink-batch-gbk-3-0127065404_a7aa3ca2-8667-4297-9aef-8f5d8ea4d1c6
2021/01/27 11:11:40 Using specified **** binary: 'linux_amd64/group_by_key'
2021/01/27 11:11:42 Staged binary artifact with token: 
2021/01/27 11:11:42 Submitted job: load0tests0go0flink0batch0gbk0300127065404-root-0127111142-16232cac_ecc563dc-1ea0-4b6f-b14a-62c28b6da7b4
2021/01/27 11:11:42 Job state: STOPPED
2021/01/27 11:11:42 Job state: STARTING
2021/01/27 11:11:42 Job state: RUNNING
2021/01/27 11:12:20  (): java.util.concurrent.ExecutionException: org.apache.flink.client.program.ProgramInvocationException:
Job failed (JobID: 10f6783b295c8cef0179e84f59270bf1)
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
	at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:864)
	at org.apache.beam.runners.flink.FlinkBatchPortablePipelineTranslator$BatchTranslationContext.execute(FlinkBatchPortablePipelineTranslator.java:199)
	at org.apache.beam.runners.flink.FlinkPipelineRunner.runPipelineWithTranslator(FlinkPipelineRunner.java:118)
	at org.apache.beam.runners.flink.FlinkPipelineRunner.run(FlinkPipelineRunner.java:85)
	at org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:86)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID:
10f6783b295c8cef0179e84f59270bf1)
	at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:112)
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
	at org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$21(RestClusterClient.java:565)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:291)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
	... 3 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
	at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:110)
	... 19 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
	at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:496)
	at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
	at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
	at akka.actor.ActorCell.invoke(ActorCell.scala:561)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
	at akka.dispatch.Mailbox.run(Mailbox.scala:225)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.Exception: The data preparation for task 'GroupReduce (GroupReduce at
CoGBK)' , caused an error: Error obtaining the sorted input: Thread 'SortMerger Reading Thread'
terminated due to an exception: Lost connection to task manager 'beam-loadtests-go-gbk-flink-batch-64-w-3.c.apache-beam-testing.internal/10.128.0.46:33435'.
This indicates that the remote task manager was lost.
	at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:480)
	at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:708)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:533)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Error obtaining the sorted input: Thread 'SortMerger
Reading Thread' terminated due to an exception: Lost connection to task manager 'beam-loadtests-go-gbk-flink-batch-64-w-3.c.apache-beam-testing.internal/10.128.0.46:33435'.
This indicates that the remote task manager was lost.
	at org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650)
	at org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1110)
	at org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:99)
	at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:474)
	... 4 more
Caused by: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception:
Lost connection to task manager 'beam-loadtests-go-gbk-flink-batch-64-w-3.c.apache-beam-testing.internal/10.128.0.46:33435'.
This indicates that the remote task manager was lost.
	at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831)
Caused by: org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Lost
connection to task manager 'beam-loadtests-go-gbk-flink-batch-64-w-3.c.apache-beam-testing.internal/10.128.0.46:33435'.
This indicates that the remote task manager was lost.
	at org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.exceptionCaught(CreditBasedPartitionRequestClientHandler.java:160)
	at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
	at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1388)
	at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:918)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
	at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
	at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377)
	at org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:247)
	at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1140)
	at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
	at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
	... 7 more
2021/01/27 11:12:20  (): java.io.IOException: Connection reset by peer
2021/01/27 11:12:20 Job state: FAILED
2021/01/27 11:12:20 Failed to execute job: job load0tests0go0flink0batch0gbk0300127065404-root-0127111142-16232cac_ecc563dc-1ea0-4b6f-b14a-62c28b6da7b4
failed
panic: Failed to execute job: job load0tests0go0flink0batch0gbk0300127065404-root-0127111142-16232cac_ecc563dc-1ea0-4b6f-b14a-62c28b6da7b4
failed

goroutine 1 [running]:
github.com/apache/beam/sdks/go/test/load/vendor/github.com/apache/beam/sdks/go/pkg/beam/log.Fatalf(0x11997e0,
0xc00003e0c0, 0x1057ca0, 0x19, 0xc0000ddee8, 0x1, 0x1)
	<https://ci-beam.apache.org/job/beam_LoadTests_Go_GBK_Flink_Batch/ws/src/sdks/go/test/load/.gogradle/project_gopath/src/github.com/apache/beam/sdks/go/test/load/vendor/github.com/apache/beam/sdks/go/pkg/beam/log/log.go>:153
+0xec
main.main()
	<https://ci-beam.apache.org/job/beam_LoadTests_Go_GBK_Flink_Batch/ws/src/sdks/go/test/load/.gogradle/project_gopath/src/github.com/apache/beam/sdks/go/test/load/group_by_key/group_by_key.go>:82
+0x47e

> Task :sdks:go:test:load:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file '<https://ci-beam.apache.org/job/beam_LoadTests_Go_GBK_Flink_Batch/ws/src/sdks/go/test/load/build.gradle>'
line: 65

* What went wrong:
Execution failed for task ':sdks:go:test:load:run'.
> Process 'command 'sh'' finished with non-zero exit value 2

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to
get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 46s
6 actionable tasks: 6 executed

Publishing build scan...
https://gradle.com/s/2ylp77w2xtees

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


