spark-user mailing list archives

From Joel D <>
Subject No Tasks have reported metrics yet
Date Wed, 10 Jan 2018 18:00:16 GMT

I have a job that runs a HiveQL query joining two tables (2.5 TB and 45 GB),
repartitions the result to 100 partitions, and then applies some other
transformations. It executed fine earlier.

Job stages:
Stage 0: Hive table 1 scan
Stage 1: Hive table 2 scan
Stage 2: Tungsten exchange for the join
Stage 3: Tungsten exchange for the repartition
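For context, the job is roughly equivalent to the following sketch against the Spark 2.x DataFrame API. All table and column names are placeholders, not the actual ones; the 200-task stage corresponds to the default spark.sql.shuffle.partitions for the join, and the repartition(100) produces the final exchange stage:

```scala
import org.apache.spark.sql.SparkSession

object HiveJoinRepartitionJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveJoinRepartition")
      .enableHiveSupport()
      .getOrCreate()

    // Placeholder table/column names; the real query joins a ~2.5 TB
    // table with a ~45 GB table. Each scan is its own stage.
    val joined = spark.sql(
      """SELECT a.*, b.extra_col
        |FROM big_table a
        |JOIN small_table b ON a.join_key = b.join_key""".stripMargin)

    // Shuffle exchange down to 100 partitions (Stage 3 above).
    val repartitioned = joined.repartition(100)

    // Further transformations, then an action triggers execution.
    repartitioned.write.mode("overwrite").saveAsTable("output_table")
  }
}
```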

Today the job is stuck in Stage 2. Of the 200 tasks that are supposed to run,
none have started, but 290 task attempts have failed because they were preempted.

Any input on how to resolve this issue? I'll try reducing the executor
memory to see whether resource allocation is the problem.
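If the failures are YARN preemption (executors killed because the queue is over its guaranteed capacity), shrinking the per-executor footprint is one way to stay within the queue's share. A hypothetical submit line, with a placeholder queue name and jar, and memory values that are illustrative starting points rather than recommendations:

```shell
# Hypothetical spark-submit invocation; queue name, jar, and all
# numeric values are assumptions, not settings from the original job.
spark-submit \
  --master yarn \
  --queue my_queue \
  --num-executors 50 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my-job.jar
```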

