spark-user mailing list archives

From hemant singh <hemant2...@gmail.com>
Subject Re: [pyspark2.4+] A lot of tasks failed, but job eventually completes
Date Mon, 06 Jan 2020 04:06:17 GMT
You can try increasing the executor memory; this error generally occurs when
individual executors do not have enough memory. The job likely completes
anyway because the failed tasks succeed when they are re-scheduled.
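
For example, here is a minimal sketch of raising executor memory when building
the session. The app name and memory sizes below are placeholders, not
recommendations; size them to your cluster, and note that these settings must
be in place before the application starts.

    from pyspark.sql import SparkSession

    # Placeholder values for illustration only; tune to your cluster's capacity.
    spark = (
        SparkSession.builder
        .appName("example-job")                          # placeholder app name
        .config("spark.executor.memory", "8g")           # per-executor heap
        .config("spark.executor.memoryOverhead", "2g")   # off-heap overhead (Spark 2.3+ key)
        .getOrCreate()
    )

The same can be passed on the command line, e.g.
spark-submit --conf spark.executor.memory=8g --conf spark.executor.memoryOverhead=2g your_job.py
(your_job.py is a placeholder for your script).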

Thanks.

On Mon, 6 Jan 2020 at 5:47 AM, Rishi Shah <rishishah.star@gmail.com> wrote:

> Hello All,
>
> One of my jobs keeps getting into a situation where hundreds of tasks
> fail with the error below, but the job eventually completes.
>
> org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 16384
> bytes of memory
>
> Could someone advise?
>
> --
> Regards,
>
> Rishi Shah
>
