spark-issues mailing list archives

From "Ryan Blue (Jira)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-28843) Set OMP_NUM_THREADS to executor cores to reduce Python memory consumption
Date Wed, 21 Aug 2019 22:56:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-28843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Blue updated SPARK-28843:
------------------------------
    Description: 
While testing hardware with more cores, we found that the amount of memory required by PySpark applications increased, and we tracked the problem to importing numpy. The numpy issue is [https://github.com/numpy/numpy/issues/10455]

NumPy uses OpenMP, which starts a thread pool sized to the number of cores on the machine (and does not respect cgroup limits). When we set this value lower, we see a significant reduction in memory consumption.
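A quick way to reproduce the effect in a plain Python process (a hypothetical check, not code from this issue; the value 4 is only illustrative) is to cap the thread pool before numpy loads:

    import os

    # Cap the OpenMP thread pool before numpy (and its BLAS backend) loads;
    # otherwise one thread per machine core is created.
    os.environ["OMP_NUM_THREADS"] = "4"

    import numpy as np  # the import must happen after the environment is set

Comparing the worker's resident memory with and without the variable shows the difference.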

This parallelism setting should be set to the number of cores allocated to the executor, not the number of cores available on the machine.
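Until Spark sets this automatically, one way to apply it per application is through the executor environment; the sketch below uses the standard spark.executorEnv.* configuration (the 4-core figure is an assumption for illustration):

    from pyspark.sql import SparkSession

    # Match OMP_NUM_THREADS to the cores allocated to each executor
    # (assumed to be 4 here), not to the host's physical core count.
    spark = (
        SparkSession.builder
        .config("spark.executor.cores", "4")
        .config("spark.executorEnv.OMP_NUM_THREADS", "4")
        .getOrCreate()
    )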

  was:
While testing hardware with more cores, we found that the amount of memory required by PySpark applications increased, and we tracked the problem to importing numpy. The numpy issue is [https://github.com/numpy/numpy/issues/10455]

NumPy uses OpenMP, which starts a thread pool sized to the number of cores on the machine (and does not respect cgroup limits). When we set this value lower, we see a reduction in memory consumption.

This parallelism setting should be set to the number of cores allocated to the executor, not the number of cores available on the machine.


> Set OMP_NUM_THREADS to executor cores to reduce Python memory consumption
> -------------------------------------------------------------------------
>
>                 Key: SPARK-28843
>                 URL: https://issues.apache.org/jira/browse/SPARK-28843
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 2.3.3, 3.0.0, 2.4.3
>            Reporter: Ryan Blue
>            Priority: Major
>
> While testing hardware with more cores, we found that the amount of memory required by PySpark applications increased, and we tracked the problem to importing numpy. The numpy issue is [https://github.com/numpy/numpy/issues/10455]
> NumPy uses OpenMP, which starts a thread pool sized to the number of cores on the machine (and does not respect cgroup limits). When we set this value lower, we see a significant reduction in memory consumption.
> This parallelism setting should be set to the number of cores allocated to the executor, not the number of cores available on the machine.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

