spark-dev mailing list archives

From kayousterhout <...@git.apache.org>
Subject [GitHub] spark pull request: [Proposal] SPARK-1171: simplify the implementa...
Date Mon, 03 Mar 2014 21:42:47 GMT
Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/63#discussion_r10230444
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/WorkerOffer.scala ---
    @@ -21,4 +21,6 @@ package org.apache.spark.scheduler
      * Represents free resources available on an executor.
      */
     private[spark]
    -class WorkerOffer(val executorId: String, val host: String, val cores: Int)
    +class WorkerOffer(val executorId: String, val host: String, var cores: Int) {
    +  @transient val totalcores = cores
    --- End diff --
    
    Actually, on second thought, can CoarseGrainedSchedulerBackend just store the total cores
for each worker in a hash map? I'd prefer that solution, since the other classes that use
WorkerOffer don't need it to track the total cores on each worker.
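
    To illustrate the suggestion: the sketch below keeps WorkerOffer immutable
and has the scheduler backend track each executor's total cores in its own hash
map. The class and method names here (SchedulerBackendSketch, registerExecutor,
makeOffers, isIdle) are hypothetical, for illustration only — they are not the
actual Spark internals.

    ```scala
    import scala.collection.mutable.HashMap

    // WorkerOffer stays a simple immutable value class, as before the patch.
    class WorkerOffer(val executorId: String, val host: String, val cores: Int)

    class SchedulerBackendSketch {
      // executorId -> total cores on that executor (the hash map suggested above)
      private val totalCores = new HashMap[String, Int]
      // executorId -> cores currently free on that executor
      private val freeCores = new HashMap[String, Int]
      // executorId -> host, so offers can be rebuilt without extra state
      private val executorHost = new HashMap[String, String]

      def registerExecutor(executorId: String, host: String, cores: Int): Unit = {
        totalCores(executorId) = cores
        freeCores(executorId) = cores
        executorHost(executorId) = host
      }

      // Offers expose only the free cores; the total is looked up from the
      // map when needed, e.g. to check whether an executor is fully idle.
      def makeOffers(): Seq[WorkerOffer] =
        freeCores.toSeq.map { case (id, free) =>
          new WorkerOffer(id, executorHost(id), free)
        }

      def isIdle(executorId: String): Boolean =
        freeCores.get(executorId) == totalCores.get(executorId)
    }
    ```

    This keeps the total-cores bookkeeping local to the backend, so other
users of WorkerOffer are unaffected.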


