spark-issues mailing list archives

From "xiaoyu chen (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-15376) DataFrame write.jdbc() inserts more rows than actual
Date Wed, 18 May 2016 04:09:12 GMT

     [ https://issues.apache.org/jira/browse/SPARK-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xiaoyu chen updated SPARK-15376:
--------------------------------
    Description: 
It's an odd bug; it occurs in this situation:

{code:title=Bar.scala}
    val rddRaw = sc.textFile("xxx").map(xxx).sample(false, 0.15)
    // The actual number of rows inserted into MySQL is larger than the RDD's
    // record count. In my case: 239994 (rdd) vs. ~241300 (inserted into the database).
    println(rddRaw.count())

    // Iterate over all rows another way; if the Range for-loop is dropped, the bug does not occur.
    for (some_id <- Range(some_ids_all_range)) {
      rddRaw.filter(_._2 == some_id).randomSplit(Array(x, x, x), 1)
        .foreach { rd =>
          // val curCnt = rd.count()  // if count() is invoked on rd before the write, the counts match
          rd.map(x => new TestRow(null, xxx)).toDF().write.mode(SaveMode.Append).jdbc(xxx)
        }
    }
{code}
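
A likely explanation (not confirmed in this report) is that {{sample(false, 0.15)}} is called without an explicit seed and {{rddRaw}} is never persisted, so every action ({{count()}} and each {{write.jdbc()}}) recomputes the lineage and draws a different random sample. The sketch below shows a possible workaround under that assumption: fix the seed and persist the sampled RDD so all downstream actions see the same rows. The {{xxx}} placeholders are carried over from the report above.

{code:title=Workaround.scala (sketch)}
import org.apache.spark.storage.StorageLevel

// Assumption: the mismatch comes from re-evaluating an unseeded sample().
// Fixing the seed makes the sample reproducible, and persist() materializes
// it once, so count() and every later write.jdbc() act on identical rows.
val rddRaw = sc.textFile("xxx").map(xxx)
  .sample(withReplacement = false, fraction = 0.15, seed = 42L)
  .persist(StorageLevel.MEMORY_AND_DISK)

println(rddRaw.count())  // should now equal the total rows written afterwards
{code}

If this is indeed the cause, it may also explain why dropping the loop or calling {{count()}} on {{rd}} before the write appeared to change the behavior: both alter how often and in what context the sampled lineage is recomputed.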



  was:
It's an odd bug; it occurs in this situation:

{code:title=Bar.scala}
    val rddRaw = sc.textFile("xxx").map(xxx).sample(false, 0.15)
    // The actual number of rows inserted into MySQL is larger than the RDD's
    // record count. In my case: 239994 (rdd) vs. ~241300 (inserted into the database).
    println(rddRaw.count())

    // Iterate over all rows another way; if the Range for-loop is dropped, the bug does not occur.
    for (some_id <- Range(some_ids_all_range)) {
      rddRaw.filter(_._2 == some_id).randomSplit(Array(x, x, x))
        .foreach { rd =>
          // val curCnt = rd.count()  // if count() is invoked on rd before the write, the counts match
          rd.map(x => new TestRow(null, xxx)).toDF().write.mode(SaveMode.Append).jdbc(xxx)
        }
    }
{code}




> DataFrame write.jdbc() inserts more rows than actual
> ----------------------------------------------------
>
>                 Key: SPARK-15376
>                 URL: https://issues.apache.org/jira/browse/SPARK-15376
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.4.1
>         Environment: CentOS 6 cluster mode
> Cores: 300 (300 granted, 0 left)
> Executor Memory: 45.0 GB
> Submit Date: Wed May 18 10:26:40 CST 2016
>            Reporter: xiaoyu chen
>              Labels: DataFrame
>
> It's an odd bug; it occurs in this situation:
>
> {code:title=Bar.scala}
>     val rddRaw = sc.textFile("xxx").map(xxx).sample(false, 0.15)
>     // The actual number of rows inserted into MySQL is larger than the RDD's
>     // record count. In my case: 239994 (rdd) vs. ~241300 (inserted into the database).
>     println(rddRaw.count())
>
>     // Iterate over all rows another way; if the Range for-loop is dropped, the bug does not occur.
>     for (some_id <- Range(some_ids_all_range)) {
>       rddRaw.filter(_._2 == some_id).randomSplit(Array(x, x, x), 1)
>         .foreach { rd =>
>           // val curCnt = rd.count()  // if count() is invoked on rd before the write, the counts match
>           rd.map(x => new TestRow(null, xxx)).toDF().write.mode(SaveMode.Append).jdbc(xxx)
>         }
>     }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

