systemml-dev mailing list archives

From arijit chakraborty <>
Subject Spark Core
Date Wed, 12 Jul 2017 18:44:59 GMT

Suppose I've this following code:

a = matrix(seq(1, 10), 10, 1)

for(i in 1:100) {
  b = a + 10
  write(b, "path" + ".csv", format="csv")
}

So what I'm doing is, for each of 100 iterations, adding a constant to a matrix and then writing it out.
This operation runs in Spark across multiple cores of the system.
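(A side note on the loop above: as written, every iteration writes to the same file, so each write overwrites the previous one. If the intent is one output file per iteration, the loop index would need to go into the filename — a sketch, assuming indexed output is what's wanted:

for(i in 1:100) {
  b = a + 10
  # hypothetical variant: append the loop index so each
  # iteration produces its own CSV file
  write(b, "path" + i + ".csv", format="csv")
}

The "path" prefix here is a placeholder, as in my original code.)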

My question is: after the operation, does the value (here b) remain in that core's memory, so that
it piles up over the iterations? Will this affect the performance of the process? If so, how do I
clean the memory after each iteration of the loop?

The reason for asking is that when I test the code in R, the performance is much better than in
SystemML. Since the mapping from R to SystemML is almost one-to-one, I'm not sure where I'm making
the mistake. Unfortunately, at this stage of progress I can't share the exact code.
Thank you!

