Hi community,

I have built a k-means application in both Spark and Flink.
My test case is clustering 1 million points on a 3-node cluster.

When memory becomes a bottleneck, Flink starts spilling to disk and runs slowly, but it still finishes.
Spark, however, loses executors when memory fills up and restarts them (an infinite loop?).

I have tried to customize the memory settings with help from this mailing list (thanks!),
but Spark still does not work.
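
For reference, this is the kind of spark-submit invocation I have been experimenting with. The class name, master URL, and the exact values are just placeholders, not the precise settings from my runs:

```shell
# Sketch of the memory-related settings I have been tuning.
# com.example.KMeans and spark://master:7077 are placeholders.
spark-submit \
  --class com.example.KMeans \
  --master spark://master:7077 \
  --executor-memory 2g \
  --conf spark.memory.fraction=0.6 \
  --conf spark.memory.storageFraction=0.5 \
  --conf spark.rdd.compress=true \
  kmeans-assembly.jar
```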

Are there any configurations that need to be set? I mean, Flink works with low memory, so shouldn't Spark be able to as well?

Best regards,
Paul