Hi All,

In the interactive shell the Spark context remains the same. So if I run a query multiple times, will the RDDs created by previous runs be reused in subsequent runs rather than recomputed, until I exit and restart the shell?

Or is there a way to programmatically force Spark to reuse or recompute RDDs, depending on whether they already exist?
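To make the question concrete, here is a minimal sketch of the kind of control I mean, using the standard RDD persistence API (cache/unpersist) in the shell; the file path is just a placeholder:

```scala
// Assumes a SparkContext `sc` is already available, as in the interactive shell.
val rdd = sc.textFile("data.txt").map(_.length)  // "data.txt" is a placeholder path

rdd.cache()      // mark the RDD for reuse; partitions are materialized on the first action
rdd.count()      // first action computes and caches the partitions
rdd.count()      // subsequent actions reuse the cached partitions

rdd.unpersist()  // drop the cached data; the next action recomputes from the source
```

Is this the intended mechanism, or does the shell do any of this implicitly across runs?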

Thanks!