spark-user mailing list archives

From Mich Talebzadeh <>
Subject Re: How to persist SparkContext?
Date Sun, 28 Aug 2016 08:34:30 GMT

I looked at it. It sounds like a set of pre-built adapters, but it is very poorly
explained, unless someone can explain it better.

They talk about RSI (Relative Strength Index) or SMA (Simple Moving
Average), but these are calculated on the spot. For example, the only way
one can work out the SMA in this model is to keep track of the past 14
points (assuming a 14-point SMA) and compute the current value from them.
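As a minimal sketch of that bookkeeping (plain Python, not tied to any particular streaming framework), a bounded deque holds the last 14 points and the SMA is recomputed as each new price arrives:

```python
from collections import deque

def make_sma(window=14):
    """Return an update function that tracks the last `window` prices
    and yields the simple moving average once the window is full."""
    points = deque(maxlen=window)  # keeps only the most recent `window` prices

    def update(price):
        points.append(price)
        if len(points) < window:
            return None  # not enough history yet
        return sum(points) / window

    return update

sma = make_sma(window=3)  # small window for illustration
print(sma(10))  # None: only one point seen so far
print(sma(20))  # None
print(sma(30))  # 20.0 = (10 + 20 + 30) / 3
print(sma(40))  # 30.0, the window slides to (20, 30, 40)
```

In a streaming job the same idea applies: the deque is the per-key state that has to survive between micro-batches, which is exactly the "keep track of the past 14 points" problem.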

Spark Streaming can be used for this purpose, but I am not sure what they
mean. One pattern would be to read the incoming topic, pick out what needs
to be kept, say a predefined price, post it to HBase, and read it back from
HBase for the next iteration.
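That read-filter-store-read-back loop can be sketched as follows. This is a hypothetical illustration only: a plain dict stands in for the HBase table, and a list of (symbol, price) tuples stands in for one micro-batch from the topic.

```python
# Stand-in for an HBase table keyed by symbol (hypothetical, for illustration).
state_store = {}

def process_batch(batch, price_threshold):
    """Keep only ticks at or above the predefined price, merge them with
    prior state read back from the store, and write the updated state
    so the next iteration can read it."""
    for symbol, price in batch:
        if price < price_threshold:
            continue  # discard ticks below the predefined price
        history = state_store.get(symbol, [])  # "read from HBase"
        history.append(price)
        state_store[symbol] = history[-14:]    # "post to HBase", 14-point window

batch1 = [("IBM", 150.0), ("IBM", 20.0), ("ORCL", 40.0)]
process_batch(batch1, price_threshold=30.0)
print(state_store)  # {'IBM': [150.0], 'ORCL': [40.0]}
```

In a real job the dict would be replaced by HBase puts and gets, and `process_batch` would run inside the streaming framework's per-batch callback.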


Dr Mich Talebzadeh

LinkedIn:

*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

On 28 August 2016 at 04:32, Taotao.Li <> wrote:

> Hi there,
>        Did you see the tech talk from Bloomberg at this year's Spark
> Summit? The link is here:
> dataframes-and-dynamically-composable-analytics-the-bloomberg-spark-server/
>        In that talk, on page 2 of the slides, they say that their
> system would persist the SparkContext.
>        And here is my problem: how do we persist a SparkContext? If we
> just store it in RAM, how do we restore it when the server crashes and
> restarts?
> Here is the slide link:
> t/JenAman/spark-at-bloomberg-dynamically-composable-analytics
> --
> *___________________*
> Quant | Engineer | Boy
> *___________________*
> *blog*:
> *github*:
