asterixdb-dev mailing list archives

From: Torsten Bergh Moss <>
Subject: Re: Micro-batch semantics for UDFs
Date: Tue, 31 Mar 2020 00:03:35 GMT
Thanks Dmitry,

I tried it and now I get an error ASX1002: Type mismatch: function lib#getSentimentsInBatch
expects its 1st input parameter to be of type object, but the actual input type is array (in
line 3, at column 8) [TypeMismatchException]

I mean, to be fair, JList technically is an object. How would I go about defining the
argument type as an array in the library_descriptor.xml and retrieving a JRecord[] array
input in the UDF?
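
In case it helps to show what I'm picturing on the Java side, something along these lines is
what I'd hope to end up with (just a sketch, assuming the framework hands the array argument
over as the same JList type as in my snippet further down):

    JList inputList = (JList) functionHelper.getArgument(0);
    JRecord[] inputRecords = new JRecord[inputList.size()];
    for (int i = 0; i < inputList.size(); i++) {
        inputRecords[i] = (JRecord) inputList.getElement(i);
    }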

Best wishes,

From: Dmitry Lychagin <>
Sent: Tuesday, March 31, 2020 1:35 AM
Subject: Re: Micro-batch semantics for UDFs

Hi Torsten,

Can you try the following query:

SELECT lib#getSentimentsInBatch(ARRAY_AGG(t))
FROM Tweets t
GROUP BY nonExistentField

-- Dmitry

On 3/30/20, 1:15 PM, "Torsten Bergh Moss" <> wrote:


    Grab some coffee and prepare for a wall of text because this poor Norwegian student has
gotten lost somewhere far down the rabbit hole.

    Xikui and Michael, I tried my very best to familiarise myself with the PartitionHolder
code, and even read the introductory papers on Hyracks and Algebricks to complement your
paper, but I can't seem to pinpoint exactly where the UDF evaluator is invoked. I find both
intakeJobs and storageJobs referenced as Hyracks JobSpecifications in the code, but I assume
what I am interested in modifying is the computing job, in order to control the batch size
it pulls from the intake job as well as how it passes data to the UDF evaluator (singly vs.
in batches). As for modifying the UDF framework to take a batch of records, I believe,
depending on the nature of the batch, it might be sufficient to just modify the function
helper interface or its Java implementation to provide easy access to get the batches (as a
JRecord[] or a JList or something) and set their results after processing, along the lines
of the sketch below.
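
    To illustrate, this purely hypothetical extension of the function helper is roughly what
I picture (the names are invented, of course, and I'm assuming the existing IFunctionHelper
and JList types from the UDF framework):

    // Purely hypothetical, names invented for illustration:
    public interface IBatchFunctionHelper extends IFunctionHelper {
        // the whole micro-batch accumulated for one evaluator call
        JList getArgumentBatch(int index);
        // one result per input record, in the same order as the inputs
        void setResultBatch(JList results) throws Exception;
    }

    Whether the framework even has to be modified at all sort of bleeds into my second
question, which I hope is more straightforward: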

    Dmitry mentioned that it's possible to process already stored records in a batch fashion
by using a GROUP BY. This makes sense; I could, e.g., do something like

    SELECT * FROM Tweets GROUP BY nonExistentField;

    which would return all the tweets in the dataset in a list like

    { "$1": [ …tweets] }

    Then, to process this list, I assume I would have to do something like

    SELECT lib#getSentimentsInBatch(t)
    FROM (SELECT * FROM Tweets GROUP BY nonExistentField) AS t;

    right? I know it's awful to read code in emails, but bear with me here: in order to have
my UDF cope with this list input, I assume I'd have to do something along the lines of

    JList inputRecords = (JList) functionHelper.getArgument(0);
    for (int i = 0; i < inputRecords.size(); i++) {
        JRecord inputRecord = (JRecord) inputRecords.getElement(i);
        // ... process inputRecord like a tweet in lib#getSentimentSingular ...
    }
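
    And then, to hand the results back, I imagine the full evaluate method would end up
roughly like this (again just a sketch, following the getResultObject()/setResult() pattern
from the single-record examples, and assuming getResultObject() gives back a JList when the
declared return type is a list):

    // inside a class implementing IExternalScalarFunction
    @Override
    public void evaluate(IFunctionHelper functionHelper) throws Exception {
        JList inputRecords = (JList) functionHelper.getArgument(0);
        JList outputRecords = (JList) functionHelper.getResultObject();
        for (int i = 0; i < inputRecords.size(); i++) {
            JRecord inputRecord = (JRecord) inputRecords.getElement(i);
            // ... compute the sentiment and set it as a field on inputRecord ...
            outputRecords.add(inputRecord);
        }
        functionHelper.setResult(outputRecords);
    }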

    However, compiling this UDF, deploying it, and running it produces a type mismatch
error, as it's not finding the required closed field id in my TweetType. Any ideas what
might be going wrong? In the library_descriptor.xml I've tried defining both the argument
and return types as TweetType, TweetType[], and JList, but they all produce the same error.

    Also, bringing it back to the partition holders: wouldn't it be possible to use the UDF
I've tried to build for batch processing of stored records here on the ingested records as
well? This is why I raised doubts earlier about whether the UDF framework would have to be
modified at all. I would hope the compute job could be modified so that, instead of passing
records individually to the UDF evaluator, it bundles them together in, e.g., a JList and
passes that JList to the same UDF. That would make it possible to use a single UDF for batch
processing of both stored and streamed data, which would be nice as opposed to having to
create two separate UDFs for the two cases.

    Excited to hear your thoughts on this. Also hope everyone is safe from the virus over
there.

    Best wishes,

    From: Xikui Wang <>
    Sent: Thursday, March 12, 2020 11:59 PM
    Subject: Re: Micro-batch semantics for UDFs

    Hi Torsten,

    I've sent an invitation to your email address for the code repository. It's
    under the "xikui_idea" branch. Let me know if you have any issues accessing
    it. You can find all the classes you need by searching for
    "PartitionHolders". :)

    To fully achieve what you want to do, I think you will probably also need
    to customize the UDF framework in AsterixDB to enable a Java UDF to take a
    batch of records. That would require some additional work. Or you can just
    wrap your GPU driver function as a special operator in AsterixDB and
    connect that to the partition holders. That's just my two cents. You can
    play with the codebase to see which option works best for you.

    Sorry for the late reply. Things have been getting hectic recently with the
    Coronavirus situation. I hope everyone can stay safe and healthy during
    this time.


    On Thu, Mar 12, 2020 at 1:27 PM Torsten Bergh Moss <> wrote:

    > Xikui, how can I get started with reusing the PartitionHolder you used for
    > the ingestion project?
    > Best wishes,
    > Torsten
    > ________________________________________
    > From: Torsten Bergh Moss <>
    > Sent: Sunday, March 8, 2020 5:15 PM
    > To:
    > Subject: Re: Micro-batch semantics for UDFs
    > Thanks for the feedback, and sorry for the late response, I've been busy
    > with technical interviews.
    > Xikui, the ingestion framework described in sections 5 & 6 of your paper
    > sounds perfect for my project. I could have an intake job receiving a
    > stream of tweets and an insert job pulling batches of say 10k tweets from
    > the intake job, preprocess the batch, run it through the neural network on
    > the GPU to get the sentiments, then write the tweets with their sentiments
    > to a dataset. Unless there are any unforeseen bottlenecks I think I should
    > be able to achieve throughputs of up to 20k tweets per second with my
    > current setup.
    > Is the code related to your project available on a specific branch or in a
    > separate repo maybe?
    > Also, I believe a figure might be missing, given the line "The decoupled
    > ingestion framework is shown in Figure ??" early on page 8.
    > Best wishes,
    > Torsten
    > ________________________________________
    > From: Xikui Wang <>
    > Sent: Sunday, March 1, 2020 5:41 AM
    > To:
    > Subject: Re: Micro-batch semantics for UDFs
    > Hi Torsten,
    > In case you want to customize the UDF framework to trigger your UDF on a
    > batch of records, you could consider reusing the PartitionHolder that I built
    > for my enrichment-for-ingestion project. It takes a number of records,
    > processes them, and returns the processed results. I used them to
    > enable hash joins on feeds and refresh reference data per batch. That
    > might be helpful. You can find more information here [1].
    > [1]
    > Best,
    > Xikui
    > On Thu, Feb 27, 2020 at 2:35 PM Dmitry Lychagin
    > <> wrote:
    > > Torsten,
    > >
    > > I see a couple of possible approaches here:
    > >
    > > 1. Make your function operate on arrays of values instead of primitive
    > > values.
    > > You'll probably need to have a GROUP BY in your query to create an array
    > > (using ARRAY_AGG() or GROUP AS variable).
    > > Then pass that array to your function which would process it and would
    > > also return a result array.
    > > Then unnest that output array to get the cardinality back.
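    > >
    > > For example, with a placeholder batch function lib#processBatch (just a
    > > sketch, assuming it returns one result per input element, in order):
    > >
    > > SELECT VALUE r
    > > FROM (
    > >     SELECT lib#processBatch(ARRAY_AGG(t)) AS res
    > >     FROM Tweets t
    > >     GROUP BY nonExistentField
    > > ) AS g
    > > UNNEST g.res AS r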
    > >
    > > 2. Alternatively, you could try creating a new runtime for the ASSIGN
    > > operator that'd pass batches of input tuples to a new kind of function
    > > evaluator.
    > > You'll need to provide replacements for
    > > AssignPOperator/AssignRuntimeFactory.
    > > Also you'd need to modify InlineVariablesRule[1] so it doesn't inline
    > > those ASSIGNS.
    > >
    > > [1]
    > >
    > >
    > > Thanks,
    > > -- Dmitry
    > >
    > >
    > > On 2/27/20, 2:02 PM, "Torsten Bergh Moss" <>
    > > wrote:
    > >
    > >     Greetings everyone,
    > >
    > >
    > >     I'm experimenting a lot with UDFs utilizing neural network inference,
    > > mainly for classification of tweets. The problem is, running the UDFs in a
    > > one-at-a-time fashion severely under-exploits the capacity of GPU-powered
    > > NNs, and there is a certain latency associated with moving data from the
    > > CPU to the GPU and back every time the UDF is called, causing poor
    > > performance.
    > >
    > >
    > >     Ideally it would be possible to use the UDF to process records in a
    > > micro-batch fashion, letting them accumulate until a certain batch size is
    > > reached (as big as my GPU's memory can handle) before passing the data
    > > along to the neural network to get the outputs.
    > >
    > >
    > >     Is there a way to accomplish this with the current UDF framework
    > > (either in Java or Python)? If not, where would I have to start to develop
    > > such a feature?
    > >
    > >
    > >     Best wishes,
    > >
    > >     Torsten Bergh Moss
    > >
    > >
    > >
