pulsar-users mailing list archives

From "Apache Pulsar Slack" <apache.pulsar.sl...@gmail.com>
Subject Slack digest for #general - 2018-12-15
Date Sat, 15 Dec 2018 09:11:02 GMT
2018-12-14 09:50:00 UTC - Julien Plissonneau Duquène: hello, are there any plans/discussions/projects
for a JMS API for Pulsar?
----
2018-12-14 09:52:24 UTC - Julien Plissonneau Duquène: that would maybe allow us to replace
some HornetQs
----
2018-12-14 11:52:37 UTC - Sijie Guo: @Julien Plissonneau Duquène: currently I am not aware
of any discussions about that. If you have such requirements, feel free to raise a discussion
thread on the mailing list or create a GitHub issue for it.
----
2018-12-14 16:06:20 UTC - Julien Plissonneau Duquène: that's more like a wishlist item than
a requirement for now; our apps using JMS are not at all agile and are really critical, so they're
definitely not candidates for early adoption
----
2018-12-14 16:11:16 UTC - David Kjerrumgaard: Can you elaborate a bit more on what your requirements
might be? Do you need the entire JMS API supported or just a subset, e.g. Pub/Sub, Point-to-Point,
Others?
----
2018-12-14 16:29:13 UTC - Julien Plissonneau Duquène: I don't know much about these yet.
From what I heard the central part is some commercial J2EE "solution" that needs a JMS provider.
It uses queues, not topics, so it looks like point-to-point to me.
----
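For reference, the point-to-point flavor Julien describes maps onto the JMS Queue APIs. A minimal, provider-agnostic sketch (the JNDI name and queue name are illustrative; a Pulsar JMS provider would supply the ConnectionFactory):

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class JmsPointToPointSketch {
    public static void main(String[] args) throws Exception {
        // JNDI lookup is provider-specific; a Pulsar JMS provider would
        // register its own ConnectionFactory implementation here.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");

        Connection conn = factory.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Point-to-point: messages go to a queue, and each message is
        // delivered to exactly one of the queue's consumers.
        Queue queue = session.createQueue("orders");
        session.createProducer(queue).send(session.createTextMessage("hello"));

        TextMessage msg = (TextMessage) session.createConsumer(queue).receive(1000);
        System.out.println(msg != null ? msg.getText() : "no message within timeout");
        conn.close();
    }
}
```
----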
2018-12-14 16:30:59 UTC - Matteo Merli: I had started looking into it some time back and had
some primitive code lurking around some place. Supporting the basic stuff is not difficult
at all. Though it requires time to polish and add tests, etc...
----
2018-12-14 16:36:01 UTC - Julien Plissonneau Duquène: and just out of curiosity, anything
about AMQP?
----
2018-12-14 16:45:11 UTC - Matteo Merli: Nope, AMQP is really a wire protocol, with its own
model, rather than a pure API like JMS
----
2018-12-14 16:45:57 UTC - Matteo Merli: (Still technically doable, just it would be more work
to do it)
----
2018-12-15 00:24:05 UTC - Mike Card: Hey @David Kjerrumgaard @Matteo Merli I wanted to let
you know I found the problem, it was the custom serializer class I posted earlier in this
thread
----
2018-12-15 00:24:59 UTC - Mike Card: I suspect our inclusion of Pulsar 2.2.0 changed a jar
we depend on somewhere and broke how the old serializer was working, causing the buffer
underflow error when we were running full bore
----
2018-12-15 00:24:59 UTC - Matteo Merli: Oh, gotcha!
----
2018-12-15 00:25:15 UTC - Mike Card: I changed the custom serializer class to be string-based
like this
----
2018-12-15 00:25:39 UTC - Mike Card: (attached code snippet not preserved in the digest)
----
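Since the attached snippet didn't survive the digest, here is a sketch of what a string-based serializer along these lines might look like. Only the `UpdateRefSerializer.toByteBuffer` name comes from this thread; the `UpdateRef` fields and the delimiter are assumptions:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UpdateRefSerializer {
    // Hypothetical record shape; the real UpdateRef fields are not shown in the thread.
    public static class UpdateRef {
        String id;
        long timestamp;
        double value;
    }

    // Encode fields as a delimited UTF-8 string: simple and easy to verify,
    // but not performance-optimized (matching the "just make it work" intent).
    public static ByteBuffer toByteBuffer(UpdateRef ref) {
        String encoded = ref.id + '|' + ref.timestamp + '|' + ref.value;
        return ByteBuffer.wrap(encoded.getBytes(StandardCharsets.UTF_8));
    }

    public static UpdateRef fromByteBuffer(ByteBuffer buf) {
        String[] parts = StandardCharsets.UTF_8.decode(buf).toString().split("\\|");
        UpdateRef ref = new UpdateRef();
        ref.id = parts[0];
        ref.timestamp = Long.parseLong(parts[1]);
        ref.value = Double.parseDouble(parts[2]);
        return ref;
    }
}
```
----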
2018-12-15 00:26:13 UTC - Mike Card: Now this has obviously not been performance-optimized
in any way; I just tried to make something I was sure would work and verify that it was compatible
with Pulsar
+1 : David Kjerrumgaard
----
2018-12-15 00:26:32 UTC - Mike Card: Serializing our messages this way causes no problems
----
2018-12-15 00:27:11 UTC - Mike Card: I am seeing aggregate message consumption from our "input
topic" running at ~13 KHz which is very impressive!
----
2018-12-15 00:27:58 UTC - Matteo Merli: With batching and async publishes it should even get
much higher than that :slightly_smiling_face:
----
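A sketch of the kind of producer configuration Matteo is referring to (Pulsar Java client; the topic name and tuning values are illustrative):

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class BatchingProducerSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Batching groups many small messages into one broker request;
        // combined with sendAsync() it keeps the pipeline full instead of
        // paying one publish round-trip per message.
        Producer<byte[]> producer = client.newProducer()
                .topic("input-topic")
                .enableBatching(true)
                .batchingMaxPublishDelay(1, TimeUnit.MILLISECONDS)
                .batchingMaxMessages(1000)
                .maxPendingMessages(10000)
                .create();

        for (int i = 0; i < 100_000; i++) {
            producer.sendAsync(("msg-" + i).getBytes());
        }
        producer.flush(); // push out any partially filled batch
        producer.close();
        client.close();
    }
}
```
----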
2018-12-15 00:28:17 UTC - Mike Card: (5 partitions, 2 consumer tasks pulling from the topic
in parallel in the application)
----
2018-12-15 00:28:58 UTC - Mike Card: we have batching turned on
----
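For reference, a setup like Mike's, with multiple consumer tasks pulling from one partitioned topic in parallel, might look like this sketch (names are illustrative):

```java
import org.apache.pulsar.client.api.*;

public class ParallelConsumersSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Two consumers on the same Shared subscription divide the messages
        // of all partitions between them.
        for (int i = 0; i < 2; i++) {
            client.newConsumer()
                    .topic("input-topic")             // e.g. a 5-partition topic
                    .subscriptionName("processing-tasks")
                    .subscriptionType(SubscriptionType.Shared)
                    .messageListener((consumer, msg) -> {
                        try {
                            // process msg.getData() ...
                            consumer.acknowledge(msg);
                        } catch (PulsarClientException e) {
                            e.printStackTrace(); // unacked messages can be redelivered
                        }
                    })
                    .subscribe();
        }
    }
}
```
----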
2018-12-15 00:30:01 UTC - Mike Card: and I am using the async API, although my calls look like
sync API usage, to wit:

eventProducer.sendAsync(UpdateRefSerializer.toByteBuffer(ref).array()).thenAccept(msgId ->
{});
----
2018-12-15 00:31:26 UTC - Matteo Merli: That’s ok, that’s still async
----
2018-12-15 00:32:17 UTC - Matteo Merli: it’only if you use `send()` that messages will be
sent one by one, with no batching and with throughput determined by the publish latency
----
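A sketch of the contrast Matteo describes, using the Pulsar Java client:

```java
import java.util.concurrent.CompletableFuture;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClientException;

public class SendModes {
    // Synchronous: each call blocks until the broker acks, so messages go
    // out one by one and throughput is bounded by the publish latency.
    static MessageId sendSync(Producer<byte[]> producer, byte[] payload)
            throws PulsarClientException {
        return producer.send(payload);
    }

    // Asynchronous: returns immediately with a future; many sends can be in
    // flight at once, which lets the client fill batches and pipeline requests.
    static CompletableFuture<MessageId> sendAsync(Producer<byte[]> producer, byte[] payload) {
        return producer.sendAsync(payload)
                .whenComplete((msgId, ex) -> {
                    if (ex != null) {
                        ex.printStackTrace(); // publish failed
                    }
                });
    }
}
```
----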