mina-dev mailing list archives

From Emmanuel Lecharny <elecha...@apache.org>
Subject Re: [MINA 3.0] Thoughts about the selectors
Date Thu, 11 Feb 2010 07:37:06 GMT
I agree. However, I think the current distribution of selectors across the
IoProcessors was done on the same basis : random thoughts.

Starting with a single selector - as suggested in Ron Hitchens' book on NIO
- seems to me a good baseline from which we can extrapolate and try other
scenarios.

On Thu, Feb 11, 2010 at 7:52 AM, Julien Vermillard
<jvermillard@archean.fr> wrote:

> On Thu, 04 Feb 2010 13:53:50 +0100,
> Emmanuel Lecharny <elecharny@gmail.com> wrote:
>
> > I have reviewed the way we use the Selector in MINA 2.0. Here are
> > some of the thoughts I have about the way we use them for Sockets :
> >
> > We currently have a system built on top of three elements :
> > - IoAcceptor on the server side
> > - IoConnector on the client side
> > - IoProcessor which are processing the messages received or sent
> >
> > IoAcceptor and IoConnector are just two sides of the same coin : an
> > IoService. The only difference is that the Connector initiates the
> > communication.
> >
> > Nio Sockets
> > ----------------
> > In order to deal with incoming connections, the IoAcceptor uses a
> > Selector on which the ServerSocketChannels are registered for the
> > OP_ACCEPT event. On the client side, we have the same kind of
> > Selector, but the SocketChannel is registered for the OP_CONNECT event.
> >
> > In both cases, once the session is connected/accepted, the associated
> > Channel is attached to another Selector, itself associated with an
> > IoProcessor.
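> >
> > To make this concrete, here is a minimal, untested sketch of the
> > mechanism in plain NIO (processorSelector is illustrative, this is
> > not the actual MINA code) :
> >
> >   import java.net.InetSocketAddress;
> >   import java.nio.channels.*;
> >
> >   // Acceptor side : one Selector watching for OP_ACCEPT
> >   Selector acceptSelector = Selector.open();
> >   ServerSocketChannel server = ServerSocketChannel.open();
> >   server.configureBlocking(false);
> >   server.socket().bind(new InetSocketAddress(8080));
> >   server.register(acceptSelector, SelectionKey.OP_ACCEPT);
> >
> >   // Once a connection is accepted, the channel is handed over to a
> >   // *second* Selector, the one owned by an IoProcessor
> >   SocketChannel channel = server.accept();
> >   channel.configureBlocking(false);
> >   channel.register(processorSelector, SelectionKey.OP_READ);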
> >
> > Here, I'm questioning the fact that we use more than one Selector to
> > handle connect/accept and read/write operations. The select()
> > operation is not especially costly, even if it does a lot of things :
> > - deregister the cancelled channels
> > - each channel which has had some activity since the last select is
> > added to the set of selected keys
> > - deregister the cancelled channels again (for channels which have
> > been cancelled while step 2 was being processed)
> > - return the number of keys found ready in step 2
> >
> > but all in all, this is a fast operation, as it just reads some bit
> > fields to determine whether something has changed since the last
> > select. Even if we have one million keys registered in the selector,
> > first the number of active channels will be low, and second the
> > processing time for this step is minimal compared to the application
> > processing time.
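> >
> > For reference, the canonical select loop looks roughly like this
> > (a sketch only, assuming a selector already open with channels
> > registered) :
> >
> >   while (selector.select() > 0) {
> >       Iterator<SelectionKey> it = selector.selectedKeys().iterator();
> >       while (it.hasNext()) {
> >           SelectionKey key = it.next();
> >           // the selected-key set is never cleared by the Selector
> >           // itself, we have to remove the key ourselves
> >           it.remove();
> >           if (!key.isValid()) {
> >               continue;
> >           }
> >           if (key.isReadable()) { /* read */ }
> >           if (key.isWritable()) { /* write */ }
> >       }
> >   }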
> >
> > Now, wouldn't it be better to have only one selector, and then
> > dispatch the tasks to some processor?
> >
> > On the server side, we have to deal with :
> > - newly added sessions
> > - recently closed sessions
> > - incoming data
> > - outgoing data
> >
> > On the client side, we have to deal with :
> > - newly connected sessions
> > - recently closed sessions
> > - incoming data
> > - outgoing data
> >
> > Each of those tasks can be processed by a separate thread taken from
> > a thread pool. IMO, it may be better than the current architecture,
> > where we have a pool of IoProcessors, each of them having its own
> > Selector and no thread pool to process the events. For instance, if we
> > have 3 IoProcessors (the default value for a dual-core processor),
> > then we can only process 3 events in parallel. Pretty inefficient...
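> >
> > A rough sketch of that single-selector scheme (AcceptTask and
> > ReadTask are made-up names, nothing below exists in the code base) :
> >
> >   ExecutorService pool = Executors.newCachedThreadPool();
> >   Selector selector = Selector.open();
> >   // ... channels registered for OP_ACCEPT / OP_READ / OP_WRITE ...
> >
> >   for (;;) {
> >       selector.select();
> >       Iterator<SelectionKey> it = selector.selectedKeys().iterator();
> >       while (it.hasNext()) {
> >           SelectionKey key = it.next();
> >           it.remove();
> >           if (!key.isValid()) {
> >               continue;
> >           }
> >           if (key.isAcceptable()) {
> >               pool.execute(new AcceptTask(key));
> >           } else if (key.isReadable()) {
> >               // stop selecting OP_READ on this key while the task
> >               // runs, otherwise the selector fires again for the
> >               // same data
> >               key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
> >               pool.execute(new ReadTask(key));
> >           }
> >       }
> >   }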
> >
>
> If we can use only one selector, I would be pretty happy, because it
> would simplify a lot of code. But I won't accept those concepts without
> a bench :)
>
> What would be the most efficient : a single thread selecting and
> feeding a pool of threads in charge of doing the costly operations
> (read/write/accept), or a pool of threads each selecting and doing the
> costly operations themselves ?
>
> On paper, and without running into the NIO/concurrency oddities, I
> don't think I can answer this question.
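>
> For what it's worth, the second option is more or less what each
> IoProcessor already does : one thread owning a Selector and doing
> everything itself (sketch, not real code) :
>
>   class Worker implements Runnable {
>       private final Selector selector;
>
>       Worker() throws IOException {
>           selector = Selector.open();
>       }
>
>       public void run() {
>           for (;;) {
>               try {
>                   // this thread both selects and processes the keys,
>                   // so only as many events as workers run in parallel
>                   selector.select();
>                   // process selectedKeys() inline, in this thread
>               } catch (IOException e) {
>                   break;
>               }
>           }
>       }
>   }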
>
> --
> Julien Vermillard
>
> Archean Technologies
> http://www.archean.fr
>



-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
