mina-dev mailing list archives

From "Bjarke B. Blendstrup (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (DIRMINA-865) Mina Server and Client on same machine stop communicating on large amounts of data
Date Wed, 05 Oct 2011 13:38:35 GMT

     [ https://issues.apache.org/jira/browse/DIRMINA-865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bjarke B. Blendstrup updated DIRMINA-865:
-----------------------------------------


And the client:


import java.net.InetSocketAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.mina.core.future.ConnectFuture;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.SocketConnector;
import org.apache.mina.transport.socket.SocketSessionConfig;
import org.apache.mina.transport.socket.nio.NioSocketConnector;

public class MinaTestClient extends IoHandlerAdapter {
	private SocketConnector connector;

	private ExecutorService executor;

	private static final int PORT = 6667;

	public static void main(String[] args) throws Exception {
		new MinaTestClient();
		// Keep the main thread alive; all work happens on MINA's I/O threads.
		while (true) {
			Thread.sleep(1000);
		}
	}

	public MinaTestClient() {
		try {
			executor = Executors.newFixedThreadPool(2);
			connector = new NioSocketConnector(1);
			// connector.getFilterChain().addLast("codec",
			// new ProtocolCodecFilter(new TaidCodecFactory()));
			connector.getFilterChain().addLast("executor 1",
					new ExecutorFilter(executor));
			connector.setHandler(this);
			// Connect to the local server; the future is not awaited here,
			// the handler callbacks below report progress instead.
			ConnectFuture connectFuture = connector
					.connect(new InetSocketAddress("localhost", PORT));
		} catch (Exception ex) {
			System.out.println("Whoops: " + ex);
		}
	}

	@Override
	public void sessionCreated(IoSession ses) throws Exception {
		System.out.println("sessionCreated");
		// Shrink the socket receive buffer to 1 KiB.
		((SocketSessionConfig) ses.getConfig()).setReceiveBufferSize(1024);
	}

	@Override
	public void sessionOpened(IoSession session) throws Exception {
		System.out.println("sessionOpened");
	}

	@Override
	public void sessionClosed(IoSession session) throws Exception {
		System.out.println("sessionClosed");
	}

	@Override
	public void messageReceived(IoSession session, Object message)
			throws Exception {
		System.out.println("Received : " + message);
	}
}

                
> Mina Server and Client on same machine stop communicating on large amounts of data
> ----------------------------------------------------------------------------------
>
>                 Key: DIRMINA-865
>                 URL: https://issues.apache.org/jira/browse/DIRMINA-865
>             Project: MINA
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 2.0.4
>         Environment: Tested on Fedora 15 and Windows 7.
>            Reporter: Bjarke B. Blendstrup
>
> I have two small test programs, both using MINA. The server accepts connections on a port.
> When clients connect, the server sends a large number of small messages (approx. 1000 per second).
> The client simply echoes the received messages to stdout. The server prints getScheduledWriteMessages()
> for each client session approx. once every 10 seconds.
> When the simple version of the setup runs, the client always keeps up with the server
> - no problem. BUT if I introduce a small burst of messages - 50 messages every second, sent
> without delays, on top of the 1000 per second - something weird happens: the number of scheduled
> write messages starts to increase AND NEVER decreases again.
> The "funny" thing is that this ONLY happens when the client and server run on the same
> machine (both Linux and Windows). If they communicate across a network, this *never* happens.
> This is the output from my server with two clients connected. One client is on the same
> machine as the server, the other on another machine on the LAN.
> Wed Oct 05 15:16:56 CEST 2011: 0/0
> Wed Oct 05 15:16:56 CEST 2011: 1428/28
> Wed Oct 05 15:16:58 CEST 2011: 2550/50
> Wed Oct 05 15:16:58 CEST 2011: 1683/33
> Wed Oct 05 15:16:59 CEST 2011: 918/18
> Wed Oct 05 15:16:59 CEST 2011: 2346/46
> Wed Oct 05 15:17:00 CEST 2011: 1377/27
> Wed Oct 05 15:17:00 CEST 2011: 2193/43
> Wed Oct 05 15:17:01 CEST 2011: 2550/50
> Wed Oct 05 15:17:01 CEST 2011: 2550/50
> Wed Oct 05 15:17:03 CEST 2011: 2550/50
> Wed Oct 05 15:17:03 CEST 2011: 2550/50
> Wed Oct 05 15:17:04 CEST 2011: 2550/50
> Wed Oct 05 15:17:04 CEST 2011: 1989/39
> Wed Oct 05 15:17:05 CEST 2011: 2550/50
> Wed Oct 05 15:17:05 CEST 2011: 1683/33
> Wed Oct 05 15:17:06 CEST 2011: 2550/50
> Wed Oct 05 15:17:06 CEST 2011: 2550/50
> Wed Oct 05 15:17:07 CEST 2011: 2550/50
> Wed Oct 05 15:17:07 CEST 2011: 2550/50
> Wed Oct 05 15:17:09 CEST 2011: 0/0
> Wed Oct 05 15:17:09 CEST 2011: 2550/50
> Wed Oct 05 15:17:10 CEST 2011: 2550/50
> Wed Oct 05 15:17:10 CEST 2011: 2550/50
> Wed Oct 05 15:17:11 CEST 2011: 1836/36
> Wed Oct 05 15:17:11 CEST 2011: 2193/44
> Wed Oct 05 15:17:12 CEST 2011: 1785/35
> Wed Oct 05 15:17:12 CEST 2011: 4641/91
> Wed Oct 05 15:17:14 CEST 2011: 2499/49
> Wed Oct 05 15:17:14 CEST 2011: 6681/131
> Wed Oct 05 15:17:15 CEST 2011: 1989/39
> Wed Oct 05 15:17:15 CEST 2011: 56326/1105
> Wed Oct 05 15:17:16 CEST 2011: 1530/30
> Wed Oct 05 15:17:16 CEST 2011: 93441/1833
> Wed Oct 05 15:17:17 CEST 2011: 1734/34
> Wed Oct 05 15:17:17 CEST 2011: 130556/2560
> Wed Oct 05 15:17:18 CEST 2011: 1785/35
> Wed Oct 05 15:17:18 CEST 2011: 184055/3609
> Wed Oct 05 15:17:20 CEST 2011: 1734/34
> Wed Oct 05 15:17:20 CEST 2011: 221170/4337
> Wed Oct 05 15:17:21 CEST 2011: 1836/36
> Wed Oct 05 15:17:21 CEST 2011: 258285/5065
> Wed Oct 05 15:17:22 CEST 2011: 1734/34
> Wed Oct 05 15:17:22 CEST 2011: 311784/6114
> Wed Oct 05 15:17:23 CEST 2011: 2550/50
> Wed Oct 05 15:17:23 CEST 2011: 348899/6842
> Wed Oct 05 15:17:25 CEST 2011: 2550/50
> Wed Oct 05 15:17:25 CEST 2011: 386014/7569
> Wed Oct 05 15:17:26 CEST 2011: 1734/34
> Wed Oct 05 15:17:26 CEST 2011: 439513/8618
> Wed Oct 05 15:17:27 CEST 2011: 1581/28
> Wed Oct 05 15:17:27 CEST 2011: 476628/9346
> Wed Oct 05 15:17:28 CEST 2011: 1887/36
> Wed Oct 05 15:17:28 CEST 2011: 513743/10074
> Wed Oct 05 15:17:29 CEST 2011: 2550/50
> Wed Oct 05 15:17:29 CEST 2011: 567242/11123
> Wed Oct 05 15:17:31 CEST 2011: 2550/50
> Wed Oct 05 15:17:31 CEST 2011: 604357/11851
> Wed Oct 05 15:17:32 CEST 2011: 2550/50
> Wed Oct 05 15:17:32 CEST 2011: 641472/12578
> Wed Oct 05 15:17:33 CEST 2011: 1836/36
> Wed Oct 05 15:17:33 CEST 2011: 692733/13583
> Wed Oct 05 15:17:34 CEST 2011: 1734/34
> Wed Oct 05 15:17:34 CEST 2011: 732086/14355
> Wed Oct 05 15:17:36 CEST 2011: 2040/40
> Wed Oct 05 15:17:36 CEST 2011: 769201/15083
> Wed Oct 05 15:17:37 CEST 2011: 1734/34
> Wed Oct 05 15:17:37 CEST 2011: 817122/16022
> Wed Oct 05 15:17:38 CEST 2011: 1836/36
> Wed Oct 05 15:17:38 CEST 2011: 859815/16860
> Wed Oct 05 15:17:39 CEST 2011: 2499/49
> Wed Oct 05 15:17:39 CEST 2011: 896930/17587
> Wed Oct 05 15:17:41 CEST 2011: 1173/23
> Wed Oct 05 15:17:41 CEST 2011: 943449/18499
> Wed Oct 05 15:17:42 CEST 2011: 2550/50
> Wed Oct 05 15:17:42 CEST 2011: 987544/19364
> Wed Oct 05 15:17:43 CEST 2011: 1632/32
> Wed Oct 05 15:17:43 CEST 2011: 1024659/20092
> Wed Oct 05 15:17:44 CEST 2011: 2550/50
> Wed Oct 05 15:17:44 CEST 2011: 1068756/20956
> Wed Oct 05 15:17:45 CEST 2011: 1938/38
> Wed Oct 05 15:17:45 CEST 2011: 1115273/21869
> Wed Oct 05 15:17:47 CEST 2011: 2550/50
> Wed Oct 05 15:17:47 CEST 2011: 1152388/22596
> Wed Oct 05 15:17:48 CEST 2011: 969/19
> Wed Oct 05 15:17:48 CEST 2011: 1194267/23417
> Wed Oct 05 15:17:49 CEST 2011: 1785/35
> Wed Oct 05 15:17:49 CEST 2011: 1243002/24373
> Wed Oct 05 15:17:50 CEST 2011: 1530/30
> Wed Oct 05 15:17:50 CEST 2011: 1280117/25101
> Wed Oct 05 15:17:52 CEST 2011: 2550/50
> Wed Oct 05 15:17:52 CEST 2011: 1319319/25869
> Wed Oct 05 15:17:53 CEST 2011: 1632/32
> Wed Oct 05 15:17:53 CEST 2011: 1370731/26878
> Wed Oct 05 15:17:54 CEST 2011: 2550/50
> You can see here that the number of queued messages starts increasing and, no matter how long
> the test runs, it never decreases again.
> Another fun fact: if I use telnet as a client, this never happens either, no matter
> whether telnet runs on the same machine or not.
> This may seem like a very contrived test case, but it is a very good simulation of the behaviour
> of the system I am trying to build. I hope you can see from the code whether the fault is
> mine or not!
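The pathology the reporter describes, a producer enqueueing writes faster than the loopback peer drains them, can be modelled with the standard library alone. The following is a hypothetical, stand-alone sketch of an unbounded write queue; it does not use or reproduce MINA's internals, and the tick rates and 51-byte message size are only illustrative:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal model of an unbounded write queue: when the producer enqueues
// more "messages" per tick than the consumer drains, the queue depth
// (analogous to getScheduledWriteMessages()) grows without bound.
public class WriteQueueModel {
	// Returns the queue depth after `ticks` rounds in which `produced`
	// messages are enqueued and at most `drained` are dequeued.
	static int simulate(int ticks, int produced, int drained) {
		Queue<byte[]> queue = new ArrayDeque<>();
		for (int t = 0; t < ticks; t++) {
			for (int i = 0; i < produced; i++) queue.add(new byte[51]);
			for (int i = 0; i < drained && !queue.isEmpty(); i++) queue.poll();
		}
		return queue.size();
	}

	public static void main(String[] args) {
		// Producer bursts 50 msgs/tick, consumer drains only 30/tick:
		// net +20 per tick, so the backlog never shrinks.
		System.out.println("slow consumer: " + simulate(10, 50, 30)); // 200
		// When the consumer keeps up, the queue stays empty.
		System.out.println("fast consumer: " + simulate(10, 50, 50)); // 0
	}
}
```

Because the model's queue is unbounded, nothing pushes back on the producer; MINA's per-session write queue is likewise unbounded by default, so an application that must not fall behind typically has to throttle itself, for example by checking getScheduledWriteMessages() before writing more.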

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
