incubator-s4-user mailing list archives

From Dingyu Yang <yangdin...@gmail.com>
Subject Re: GC overhead limit exceeded
Date Thu, 18 Oct 2012 08:06:53 GMT
Thanks, Matthieu.
I think the adapter program can read the data and send it out to the app
cluster.
My adapter app is similar to the twitter count example and has a queue to
control the speed of sending.
That may be why the JVM runs out of memory. I will check it.
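For reference, a bounded queue for throttling the sending rate might look like the following. This is a minimal sketch, not the actual adapter code; the class and method names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of a rate-limiting adapter queue; names are illustrative,
// not the actual adapter code. A *bounded* queue gives backpressure: when
// the downstream sender is slow, put() blocks instead of letting events
// accumulate on the heap until the GC gives up.
class ThrottledQueue {
    private final BlockingQueue<String> queue;

    ThrottledQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<String>(capacity);
    }

    // Called by the reader thread; blocks while the queue is full.
    void enqueue(String event) throws InterruptedException {
        queue.put(event);
    }

    // Called by the sender (dequeuer) thread; blocks while the queue is empty.
    String dequeue() throws InterruptedException {
        return queue.take();
    }

    int size() {
        return queue.size();
    }
}
```

If the queue is unbounded (a plain LinkedList, or a LinkedBlockingQueue created without a capacity), a reader that outpaces the sender can fill the heap, which is consistent with the "GC overhead limit exceeded" error in the trace below.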

Dingyu Yang


2012/10/18 Matthieu Morel <mmorel@apache.org>

> On 10/18/12 9:27 AM, Dingyu Yang wrote:
>
>> I don't know how and where to define the partitions and clusters.
>> Every time, I need to start zkServer and create the clusters like this:
>> ./s4 newCluster -c=testCluster1 -nbTasks=5 -flp=12000 -zk=myMaster
>>
>
> --> testCluster1 is the name of the cluster and nbTasks is the number of
> partitions.
>
>
>    ./s4 newCluster -c=testCluster2 -nbTasks=2 -flp=13000 -zk=myMaster
>> then start all the nodes (seven nodes):
>> ./s4 node -c=testCluster1 -zk=myMaster
>> ./s4 node -c=testCluster1 -zk=myMaster
>> ...
>> ./s4 node -c=testCluster2 -p=s4.adapter.output.stream=datarow
>> -zk=myMaster
>> ..
>> And deploy the S4r programs:
>> ./s4 deploy -s4r=../build/libs/app.s4r -c=testCluster1 -appName=app
>>   -zk=myMaster
>> ...
>>
>> Then, when I modify the program and want to update the app,
>> I have to repeat all the previous steps.
>>
>
> Yes, and I suggest scripting the operations. Note that we are also working
> on integrating with YARN (the new Hadoop resource manager), in order to ease
> deployment (provided you run a YARN cluster).
>
> Matthieu
>
>
>
>
>  So, how can I define the partitions?
>>
>>
>> 2012/10/18 Matthieu Morel <mmorel@apache.org>
>>
>>
>>
>>     On Thu, Oct 18, 2012 at 8:16 AM, 杨定裕 <yangdingyu@gmail.com> wrote:
>>
>>         Hi all,
>>         When my adapter sends large data to the app, the adapter app
>>         raises an error like this:
>>         Maybe the memory is limited and the downstream processing is slow?
>>
>>
>>     You'd have to be more specific about your app. What is the
>>     approximate size of your messages? What is the available memory in
>>     the JVM? How many messages are you creating per second and per
>>     adapter node?
>>     Note that the culprit may not be the serialization but could also be
>>     intermediate objects created by Netty, the comm layer library. It
>>     would be useful to get more feedback on your issue.
>>
>>
>>         How can I configure this in a real cluster?
>>         I found that it is complex to create the topology across 20 nodes.
>>
>>
>>     I'm not sure what complexity you are referring to. To use 20 nodes,
>>     you just need to define a logical cluster with 20 partitions, and
>>     start 20 S4 nodes that point to that cluster configuration
>>     (Zookeeper ensemble + cluster name).
>>
>>
>>         And how can I remove an application from a cluster?
>>
>>
>>     Currently you simply clean up zookeeper and kill the S4 nodes. (We
>>     plan to add a more convenient way, like a command you issue to
>>     Zookeeper).
>>
>>
>>         I saw that S4 0.3 has a configuration file "clusters.xml", but
>>         S4 0.5 does not.
>>
>>
>>     This is not needed. In S4 0.5, you define a minimal number of
>>     parameters for the cluster (number of partitions, name) and you start
>>     S4 nodes independently.
>>
>>     Matthieu
>>
>>
>>         -----error-------------
>>         Oct 18, 2012 1:50:04 PM
>>         org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
>>         WARNING: Failed to accept a connection.
>>         java.lang.OutOfMemoryError: GC overhead limit exceeded
>>                  at java.util.HashMap.newKeyIterator(HashMap.java:840)
>>                  at java.util.HashMap$KeySet.iterator(HashMap.java:874)
>>                  at java.util.HashSet.iterator(HashSet.java:153)
>>                  at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)
>>                  at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:69)
>>                  at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
>>                  at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
>>                  at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:240)
>>                  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>                  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>                  at java.lang.Thread.run(Thread.java:619)
>>         Exception in thread "Thread-7" java.lang.OutOfMemoryError: GC
>>         overhead limit exceeded
>>                  at java.lang.reflect.Array.get(Native Method)
>>                  at com.esotericsoftware.kryo.serialize.ArraySerializer.writeArray(ArraySerializer.java:110)
>>                  at com.esotericsoftware.kryo.serialize.ArraySerializer.writeObjectData(ArraySerializer.java:88)
>>                  at com.esotericsoftware.kryo.Serializer.writeObject(Serializer.java:43)
>>                  at com.esotericsoftware.kryo.serialize.FieldSerializer.writeObjectData(FieldSerializer.java:182)
>>                  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:489)
>>                  at com.esotericsoftware.kryo.ObjectBuffer.writeClassAndObject(ObjectBuffer.java:230)
>>                  at org.apache.s4.comm.serialize.KryoSerDeser.serialize(KryoSerDeser.java:90)
>>                  at org.apache.s4.comm.tcp.TCPEmitter.send(TCPEmitter.java:178)
>>                  at org.apache.s4.core.RemoteSender.send(RemoteSender.java:44)
>>                  at org.apache.s4.core.RemoteSenders.send(RemoteSenders.java:81)
>>                  at org.apache.s4.core.RemoteStream.put(RemoteStream.java:74)
>>                  at OLAAdapter.Adapter$Dequeuer.run(Adapter.java:64)
>>                  at java.lang.Thread.run(Thread.java:619)
>>
>>
>>
>>
>
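Following the suggestion to script the operations, the setup and deploy steps from this thread could be collected into one shell script. This is a sketch with the commands taken from the thread above; it defaults to a dry run that just prints each command (point S4 at the real launcher from the S4 0.5 distribution to actually execute them):

```shell
#!/bin/sh
# Sketch of scripting the cluster setup/deploy steps from this thread.
# Dry run by default: prints each command. Set S4=./s4 to actually execute.
S4=${S4:-"echo ./s4"}
ZK=${ZK:-myMaster}

# Define the logical clusters: name, number of partitions, first listening port.
$S4 newCluster -c=testCluster1 -nbTasks=5 -flp=12000 -zk=$ZK
$S4 newCluster -c=testCluster2 -nbTasks=2 -flp=13000 -zk=$ZK

# Start one node per partition of the app cluster
# (when running for real, background each node with &).
i=0
while [ $i -lt 5 ]; do
  $S4 node -c=testCluster1 -zk=$ZK
  i=$((i+1))
done

# Start the adapter nodes.
$S4 node -c=testCluster2 -p=s4.adapter.output.stream=datarow -zk=$ZK
$S4 node -c=testCluster2 -p=s4.adapter.output.stream=datarow -zk=$ZK

# Deploy the packaged app to the app cluster.
$S4 deploy -s4r=../build/libs/app.s4r -c=testCluster1 -appName=app -zk=$ZK
```

Rerunning the script after rebuilding the s4r is then a single command instead of repeating every step by hand.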
