storm-user mailing list archives

From Alexander T <mittspamko...@gmail.com>
Subject Re: GC overhead limit exceeded
Date Thu, 14 Apr 2016 07:08:08 GMT
GC overhead limit errors are typical when you are slowly filling up RAM.
When available memory runs low, the JVM eventually spends almost all of
its time garbage collecting, and this error is raised.

Most likely you are slowly filling up all available memory with your maps.
This can be hard to spot in VisualVM given the short timeframe you
sampled. Try to observe it over a longer period and see what happens.

What you need to do is make sure that you are not just growing your maps
indefinitely. How you do that is implementation dependent. If you have a
large but fixed number of items to put in the map, you can increase
memory so that you will never use it up. Otherwise you will have to set a
max size on your map or otherwise avoid tracking all items forever. Maybe
use a cache instead of a plain hashmap?
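
For example, a minimal sketch of such a size-bounded cache using
java.util.LinkedHashMap (the 10000-entry limit is an assumption; tune it
to your data):

import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache: once MAX_ENTRIES is reached, the
// least-recently-used entry is evicted on each insert, so the map
// can never grow without bound.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 10000; // assumption: tune to your workload

    public BoundedCache() {
        // accessOrder = true gives least-recently-used eviction order
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}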

Cheers
Alex
On Apr 14, 2016 1:05 AM, "sam mohel" <sammohel5@gmail.com> wrote:

> thanks, i tried it but the error is still there. i think you are right,
> and Andrey too, about the hashmaps, because i changed the code by adding
> more calculations that use hashmaps.
>
> i ran the code without any changes on my machine and it worked well,
> without any GC problem, but after i changed it by adding more hashmaps
> this error appeared, i mean GC overhead.
> so the best solution, as you told me before, is to remove entries, but i
> couldn't find out how or where i can remove them. what do you mean by
> removing?
>
> like
> h.remove(); ?
>
> and where should i remove it? the code uses trident
>
> On Wed, Apr 13, 2016 at 4:22 PM, Spico Florin <spicoflorin@gmail.com>
> wrote:
>
>> Hi!
>>   To fix this issue you have to exclude storm-core from your generated
>> fat jar. As far as I understood you use Maven, so you have to set the
>> scope of the storm-core dependency to provided, so that at runtime the
>> storm-core jar supplied by the cluster is used instead of a bundled copy:
>> <dependency>
>> <groupId>org.apache.storm</groupId>
>> <artifactId>storm-core</artifactId>
>> <version>0.10.0</version>
>> <scope>provided</scope>
>> </dependency>
>>
>> Please check.
>> Regards,
>>  Florin
>>
>> On Wed, Apr 13, 2016 at 5:17 PM, sam mohel <sammohel5@gmail.com> wrote:
>>
>>> 1- i set debug to true in the code. i'm using trident, and as i ran the
>>> code in local mode using the maven command i didn't get anything.
>>> for the second point, it didn't work:
>>>
>>> Caused by: java.lang.RuntimeException: Found multiple defaults.yaml
>>> resources. You're probably bundling the Storm jars with your topology jar.
>>> [jar:file:/usr/local/storm/lib/storm-core-0.9.6.jar!/defaults.yaml,
>>> jar:file:/home/user/.m2/repository/org/apache/storm/storm-core/0.9.6/storm-core-0.9.6.jar!/defaults.yaml]
>>>
>>> 2- i tried to run the topology in production mode and got
>>> Exception in thread "main" DRPCExecutionException(msg:Request timed out)
>>>
>>> should i increase
>>>  drpc.request.timeout.secs: 600
>>>
>>> the file that the code uses to get results contains 50,000 tweets,
>>> so should i increase the request timeout for drpc?
>>>
>>>
>>> On Tue, Apr 12, 2016 at 2:22 PM, Spico Florin <spicoflorin@gmail.com>
>>> wrote:
>>>
>>>> Hi!
>>>>  0. I agree with Andrey: you have put a large map (dictionary) on the
>>>> heap and you keep loading it without ever removing anything from it.
>>>>   1. For running the topology in a LocalCluster you can use code like
>>>> this in your main class (the one that runs the topology and has the
>>>> public static void main method):
>>>> TopologyBuilder builder = ...; // your topology
>>>> Config conf = new Config();
>>>>
>>>> conf.setDebug(true);
>>>>
>>>> if (args.length == 0) {
>>>>     LocalCluster cluster = new LocalCluster();
>>>>     cluster.submitTopology("Hello-World", conf,
>>>>         builder.createTopology());
>>>> }
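>>>> A local run is usually also given some time to process and then shut
>>>> down explicitly. A sketch (the 60-second window is an arbitrary
>>>> assumption):
>>>> // import backtype.storm.utils.Utils;
>>>> Utils.sleep(60000);  // let the topology process for a while
>>>> cluster.shutdown();  // stop the local cluster cleanly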
>>>> 2. If you have an environment like Eclipse, you can run your code with
>>>> -Xmx2048m set for your topology's main class (the one with the public
>>>> static void main method) like this:
>>>>   a) Run your Java main class as a Java application (this will create
>>>> a launch configuration with the name of your class)
>>>>   b) Go to Run Configurations -> go to the launch configuration
>>>> with the name of your class
>>>>   c) Go to the VM arguments tab -> add the -Xmx2048m flag
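>>>> (Outside an IDE the same flag can be passed on the command line; a
>>>> sketch with placeholder jar and class names:
>>>> java -Xmx2048m -cp target/your-topology.jar com.example.MainClass )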
>>>>
>>>> 3. If you run your topology on the cluster, then to see how much
>>>> memory you have allocated to your workers:
>>>>   a) go to the Storm UI ( http://<your_storm_UI_IP>:8080/ )
>>>>   b) check the worker.childopts setting in the Nimbus configuration
>>>>
>>>> I hope that these help.
>>>>  Regards,
>>>>   Florin
>>>>
>>>> On Mon, Apr 11, 2016 at 5:38 PM, Andrey Dudin <doodin201@gmail.com>
>>>> wrote:
>>>>
>>>>> *sorry again, how can i know -Xmx and -Xms in my JVM?*
>>>>> If you are using Linux, you can use this command: ps -ef | grep java.
>>>>> Then find your topology in the process list. Or add
>>>>> *-XX:+PrintCommandLineFlags* to worker.childopts.
>>>>>
>>>>> Please add these params to worker.childopts: -XX:+PrintGCTimeStamps
>>>>> -XX:+PrintGCDetails -Xloggc:gc%ID%.log
>>>>> to dump GC activity to a log file.
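>>>>>
>>>>> In storm.yaml that would look something like this (a sketch; the heap
>>>>> size and log path are assumptions, and Storm substitutes %ID% with the
>>>>> worker port):
>>>>> worker.childopts: "-Xmx2048m -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:/var/log/storm/gc-%ID%.log"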
>>>>>
>>>>>
>>>>> *how can the memory be inactive without being a leak, and do i need
>>>>> extra memory?*
>>>>> Objects occupy the heap while they are live. Without the source code it
>>>>> is complicated to say what is wrong. Just try to add memory and look at
>>>>> the GC monitor.
>>>>>
>>>>>
>>>>> May be useful:
>>>>> "To begin, run the application with more memory than it actually
>>>>> needs. If you do not know up front how much memory your application
>>>>> will take, you can run it without specifying -Xmx and -Xms and let the
>>>>> HotSpot VM select the heap size. If you then get OutOfMemory (Java
>>>>> heap space or PermGen space) at startup, iteratively increase the
>>>>> available memory (-Xmx or -XX:PermSize) until the error goes away.
>>>>> The next step is to measure the size of the long-lived live data: the
>>>>> size of the old and permanent areas of the heap after a full garbage
>>>>> collection. This is the approximate amount of memory required for the
>>>>> application to function; to obtain it, look at the size of those areas
>>>>> after a series of full collections. Usually the memory required for
>>>>> the application (-Xms and -Xmx) is 3-4 times the amount of live data."
>>>>> For example, if the old generation stabilizes around 500 MB after full
>>>>> collections, a heap of roughly 1.5-2 GB is a reasonable starting point.
>>>>>
>>>>>
>>>>> 2016-04-11 16:53 GMT+03:00 sam mohel <sammohel5@gmail.com>:
>>>>>
>>>>>> @Florin
>>>>>> thanks for replying. really, i am using 3 hashmaps in my code.
>>>>>> please, how can i debug the code in local mode?
>>>>>> after the error appeared VisualVM closed my application. should i run
>>>>>> it again to see what i get in the profiler tab? i saved what i got in
>>>>>> a heap dump; should i use it or get something from it?
>>>>>>
>>>>>> sorry again, how can i know -Xmx and -Xms in my JVM?
>>>>>>
>>>>>> Thanks a lot
>>>>>>
>>>>>> @Andrey
>>>>>> Thanks for replying. i have a question about memory; it's the first
>>>>>> time i have dealt with this problem:
>>>>>> how can the memory be inactive without being a leak, and do i need
>>>>>> extra memory?
>>>>>> how can i change the GC trigger?
>>>>>>
>>>>>> Thanks a lot. Really, thanks for helping
>>>>>>
>>>>>>
>>>>>> On Mon, Apr 11, 2016 at 2:18 PM, Andrey Dudin <doodin201@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> There is no memory leak. Your memory graph shows that the memory is
>>>>>>> not being actively used. Most likely you keep a big object/map/etc.
>>>>>>> in memory. The GC does not stop working because the level of free
>>>>>>> memory is low. You need to add extra memory or change the GC trigger.
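>>>>>>> (As an example of changing the GC trigger — my reading of what that
>>>>>>> means here — with the CMS collector you can make collections start
>>>>>>> earlier by adding, e.g.:
>>>>>>> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60
>>>>>>> -XX:+UseCMSInitiatingOccupancyOnly
>>>>>>> to worker.childopts; the threshold value is an assumption.)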
>>>>>>>
>>>>>>> 2016-04-11 7:31 GMT+03:00 sam mohel <sammohel5@gmail.com>:
>>>>>>>
>>>>>>>>
>>>>>>>> @spico here is what i got after running the code again in local
>>>>>>>> mode. how can i know if there is a memory leak or not?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Apr 8, 2016 at 1:45 AM, sam mohel <sammohel5@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> sorry, i mean it's supposed to use hashmap.remove() so that it
>>>>>>>>> does not reach the heap size, right?
>>>>>>>>>
>>>>>>>>> On Fri, Apr 8, 2016 at 1:43 AM, sam mohel <sammohel5@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Really, thanks for your patience. what i got about the hashmap
>>>>>>>>>> you mentioned is that it's supposed not to use hashmap.remove();
>>>>>>>>>> Right?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Thu, Apr 7, 2016 at 10:45 AM, Spico Florin <
>>>>>>>>>> spicoflorin@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi!
>>>>>>>>>>>   By release the hashmap, I mean that you need to remove the
>>>>>>>>>>> keys at some point, i.e. hashMap.remove(key). If you just call
>>>>>>>>>>> hashMap.put() in the nextTuple method of the spout or in the
>>>>>>>>>>> execute method of the bolt, never call hashMap.remove(), and
>>>>>>>>>>> your hashMap is a field of the Bolt or Spout class, then your
>>>>>>>>>>> map will grow until you reach your heap size.
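>>>>>>>>>>> As an illustration only (a sketch, not your actual topology; the
>>>>>>>>>>> class, threshold, and field names are made up), a bolt that both
>>>>>>>>>>> puts and removes, so the map stays bounded:
>>>>>>>>>>>
>>>>>>>>>>> import java.util.HashMap;
>>>>>>>>>>> import java.util.Map;
>>>>>>>>>>> import backtype.storm.topology.BasicOutputCollector;
>>>>>>>>>>> import backtype.storm.topology.OutputFieldsDeclarer;
>>>>>>>>>>> import backtype.storm.topology.base.BaseBasicBolt;
>>>>>>>>>>> import backtype.storm.tuple.Fields;
>>>>>>>>>>> import backtype.storm.tuple.Tuple;
>>>>>>>>>>> import backtype.storm.tuple.Values;
>>>>>>>>>>>
>>>>>>>>>>> public class CountingBolt extends BaseBasicBolt {
>>>>>>>>>>>     // field on the bolt: this grows forever unless entries are removed
>>>>>>>>>>>     private final Map<String, Integer> counts = new HashMap<String, Integer>();
>>>>>>>>>>>
>>>>>>>>>>>     @Override
>>>>>>>>>>>     public void execute(Tuple tuple, BasicOutputCollector collector) {
>>>>>>>>>>>         String key = tuple.getString(0);
>>>>>>>>>>>         Integer count = counts.get(key);
>>>>>>>>>>>         count = (count == null) ? 1 : count + 1;
>>>>>>>>>>>         if (count >= 100) { // assumed threshold: emit, then evict
>>>>>>>>>>>             collector.emit(new Values(key, count));
>>>>>>>>>>>             counts.remove(key); // the remove() that keeps the map bounded
>>>>>>>>>>>         } else {
>>>>>>>>>>>             counts.put(key, count);
>>>>>>>>>>>         }
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     @Override
>>>>>>>>>>>     public void declareOutputFields(OutputFieldsDeclarer declarer) {
>>>>>>>>>>>         declarer.declare(new Fields("key", "count"));
>>>>>>>>>>>     }
>>>>>>>>>>> }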
>>>>>>>>>>>  The issue that you have with jvisualvm is that you have
>>>>>>>>>>> installed only the Java Runtime Environment (only the Java VM)
>>>>>>>>>>> but not the JDK (Java Development Kit). Please install the JDK.
>>>>>>>>>>> After installing, look at the hashmap classes and check their
>>>>>>>>>>> memory size. Run a GC and check whether the memory size for them
>>>>>>>>>>> still grows. If they grow even after GC, then you could have a
>>>>>>>>>>> memory leak.
>>>>>>>>>>>
>>>>>>>>>>> I hope that it helps.
>>>>>>>>>>>  Florin
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Apr 6, 2016 at 8:49 AM, sam mohel <sammohel5@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> @florin
>>>>>>>>>>>> i used this command: java -XX:+PrintFlagsFinal -version | grep HeapSize
>>>>>>>>>>>>
>>>>>>>>>>>> and got
>>>>>>>>>>>>
>>>>>>>>>>>>     uintx ErgoHeapSizeLimit             =  0            {product}
>>>>>>>>>>>>     uintx HeapSizePerGCThread           =  87241520     {product}
>>>>>>>>>>>>     uintx InitialHeapSize               := 63056640     {product}
>>>>>>>>>>>>     uintx LargePageHeapSizeThreshold    =  134217728    {product}
>>>>>>>>>>>>     uintx MaxHeapSize                   := 1010827264   {product}
>>>>>>>>>>>> On Wed, Apr 6, 2016 at 12:44 AM, sam mohel <sammohel5@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> sorry, i forgot to mention that my RAM is 3.8 GB. i used a
>>>>>>>>>>>>> hashmap in the code, but i don't know what you mean by
>>>>>>>>>>>>> releasing it?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Apr 6, 2016 at 12:20 AM, sam mohel <sammohel5@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> @florin thanks for replying. i installed the tool, but this
>>>>>>>>>>>>>> is what i got when i ran it.
>>>>>>>>>>>>>> i checked: update-alternatives --config java
>>>>>>>>>>>>>> There are 3 choices for the alternative java (providing
>>>>>>>>>>>>>> /usr/bin/java).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   Selection  Path                                             Priority  Status
>>>>>>>>>>>>>> ------------------------------------------------------------
>>>>>>>>>>>>>>   0          /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      auto mode
>>>>>>>>>>>>>> * 1          /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061      manual mode
>>>>>>>>>>>>>>   2          /usr/lib/jvm/java-6-oracle/jre/bin/java          1062      manual mode
>>>>>>>>>>>>>>   3          /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Sun, Apr 3, 2016 at 9:19 PM, Spico Florin <spicoflorin@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> hi!
>>>>>>>>>>>>>>> before increasing the RAM (by providing the JVM option -Xmx
>>>>>>>>>>>>>>> as a command-line argument), try to use a profiling tool such
>>>>>>>>>>>>>>> as jvisualvm or jprobe to see if you have a memory leak. do
>>>>>>>>>>>>>>> you use a cache (for example a hashmap where you store some
>>>>>>>>>>>>>>> data but never release it)? how much RAM do you have on your
>>>>>>>>>>>>>>> machine? check your default heap size with the help of this
>>>>>>>>>>>>>>> link:
>>>>>>>>>>>>>>> http://stackoverflow.com/questions/4667483/how-is-the-default-java-heap-size-determined
>>>>>>>>>>>>>>> regards, florin
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sunday, April 3, 2016, sam mohel <sammohel5@gmail.com> wrote:
>>>>>>>>>>>>>>> > do you mean in storm.yaml? or where?
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > On Sun, Apr 3, 2016 at 11:56 AM, Andrey Dudin <doodin201@gmail.com> wrote:
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> Try to allocate more RAM for this topology:
>>>>>>>>>>>>>>> >> the -Xms and -Xmx options.
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> 2016-04-03 1:32 GMT+03:00 sam mohel <sammohel5@gmail.com>:
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> i'm facing a problem with a topology. i ran it in local
>>>>>>>>>>>>>>> >>> mode and got:
>>>>>>>>>>>>>>> >>> Async loop died! java.lang.OutOfMemoryError: GC overhead
>>>>>>>>>>>>>>> >>> limit exceeded
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> can you help with this? if there is any data you need for
>>>>>>>>>>>>>>> >>> helping, just tell me
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> Thanks in advance
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> --
>>>>>>>>>>>>>>> >> Best regards, Andrey Dudin
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best regards, Andrey Dudin
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best regards, Andrey Dudin
>>>>>
>>>>
>>>>
>>>
>>
>
