flink-issues mailing list archives

From "Fabian Hueske (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2293) Division by Zero Exception
Date Mon, 06 Jul 2015 10:26:04 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14614835#comment-14614835 ]

Fabian Hueske commented on FLINK-2293:

The fix changes the logic of the bucket number computation and sets it to at least 10.
Are you 100% sure you ran on the new build? 
Can you check the log files for the Git commit hash? (It is one of the first log entries.)
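For context, a bucket count of zero would turn the hash table's bucket-index modulo into an integer division by zero. A minimal sketch of the clamping idea described above, with illustrative names and numbers, not Flink's actual `MutableHashTable` code:

```java
public class BucketCountSketch {
    static final int MIN_NUM_BUCKETS = 10; // floor, per the fix description above

    // Hypothetical helper: derive a bucket count from available memory.
    // Without the clamp, a tiny budget yields 0 buckets, and any later
    // `hash % numBuckets` throws ArithmeticException: / by zero.
    static int computeNumBuckets(long availableBytes, int bytesPerBucket) {
        int computed = (int) (availableBytes / bytesPerBucket);
        return Math.max(MIN_NUM_BUCKETS, computed);
    }

    public static void main(String[] args) {
        System.out.println(computeNumBuckets(100, 4096));     // naive result would be 0; clamped to 10
        System.out.println(computeNumBuckets(1 << 20, 4096)); // 256, clamp has no effect
    }
}
```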

> Division by Zero Exception
> --------------------------
>                 Key: FLINK-2293
>                 URL: https://issues.apache.org/jira/browse/FLINK-2293
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>    Affects Versions: 0.9, 0.10
>            Reporter: Andra Lungu
>            Priority: Critical
>             Fix For: 0.9.1
> I am basically running an algorithm that simulates a Gather-Sum-Apply iteration that
performs Triangle Count. (Why simulate it? Because you just need one superstep -> useless
overhead if you use the runGatherSumApply function in Graph.)
> What happens, at a high level:
> 1). Select neighbors with ID greater than the one corresponding to the current vertex;
> 2). Propagate the received values to neighbors with higher ID;
> 3). Compute the number of triangles by checking
> trgVertex.getValue().get(srcVertex.getId());
> As you can see, I *do not* perform any division at all;
> code is here: https://github.com/andralungu/gelly-partitioning/blob/master/src/main/java/example/GSATriangleCount.java
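The three steps above boil down to an ordinary higher-id neighbor-set membership check, with no division anywhere. A self-contained sketch of the counting idea using plain Java collections, not the Gelly code linked above:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class TriangleCountSketch {
    // Count triangles in an undirected graph given as adjacency sets.
    // Steps 1 and 2 correspond to only looking "upward" (neighbors with
    // a larger id); step 3 is the membership check that closes a triangle.
    static int countTriangles(Map<Integer, Set<Integer>> adj) {
        int triangles = 0;
        for (Map.Entry<Integer, Set<Integer>> e : adj.entrySet()) {
            int u = e.getKey();
            for (int v : e.getValue()) {
                if (v <= u) continue; // step 1: higher-id neighbors only
                for (int w : adj.getOrDefault(v, Set.of())) {
                    if (w <= v) continue; // step 2: propagate upward
                    if (e.getValue().contains(w)) { // step 3: u-v-w closes
                        triangles++;
                    }
                }
            }
        }
        return triangles;
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        adj.put(1, Set.of(2, 3));
        adj.put(2, Set.of(1, 3));
        adj.put(3, Set.of(1, 2));
        System.out.println(countTriangles(adj)); // the single triangle 1-2-3 -> prints 1
    }
}
```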
> Now for small graphs, 50MB max, the computation finishes nicely with the correct result.
For a 10GB graph, however, I got this:
> java.lang.ArithmeticException: / by zero
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:836)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.buildTableFromSpilledPartition(MutableHashTable.java:819)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.prepareNextPartition(MutableHashTable.java:508)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.nextRecord(MutableHashTable.java:544)
>     at org.apache.flink.runtime.operators.hash.NonReusingBuildFirstHashMatchIterator.callWithNextKey(NonReusingBuildFirstHashMatchIterator.java:104)
>     at org.apache.flink.runtime.operators.MatchDriver.run(MatchDriver.java:173)
>     at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
>     at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>     at java.lang.Thread.run(Thread.java:722)
> see the full log here: https://gist.github.com/andralungu/984774f6348269df7951
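The exception does not originate in the user code: in Java, the integer remainder operator throws `ArithmeticException: / by zero` when the divisor is zero, which is consistent with a bucket count of zero inside `MutableHashTable.insertIntoTable`. A trivial reproduction of the JVM behavior (not Flink code):

```java
public class ModuloByZeroDemo {
    public static void main(String[] args) {
        int hash = 0x9E3779B9;  // arbitrary hash value
        int numBuckets = 0;     // assumed degenerate bucket count
        try {
            int bucket = hash % numBuckets; // integer remainder with zero divisor
            System.out.println(bucket);
        } catch (ArithmeticException ex) {
            System.out.println(ex.getMessage()); // prints "/ by zero"
        }
    }
}
```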

This message was sent by Atlassian JIRA
