From Apache Jenkins Server <jenk...@builds.apache.org>
Subject Build failed in Jenkins: HBase-0.98-on-Hadoop-1.1 #1017
Date Sat, 01 Aug 2015 00:44:12 GMT
See <https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1017/changes>

Changes:

[apurtell] HBASE-14087 Ensure correct ASF headers for docs/code

[apurtell] HBASE-14176 Add missing headers to META-INF files

------------------------------------------
[...truncated 1828 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 179.99 sec - in org.apache.hadoop.hbase.mapreduce.TestCopyTable
Running org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.463 sec - in org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat
Running org.apache.hadoop.hbase.io.TestFileLink
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.868 sec - in org.apache.hadoop.hbase.io.TestFileLink
Running org.apache.hadoop.hbase.io.hfile.TestHFileBlockIndex
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.857 sec - in org.apache.hadoop.hbase.io.hfile.TestHFileBlockIndex
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.974 sec - in org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
Running org.apache.hadoop.hbase.io.hfile.TestHFileSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.006 sec - in org.apache.hadoop.hbase.io.hfile.TestHFileSeek
Running org.apache.hadoop.hbase.io.hfile.TestHFileBlock
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.972 sec - in org.apache.hadoop.hbase.io.hfile.TestHFileBlock
Running org.apache.hadoop.hbase.io.hfile.slab.TestSlabCache
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.458 sec - in org.apache.hadoop.hbase.io.hfile.slab.TestSlabCache
Running org.apache.hadoop.hbase.io.hfile.slab.TestSingleSizeCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.622 sec - in org.apache.hadoop.hbase.io.hfile.slab.TestSingleSizeCache
Running org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.615 sec - in org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks
Running org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite
Tests run: 216, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 374.03 sec <<< FAILURE! - in org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite
testNotCachingDataBlocksDuringCompaction[102](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.101 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:442)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

testNotCachingDataBlocksDuringCompaction[103](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.093 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:436)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

testNotCachingDataBlocksDuringCompaction[104](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.092 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:436)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

testNotCachingDataBlocksDuringCompaction[105](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.093 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:436)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

testNotCachingDataBlocksDuringCompaction[106](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.092 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:436)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

testNotCachingDataBlocksDuringCompaction[107](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.092 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:693)
	at org.apache.hadoop.hbase.util.HasThread.start(HasThread.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:436)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:296)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createHLog(HLogFactory.java:47)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4552)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4519)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4492)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4570)
	at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4450)
	at org.apache.hadoop.hbase.HBaseTestingUtility.createTestRegion(HBaseTestingUtility.java:3500)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompactionInternals(TestCacheOnWrite.java:429)
	at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction(TestCacheOnWrite.java:485)

Running org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.968 sec - in org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk
Running org.apache.hadoop.hbase.io.encoding.TestChangingEncoding
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.793 sec - in org.apache.hadoop.hbase.io.encoding.TestChangingEncoding
Running org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers
Build timed out (after 300 minutes). Marking the build as failed.
Build was aborted
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  :   ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
  if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
    #It seems sometimes the tests are not dying immediately. Let's give them 10s
    echo "Suspicious java process found - waiting 10s to see if there are just slow to stop"
    sleep 10
    ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
    if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
      echo "There are $ZOMBIE_TESTS_COUNT zombie tests, they should have been killed by surefire
but survived"
      echo "************ BEGIN zombies jstack extract"
      ZB_STACK=`jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack | grep ".test" | grep "\.java"`
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack
      echo "************ END  zombies jstack extract"
      JIRA_COMMENT="$JIRA_COMMENT

     {color:red}-1 core zombie tests{color}.  There are ${ZOMBIE_TESTS_COUNT} zombie test(s): ${ZB_STACK}"
      BAD=1
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs kill -9
    else
      echo "We're ok: there is no zombie test, but some tests took some time to stop"
    fi
  else
    echo "We're ok: there is no zombie test"
  fi
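
For reference, the detection above can be reproduced by hand on the build slave to see which surefire fork is hung. This is a minimal sketch reusing the same jps/jstack pipeline as the post-build script; it assumes jps and jstack from the JDK that launched the forks are on the PATH, and everything beyond the quoted pipeline is illustrative:

  # count leftover surefire fork JVMs (the same check the post-build task runs)
  ZOMBIE_TESTS_COUNT=$(jps | grep surefirebooter | wc -l)
  echo "surefire forks still alive: $ZOMBIE_TESTS_COUNT"

  # dump a stack per surviving fork to see where the test is stuck
  jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack

  # last resort, as in the script above
  jps | grep surefirebooter | cut -d ' ' -f 1 | xargs kill -9
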
[HBase-0.98-on-Hadoop-1.1] $ /bin/bash -xe /tmp/hudson885034518370396791.sh
++ jps
++ wc -l
++ grep surefirebooter

Results :


Tests in error: 
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory
  TestCacheOnWrite.testNotCachingDataBlocksDuringCompaction:485->testNotCachingDataBlocksDuringCompactionInternals:429 » OutOfMemory


Tests run: 2239, Failures: 0, Errors: 6, Skipped: 24
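
All six errors are java.lang.OutOfMemoryError: unable to create new native thread, which usually means the build slave refused to create another OS thread (per-user process limit or system-wide thread ceiling) rather than that the -Xmx2800m heap was exhausted. A minimal sketch for checking the relevant limits on a Linux slave; the commands are only illustrative, and which limit actually bites depends on how the host is configured:

  # per-user limit on processes/threads for the build user
  ulimit -u

  # system-wide ceilings on threads and pids
  cat /proc/sys/kernel/threads-max /proc/sys/kernel/pid_max

  # total threads currently alive on the host
  ps -eLf | wc -l

  # thread count of each surefire fork (nlwp = number of lightweight processes)
  for pid in $(pgrep -f surefirebooter); do
    echo "$pid: $(ps -o nlwp= -p "$pid") threads"
  done
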

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] HBase ............................................. SUCCESS [3.333s]
[INFO] HBase - Checkstyle ................................ SUCCESS [0.631s]
[INFO] HBase - Annotations ............................... SUCCESS [0.886s]
[INFO] HBase - Common .................................... SUCCESS [46.429s]
[INFO] HBase - Protocol .................................. SUCCESS [9.043s]
[INFO] HBase - Client .................................... SUCCESS [50.762s]
[INFO] HBase - Hadoop Compatibility ...................... SUCCESS [7.177s]
[INFO] HBase - Hadoop One Compatibility .................. SUCCESS [5.789s]
[INFO] HBase - Prefix Tree ............................... SUCCESS [8.027s]
[INFO] HBase - Server .................................... FAILURE [4:57:45.888s]
[INFO] HBase - Testing Util .............................. SKIPPED
[INFO] HBase - Thrift .................................... SKIPPED
[INFO] HBase - Rest ...................................... SKIPPED
[INFO] HBase - Shell ..................................... SKIPPED
[INFO] HBase - Integration Tests ......................... SKIPPED
[INFO] HBase - Examples .................................. SKIPPED
[INFO] HBase - Assembly .................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4:59:58.797s
[INFO] Finished at: Sat Aug 01 00:42:58 UTC 2015
+ ZOMBIE_TESTS_COUNT=1
+ [[ 1 != 0 ]]
+ echo 'Suspicious java process found - waiting 10s to see if there are just slow to stop'
Suspicious java process found - waiting 10s to see if there are just slow to stop
+ sleep 10
[INFO] Final Memory: 46M/683M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18:test (secondPartTestsExecution) on project hbase-server: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/ws/hbase-server> && /home/jenkins/tools/java/jdk1.7.0_25-32/jre/bin/java -enableassertions -XX:MaxDirectMemorySize=1G -Xmx2800m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar <https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/ws/hbase-server/target/surefire/surefirebooter6203613410970346544.jar> <https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/ws/hbase-server/target/surefire/surefire3877278370833982180tmp> <https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/ws/hbase-server/target/surefire/surefire_10801274207227664627596tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-server
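
To reproduce this locally without the full multi-hour run, one option is to re-run just the failing class in the hbase-server module. A minimal sketch using standard Maven/surefire options; the "test" goal is illustrative only, since the message above leaves <goals> unspecified:

  # re-run only the failing test class with full stack traces ("test" goal is illustrative)
  mvn test -pl hbase-server -Dtest=TestCacheOnWrite -e

  # or resume the whole reactor from the failed module, as Maven suggests above
  mvn test -rf :hbase-server
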
++ grep surefirebooter
++ jps
++ wc -l
+ ZOMBIE_TESTS_COUNT=1
+ [[ 1 != 0 ]]
+ echo 'There are 1 zombie tests, they should have been killed by surefire but survived'
There are 1 zombie tests, they should have been killed by surefire but survived
+ echo '************ BEGIN zombies jstack extract'
************ BEGIN zombies jstack extract
++ jps
++ grep surefirebooter
++ grep .test
++ cut -d ' ' -f 1
++ xargs -n 1 jstack
++ grep '\.java'
28116: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
+ ZB_STACK=
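
As the jstack output above notes, pid 28116 was no longer responding, which is why ZB_STACK ended up empty. When that happens, jstack's forced mode can sometimes still obtain a dump; a minimal sketch, using the pid from the log above:

  # forced stack dump of an unresponsive JVM (must run as the JVM's owner or root)
  jstack -F 28116
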
POST BUILD TASK : FAILURE
END OF POST BUILD TASK : 0
Archiving artifacts
Sending artifact delta relative to HBase-0.98-on-Hadoop-1.1 #978
Archived 1754 artifacts
Archive block size is 32768
Received 6 blocks and 288414289 bytes
Compression is 0.1%
Took 1 min 7 sec
Recording test results
Updating HBASE-14087
Updating HBASE-14176
