cassandra-commits mailing list archives

From "Nikolai Grigoriev (JIRA)" <>
Subject [jira] [Resolved] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF
Date Mon, 06 Jan 2014 17:21:54 GMT


Nikolai Grigoriev resolved CASSANDRA-6528.

    Resolution: Cannot Reproduce

Closing since I cannot reproduce it anymore. Will reopen if I manage to reproduce it again
and capture the debug information as per instructions above.

> TombstoneOverwhelmingException is thrown while populating data in recently truncated CF
> ---------------------------------------------------------------------------------------
>                 Key: CASSANDRA-6528
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Cassandra 2.0.3, Linux, 6 nodes
>            Reporter: Nikolai Grigoriev
>            Priority: Minor
> I am running some performance tests and recently I had to flush the data from one of
> the tables and repopulate it. I have about 30M rows with a few columns each, about 5kb
> per row in total. In order to repopulate the data I do "truncate <table>" from CQLSH
> and then relaunch the test. The test simply inserts data into the table and does not read anything.
> Shortly after restarting the data generator I see this on one of the nodes:
> {code}
>  INFO [HintedHandoff:655] 2013-12-26 16:45:42,185 (line 323) Started hinted handoff
> for host: 985c8a08-3d92-4fad-a1d1-7135b2b9774a with IP: /
> ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 (line 200) Scanned over
> 100000 tombstones; query aborted (see tombstone_fail_threshold)
> ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 (line 187) Exception
> in thread Thread[HintedHandoff:655,1,main]
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException
>         at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(
>         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(
>         at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(
>         at org.apache.cassandra.db.CollationController.collectAllData(
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(
>         at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(
>         at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(
>         at org.apache.cassandra.db.HintedHandOffManager.access$4(
>         at org.apache.cassandra.db.HintedHandOffManager$
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
>  INFO [OptionalTasks:1] 2013-12-26 16:45:53,946 (line 63) flushing
> high-traffic column family CFS(Keyspace='test_jmeter', ColumnFamily='test_profiles') (estimated
> 192717267 bytes)
> {code}
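> For reference, the truncate-and-repopulate cycle described above can be sketched in cqlsh. The keyspace and table names are taken from the flush message in the log and are assumed to be the ones involved; the column names are hypothetical placeholders:
> {code}
> -- truncate the previously populated table (keyspace/table assumed from the log)
> TRUNCATE test_jmeter.test_profiles;
> -- relaunch the insert-only data generator at CL=1, e.g.:
> CONSISTENCY ONE;
> INSERT INTO test_jmeter.test_profiles (id, payload) VALUES (...);
> {code}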
> I am inserting the data with CL=1.
> It seems to happen every time I do it. But I do not see any errors on the client
> side and the node seems to continue operating, which is why I think it is not a major issue.
> Maybe it is not an issue at all, but the message is logged as ERROR.
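> The threshold the error refers to is configurable in cassandra.yaml (introduced in the 2.0.x line); the values below are the defaults, which match the 100000-tombstone count in the log. A sketch, not a tuning recommendation:
> {code}
> # cassandra.yaml (2.0.x): abort a read that scans more tombstones than this
> tombstone_failure_threshold: 100000
> # log a warning (but do not abort) above this count
> tombstone_warn_threshold: 1000
> {code}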

This message was sent by Atlassian JIRA
