cassandra-commits mailing list archives

From "Jeremy Hanna (JIRA)" <>
Subject [jira] [Updated] (CASSANDRA-13943) Infinite compaction of L0 SSTables in JBOD
Date Mon, 15 Jan 2018 19:45:00 GMT


Jeremy Hanna updated CASSANDRA-13943:
    Labels: jbod-aware-compaction  (was: )

> Infinite compaction of L0 SSTables in JBOD
> ------------------------------------------
>                 Key: CASSANDRA-13943
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>         Environment: Cassandra 3.11.0 / Centos 6
>            Reporter: Dan Kinder
>            Assignee: Marcus Eriksson
>            Priority: Major
>              Labels: jbod-aware-compaction
>         Attachments: cassandra-jstack-2017-10-12-infinite-sstable-adding.txt, cassandra-jstack-2017-10-12.txt,
> cassandra.yaml, debug.log, debug.log-with-commit-d8f3f2780
> I recently upgraded from 2.2.6 to 3.11.0.
> I am seeing Cassandra loop infinitely, compacting the same data over and over. Attaching
> logs. It is compacting two tables, one on /srv/disk10, the other on /srv/disk1. It does
> create new SSTables but immediately recompacts them again. Note that I am not inserting
> anything at the moment; there is no flushing happening on this table (the Memtable switch
> count has not changed).
> My theory is that it somehow considers those SSTables compaction candidates. But they
> shouldn't be: they are on different disks, and I ran nodetool relocatesstables as well as
> nodetool compact. So it tries to compact them together, but the compaction produces the
> exact same 2 SSTables on the 2 disks, because the keys are split by data disk.
> This is pretty serious, because right now all our nodes seem to be consuming CPU doing
> this for multiple tables.
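The reporter's theory can be sketched as follows. This is a hypothetical Python illustration, not Cassandra's actual code: if JBOD writers always re-split compaction output at the disk token boundary, then compacting two SSTables that are already split at that boundary reproduces the same two SSTables, so the compaction strategy can select them again indefinitely.

```python
# Hypothetical sketch of the suspected loop (illustration only, not Cassandra code).
# Keys stand in for partition tokens; `boundary` stands in for the JBOD split point.

def split_by_disk(keys, boundary):
    """Partition keys at the disk token boundary, as a JBOD-aware writer would."""
    disk1 = sorted(k for k in keys if k < boundary)
    disk2 = sorted(k for k in keys if k >= boundary)
    return [disk1, disk2]

def compact(sstables, boundary):
    """Merge the candidate SSTables, then re-split the output per data disk."""
    merged = sorted(k for table in sstables for k in table)
    return split_by_disk(merged, boundary)

# Two SSTables already split at the boundary:
before = split_by_disk([1, 5, 9, 12, 17], boundary=10)
after = compact(before, boundary=10)

# The output is identical to the input, so if the strategy picks the pair
# as candidates again, compaction never makes progress.
assert after == before
```

Under this (assumed) model, the fixed point explains both symptoms in the report: new SSTables are created on each pass, yet the data never changes and the candidates never go away.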

This message was sent by Atlassian JIRA

