cassandra-commits mailing list archives

From "John Buczkowski (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-5011) Is there an issue with node token collisions when running Cassandra cluster on VMWare?
Date Sat, 01 Dec 2012 04:51:58 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507863#comment-13507863 ]

John Buczkowski commented on CASSANDRA-5011:
--------------------------------------------

Hey Brandon:

The FAQ entry "Why does nodetool ring only show one entry, even though my nodes logged that they see each other joining the ring?" sounds like it could be the case here.



I've gone through the steps of deleting the data and commitlog directories and then restarting, but I still run into the same issue.
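
For reference, here is the reset I'm doing on each node, as a sketch assuming the default cassandra.yaml paths (data_file_directories, commitlog_directory, saved_caches_directory); adjust the paths to your layout:

```shell
# Sketch of the per-node reset; paths are the package defaults, not
# necessarily ours. Run only with the node stopped.
DATA_DIR=${DATA_DIR:-/var/lib/cassandra/data}
COMMITLOG_DIR=${COMMITLOG_DIR:-/var/lib/cassandra/commitlog}
CACHES_DIR=${CACHES_DIR:-/var/lib/cassandra/saved_caches}

for d in "$DATA_DIR" "$COMMITLOG_DIR" "$CACHES_DIR"; do
  echo "would remove: $d"   # swap echo for rm -rf to actually wipe
done
```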



Is there somewhere else that token might be saved, so that I can clear it out?
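
If we do end up having to set initial_token manually after all, my understanding is that RandomPartitioner tokens are integers in [0, 2**127), so evenly spaced tokens for the ring can be computed directly (a sketch, not Cassandra's own code):

```python
# Sketch: compute evenly spaced RandomPartitioner tokens for an
# N-node ring. This is the usual manual-token workaround, not our
# preferred dynamic setup.
def evenly_spaced_tokens(node_count: int) -> list[int]:
    ring_size = 2 ** 127  # RandomPartitioner token space
    return [i * ring_size // node_count for i in range(node_count)]

for i, token in enumerate(evenly_spaced_tokens(4)):
    print(f"node {i}: initial_token: {token}")
```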



Thanks,

JohnB


                
> Is there an issue with node token collisions when running Cassandra cluster on VMWare?
> --------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5011
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5011
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.6
>            Reporter: John Buczkowski
>
> Hi:
> Are there any known issues with initial_token collision when adding nodes to a cluster in a VM environment?
> I'm working on a 4 node cluster set up on a VM. We're running into issues when we attempt to add nodes to the cluster.
> In the cassandra.yaml file, initial_token is left blank.
> Since we're running > 1.0 cassandra, auto_bootstrap should be true by default.
> It's my understanding that each of the nodes in the cluster should be assigned an initial token at startup.
> This is not what we're currently seeing. 
> We do not want to manually set the value of initial_token for each node (kind of defeats the goal of being dynamic).
> We also have set the partitioner to random:  partitioner: org.apache.cassandra.dht.RandomPartitioner
> I've outlined the steps we follow and results we are seeing below.
> Can someone please advise as to what we're missing here?
> Here are the detailed steps we are taking:
> 1) Kill all cassandra instances and delete data & commit log files on each node.
> 2) Startup Seed Node (S.S.S.S)
> ---------------------
> Starts up fine.
> 3) Run nodetool -h W.W.W.W  ring and see:
> -------------------------------------
> Address         DC          Rack        Status State   Load            Effective-Ownership Token
> S.S.S.S         datacenter1 rack1       Up     Normal  28.37 GB        100.00%             24360745721352799263907128727168388463
> 4) X.X.X.X Startup
> -----------------
>  INFO [GossipStage:1] 2012-11-29 21:16:02,194 Gossiper.java (line 850) Node /X.X.X.X is now part of the cluster
>  INFO [GossipStage:1] 2012-11-29 21:16:02,194 Gossiper.java (line 816) InetAddress /X.X.X.X is now UP
>  INFO [GossipStage:1] 2012-11-29 21:16:02,195 StorageService.java (line 1138) Nodes /X.X.X.X and /Y.Y.Y.Y have the same token 113436792799830839333714191906879955254.  /X.X.X.X is the new owner
>  WARN [GossipStage:1] 2012-11-29 21:16:02,195 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /Y.Y.Y.Y to /X.X.X.X
> 5) Run nodetool -h W.W.W.W  ring and see:
> -------------------------------------
> Address         DC          Rack        Status State   Load            Effective-Ownership Token
>                                                                                            113436792799830839333714191906879955254
> S.S.S.S         datacenter1 rack1       Up     Normal  28.37 GB        100.00%             24360745721352799263907128727168388463
> W.W.W.W         datacenter1 rack1       Up     Normal  123.87 KB       100.00%             113436792799830839333714191906879955254
> 6) Y.Y.Y.Y Startup
> -----------------
>  INFO [GossipStage:1] 2012-11-29 21:17:36,458 Gossiper.java (line 850) Node /Y.Y.Y.Y is now part of the cluster
>  INFO [GossipStage:1] 2012-11-29 21:17:36,459 Gossiper.java (line 816) InetAddress /Y.Y.Y.Y is now UP
>  INFO [GossipStage:1] 2012-11-29 21:17:36,459 StorageService.java (line 1138) Nodes /Y.Y.Y.Y and /X.X.X.X have the same token 113436792799830839333714191906879955254.  /Y.Y.Y.Y is the new owner
>  WARN [GossipStage:1] 2012-11-29 21:17:36,459 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /X.X.X.X to /Y.Y.Y.Y
> 7) Run nodetool -h W.W.W.W  ring and see:
> -------------------------------------
> Address         DC          Rack        Status State   Load            Effective-Ownership Token
>                                                                                            113436792799830839333714191906879955254
> S.S.S.S         datacenter1 rack1       Up     Normal  28.37 GB        100.00%             24360745721352799263907128727168388463
> Y.Y.Y.Y         datacenter1 rack1       Up     Normal  123.87 KB       100.00%             113436792799830839333714191906879955254
> 8) Z.Z.Z.Z Startup
> -----------------
>  INFO [GossipStage:1] 2012-11-30 04:52:28,590 Gossiper.java (line 850) Node /Z.Z.Z.Z is now part of the cluster
>  INFO [GossipStage:1] 2012-11-30 04:52:28,591 Gossiper.java (line 816) InetAddress /Z.Z.Z.Z is now UP
>  INFO [GossipStage:1] 2012-11-30 04:52:28,591 StorageService.java (line 1138) Nodes /Z.Z.Z.Z and /Y.Y.Y.Y have the same token 113436792799830839333714191906879955254.  /Z.Z.Z.Z is the new owner
>  WARN [GossipStage:1] 2012-11-30 04:52:28,592 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /Y.Y.Y.Y to /Z.Z.Z.Z
> 9) Run nodetool -h W.W.W.W  ring and see:
> -------------------------------------
> Address         DC          Rack        Status State   Load            Effective-Ownership Token
>                                                                                            113436792799830839333714191906879955254
> W.W.W.W         datacenter1 rack1       Up     Normal  28.37 GB        100.00%             24360745721352799263907128727168388463
> S.S.S.S         datacenter1 rack1       Up     Normal  28.37 GB        100.00%             24360745721352799263907128727168388463
> Z.Z.Z.Z         datacenter1 rack1       Up     Normal  123.87 KB       100.00%             113436792799830839333714191906879955254
> Thanks in advance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
