Date: Thu, 4 Jun 2015 19:27:46 +0200
Subject: Re: Strange bug? "spam" in management log files...
From: Andrija Panic
To: dev@cloudstack.apache.org
Cc: users@cloudstack.apache.org

And if it is of any help, another hint: while I'm having these lines written to the logs in high volume, if I stop the second mgmt server, the first one (the one producing all these lines) doesn't stop producing them, so the log is still heavily written to - only when I also restart mgmt on the 1st node (with the 2nd node down) do these log lines disappear.

Thx

On 4 June 2015 at 19:19, Andrija Panic wrote:

> And I could add - these lines (in this volume) only appear on the first mgmt
> server (actually I have 2 separate but identical ACS installations, with the
> same behaviour).
>
> On 4 June 2015 at 19:18, Andrija Panic wrote:
>
>> Just checked: in the HOSTS table, all agents are connected (via haproxy)
>> to the first mgmt server... I just restarted haproxy, and the DB still
>> shows the same mgmt_server_id for all agents - which is not really true.
>>
>> Actually, on haproxy itself (statistics page) I can see almost a 50%-50%
>> distribution across the 2 backends - so as far as haproxy is concerned it
>> should be fine: 18 agents in total, 10 go to one backend, 8 go to the
>> other backend (ACS mgmt server).
>>
>> This is our haproxy config. I think it's fine, but... the DB says
>> differently, although the haproxy statistics say all is fine:
>>
>> ### ACS 8250 ###################################################################
>> frontend front_ACS_8250 10.20.10.100:8250
>>         option tcplog
>>         mode tcp
>>         default_backend back_8250
>> backend back_8250
>>         mode tcp
>>         balance source
>>         server acs1_8250 10.20.10.7:8250 check port 8250 inter 2000 rise 3 fall 3
>>         server acs2_8250 10.20.10.8:8250 check port 8250 inter 2000 rise 3 fall 3
>> ################################################################################
>>
>> Any info on how to proceed with this? Because of these lines, the mgmt
>> logs are almost unreadable... :(
>>
>> Thanks,
>> Andrija
>>
>> On 4 June 2015 at 19:00, Andrija Panic wrote:
>>
>>> Thanks Koushik,
>>>
>>> I will check and let you know - but an 11 GB log file in 10 hours? I don't
>>> expect that this is expected :)
>>> I understand that the message is there because of the setup, it is just an
>>> awful lot of lines...
>>>
>>> Will check, thanks for the help!
>>>
>>> Andrija
>>>
>>> On 4 June 2015 at 18:53, Koushik Das wrote:
>>>
>>>> This is expected in a clustered MS setup. What is the distribution of
>>>> HV hosts across these MS (check the host table in the db for the MS id)?
>>>> The MS owning the HV host processes all commands for that host.
>>>> Grep for the sequence numbers (e.g. 73-7374644389819187201) in both
>>>> MS logs to correlate.
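To see the distribution Koushik is asking about, a quick query against the management database is usually enough. This is only a rough sketch, assuming the default `cloud` database and the standard `host` table (the MgmtId values in the log lines correspond to the `mgmt_server_id` column):

    # count how many hosts/agents each management server currently owns
    mysql -u root -p cloud -e "SELECT mgmt_server_id, COUNT(*) AS hosts
                               FROM host
                               WHERE removed IS NULL
                               GROUP BY mgmt_server_id;"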
>>>>
>>>> On 04-Jun-2015, at 8:30 PM, Andrija Panic wrote:
>>>>
>>>> > Hi,
>>>> >
>>>> > I have 2 ACS MGMT servers, load-balanced properly (AFAIK), and sometimes
>>>> > it happens that on the first node we get an extreme number of the following
>>>> > line entries in the log file, which produces a log of many GB in just a few
>>>> > hours or less (as you can see here they are not even that frequent, but
>>>> > sometimes it gets really crazy with the speed/number logged per second):
>>>> >
>>>> > 2015-06-04 16:55:04,089 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-29:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,129 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-28:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,129 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-8:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,169 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-26:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,169 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-30:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,209 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-27:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,209 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-2:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,249 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-4:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,249 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-7:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,289 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-3:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,289 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-5:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,329 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-1:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,330 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-15:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,369 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-11:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,369 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-17:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,409 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-14:null) Seq 1-3297479352165335041: MgmtId 90520745449919: Resp: Routing to peer
>>>> > 2015-06-04 16:55:04,409 DEBUG [c.c.a.m.ClusteredAgentManagerImpl] (AgentManager-Handler-12:null) Seq 73-7374644389819187201: MgmtId 90520745449919: Resp: Routing to peer
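Following Koushik's suggestion, correlating one of the sequence numbers from the excerpt above across both management servers is a one-liner per node. A rough sketch, assuming the default log location on the Ubuntu mgmt nodes (/var/log/cloudstack/management/management-server.log):

    # run on each management server: how often does this sequence number appear, and in which context?
    grep -c "Seq 73-7374644389819187201" /var/log/cloudstack/management/management-server.log
    grep "Seq 73-7374644389819187201" /var/log/cloudstack/management/management-server.log | head -20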
>>>> >
>>>> > We have a haproxy VIP, to which the SSVM connects, and also all cloudstack
>>>> > agents (agent.properties file).
>>>> >
>>>> > Any suggestions on how to avoid this? I noticed that when I turn off the
>>>> > second ACS MGMT server and then reboot the first one (restart
>>>> > cloudstack-management), it stops and behaves nicely :)
>>>> >
>>>> > This is ACS 4.5.1, Ubuntu 14.04 for the mgmt nodes.
>>>> >
>>>> > Thanks,
>>>> > --
>>>> >
>>>> > Andrija Panić
>>>>
>>>
>>> --
>>>
>>> Andrija Panić
>>
>> --
>>
>> Andrija Panić
>
> --
>
> Andrija Panić

--
Andrija Panić
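For completeness, "all cloudstack agents (agent.properties file)" above refers to each agent being pointed at the haproxy VIP rather than at an individual management server. A minimal sketch of the relevant lines in /etc/cloudstack/agent/agent.properties, with the VIP address and port taken from the haproxy config quoted earlier (the rest of the file is left out):

    # management server endpoint the agent connects to - here the haproxy VIP, not an individual MS
    host=10.20.10.100
    # management server port, matching the haproxy frontend/backend above
    port=8250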