netdev.vger.kernel.org archive mirror
* A one-liner ipvs patch
@ 2008-09-03  8:24 Siim Põder
  2008-09-03  8:47 ` Simon Horman
  2008-09-03  9:02 ` Siim Põder
  0 siblings, 2 replies; 3+ messages in thread
From: Siim Põder @ 2008-09-03  8:24 UTC (permalink / raw)
  To: netdev, lvs-devel; +Cc: Simon Horman

[-- Attachment #1: Type: text/plain, Size: 487 bytes --]

Hi!

I run a few very high connection-rate services behind LVS. When keepalived
removed the real servers because of problems, the LVS host would be
rebooted by its hardware watchdog with tens of thousands of copies of this
message in the logs.

I have had this patch running on production systems for about 3 months and
the problem has not occurred again. That does not prove the patch isn't
merely curing a symptom, but I think it is conceivable that a flood of
printks could hang a system.

Siim


[-- Attachment #2: ip_vs_wrr-ratelimit-printk.patch --]
[-- Type: text/x-patch, Size: 671 bytes --]

ipvs: ratelimit flooding message in wrr scheduler

When all backends are removed from the wrr scheduler by a monitoring agent,
this message can flood the log so fast that a 10s watchdog resets the system.

Signed-off-by: Siim Põder <siim@p6drad-teel.net>
--- linux-2.6.24/net/ipv4/ipvs/ip_vs_wrr.c	2008-01-24 22:58:37.000000000 +0000
+++ linux-2.6.24-ipvs_patches/net/ipv4/ipvs/ip_vs_wrr.c	2008-05-06 16:17:17.790662800 +0000
@@ -169,7 +169,7 @@
 				 */
 				if (mark->cw == 0) {
 					mark->cl = &svc->destinations;
-					IP_VS_INFO("ip_vs_wrr_schedule(): "
+					IP_VS_DBG_RL("ip_vs_wrr_schedule(): "
 						   "no available servers\n");
 					dest = NULL;
 					goto out;



* Re: A one-liner ipvs patch
  2008-09-03  8:24 A one-liner ipvs patch Siim Põder
@ 2008-09-03  8:47 ` Simon Horman
  2008-09-03  9:02 ` Siim Põder
  1 sibling, 0 replies; 3+ messages in thread
From: Simon Horman @ 2008-09-03  8:47 UTC (permalink / raw)
  To: Siim Põder; +Cc: netdev, lvs-devel

On Wed, Sep 03, 2008 at 11:24:03AM +0300, Siim Põder wrote:
> Hi!
> 
> I run a few very high connection-rate services behind LVS. When keepalived
> removed the real servers because of problems, the LVS host would be
> rebooted by its hardware watchdog with tens of thousands of copies of this
> message in the logs.
> 
> I have had this patch running on production systems for about 3 months and
> the problem has not occurred again. That does not prove the patch isn't
> merely curing a symptom, but I think it is conceivable that a flood of
> printks could hang a system.

Hi Siim,

When trying to apply this I noticed that it has already been
included as "ipvs: Make wrr "no available servers" error message
rate-limited" by Sven Wegener, which was included in 2.6.25-rc1.

Thanks



* Re: A one-liner ipvs patch
  2008-09-03  8:24 A one-liner ipvs patch Siim Põder
  2008-09-03  8:47 ` Simon Horman
@ 2008-09-03  9:02 ` Siim Põder
  1 sibling, 0 replies; 3+ messages in thread
From: Siim Põder @ 2008-09-03  9:02 UTC (permalink / raw)
  To: netdev, lvs-devel

Siim Põder wrote:
> I run a few very high connection-rate services behind LVS. When keepalived
> removed the real servers because of problems, the LVS host would be
> rebooted by its hardware watchdog with tens of thousands of copies of this
> message in the logs.
> 
> I have had this patch running on production systems for about 3 months and
> the problem has not occurred again. That does not prove the patch isn't
> merely curing a symptom, but I think it is conceivable that a flood of
> printks could hang a system.

Sorry, this has been fixed in 2.6.25 already.

Siim

