netdev.vger.kernel.org archive mirror
From: Ben Greear <greearb@candelatech.com>
To: "David S. Miller" <davem@davemloft.net>
Cc: Stephen Hemminger <shemminger@osdl.org>,
	webmaster@rapidforum.com, netdev@oss.sgi.com
Subject: Re: Bug in 2.6.10
Date: Fri, 28 Jan 2005 12:04:23 -0800
Message-ID: <41FA9AC7.5080007@candelatech.com>
In-Reply-To: <20050128114600.46f3a70a.davem@davemloft.net>

David S. Miller wrote:
> On Fri, 28 Jan 2005 11:42:51 -0800
> Stephen Hemminger <shemminger@osdl.org> wrote:
> 
> 
>>On Fri, 28 Jan 2005 20:27:53 +0100
>>Christian Schmid <webmaster@rapidforum.com> wrote:
>>
>>
>>>Hello.
>>>
>>>In 2.6.10 a "bug" has been introduced. You may also call it a feature, but it's a crappy
>>>feature for big servers. The kernel now dynamically adjusts the buffer space available to
>>>sockets. Even if the send buffer has been set to 1024 KB, the kernel blocks the sender at less
>>>than that if enough sockets are in use. With 10 sockets of 1024 KB each, they do not block at
>>>all and use the full 1024 KB. With 4000 sockets, they only get about 200 KB each, so the
>>>blocking seems to kick in at roughly 800 MB total. That is good on a 1/3 (kernel/user) split,
>>>because otherwise the kernel would run out of low memory, but I have a 2/2 split and I need
>>>that memory for buffers. So what can I do? Where can I adjust the "pool"?
>>
>>You can set the upper bound by setting tcp_wmem. There are three values,
>>all documented in Documentation/networking/ip-sysctl.txt.
> 
> 
> This feature is also meant to prevent remote denial-of-service
> attacks.  It limits the amount of system memory TCP can
> consume on your system.
> 
> Before this feature went in, it was really easy to remotely make a system
> consume 90% of its memory just with socket buffers.

Could you cause this attack without having the local machine explicitly
set its local wmem buffers higher?

With the latest code, if you set the tcp_[rw]mem MAX to a very large value,
as it appears Mr. Schmid was doing, does the kernel simply ignore the larger value
once the ~800 MB global limit is reached?  I agree that by default the system should
protect itself from OOM attacks, but at the same time, if a user really wants to use
up to 2 GB of RAM for buffers, I don't think we should stop them.
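
For what it's worth, here is a minimal sketch (not Mr. Schmid's code, just an
illustration) of an application explicitly asking for a large send buffer.  As
far as I know, a socket that sets SO_SNDBUF this way opts out of the per-socket
auto-tuning, but the value is still clamped against net.core.wmem_max and the
socket is still charged against the global TCP memory pool:

    /* Sketch: request a 1 MB send buffer and see what the kernel grants.
     * Linux typically doubles the requested value for bookkeeping overhead
     * and clamps it against net.core.wmem_max, so getsockopt() reports the
     * effective size, not necessarily what was asked for. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int requested = 1024 * 1024;    /* 1024 KB, as in the report */
            int granted;
            socklen_t len = sizeof(granted);

            if (fd < 0)
                    return 1;
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
            getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &granted, &len);
            printf("requested %d bytes, kernel granted %d bytes\n",
                   requested, granted);
            return 0;
    }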

In addition to Mr. Hemminger's suggestion, I think the more important knob
would be /proc/sys/net/core/netdev_max_backlog, which bounds the total
number of received packets queued in the system, correct?  Is there a similar
knob for the maximum number of buffers waiting in tx queues?
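
Both the tcp_wmem limits and netdev_max_backlog live under /proc/sys, so an
admin (or an init script) can raise them directly.  A minimal sketch, with
purely illustrative values and assuming root:

    /* Sketch: raise the tcp_wmem ceiling (min/default/max, in bytes) and
     * the receive backlog by writing the corresponding /proc files. */
    #include <stdio.h>

    static void write_proc(const char *path, const char *value)
    {
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return;
            }
            fputs(value, f);
            fclose(f);
    }

    int main(void)
    {
            write_proc("/proc/sys/net/ipv4/tcp_wmem", "4096 65536 1048576\n");
            write_proc("/proc/sys/net/core/netdev_max_backlog", "3000\n");
            return 0;
    }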

Ben

-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


Thread overview: 5+ messages
2005-01-28 19:27 Bug in 2.6.10 Christian Schmid
2005-01-28 19:42 ` Stephen Hemminger
2005-01-28 19:46   ` David S. Miller
2005-01-28 20:03     ` Christian Schmid
2005-01-28 20:04     ` Ben Greear [this message]
