From: Reindl Harald <h.reindl@thelounge.net>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Dave Jones <davej@redhat.com>,
netdev@vger.kernel.org,
Fedora Kernel Team <kernel-team@fedoraproject.org>
Subject: Re: order 7 allocations from xt_recent
Date: Thu, 03 Jan 2013 20:51:22 +0100 [thread overview]
Message-ID: <50E5E13A.80604@thelounge.net> (raw)
In-Reply-To: <1357236046.21409.25385.camel@edumazet-glaptop>
On 03.01.2013 19:00, Eric Dumazet wrote:
> On Thu, 2013-01-03 at 12:26 -0500, Dave Jones wrote:
>> On Thu, Jan 03, 2013 at 12:11:15PM -0500, Dave Jones wrote:
>> > On Thu, Jan 03, 2013 at 08:55:04AM -0800, Eric Dumazet wrote:
>> > > On Thu, 2013-01-03 at 11:43 -0500, Dave Jones wrote:
>> > > > We had a report from a user that shows this code trying
>> > > > to do enormous allocations, which isn't going to work too well..
>> > > > ...
>> > > > Which is initialised thus..
>> > > >
>> > > > ip_list_hash_size = 1 << fls(ip_list_tot);
>> > > >
>> > > > And ip_list_tot is 10000 in this case. Hmm ?
>> > > >
>> > > > Complete report and setup described in his bug report at https://bugzilla.redhat.com/show_bug.cgi?id=890715
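If I am doing the math right (back-of-the-envelope only, assuming the
hash table is kmalloc'ed in one piece together with the table header,
I did not re-check the actual xt_recent allocation), that already
explains the order-7 size in the subject:

    fls(10000)        = 14
    ip_list_hash_size = 1 << 14 = 16384 buckets
    16384 * sizeof(struct list_head) = 16384 * 16 B = 256 KiB

and 256 KiB plus the table header gets rounded up to the next
power-of-two kmalloc size: 512 KiB = 128 pages = order 7.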
>> > >
>> > > Yes, we had a report and a patch :
>> > >
>> > > http://comments.gmane.org/gmane.linux.network/248216
>> > >
>> > > I'll send it in a more formal way.
>> >
>> > Ah! Excellent.
>> >
>> > That 'check size and vmalloc/kmalloc accordingly' thing seems to be a pattern
>> > that comes up time and time again. Is it worth maybe making a more generic
>> > version of that instead of open-coding it each time it comes up ?
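Just to illustrate the pattern being talked about, this is roughly what
I would expect such a generic helper to look like (a rough sketch only;
the helper names are made up, this is not an existing kernel API and
not necessarily what Eric's patch does):

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

/* try a physically contiguous allocation first, fall back to
 * vmalloc() when the size is too large or memory too fragmented */
static void *xt_table_alloc(size_t size)
{
	void *p;

	/* __GFP_NOWARN: a failed high-order attempt is expected
	 * here, no point in spamming the log before falling back */
	p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (p)
		return p;

	return vzalloc(size);
}

static void xt_table_free(void *p)
{
	/* the free side has to match the allocator that was used */
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}

The mildly annoying part is the free side having to know which
allocator was used, which is probably why it keeps getting open-coded.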
>>
>> Something else that I'm puzzled by.
>>
>> In the report above, it failed to allocate 512kb, but..
>>
>> Node 0 Normal: 2388*4kB 347*8kB 1029*16kB 3512*32kB 29*64kB 2*128kB 1*256kB 5*512kB 1*1024kB 0*2048kB 0*4096kB = 147128kB
>> ^^^^^^^^^^^^^^^^
>>
>> Shouldn't the allocator have been able to satisfy that anyway ?
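(decoding that line for myself: these are the per-order buddy free
lists with 4 kB pages, so "5*512kB" means five free order-7 blocks of
exactly the size that failed, which is indeed what makes it look odd,
unless the free lists had already changed by the time the dump was
printed, or the high-order watermark checks rejected them)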
>>
>
> Might be something related to the CONFIG_COMPACTION=y and lumpy reclaim
> removal ?
>
> Anyway, we keep a fraction of memory for ATOMIC allocations
On that machine "vm.min_free_kbytes" is even set to 128 MB;
nevertheless, something goes terribly wrong if page-cache usage leads
to stack traces about failed memory allocations:
vm.swappiness = 0
vm.overcommit_memory = 1
vm.overcommit_ratio = 60
vm.vfs_cache_pressure = 30
vm.dirty_background_ratio = 15
vm.dirty_ratio = 40
vm.dirty_expire_centisecs = 1500
vm.dirty_writeback_centisecs = 1500
vm.zone_reclaim_mode = 0
vm.min_free_kbytes = 131072
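For completeness: the oversized table comes from raising the xt_recent
defaults, presumably via something along these lines (illustrative
only, not copied from the bug report):

    # /etc/modprobe.d/xt_recent.conf
    # if I read the module right, ip_list_hash_size is a separate
    # parameter and leaving it at 0 means "derive from ip_list_tot",
    # i.e. the 1 << fls() sizing quoted above
    options xt_recent ip_list_tot=10000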
Thread overview: 7+ messages
2013-01-03 16:43 order 7 allocations from xt_recent Dave Jones
2013-01-03 16:55 ` Eric Dumazet
2013-01-03 17:11 ` Dave Jones
2013-01-03 17:26 ` Dave Jones
2013-01-03 18:00 ` Eric Dumazet
2013-01-03 19:51 ` Reindl Harald [this message]
2013-01-03 18:02 ` Eric Dumazet