From: Eric Dumazet <dada1@cosmosbay.com>
To: Nick Piggin <npiggin@suse.de>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [rfc][patch] dynamic resizing dentry hash using RCU
Date: Fri, 23 Feb 2007 17:31:17 +0100
Message-ID: <200702231731.17262.dada1@cosmosbay.com>
In-Reply-To: <20070223153743.GA26141@wotan.suse.de>
On Friday 23 February 2007 16:37, Nick Piggin wrote:
> The dentry hash, which uses up 8MB for 1 million entries on my 4GB
> system, is one of the biggest wasters of memory for me, because I rarely
> have more than one or two hundred thousand dentries, and that's with
> several kernel trees' worth of entries. Most desktops, and probably even
> many types of servers, will only use a fraction of that.
>
> So I introduce a new method for resizing hash tables with RCU, and apply
> that to the dentry hash.
>
> The primitive heuristic is that the hash size is doubled when the number
> of entries reaches 150% of the hash size, and halved when it falls to 50%.
> It should also be able to shrink under memory pressure, and scale up as
> large as we need to go.
>
> A pity it uses vmalloc memory for the moment.
>
> The implementation is not highly stress tested, but it is running now. It
> could do a bit more RCU stuff asynchronously rather than with
> synchronize_rcu, but who cares, for now.
>
> The hash is costing me about 256K now, which is a 32x reduction in memory.
>
> I don't know if it's worthwhile to do this, rather than move things to
> other data structures, but something just tempted me to have a go! I'd be
> interested to hear comments, and how many holes people can spot in my
> design ;)
>
> Thanks,
> Nick
Hi Nick

That's a really good idea!

The vmalloc() thing could be a problem, though. So: could you bring back
support for the 'dhash_entries=262144' boot parameter, so that an admin can
set the initial size of the dhash table (and the table is never shrunk below
that size, even when the number of dentries is low)?
When dhash_entries is set on the boot command line, we could try to use
alloc_large_system_hash() for the initial table (eventually backed by
hugepages rather than vmalloc), provided we add a free_large_system_hash()
function so that the initial table can be freed later.

Or else, it is time to add the ability for vmalloc() to use hugepages
itself...
Thread overview: 20+ messages
2007-02-23 15:37 [rfc][patch] dynamic resizing dentry hash using RCU Nick Piggin
2007-02-23 16:31 ` Eric Dumazet [this message]
2007-02-24 1:08 ` Nick Piggin
2007-02-23 17:25 ` Zach Brown
2007-02-24 1:26 ` Nick Piggin
2007-02-24 2:07 ` Nick Piggin
2007-02-24 1:31 ` Michael K. Edwards
2007-02-24 1:52 ` Nick Piggin
2007-02-24 4:07 ` KAMEZAWA Hiroyuki
2007-02-24 5:15 ` Nick Piggin
2007-02-24 4:24 ` William Lee Irwin III
2007-02-24 5:09 ` Nick Piggin
2007-02-24 22:56 ` William Lee Irwin III
2007-02-25 0:56 ` David Miller
2007-02-25 2:15 ` William Lee Irwin III
2007-02-25 6:21 ` Paul E. McKenney
2007-03-05 4:11 ` David Miller
2007-03-05 4:27 ` Nick Piggin
2007-03-05 4:38 ` David Miller
2007-03-05 4:42 ` Nick Piggin