From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] seq_file: Use larger buffer to reduce time traversing lists
Date: Fri, 01 Jun 2012 16:28:20 +0100 [thread overview]
Message-ID: <1338564500.2708.73.camel@menhir> (raw)
In-Reply-To: <1338562627.2760.1526.camel@edumazet-glaptop>
Hi,
On Fri, 2012-06-01 at 16:57 +0200, Eric Dumazet wrote:
> On Fri, 2012-06-01 at 15:18 +0100, Steven Whitehouse wrote:
> > 0m0.374s
> >
> > So even with the current tcp scheme this appears to speed things up by
> > nearly 3x. Also that was with only 28000 entries in the file,
>
> Initial speedup was 100x, not 3x.
>
According to the patch description, that 100x was with 182k entries,
so it is not comparing like with like, although I accept that it did
provide a really good improvement. I'm not suggesting that we should
have one approach or the other, only that both are worth considering.
I'll certainly have a look at the hash table based approach too.
> Of course using a 32KB buffer instead of 4KB will help.
>
> And if someone needs 100,000 active unix sockets and a fast
> /proc/net/udp file as well, a patch is welcome. If I have time I can
> do it eventually.
>
> Really, kmalloc(2 MB) is not going to happen, even using __GFP_NOWARN
>
It is designed so that if this large allocation fails, we simply fall
back to the old slow path, so I'm not sure that this is an issue. The
code will not fail to work just because the initial kmalloc fails, so
we would be no worse off than with the current code. We could also
trim the top size limit down to something a bit smaller than
KMALLOC_MAX_SIZE and still get most of the benefit; I just chose that
as a convenient upper limit to show the principle.
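
To make the fallback concrete, here is a rough sketch of the idea
(this is not the actual patch; the helper name, the size cap and the
way the seq_file fields are updated are purely illustrative):

#include <linux/slab.h>
#include <linux/seq_file.h>

/*
 * Try a single large buffer first, staying quiet on failure, and drop
 * back to the ordinary single-page buffer if it cannot be satisfied.
 */
static void *seq_alloc_big_buf(struct seq_file *m, size_t want)
{
	void *buf;

	/* Cap the optimistic size; KMALLOC_MAX_SIZE is just a convenient limit */
	if (want > KMALLOC_MAX_SIZE)
		want = KMALLOC_MAX_SIZE;

	buf = kmalloc(want, GFP_KERNEL | __GFP_NOWARN);
	if (buf) {
		m->size = want;
		return buf;
	}

	/* Fall back to the traditional PAGE_SIZE buffer */
	m->size = PAGE_SIZE;
	return kmalloc(m->size, GFP_KERNEL);
}

Either way the read succeeds; the large buffer only changes how many
list entries can be formatted per traversal of the list.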
Also, I think it wouldn't be unreasonable to argue that if the
probability of a KMALLOC_MAX_SIZE allocation failing is so high that it
is very unlikely to ever succeed, then perhaps KMALLOC_MAX_SIZE is too
large.
So I know that I might not have convinced you :-) but I still think
that something along these lines is worth considering. I had looked at
various other possible ways of achieving a similar effect, but in the
end I rejected all of them, as they fell foul of some of the
subtleties of the seq_read() code.
Steve.