linux-kernel.vger.kernel.org archive mirror
From: Brian Haslett <knotwurk@gmail.com>
To: steve@digidescorp.com
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH] increase pipe size/buffers/atomicity :D
Date: Fri, 9 Apr 2010 14:50:49 -0500	[thread overview]
Message-ID: <i2yb985fb501004091250i782b9b1ak1698149e10b1b601@mail.gmail.com> (raw)
In-Reply-To: <1270739687.3066.16.camel@iscandar.digidescorp.com>

[-- Attachment #1: Type: text/plain, Size: 3722 bytes --]

> On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
>> (tested and working with a 2.6.32.8 kernel, on an Athlon/686)
>
> It would be good to know what issue this addresses. Gives people a way
> to weigh any side-effects/drawbacks against the benefits, and an
> opportunity to suggest alternate/better approaches.
>

I wouldn't say it addresses anything I'd really consider broken; it
started as a personal experiment of mine, aimed at a modest
performance gain.  I figured, hey, bigger pipes, why not?  These pipe
sizes look like they've practically been around since the epoch.


>>  #define PIPE_BUF_FLAG_LRU      0x01    /* page is on the LRU */
>>  #define PIPE_BUF_FLAG_ATOMIC   0x02    /* was atomically mapped */
>> --- include/asm-generic/page.h.orig     2010-04-06 22:57:08.000000000 -0500
>> +++ include/asm-generic/page.h  2010-04-06 22:57:23.000000000 -0500
>> @@ -12,7 +12,7 @@
>>
>>  /* PAGE_SHIFT determines the page size */
>>
>> -#define PAGE_SHIFT     12
>> +#define PAGE_SHIFT     13
>
> This has pretty wide-ranging implications, both within and across
> arches. I don't think it's something that can be changed easily. Also I
> don't believe this #define is used in your configuration (Athlon/686)
> unless you're running without a MMU.
>

Actually, the reason I went after this one is the thing that started
this whole ordeal in the first place: line #135 in pipe_fs_i.h, which
reads "#define PIPE_SIZE    PAGE_SIZE".


>>  #ifdef __ASSEMBLY__
>>  #define PAGE_SIZE      (1 << PAGE_SHIFT)
>>  #else
>> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
>> +++ include/linux/limits.h      2010-04-06 22:56:28.000000000 -0500
>> @@ -10,7 +10,7 @@
>>  #define MAX_INPUT        255   /* size of the type-ahead buffer */
>>  #define NAME_MAX         255   /* # chars in a file name */
>>  #define PATH_MAX        4096   /* # chars in a path name including nul */
>> -#define PIPE_BUF        4096   /* # bytes in atomic write to a pipe */
>> +#define PIPE_BUF        8192   /* # bytes in atomic write to a pipe */
>

You'd think so (according to some posts I'd read before trying this),
but I actually tried several variations on a few things, and until I
changed *this one in particular*, my kernel would boot up fine, but
the shell/init phase would then repeatedly throw errors to the effect
of "unable to create pipe" and "too many file descriptors open".
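Incidentally, the PIPE_BUF a running system actually advertises can be
checked from userspace without rebuilding anything.  A quick sanity
probe (getconf queries pathconf(3), so it needs a path argument; /tmp
here is just an arbitrary existing path):

```shell
# PIPE_BUF is a pathconf(3) value, so getconf needs some existing path
# to query against; which path you pick doesn't matter for this limit.
getconf PIPE_BUF /tmp
```

On a stock Linux kernel this prints 4096 (one page).  POSIX only
guarantees 512 (_POSIX_PIPE_BUF), so raising it shouldn't bother
strictly conforming programs, though anything hard-coding 4096 as the
atomic-write boundary might notice.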

>> --- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000 -0500
>> +++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
>> @@ -3,7 +3,7 @@
>>
>>  #define PIPEFS_MAGIC 0x50495045
>>
>> -#define PIPE_BUFFERS (16)
>> +#define PIPE_BUFFERS (32)
>
> This worries me. In several places there are functions with 2 or 3
> pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
> anywhere from 128 to 384 bytes to the stack in these functions depending
> on sizeof(void*) and the number of arrays.
>

Since my initial goal was just to increase the size of the pipes, I
figured I might as well increase the number of buffers too (though
I'll admit I haven't poked through every little .c/.h file that uses
it).
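As a sanity check on the PIPE_BUFFERS side, the total pipe capacity
(PIPE_BUFFERS pages, in this scheme) can be measured from userspace by
filling a pipe with non-blocking writes until EAGAIN.  A rough sketch,
assuming GNU dd (for oflag=/iflag=nonblock) and an illustrative
throwaway fifo path:

```shell
# Measure how many bytes a pipe holds before a non-blocking write fails.
fifo=$(mktemp -u)      # illustrative throwaway path for the fifo
mkfifo "$fifo"
exec 3<>"$fifo"        # hold it open read-write so open() never blocks

# Fill until EAGAIN; dd exits non-zero when the pipe is full, which is fine.
dd if=/dev/zero of="$fifo" bs=1024 oflag=nonblock 2>/dev/null || true

# Drain it the same way and count what was actually buffered.
filled=$(dd if="$fifo" bs=1024 iflag=nonblock 2>/dev/null | wc -c)

exec 3<&-
rm -f "$fifo"
echo "pipe capacity: $filled bytes"
```

With the stock PIPE_BUFFERS=16 and 4K pages that should report 65536;
32 buffers of 8K pages would quadruple it to 262144.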

I wasn't seriously trying to push anyone into jumping through hoops
for this thing; I was just a little excited and figured I'd share with
you all.  I spent the better part of a few days researching, poking
around the kernel headers, and experimenting with different
combinations.  I've attached a .txt file explaining the controlled
(though probably not as thorough as you're used to) benchmark I ran.
It's not a pretty graph, I know, but gimme a break, I wrote it in vim
and did the math with bc ;)
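For anyone who wants to reproduce the runs below without retyping,
here's roughly the sweep as a script.  The output path and count are
placeholder stand-ins (the attached runs wrote to /root/benchmark with
counts of 20000/40000/80000), and awk does the averaging here instead
of bc:

```shell
# Sweep dd block sizes and average the throughput, like the attached runs.
# OUT and COUNT are placeholders, not the values used in the attachment.
export LC_ALL=C        # keep awk's decimal points predictable
OUT=/tmp/benchmark
COUNT=2000
total=0
for bs in 512 1024 2048 4096; do
    start=$(date +%s.%N)
    dd if=/dev/zero of="$OUT" bs=$bs count=$COUNT 2>/dev/null
    end=$(date +%s.%N)
    # throughput in dd's decimal MB/s: bytes / seconds / 1e6
    mbps=$(awk -v b="$bs" -v c="$COUNT" -v s="$start" -v e="$end" \
        'BEGIN { printf "%.1f", b * c / (e - s) / 1000000 }')
    echo "bs=$bs: $mbps MB/s"
    total=$(awk -v t="$total" -v m="$mbps" 'BEGIN { print t + m }')
done
awk -v t="$total" 'BEGIN { printf "average: %.3f MB/s\n", t / 4 }'
rm -f "$OUT"
```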

[-- Attachment #2: benchmark1.txt --]
[-- Type: text/plain, Size: 4418 bytes --]

%%%%%%%%%%%%%
WITHOUT PATCH
%%%%%%%%%%%%%

dd if=/dev/zero of=/root/benchmark bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.674347 s, 15.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.89386 s, 22.9 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=20000
20000+0 records in
20000+0 records out
40960000 bytes (41 MB) copied, 1.36237 s, 30.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=20000
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 2.81037 s, 29.1 MB/s

===============20000 blocks written averaged 24.325 MB/s
========================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=40000
40000+0 records in
40000+0 records out
20480000 bytes (20 MB) copied, 1.31354 s, 15.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=40000
40000+0 records in
40000+0 records out
40960000 bytes (41 MB) copied, 1.8173 s, 22.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=40000
40000+0 records in
40000+0 records out
81920000 bytes (82 MB) copied, 3.23683 s, 25.3 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=40000
40000+0 records in
40000+0 records out
163840000 bytes (164 MB) copied, 6.79296 s, 24.1 MB/s

================== 40000 blocks written averaged 21.875 MB/s
============================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=80000
80000+0 records in
80000+0 records out
40960000 bytes (41 MB) copied, 2.70969 s, 15.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=80000
80000+0 records in
80000+0 records out
81920000 bytes (82 MB) copied, 4.25879 s, 19.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=80000
80000+0 records in
80000+0 records out
163840000 bytes (164 MB) copied, 7.28753 s, 22.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=80000
80000+0 records in
80000+0 records out
327680000 bytes (328 MB) copied, 13.5436 s, 24.2 MB/s

==================== 80000 blocks written averaged 20.25 MB/s
=============================================================

%%%%%%%%%%%%%%
WITH PATCH (!)
%%%%%%%%%%%%%%

dd if=/dev/zero of=/root/benchmark bs=512 count=20000
20000+0 records in
20000+0 records out
10240000 bytes (10 MB) copied, 0.354359 s, 28.9 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.474818 s, 43.1 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=20000
20000+0 records in
20000+0 records out
40960000 bytes (41 MB) copied, 0.790466 s, 51.8 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=20000
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 1.51956 s, 53.9 MB/s

================= 20000 blocks written averaged 44.425 MB/s (+82.6%)
====================================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=40000
40000+0 records in
40000+0 records out
20480000 bytes (20 MB) copied, 0.731345 s, 28.0 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=40000
40000+0 records in
40000+0 records out
40960000 bytes (41 MB) copied, 1.06329 s, 38.5 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=40000
40000+0 records in
40000+0 records out
81920000 bytes (82 MB) copied, 1.85218 s, 44.2 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=40000
40000+0 records in
40000+0 records out
163840000 bytes (164 MB) copied, 4.08386 s, 40.1 MB/s

================= 40000 blocks written averaged 37.7 MB/s (+72.3%)
==================================================================

dd if=/dev/zero of=/root/benchmark bs=512 count=80000
80000+0 records in
80000+0 records out
40960000 bytes (41 MB) copied, 1.59573 s, 25.7 MB/s

dd if=/dev/zero of=/root/benchmark bs=1024 count=80000
80000+0 records in
80000+0 records out
81920000 bytes (82 MB) copied, 2.51223 s, 32.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=2048 count=80000
80000+0 records in
80000+0 records out
163840000 bytes (164 MB) copied, 4.59659 s, 35.6 MB/s

dd if=/dev/zero of=/root/benchmark bs=4096 count=80000
80000+0 records in
80000+0 records out
327680000 bytes (328 MB) copied, 10.3018 s, 31.8 MB/s
=================== 80000 blocks written averaged 31.425 MB/s (+55.2%)
======================================================================


Thread overview: 4+ messages
2010-04-08  1:38 [PATCH] increase pipe size/buffers/atomicity :D brian
2010-04-08  5:11 ` Eric Dumazet
2010-04-08 15:14 ` Steven J. Magnani
2010-04-09 19:50   ` Brian Haslett [this message]
