From: David Mansfield <lkml@dm.ultramaster.com>
To: Andrea Arcangeli <andrea@suse.de>
Cc: Sasi Peter <sape@iq.rulez.org>,
Godfrey Livingstone <godfrey@hattaway-associates.com>,
linux-kernel@vger.kernel.org
Subject: Re: Ingo's RAID patch for 2.2.18 final?
Date: Fri, 16 Feb 2001 10:53:51 -0500 [thread overview]
Message-ID: <3A8D4D0F.5EB9BDB1@dm.ultramaster.com> (raw)
In-Reply-To: <200101241505.QAA01045@iq.rulez.org> <20010216151737.D14430@inspiron.random>
Andrea Arcangeli wrote:
>
> On Wed, Jan 24, 2001 at 04:05:12PM +0100, Sasi Peter wrote:
> > > This isn't obvious. Your working set may not fit in cache, and so the
> > > kernel understands it's worthless to swap out stuff to make space for a
> > > polluted cache.
> >
> > But your understanding agrees that the larger the chunks we read into
> > cache for each stream, the more efficient the read is for this kind of
> > RAID disk structure; thus, basically, the more cache we have, the more
> > bandwidth we can serve. (Disks deliver more data in the same time with
> > a few long reads than with several shorter ones.)
>
> The size of the I/O requests doesn't grow linearly with the size of the
> cache; as long as you have some MB of cache you will also be able to submit
> full-sized requests to disk (512K per request on 2.4). In your workload you
> just had enough memory for the readahead.
>
> In general, if your working set doesn't fit in cache, the size of the cache
> is unrelated to the bandwidth you get out of your RAID. In fact, if your
> working set doesn't fit in cache, you should not pass through the cache at
> all, to get the best performance and to save CPU cycles, L1 dcache and L2
> cache (O_DIRECT).
>
This may be a bit OT, but when you say O_DIRECT, does that mean you can
pass that flag to open(2) and it will bypass the page cache and read
directly into user-space buffers (zero-copy I/O)? Does this also bypass
the read-ahead mechanisms in the kernel? Does it imply O_SYNC?
Lots of questions... no answers. Sigh.
David
--
David Mansfield (718) 963-2020
david@ultramaster.com
Ultramaster Group, LLC www.ultramaster.com