From: Dave Chinner <david@fromorbit.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>,
linux-fsdevel@vger.kernel.org, linux-mm@vger.kernel.org,
linux-kernel@vger.kernel.org, axboe@kernel.dk
Subject: Re: [PATCH 0/6] Page I/O
Date: Wed, 15 Jan 2014 11:04:08 +1100 [thread overview]
Message-ID: <20140115000408.GN3469@dastard> (raw)
In-Reply-To: <x49ha9bhnc8.fsf@segfault.boston.devel.redhat.com>
On Fri, Jan 10, 2014 at 10:24:07AM -0500, Jeff Moyer wrote:
> Matthew Wilcox <matthew.r.wilcox@intel.com> writes:
>
> > This patch set implements pageio as I described in my talk at
> > Linux.Conf.AU. It's for review more than for application; I think
> > benchmarking is going to be required to see if it's a win. We've done
> > some benchmarking with an earlier version of the patch and a Chatham
> > card, and it's a win for us.
> >
> > The fundamental point of these patches is that we *can* do I/O without
> > allocating a BIO (or request, or ...) and so we can end up doing fun
> > things like swapping out a page without allocating any memory.
> >
> > Possibly it would be interesting to do sub-page I/Os (ie change the
> > rw_page prototype to take a 'start' and 'length' instead of requiring
> > the I/O to be the entire page), but the question then arises of what
> > the 'done' callback should be.
>
> For those of us who were not fortunate enough to attend your talk,
> would you mind providing some background, like why you went down this
> path in the first place, and maybe what benchmarks you ran where you
> found it "a win"?
No need to attend - the LCA A/V team live streamed it over the
intertubes and had the recording up on the mirror within 24 hours:
http://mirror.linux.org.au/pub/linux.conf.au/2014/Thursday/239-Further_adventures_in_non-volatile_memory_-_Matthew_Wilcox.mp4
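
For anyone who hasn't watched it or dug through the patches yet: the
core of the series is a per-driver page read/write hook that bypasses
bio and request allocation entirely. A minimal sketch of what such a
hook might look like for a RAM-backed driver like brd, going purely
from the patch titles (the prototype and helper names below are my
guesses, not the exact code from the posted series):

/*
 * Illustrative sketch only: a ->rw_page() style hook for a RAM-backed
 * driver such as brd.  Prototype and helper names are assumptions
 * based on the patch titles, not copied from the posted series.
 */
static int brd_rw_page(struct block_device *bdev, sector_t sector,
		       struct page *page, int rw)
{
	struct brd_device *brd = bdev->bd_disk->private_data;
	int err;

	/* Move the whole page to/from brd's backing store.  No bio and
	 * no request is allocated; the "IO" is just a CPU memcpy. */
	err = brd_do_bvec(brd, page, PAGE_SIZE, 0, rw, sector);

	/* Complete the page directly: unlock it for a read, end
	 * writeback for a write (the page_endio() helper of patch 2/6). */
	page_endio(page, rw, err);
	return err;
}

The swap and page cache paths would then try this hook first and only
fall back to building a bio when the driver doesn't provide one.
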
> Another code path making an end-run around the block layer is
> interesting, but may keep cgroup I/O throttling from working properly,
> for example.
Well, this is really aimed at CPU-cache-coherent, DRAM-speed devices,
so I think the per-IO overhead of throttling would be prohibitive for
such devices. IMO, those devices will spend more CPU time in the IO
path than doing the IO (which is likely to be the CPU doing a
memcpy!), so IO rates will be more effectively controlled by
restricting CPU time than by adding extra overhead to the block layer
to account for each individual IO....
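
To make the "no allocation" point concrete: as I read the series, the
swap-out path boils down to something like the sketch below. Again,
the helper name and signature of bdev_write_page() (patch 1/6) are my
reading of the patch titles, not the exact posted code:

/* Sketch of a bio-less swap write path.  Assumes bdev_write_page()
 * returns an error when the driver provides no rw_page hook. */
int swap_writepage_sketch(struct page *page, struct writeback_control *wbc)
{
	struct block_device *bdev;
	sector_t sector = map_swap_page(page, &bdev);
	int err;

	/* Try the driver's page hook first: no bio, no request, and
	 * hence no memory allocation while we're under memory pressure. */
	err = bdev_write_page(bdev, sector, page, wbc);
	if (!err)
		return 0;

	/* Driver has no rw_page (or it failed): fall back to the
	 * existing bio-based path. */
	return __swap_writepage(page, wbc, end_swap_bio_write);
}

That swap-out case is where avoiding the bio allocation really
matters; everywhere else it's just a shorter fast path.
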
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 10+ messages
2014-01-10 2:39 [PATCH 0/6] Page I/O Matthew Wilcox
2014-01-10 2:39 ` [PATCH 1/6] Add bdev_read_page() and bdev_write_page() Matthew Wilcox
2014-01-10 2:39 ` [PATCH 2/6] Factor page_endio() out of mpage_end_io() Matthew Wilcox
2014-01-10 2:39 ` [PATCH 3/6] swap: Use bdev_read_page() / bdev_write_page() Matthew Wilcox
2014-01-10 2:39 ` [PATCH 4/6] brd: Add support for rw_page Matthew Wilcox
2014-01-10 2:39 ` [PATCH 5/6] virtio_blk: Add rw_page implementation Matthew Wilcox
2014-01-10 2:39 ` [PATCH 6/6] NVMe: Add support for rw_page Matthew Wilcox
2014-01-10 15:24 ` [PATCH 0/6] Page I/O Jeff Moyer
2014-01-10 16:17 ` Matthew Wilcox
2014-01-15 0:04 ` Dave Chinner [this message]