From: Jamie Lokier <jamie@shareable.org>
To: Christoph Hellwig <hch@lst.de>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 2/3] barriers: block-raw-posix barrier support
Date: Tue, 5 May 2009 17:00:11 +0100 [thread overview]
Message-ID: <20090505160011.GD31100@shareable.org> (raw)
In-Reply-To: <20090505132944.GA3416@lst.de>
Christoph Hellwig wrote:
> On Tue, May 05, 2009 at 01:33:11PM +0100, Jamie Lokier wrote:
> > You don't need two fdatasyncs if the barrier request is just a
> > barrier, no data write, used only to flush previously written data by
> > a guest's fsync/fdatasync implementation.
>
> Yeah. I'll put that optimization in after some testing.
I suggest keeping a flag "flush_needed": set it whenever a write is
submitted, clear it whenever an fsync/fdatasync is submitted, and skip
the fsync/fdatasync entirely when the flag is already clear. That
provides a few more optimisation opportunities.
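A minimal sketch of that flag logic, assuming a hypothetical per-device struct and function names (not QEMU's actual API):

```c
#include <stdbool.h>

/* Hypothetical per-device state; field and function names are
 * illustrative only, not QEMU's actual block-layer API. */
typedef struct BlockState {
    bool flush_needed;  /* true iff a write was submitted since the last sync */
} BlockState;

static void submit_write(BlockState *bs)
{
    /* ... issue the write to the host here ... */
    bs->flush_needed = true;
}

/* Returns true if an fdatasync was actually needed and issued. */
static bool submit_flush(BlockState *bs)
{
    if (!bs->flush_needed) {
        return false;   /* nothing written since the last sync: skip it */
    }
    /* ... call fdatasync(fd) on the image file here ... */
    bs->flush_needed = false;
    return true;
}
```

So a guest that issues back-to-back fsyncs with no intervening writes only pays for the first one.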
> > This is the best argument yet for having distinct "barrier" and "sync"
> > operations. "Barrier" is for ordering I/O, such as journalling
> > filesystems.
>
> Doesn't really help as long as we're using the normal Posix filesystem
> APIs on the host. The only way to guarantee ordering of multiple
> *write* system calls is to call f(data)sync between them.
It doesn't help with journalling barriers, which I agree are dominant
in a lot of workloads, but it does help guest fsync-heavy workloads.
When the request is "Sync && !Barrier", the guest doesn't require the
full ordering guarantee.
Therefore you can call f(data)sync _and_ issue writes on other I/O
threads in parallel. The f(data)sync mustn't be started until
previously-queued writes are complete, but later-queued writes can be
issued in parallel with it.
(Or, if using Linux AIO, the same holds with aio_fsync and
later-queued aio_writes in parallel.)
In other words, with a guest fdatasync-heavy workload, like a
database, it could keep the I/O pipeline busy instead of draining it
as the full barrier does.
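To illustrate the ordering rule above, here is a toy model (not QEMU code; all names are hypothetical) that decides whether one request may be dispatched while another is still in flight, tagging each request with its position in the submission queue:

```c
#include <stdbool.h>

/* Toy model of the "Sync && !Barrier" dispatch rule:
 * a sync-only request must wait for writes queued before it,
 * but writes queued after it may run concurrently with it. */
typedef struct Request {
    unsigned seq;    /* position in the submission queue */
    bool is_flush;   /* sync-only request (no data payload) */
} Request;

/* May 'req' be dispatched while 'in_flight' is still running? */
static bool can_run_concurrently(const Request *in_flight,
                                 const Request *req)
{
    if (req->is_flush) {
        /* A flush must wait for all earlier-queued writes. */
        return in_flight->seq > req->seq;
    }
    if (in_flight->is_flush) {
        /* Later-queued writes may overlap an in-flight flush. */
        return req->seq > in_flight->seq;
    }
    return true;  /* two plain writes: no ordering imposed here */
}
```

A full barrier, by contrast, would forbid both overlaps, which is exactly the pipeline drain described above.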
It won't help with a journalling-barrier-heavy workload without
changes to the host to expose the distinct barrier types - i.e. a more
flexible alternative to f(data)sync, such as is occasionally discussed
elsewhere.
-- Jamie