From: Christoph Hellwig <hch@infradead.org>
To: Daniel Stodden <daniel.stodden@citrix.com>
Cc: Christoph Hellwig <hch@infradead.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
"jaxboe@fusionio.com" <jaxboe@fusionio.com>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"konrad@kernel.org" <konrad@kernel.org>
Subject: Re: [Xen-devel] Re: [PATCH] xen block backend driver.
Date: Fri, 22 Apr 2011 05:09:04 -0400
Message-ID: <20110422090904.GA29246@infradead.org>
In-Reply-To: <1303413277.9571.133.camel@agari.van.xensource.com>
On Thu, Apr 21, 2011 at 12:14:37PM -0700, Daniel Stodden wrote:
> > There is a huge userbase of guests out there that does rely on it.
>
> Which ones? Old blkfront would have made a difference back then when
> barriers used to be an option, but it never actually declared it, right?
Pre-Linux 2.6.37 guests using reiserfs actually relied on the queue
flushing. This includes a lot of SLES installations which are still
in common use. There are only two options to make sure they work:
(1) keep the original barrier semantics and flush the queue
(2) do not advertise "barrier" support at all, and make sure to submit
every I/O we get with the FUA bit.
In practice (2) is going to be faster for most real-life workloads. So
maybe you should simply drop the old "barrier" support and send every
request with the FUA bit set for now, until you have proper flush
and FUA support in the protocol.
> Weeeeeelll, I certainly hope it can deal with backends which never got
> to see those headers. :o)
They probably try to handle it; no idea how correct the handling is
in the end.
Thread overview: 18+ messages
2011-04-20 21:05 [PATCH v3] xen block backend Konrad Rzeszutek Wilk
2011-04-20 21:05 ` [PATCH] xen block backend driver Konrad Rzeszutek Wilk
2011-04-21 3:40 ` Christoph Hellwig
2011-04-21 19:03 ` [Xen-devel] " Daniel Stodden
2011-04-21 19:06 ` Christoph Hellwig
2011-04-21 19:14 ` [Xen-devel] " Daniel Stodden
2011-04-22 9:09 ` Christoph Hellwig [this message]
2011-05-02 19:08 ` [Xen-devel] Re: [PATCH] xen block backend driver. - proper flush/barrier/fua support missing Konrad Rzeszutek Wilk
2011-04-21 3:37 ` [PATCH v3] xen block backend Christoph Hellwig
2011-04-21 7:28 ` [Xen-devel] " Ian Campbell
2011-04-21 8:03 ` Ian Campbell
2011-04-21 8:06 ` Christoph Hellwig
2011-04-21 8:38 ` Ian Campbell
2011-04-21 8:04 ` [Xen-devel] " Christoph Hellwig
2011-04-27 22:06 ` Konrad Rzeszutek Wilk
2011-04-28 19:29 ` Pasi Kärkkäinen
-- strict thread matches above, loose matches on Subject: below --
2011-05-05 18:11 [PATCH v3.1] " Konrad Rzeszutek Wilk
2011-05-05 18:11 ` [PATCH] xen block backend driver Konrad Rzeszutek Wilk
2011-05-11 15:02 ` Ian Campbell
2011-05-12 21:01 ` [Xen-devel] " Konrad Rzeszutek Wilk
2011-05-13 7:06 ` Ian Campbell