public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
From: Avi Kivity <avi@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Anthony Liguori <anthony@codemonkey.ws>,
	Rusty Russell <rusty@rustcorp.com.au>,
	kvm@vger.kernel.org
Subject: Re: [PATCH, RFC] virtio_blk: add cache flush command
Date: Mon, 11 May 2009 19:49:37 +0300	[thread overview]
Message-ID: <4A085721.2050005@redhat.com> (raw)
In-Reply-To: <20090511162810.GA6027@lst.de>

Christoph Hellwig wrote:
> On Mon, May 11, 2009 at 06:45:50PM +0300, Avi Kivity wrote:
>   
>>> Right now it's fsync.  By the time I'll submit the backend change it
>>> will still be fsync, but at least called from the posix-aio-compat
>>> thread pool.
>>>  
>>>       
>> I think if we have cache=writeback we should ignore this.
>>     
>
> It's only needed for cache=writeback, because without that there is no
> reason to flush a write cache.
>   

Maybe we should add a fourth cache= mode then.  But 
cache=writeback+fsync doesn't correspond to any real world drive; in the 
real world you're limited to power failures and a few megabytes of cache 
(typically less), cache=writeback+fsync can lose hundreds of megabytes 
due to power loss or software failure.

Oh, and cache=writeback+fsync doesn't work on qcow2, unless we add fsync 
after metadata updates.

>> For cache=none and cache=writethrough we don't really need fsync, but we 
>> do need to flush the inflight commands.
>>     
>
> What we do need for those modes is the basic barrier support because
> we can currently re-order requests.  The next version of my patch will
> implement a barriers without cache flush mode, although I don't think
> a fdatasync without any outstanding dirty data should cause problems.
>   

Yeah.  And maybe one day push the barrier into the kernel.

> (Or maybe ext3 actually is stupid enough to flush the whole fs even for
> that case

Sigh.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.


  reply	other threads:[~2009-05-11 16:50 UTC|newest]

Thread overview: 21+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2009-05-11  8:39 [PATCH, RFC] virtio_blk: add cache flush command Christoph Hellwig
2009-05-11 14:51 ` Anthony Liguori
2009-05-11 15:40   ` Christoph Hellwig
2009-05-11 15:45     ` Avi Kivity
2009-05-11 16:28       ` Christoph Hellwig
2009-05-11 16:49         ` Avi Kivity [this message]
2009-05-11 17:47           ` Anthony Liguori
2009-05-11 18:00             ` Avi Kivity
2009-05-11 18:29               ` Anthony Liguori
2009-05-11 18:40                 ` Avi Kivity
2009-05-18 12:03                 ` Christoph Hellwig
2009-05-12  7:23             ` Christoph Hellwig
2009-05-12  7:19           ` Christoph Hellwig
2009-05-12  8:35             ` Avi Kivity
2009-05-18 12:06               ` Christoph Hellwig
2009-05-11 16:38     ` Anthony Liguori
2009-05-12  7:26       ` Christoph Hellwig
2009-05-12 13:54 ` Rusty Russell
2009-05-12 14:18   ` Christian Borntraeger
2009-05-13  1:52     ` Rusty Russell
2009-05-18 12:07     ` Christoph Hellwig

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4A085721.2050005@redhat.com \
    --to=avi@redhat.com \
    --cc=anthony@codemonkey.ws \
    --cc=hch@lst.de \
    --cc=kvm@vger.kernel.org \
    --cc=rusty@rustcorp.com.au \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox