qemu-devel.nongnu.org archive mirror
From: Jeff Cody <jcody@redhat.com>
To: Ric Wheeler <rwheeler@redhat.com>
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org, kwolf@redhat.com,
	pkarampu@redhat.com, rgowdapp@redhat.com, ndevos@redhat.com,
	Rik van Riel <riel@redhat.com>
Subject: Re: [Qemu-devel] [PATCH for-2.6 v2 0/3] Bug fixes for gluster
Date: Tue, 19 Apr 2016 10:09:17 -0400	[thread overview]
Message-ID: <20160419140917.GC4841@localhost.localdomain> (raw)
In-Reply-To: <5716221F.8010200@redhat.com>

On Tue, Apr 19, 2016 at 08:18:39AM -0400, Ric Wheeler wrote:
> On 04/19/2016 08:07 AM, Jeff Cody wrote:
> >Bug fixes for gluster; third patch is to prevent
> >a potential data loss when trying to recover from
> >a recoverable error (such as ENOSPC).
> 
> Hi Jeff,
> 
> Just a note, I have been talking to some of the disk drive people
> here at LSF (the kernel summit for file and storage people) and got
> a non-public confirmation that individual storage devices (s-ata
> drives or scsi) can also dump cache state when a synchronize cache
> command fails.  Also followed up with Rik van Riel - in the page
> cache in general, when we fail to write back dirty pages, they are
> simply marked "clean" (which means effectively that they get
> dropped).
> 
> Long winded way of saying that I think that this scenario is not
> unique to gluster - any failed fsync() to a file (or block device)
> might be an indication of permanent data loss.
>

Ric,

Thanks.

I think you are right; we likely do need to address how QEMU handles fsync
failures across the board at some point (2.7?).  Another point to consider
is that QEMU is cross-platform, so we have not only different protocols
and filesystems, but also different underlying host OSes.  It is likely,
as you said, that there are other non-gluster scenarios where we have
non-recoverable data loss on fsync failure.
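As a concrete illustration of the hazard above -- this is a hypothetical
sketch, not QEMU code, and write_and_sync is an invented name -- once
fsync() fails, the kernel page cache may already have marked the dirty
pages clean, so a *later* fsync() can return 0 without the data ever
having reached disk.  The only safe reaction is to report the first
failure and stop trusting the file descriptor:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Write a buffer and flush it to stable storage.  Returns 0 on
 * success, or a negative errno value on failure. */
static int write_and_sync(int fd, const void *buf, size_t len)
{
    if (write(fd, buf, len) < 0) {
        return -errno;
    }
    if (fsync(fd) < 0) {
        /* Do NOT retry fsync() here and report success if the
         * retry returns 0 -- the dirty pages backing this fd may
         * already have been dropped by the kernel. */
        return -errno;
    }
    return 0;
}
```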

With Gluster specifically, if we look at just ENOSPC: even if Gluster
retains its cache after an fsync failure, does this mean we still can't be
sure there was no permanent data loss?  If we hit ENOSPC during an fsync, I
presume that means Gluster itself may have encountered ENOSPC from an fsync
to the underlying storage.  In that case, does Gluster just pass the error
up the stack?

Jeff

> 
> >
> >The final patch closes the gluster fd and sets the
> >protocol drv to NULL on fsync failure in gluster;
> >we have no way of knowing which gluster versions
> >support retaining the fsync cache on error, so
> >until we do, the safest thing is to invalidate
> >the drive.
> >
> >Jeff Cody (3):
> >   block/gluster: return correct error value
> >   block/gluster: code movement of qemu_gluster_close()
> >   block/gluster: prevent data loss after i/o error
> >
> >  block/gluster.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++-----------
> >  configure       |  8 +++++++
> >  2 files changed, 62 insertions(+), 12 deletions(-)
> >
> 
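For readers skimming the archive, the invalidation approach described in
the quoted cover letter amounts to roughly the following.  This is an
illustrative sketch only -- the struct and function names are invented,
not the actual block/gluster.c code:

```c
#include <errno.h>
#include <stddef.h>

/* Stand-in for the driver state; glfs/fd model the connection and
 * file handles that would normally come from libgfapi. */
struct gluster_state {
    void *glfs;   /* connection handle, NULL once invalidated */
    void *fd;     /* open file handle, NULL once invalidated */
};

/* Stand-in for closing the connection (glfs_close()/glfs_fini()). */
static void demo_close(struct gluster_state *s)
{
    s->fd = NULL;
    s->glfs = NULL;
}

/* On flush failure, tear down the handles so every later request
 * fails fast instead of silently serving possibly-stale data. */
static int demo_flush(struct gluster_state *s, int fsync_ret)
{
    if (s->fd == NULL) {
        return -EIO;            /* already invalidated */
    }
    if (fsync_ret < 0) {
        demo_close(s);          /* cache state unknown: drop it */
        return fsync_ret;
    }
    return 0;
}
```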


Thread overview: 16+ messages
2016-04-19 12:07 [Qemu-devel] [PATCH for-2.6 v2 0/3] Bug fixes for gluster Jeff Cody
2016-04-19 12:07 ` [Qemu-devel] [PATCH for-2.6 v2 1/3] block/gluster: return correct error value Jeff Cody
2016-04-19 12:07 ` [Qemu-devel] [PATCH for-2.6 v2 2/3] block/gluster: code movement of qemu_gluster_close() Jeff Cody
2016-04-19 12:07 ` [Qemu-devel] [PATCH for-2.6 v2 3/3] block/gluster: prevent data loss after i/o error Jeff Cody
2016-04-19 12:27   ` Kevin Wolf
2016-04-19 12:29     ` Jeff Cody
2016-04-19 12:18 ` [Qemu-devel] [PATCH for-2.6 v2 0/3] Bug fixes for gluster Ric Wheeler
2016-04-19 14:09   ` Jeff Cody [this message]
2016-04-20  1:56     ` Ric Wheeler
2016-04-20  9:24       ` Kevin Wolf
2016-04-20 10:40         ` Ric Wheeler
2016-04-20 11:46           ` Kevin Wolf
2016-04-20 18:38             ` Rik van Riel
2016-04-21  8:43               ` Kevin Wolf
2016-04-20 18:37           ` Rik van Riel
2016-04-20  5:15     ` Raghavendra Gowdappa
