From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>,
	Anand Avati <aavati@redhat.com>,
	Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
	Vijay Bellur <vbellur@redhat.com>,
	Amar Tumballi <amarts@redhat.com>,
	qemu-devel@nongnu.org, Blue Swirl <blauwirbel@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Wed, 15 Aug 2012 14:52:05 +0530
Message-ID: <20120815092205.GM24944@in.ibm.com>
In-Reply-To: <502B571B.2090407@redhat.com>

On Wed, Aug 15, 2012 at 10:00:27AM +0200, Kevin Wolf wrote:
> Am 15.08.2012 07:21, schrieb Bharata B Rao:
> > On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> >>>>> +static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
> >>>>> +{
> >>>>> +    GlusterAIOCB *acb = (GlusterAIOCB *)arg;
> >>>>> +    BDRVGlusterState *s = acb->common.bs->opaque;
> >>>>> +
> >>>>> +    acb->ret = ret;
> >>>>> +    if (qemu_gluster_send_pipe(s, acb) < 0) {
> >>>>> +        /*
> >>>>> +         * Gluster AIO callback thread failed to notify the waiting
> >>>>> +         * QEMU thread about IO completion. Nothing much can be done
> >>>>> +         * here but to abruptly abort.
> >>>>> +         *
> >>>>> +         * FIXME: Check if the read side of the fd handler can somehow
> >>>>> +         * be notified of this failure paving the way for a graceful exit.
> >>>>> +         */
> >>>>> +        error_report("Gluster failed to notify QEMU about IO completion");
> >>>>> +        abort();
> >>>>
> >>>> In the extreme case you may choose to make this disk inaccessible
> >>>> (something like bs->drv = NULL), but abort() kills the whole VM and
> >>>> should only be called when there is a bug.
> >>>
> >>> There have been concerns raised about this earlier too. I settled for this
> >>> since I couldn't see a better way out, and there is a precedent
> >>> for it in posix-aio-compat.c.
> >>>
> >>> So I could just do the necessary cleanup, set bs->drv to NULL and return from
> >>> here? But how do I wake up the QEMU thread that is waiting on the read side
> >>> of the pipe? Without that, it remains hung.
> >>
> >> There is no other thread. But you're right, you should probably
> >> unregister the aio_fd_handler and any other pending callbacks.
> > 
> > As I clarified in the other mail, this (gluster_finish_aiocb) is called
> > from gluster thread context, and hence the QEMU thread that issued the
> > original read/write request is still blocked in qemu_aio_wait().
> > 
> > I tried the following cleanup instead of an abrupt abort:
> > 
> > close(read_fd); /* This will wake up the QEMU thread blocked on select(read_fd...) */
> > close(write_fd);
> > qemu_aio_set_fd_handler(read_fd, NULL, NULL, NULL, NULL); /* unregister the handler */
> > qemu_aio_release(acb); /* release the failed request */
> > s->qemu_aio_count--;
> > bs->drv = NULL; /* make the disk inaccessible */
> > 
> > I tested this by manually injecting faults into qemu_gluster_send_pipe().
> > With the above cleanup, the guest kernel crashes with IO errors.
> 
> What does "crash" really mean? IO errors certainly shouldn't cause a
> kernel to crash?

The failed IO corrupted the root file system, which subsequently led
to a kernel panic:

[    1.529042] dracut: Switching root
qemu-system-x86_64: Gluster failed to notify QEMU about IO completion
qemu-system-x86_64: Gluster failed to notify QEMU about IO completion
qemu-system-x86_64: Gluster failed to notify QEMU about IO completion
qemu-system-x86_64: Gluster failed to notify QEMU about IO completion
[    1.584130] end_request: I/O error, dev vda, sector 13615224
[    1.585119] end_request: I/O error, dev vda, sector 13615344
[    1.585119] end_request: I/O error, dev vda, sector 13615352
[    1.585119] end_request: I/O error, dev vda, sector 13615360
[    1.593188] end_request: I/O error, dev vda, sector 1030144
[    1.594169] Buffer I/O error on device vda3, logical block 0
[    1.594169] lost page write due to I/O error on vda3
[    1.594169] EXT4-fs error (device vda3): __ext4_get_inode_loc:3539: inode #392441: block 1573135: comm systemd: unable to read itable block
[...]
[    1.620064] EXT4-fs error (device vda3): __ext4_get_inode_loc:3539: inode #392441: block 1573135: comm systemd: unable to read itable block
/usr/lib/systemd/systemd: error while loading shared libraries: libselinux.so.1: cannot open shared object file: Input/output error
[    1.626193] Kernel panic - not syncing: Attempted to kill init!
[    1.627789] Pid: 1, comm: systemd Not tainted 3.3.4-5.fc17.x86_64 #1
[    1.630063] Call Trace:
[    1.631120]  [<ffffffff815e21eb>] panic+0xba/0x1c6
[    1.632477]  [<ffffffff8105aff1>] do_exit+0x8b1/0x8c0
[    1.633851]  [<ffffffff8105b34f>] do_group_exit+0x3f/0xa0
[    1.635258]  [<ffffffff8105b3c7>] sys_exit_group+0x17/0x20
[    1.636619]  [<ffffffff815f38e9>] system_call_fastpath+0x16/0x1b
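
(To make the failure mode concrete for anyone reading along: the
notification path looks roughly like the sketch below. This is a
simplified illustration, not the patch verbatim; the reader function
and the fds[] field names are my shorthand.)

static int qemu_gluster_send_pipe(BDRVGlusterState *s, GlusterAIOCB *acb)
{
    ssize_t ret;

    /* Gluster callback thread: hand the completed acb over to QEMU
     * by writing its pointer to the notification pipe. */
    ret = write(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
    return ret == sizeof(acb) ? 0 : -1;
}

static void qemu_gluster_aio_event_reader(void *opaque)
{
    BDRVGlusterState *s = opaque;
    GlusterAIOCB *acb;

    /* QEMU thread, via the fd handler on the pipe's read end: pick
     * up the completed request and finish it. */
    if (read(s->fds[GLUSTER_FD_READ], &acb, sizeof(acb)) == sizeof(acb)) {
        acb->common.cb(acb->common.opaque, acb->ret);
        qemu_aio_release(acb);
        s->qemu_aio_count--;
    }
}

When the write() above fails, this reader never runs, so the request
can never complete; hence the original abort().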

> 
> > Is there anything else that I need to do, or do differently, to keep the
> > VM running without disk access?
> > 
> > I thought of completing the aio callback by doing
> > acb->common.cb(acb->common.opaque, -EIO);
> > but that would do a coroutine enter from the gluster thread, which I don't
> > think should be done.
> 
> You would have to take the global qemu mutex at least. I agree it's not
> a good thing to do.
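
Right. Just so it's concrete, the variant being ruled out here would be
something like the sketch below (taking the global mutex from the
gluster thread via qemu_mutex_lock_iothread(); the function name is
mine):

static void gluster_complete_inline(GlusterAIOCB *acb)
{
    /* Called from the gluster callback thread. Holding the global
     * QEMU mutex serializes us against the main loop, but
     * common.cb() still re-enters a coroutine from a foreign
     * thread, which is why this remains dubious. */
    qemu_mutex_lock_iothread();
    acb->common.cb(acb->common.opaque, -EIO);
    qemu_aio_release(acb);
    qemu_mutex_unlock_iothread();
}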

So is it really worth doing all this to handle this unlikely error? The
chances of this error happening are quite remote, I believe.
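
(For reference, the "make the disk inaccessible" option relies on the
block layer's own guard: once bs->drv is NULL, new requests are
rejected up front, roughly like the check below, so the guest just
sees IO errors while the VM itself keeps running.)

static int coroutine_fn null_drv_guard(BlockDriverState *bs)
{
    /* Sketch of the guard at the top of the block layer's read/write
     * paths: with bs->drv cleared, requests fail with -ENOMEDIUM
     * instead of reaching the (now dead) gluster backend. */
    if (!bs->drv) {
        return -ENOMEDIUM;
    }
    return 0;
}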

Regards,
Bharata.

