From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: Blue Swirl <blauwirbel@gmail.com>
Cc: Amar Tumballi <amarts@redhat.com>,
Vijay Bellur <vbellur@redhat.com>,
Anand Avati <aavati@redhat.com>,
qemu-devel@nongnu.org,
Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH v4 2/2] block: Support GlusterFS as a QEMU block backend
Date: Sat, 4 Aug 2012 08:14:30 +0530
Message-ID: <20120804024430.GD26789@in.ibm.com>
In-Reply-To: <CAAu8pHvKLgzieTqDT=UpNaX2b_Nv83VzsVqno_eA8_EYVW5pag@mail.gmail.com>

On Fri, Aug 03, 2012 at 03:57:20PM +0000, Blue Swirl wrote:
> >> > +static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
> >> > +{
> >> > + GlusterAIOCB *acb = (GlusterAIOCB *)arg;
> >> > + BDRVGlusterState *s = acb->common.bs->opaque;
> >> > +
> >> > + acb->ret = ret;
> >> > + if (qemu_gluster_send_pipe(s, acb) < 0) {
> >> > + error_report("Could not complete read/write/flush from gluster");
> >> > + abort();
> >>
> >> Aborting is a bit drastic, it would be nice to save and exit gracefully.
> >
> > I am not sure if there is an easy way to recover sanely and exit from this
> > kind of error.
> >
> > Here the non-QEMU thread (gluster thread) failed to notify the QEMU thread
> > on the read side of the pipe about the IO completion. So essentially
> > bdrv_read or bdrv_write will never complete if this error happens.
> >
> > Do you have any suggestions on how to exit gracefully here ?
>
> Ignore but set the callback return to -EIO, see for example curl.c:249.

I see the precedent for how I am handling this in
posix-aio-compat.c:posix_aio_notify_event().

So instead of aborting, I could do acb->common.cb(acb->common.opaque, -EIO)
as you suggest, but that would not help because the thread on the read side
of the pipe is still waiting, and the user would see the read/write failure
as a hang.
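
For reference, that variant would look roughly like this (a sketch only;
gluster_finish_aiocb(), GlusterAIOCB and qemu_gluster_send_pipe() are from
this patch, the curl.c-style completion on the error path is the assumed
part):

static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
    GlusterAIOCB *acb = (GlusterAIOCB *)arg;
    BDRVGlusterState *s = acb->common.bs->opaque;

    acb->ret = ret;
    if (qemu_gluster_send_pipe(s, acb) < 0) {
        /*
         * Sketch: instead of abort(), complete the request with -EIO
         * from here (the gluster thread), the way block/curl.c completes
         * failed requests, and release the AIOCB.
         */
        error_report("Could not complete read/write/flush from gluster");
        acb->common.cb(acb->common.opaque, -EIO);
        qemu_aio_release(acb);
    }
}

Even with that, the QEMU thread sitting on the read side of the pipe is
never notified, so the caller stays blocked in qemu_aio_wait(), as this
backtrace shows:
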
[root@bharata qemu]# gdb ./x86_64-softmmu/qemu-system-x86_64
Starting program: ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024 -smp 4 -drive file=gluster://bharata/test/F16,if=virtio,cache=none
[New Thread 0x7ffff4c7f700 (LWP 6537)]
[New Thread 0x7ffff447e700 (LWP 6538)]
[New Thread 0x7ffff3420700 (LWP 6539)]
[New Thread 0x7ffff1407700 (LWP 6540)]
qemu-system-x86_64: -drive file=gluster://bharata/test/F16,if=virtio,cache=none: Could not complete read/write/flush from gluster
^C
Program received signal SIGINT, Interrupt.
0x00007ffff60e9403 in select () from /lib64/libc.so.6
(gdb) bt
#0 0x00007ffff60e9403 in select () from /lib64/libc.so.6
#1 0x00005555555baee3 in qemu_aio_wait () at aio.c:158
#2 0x00005555555cf57b in bdrv_rw_co (bs=0x5555564cfa50, sector_num=0, buf=
0x7fffffffb640 "\353c\220", nb_sectors=4, is_write=false) at block.c:1623
#3 0x00005555555cf5e1 in bdrv_read (bs=0x5555564cfa50, sector_num=0, buf=
0x7fffffffb640 "\353c\220", nb_sectors=4) at block.c:1633
#4 0x00005555555cf9d0 in bdrv_pread (bs=0x5555564cfa50, offset=0, buf=0x7fffffffb640,
count1=2048) at block.c:1720
#5 0x00005555555cc8d4 in find_image_format (filename=
0x5555564cc290 "gluster://bharata/test/F16", pdrv=0x7fffffffbe60) at block.c:529
#6 0x00005555555cd303 in bdrv_open (bs=0x5555564cef20, filename=
0x5555564cc290 "gluster://bharata/test/F16", flags=98, drv=0x0) at block.c:800
#7 0x0000555555609f69 in drive_init (opts=0x5555564cf900, default_to_scsi=0)
at blockdev.c:608
#8 0x0000555555711b6c in drive_init_func (opts=0x5555564cc1e0, opaque=0x555555c357a0)
at vl.c:775
#9 0x000055555574ceda in qemu_opts_foreach (list=0x555555c319e0, func=
0x555555711b31 <drive_init_func>, opaque=0x555555c357a0, abort_on_failure=1)
at qemu-option.c:1094
#10 0x0000555555719d78 in main (argc=9, argv=0x7fffffffe468, envp=0x7fffffffe4b8)
at vl.c:3430
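
The qemu_aio_wait() in frame #1 above is the synchronous wait loop in
bdrv_rw_co() (paraphrased from block.c, roughly):

    co = qemu_coroutine_create(bdrv_rw_co_entry);
    qemu_coroutine_enter(co, &rwco);
    while (rwco.ret == NOT_DONE) {
        qemu_aio_wait();
    }

qemu_aio_wait() blocks in select() until one of the registered fds (here
the read end of the gluster completion pipe) becomes readable, so when
nothing is ever written to the pipe the loop never gets to re-check
rwco.ret and startup just hangs until ^C.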