* [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
@ 2017-12-06 7:28 Kangjie Xi
2017-12-06 9:12 ` Kevin Wolf
0 siblings, 1 reply; 6+ messages in thread
From: Kangjie Xi @ 2017-12-06 7:28 UTC (permalink / raw)
To: kwolf, qemu-devel
Hi,
I encountered a qemu-nbd segfault and eventually traced it to a NULL
bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
That is before the check the patch adds at line 2402, so the patch
needs to be updated to also fix the NULL bs->drv at line 2337.
https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
> @@ -2373,6 +2399,12 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
> }
>
> BLKDBG_EVENT(bs->file, BLKDBG_FLUSH_TO_DISK);
> + if (!bs->drv) {
> + /* bs->drv->bdrv_co_flush() might have ejected the BDS
> + * (even in case of apparent success) */
> + ret = -ENOMEDIUM;
> + goto out;
> + }
> if (bs->drv->bdrv_co_flush_to_disk) {
> ret = bs->drv->bdrv_co_flush_to_disk(bs);
> } else if (bs->drv->bdrv_aio_flush) {
I have tested the latest qemu-2.11.0-rc2 and I am sure the qemu-nbd
segfault is caused by a NULL bs->drv at block/io.c line 2337.
kernel: qemu-nbd[18768]: segfault at f8 ip 000055a24f7536a7 sp
00007f59b1137a40 error 4 in qemu-nbd[55a24f6d1000+188000]
However, I have no way to reproduce the segfault manually; it just
occurs in my server cluster about once a week.
Thanks
-Kangjie
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
2017-12-06 7:28 [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv Kangjie Xi
@ 2017-12-06 9:12 ` Kevin Wolf
2017-12-06 10:08 ` Kangjie Xi
2017-12-08 13:39 ` Max Reitz
0 siblings, 2 replies; 6+ messages in thread
From: Kevin Wolf @ 2017-12-06 9:12 UTC (permalink / raw)
To: Kangjie Xi; +Cc: qemu-devel, mreitz
On 06.12.2017 at 08:28, Kangjie Xi wrote:
> Hi,
>
> I encountered a qemu-nbd segfault and eventually traced it to a NULL
> bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
>
> https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
>
> That is before the check the patch adds at line 2402, so the patch
> needs to be updated to also fix the NULL bs->drv at line 2337.
>
> https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
Can you please post a full backtrace? Do you see any error message
on stderr before the process crashes?
I don't see at the moment how this can happen, except the case that Max
mentioned where bs->drv = NULL is set when an image corruption is
detected - this involves an error message, though.
We check bdrv_is_inserted() as the first thing, which includes a NULL
check for bs->drv. So it must have been non-NULL at the start of the
function and then become NULL. I suppose this can theoretically happen
in qemu_co_queue_wait() if another flush request detects image
corruption.
Max: I think bs->drv = NULL in the middle of a request was a stupid
idea. In fact, it's already a stupid idea to have any BDS with
bs->drv = NULL. Maybe it would be better to schedule a BH that replaces
the qcow2 node with a dummy node (null-co?) and properly closes the
qcow2 one.
Kevin
> > @@ -2373,6 +2399,12 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
> > }
> >
> > BLKDBG_EVENT(bs->file, BLKDBG_FLUSH_TO_DISK);
> > + if (!bs->drv) {
> > + /* bs->drv->bdrv_co_flush() might have ejected the BDS
> > + * (even in case of apparent success) */
> > + ret = -ENOMEDIUM;
> > + goto out;
> > + }
> > if (bs->drv->bdrv_co_flush_to_disk) {
> > ret = bs->drv->bdrv_co_flush_to_disk(bs);
> > } else if (bs->drv->bdrv_aio_flush) {
>
> I have tested the latest qemu-2.11.0-rc2 and I am sure the qemu-nbd
> segfault is caused by a NULL bs->drv at block/io.c line 2337.
>
> kernel: qemu-nbd[18768]: segfault at f8 ip 000055a24f7536a7 sp
> 00007f59b1137a40 error 4 in qemu-nbd[55a24f6d1000+188000]
>
> However, I have no way to reproduce the segfault manually; it just
> occurs in my server cluster about once a week.
>
> Thanks
> -Kangjie
* Re: [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
2017-12-06 9:12 ` Kevin Wolf
@ 2017-12-06 10:08 ` Kangjie Xi
2017-12-08 13:39 ` Max Reitz
1 sibling, 0 replies; 6+ messages in thread
From: Kangjie Xi @ 2017-12-06 10:08 UTC (permalink / raw)
To: Kevin Wolf; +Cc: qemu-devel, mreitz
2017-12-06 17:12 GMT+08:00 Kevin Wolf <kwolf@redhat.com>:
> On 06.12.2017 at 08:28, Kangjie Xi wrote:
>> Hi,
>>
>> I encountered a qemu-nbd segfault and eventually traced it to a NULL
>> bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
>>
>> https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
>>
>> That is before the check the patch adds at line 2402, so the patch
>> needs to be updated to also fix the NULL bs->drv at line 2337.
>>
>> https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
>
> Can you please post a full backtrace? Do you see any error message
> on stderr before the process crashes?
No, I don't have a full backtrace; the qemu-nbd in our server cluster is
a release build, and I can't run a debug build because its performance
is too poor.
When the segfault happens, the qemu-nbd process is left in
uninterruptible sleep; I can't kill it and have to reboot the server.
There are errors in /var/log/messages:
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (5)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32572640
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4071324, lost async page write
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (5)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32605376
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075416, lost async page write
Dec 1 09:42:07 server kernel: qemu-nbd[18768]: segfault at f8 ip 000055a24f7536a7 sp 00007f59b1137a40 error 4 in qemu-nbd[55a24f6d1000+188000]
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075417, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075418, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075419, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075420, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075421, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075422, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075423, lost async page write
Dec 1 09:42:07 server kernel: Buffer I/O error on dev nbd10p1, logical block 4075424, lost async page write
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (5)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32605632
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (5)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32605888
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32607168
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32606144
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32606656
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32606400
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32606912
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: blk_update_request: I/O error, dev nbd10, sector 32607424
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: block nbd10: Other side returned error (22)
Dec 1 09:42:07 server kernel: block nbd10: Receive control failed (result -512)
Dec 1 09:42:07 server kernel: block nbd10: pid 18770, qemu-nbd, got signal 9
I used objdump to disassemble qemu-nbd and confirmed that the segfault
happens at block/io.c line 2337.
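For reference, one way to turn the kernel's segfault line into a file offset that addr2line or objdump can resolve (the binary path is illustrative, and this assumes an unstripped or debuginfo-matching qemu-nbd):

```shell
# From the kernel log line:
#   segfault at f8 ip 000055a24f7536a7 ... in qemu-nbd[55a24f6d1000+188000]
# the offset inside the binary is the instruction pointer minus the load base.
ip=0x55a24f7536a7
base=0x55a24f6d1000
off=$(printf '0x%x' $((ip - base)))
echo "$off"    # offset of the crash inside qemu-nbd

# With a matching, unstripped binary (illustrative paths):
# addr2line -f -e /usr/bin/qemu-nbd "$off"
# objdump -d qemu-nbd | grep -B2 -A2 "${off#0x}:"
```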
-Kangjie
> I don't see at the moment how this can happen, except the case that Max
> mentioned where bs->drv = NULL is set when an image corruption is
> detected - this involves an error message, though.
>
> We check bdrv_is_inserted() as the first thing, which includes a NULL
> check for bs->drv. So it must have been non-NULL at the start of the
> function and then become NULL. I suppose this can theoretically happen
> in qemu_co_queue_wait() if another flush request detects image
> corruption.
>
> Max: I think bs->drv = NULL in the middle of a request was a stupid
> idea. In fact, it's already a stupid idea to have any BDS with
> bs->drv = NULL. Maybe it would be better to schedule a BH that replaces
> the qcow2 node with a dummy node (null-co?) and properly closes the
> qcow2 one.
>
> Kevin
>
>> > @@ -2373,6 +2399,12 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
>> > }
>> >
>> > BLKDBG_EVENT(bs->file, BLKDBG_FLUSH_TO_DISK);
>> > + if (!bs->drv) {
>> > + /* bs->drv->bdrv_co_flush() might have ejected the BDS
>> > + * (even in case of apparent success) */
>> > + ret = -ENOMEDIUM;
>> > + goto out;
>> > + }
>> > if (bs->drv->bdrv_co_flush_to_disk) {
>> > ret = bs->drv->bdrv_co_flush_to_disk(bs);
>> > } else if (bs->drv->bdrv_aio_flush) {
>>
>> I have tested the latest qemu-2.11.0-rc2 and I am sure the qemu-nbd
>> segfault is caused by a NULL bs->drv at block/io.c line 2337.
>>
>> kernel: qemu-nbd[18768]: segfault at f8 ip 000055a24f7536a7 sp
>> 00007f59b1137a40 error 4 in qemu-nbd[55a24f6d1000+188000]
>>
>> However, I have no way to reproduce the segfault manually; it just
>> occurs in my server cluster about once a week.
>>
>> Thanks
>> -Kangjie
* Re: [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
2017-12-06 9:12 ` Kevin Wolf
2017-12-06 10:08 ` Kangjie Xi
@ 2017-12-08 13:39 ` Max Reitz
2017-12-08 13:51 ` Kevin Wolf
1 sibling, 1 reply; 6+ messages in thread
From: Max Reitz @ 2017-12-08 13:39 UTC (permalink / raw)
To: Kevin Wolf, Kangjie Xi; +Cc: qemu-devel, John Snow
On 2017-12-06 10:12, Kevin Wolf wrote:
> On 06.12.2017 at 08:28, Kangjie Xi wrote:
>> Hi,
>>
>> I encountered a qemu-nbd segfault and eventually traced it to a NULL
>> bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
>>
>> https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
>>
>> That is before the check the patch adds at line 2402, so the patch
>> needs to be updated to also fix the NULL bs->drv at line 2337.
>>
>> https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
>
> Can you please post a full backtrace? Do you see any error message
> on stderr before the process crashes?
>
> I don't see at the moment how this can happen, except the case that Max
> mentioned where bs->drv = NULL is set when an image corruption is
> detected - this involves an error message, though.
>
> We check bdrv_is_inserted() as the first thing, which includes a NULL
> check for bs->drv. So it must have been non-NULL at the start of the
> function and then become NULL. I suppose this can theoretically happen
> in qemu_co_queue_wait() if another flush request detects image
> corruption.
>
> Max: I think bs->drv = NULL in the middle of a request was a stupid
> idea. In fact, it's already a stupid idea to have any BDS with
> bs->drv = NULL. Maybe it would be better to schedule a BH that replaces
> the qcow2 node with a dummy node (null-co?) and properly closes the
> qcow2 one.
Yes, that is an idea John had, too. It sounded good to me (we'd just
need to add a new flag to null-co so it would respond with -ENOMEDIUM to
all requests or something)... The only issue I had is how that would
work together with the GRAPH_MOD op blocker.
Max
>
> Kevin
>
>>> @@ -2373,6 +2399,12 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
>>> }
>>>
>>> BLKDBG_EVENT(bs->file, BLKDBG_FLUSH_TO_DISK);
>>> + if (!bs->drv) {
>>> + /* bs->drv->bdrv_co_flush() might have ejected the BDS
>>> + * (even in case of apparent success) */
>>> + ret = -ENOMEDIUM;
>>> + goto out;
>>> + }
>>> if (bs->drv->bdrv_co_flush_to_disk) {
>>> ret = bs->drv->bdrv_co_flush_to_disk(bs);
>>> } else if (bs->drv->bdrv_aio_flush) {
>>
>> I have tested the latest qemu-2.11.0-rc2 and I am sure the qemu-nbd
>> segfault is caused by a NULL bs->drv at block/io.c line 2337.
>>
>> kernel: qemu-nbd[18768]: segfault at f8 ip 000055a24f7536a7 sp
>> 00007f59b1137a40 error 4 in qemu-nbd[55a24f6d1000+188000]
>>
>> However, I have no way to reproduce the segfault manually; it just
>> occurs in my server cluster about once a week.
>>
>> Thanks
>> -Kangjie
* Re: [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
2017-12-08 13:39 ` Max Reitz
@ 2017-12-08 13:51 ` Kevin Wolf
2017-12-08 14:02 ` Max Reitz
0 siblings, 1 reply; 6+ messages in thread
From: Kevin Wolf @ 2017-12-08 13:51 UTC (permalink / raw)
To: Max Reitz; +Cc: Kangjie Xi, qemu-devel, John Snow
On 08.12.2017 at 14:39, Max Reitz wrote:
> On 2017-12-06 10:12, Kevin Wolf wrote:
> > On 06.12.2017 at 08:28, Kangjie Xi wrote:
> >> Hi,
> >>
> >> I encountered a qemu-nbd segfault and eventually traced it to a NULL
> >> bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
> >>
> >> https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
> >>
> >> That is before the check the patch adds at line 2402, so the patch
> >> needs to be updated to also fix the NULL bs->drv at line 2337.
> >>
> >> https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
> >
> > Can you please post a full backtrace? Do you see any error message
> > on stderr before the process crashes?
> >
> > I don't see at the moment how this can happen, except the case that Max
> > mentioned where bs->drv = NULL is set when an image corruption is
> > detected - this involves an error message, though.
> >
> > We check bdrv_is_inserted() as the first thing, which includes a NULL
> > check for bs->drv. So it must have been non-NULL at the start of the
> > function and then become NULL. I suppose this can theoretically happen
> > in qemu_co_queue_wait() if another flush request detects image
> > corruption.
> >
> > Max: I think bs->drv = NULL in the middle of a request was a stupid
> > idea. In fact, it's already a stupid idea to have any BDS with
> > bs->drv = NULL. Maybe it would be better to schedule a BH that replaces
> > the qcow2 node with a dummy node (null-co?) and properly closes the
> > qcow2 one.
>
> Yes, that is an idea John had, too. It sounded good to me (we'd just
> need to add a new flag to null-co so it would respond with -ENOMEDIUM to
> all requests or something)... The only issue I had is how that would
> work together with the GRAPH_MOD op blocker.
In order to answer this question, I'd first have to understand what
GRAPH_MOD is even supposed to mean and which operations it needs to
protect. There aren't currently any users of GRAPH_MOD.
Kevin
* Re: [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv
2017-12-08 13:51 ` Kevin Wolf
@ 2017-12-08 14:02 ` Max Reitz
0 siblings, 0 replies; 6+ messages in thread
From: Max Reitz @ 2017-12-08 14:02 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Kangjie Xi, qemu-devel, John Snow
On 2017-12-08 14:51, Kevin Wolf wrote:
> On 08.12.2017 at 14:39, Max Reitz wrote:
>> On 2017-12-06 10:12, Kevin Wolf wrote:
>>> On 06.12.2017 at 08:28, Kangjie Xi wrote:
>>>> Hi,
>>>>
>>>> I encountered a qemu-nbd segfault and eventually traced it to a NULL
>>>> bs->drv dereference in block/io.c, in bdrv_co_flush() at line 2377:
>>>>
>>>> https://git.qemu.org/?p=qemu.git;a=blob;f=block/io.c;h=4fdf93a0144fa4761a14b8cc6b2a9a6b6e5d5bec;hb=d470ad42acfc73c45d3e8ed5311a491160b4c100#l2377
>>>>
>>>> That is before the check the patch adds at line 2402, so the patch
>>>> needs to be updated to also fix the NULL bs->drv at line 2337.
>>>>
>>>> https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg03425.html
>>>
>>> Can you please post a full backtrace? Do you see any error message
>>> on stderr before the process crashes?
>>>
>>> I don't see at the moment how this can happen, except the case that Max
>>> mentioned where bs->drv = NULL is set when an image corruption is
>>> detected - this involves an error message, though.
>>>
>>> We check bdrv_is_inserted() as the first thing, which includes a NULL
>>> check for bs->drv. So it must have been non-NULL at the start of the
>>> function and then become NULL. I suppose this can theoretically happen
>>> in qemu_co_queue_wait() if another flush request detects image
>>> corruption.
>>>
>>> Max: I think bs->drv = NULL in the middle of a request was a stupid
>>> idea. In fact, it's already a stupid idea to have any BDS with
>>> bs->drv = NULL. Maybe it would be better to schedule a BH that replaces
>>> the qcow2 node with a dummy node (null-co?) and properly closes the
>>> qcow2 one.
>>
>> Yes, that is an idea John had, too. It sounded good to me (we'd just
>> need to add a new flag to null-co so it would respond with -ENOMEDIUM to
>> all requests or something)... The only issue I had is how that would
>> work together with the GRAPH_MOD op blocker.
>
> In order to answer this question, I'd first have to understand what
> GRAPH_MOD is even supposed to mean and which operations it needs to
> protect. There aren't currently any users of GRAPH_MOD.
That is exactly the reason why we could not come to a conclusion. :-)
Max
end of thread, other threads:[~2017-12-08 14:02 UTC | newest]
Thread overview: 6+ messages
-- links below jump to the message on this page --
2017-12-06 7:28 [Qemu-devel] About [PULL 20/25] block: Guard against NULL bs->drv Kangjie Xi
2017-12-06 9:12 ` Kevin Wolf
2017-12-06 10:08 ` Kangjie Xi
2017-12-08 13:39 ` Max Reitz
2017-12-08 13:51 ` Kevin Wolf
2017-12-08 14:02 ` Max Reitz