From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: Peter Maydell <peter.maydell@linaro.org>,
Alberto Garcia <berto@igalia.com>
Cc: QEMU Developers <qemu-devel@nongnu.org>,
Qemu-block <qemu-block@nongnu.org>
Subject: Re: iotest 030 still occasionally intermittently failing
Date: Thu, 19 Nov 2020 22:30:09 +0300 [thread overview]
Message-ID: <1f53f805-9367-d7c6-94ca-8d91e88f362f@virtuozzo.com> (raw)
In-Reply-To: <a058f32e-402b-d269-a6a2-5c30e28abc4f@virtuozzo.com>
19.11.2020 19:11, Vladimir Sementsov-Ogievskiy wrote:
> 16.11.2020 20:59, Peter Maydell wrote:
>> On Mon, 16 Nov 2020 at 17:34, Alberto Garcia <berto@igalia.com> wrote:
>>> Do you know if there is a core dump or stack trace available ?
>>
>> Nope, sorry. What you get is what the 'vm-build-netbsd' etc targets
>> produce, so if you want more diagnostics on failures you have to
>> arrange for the test harness to produce them...
>>
>> thanks
>> -- PMM
>>
>
> Hi!
>
> After some iterations I've reproduced the failure and caught the SIGABRT:
>
> #0 0x00007feb701bae35 in raise () at /lib64/libc.so.6
> #1 0x00007feb701a5895 in abort () at /lib64/libc.so.6
> #2 0x00007feb701a5769 in _nl_load_domain.cold () at /lib64/libc.so.6
> #3 0x00007feb701b3566 in annobin_assert.c_end () at /lib64/libc.so.6
> #4 0x000055a93374f7d3 in bdrv_replace_child (child=0x55a9363a3a00, new_bs=0x0) at ../block.c:2648
> #5 0x000055a93374fd5a in bdrv_detach_child (child=0x55a9363a3a00) at ../block.c:2777
> #6 0x000055a93374fd9c in bdrv_root_unref_child (child=0x55a9363a3a00) at ../block.c:2789
> #7 0x000055a933722e8b in block_job_remove_all_bdrv (job=0x55a935f4f4b0) at ../blockjob.c:191
> #8 0x000055a933722bb2 in block_job_free (job=0x55a935f4f4b0) at ../blockjob.c:88
> #9 0x000055a9337755fa in job_unref (job=0x55a935f4f4b0) at ../job.c:380
> #10 0x000055a9337767a6 in job_exit (opaque=0x55a935f4f4b0) at ../job.c:894
> #11 0x000055a93386037e in aio_bh_call (bh=0x55a9352e16b0) at ../util/async.c:136
> #12 0x000055a933860488 in aio_bh_poll (ctx=0x55a9351366f0) at ../util/async.c:164
> #13 0x000055a93383151e in aio_dispatch (ctx=0x55a9351366f0) at ../util/aio-posix.c:381
> #14 0x000055a9338608b9 in aio_ctx_dispatch (source=0x55a9351366f0, callback=0x0, user_data=0x0)
> at ../util/async.c:306
> #15 0x00007feb71349ecd in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
> #16 0x000055a933840300 in glib_pollfds_poll () at ../util/main-loop.c:221
> #17 0x000055a93384037a in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:244
> #18 0x000055a933840482 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:520
> #19 0x000055a933603979 in qemu_main_loop () at ../softmmu/vl.c:1678
> #20 0x000055a933190046 in main (argc=20, argv=0x7ffd48c31138, envp=0x7ffd48c311e0)
>
> (gdb) fr 4
> #4 0x000055a93374f7d3 in bdrv_replace_child (child=0x55a9363a3a00, new_bs=0x0) at ../block.c:2648
> 2648 assert(tighten_restrictions == false);
> (gdb) list
> 2643 int ret;
> 2644
> 2645 bdrv_get_cumulative_perm(old_bs, &perm, &shared_perm);
> 2646 ret = bdrv_check_perm(old_bs, NULL, perm, shared_perm, NULL,
> 2647 &tighten_restrictions, NULL);
> 2648 assert(tighten_restrictions == false);
> 2649 if (ret < 0) {
> 2650 /* We only tried to loosen restrictions, so errors are not fatal */
> 2651 bdrv_abort_perm_update(old_bs);
> 2652 } else {
> (gdb) p tighten_restrictions
> $1 = true
>
>
I've modified the code a bit, to crash at the point where we actually want to set tighten_restrictions to true, and got the following backtrace:
#0 0x00007f6dbb49ee35 in raise () at /lib64/libc.so.6
#1 0x00007f6dbb489895 in abort () at /lib64/libc.so.6
#2 0x000055b9174104d7 in bdrv_check_perm
(bs=0x55b918f09720, q=0x0, cumulative_perms=1, cumulative_shared_perms=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2009
#3 0x000055b917410ec0 in bdrv_check_update_perm
(bs=0x55b918f09720, q=0x0, new_used_perm=1, new_shared_perm=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2280
#4 0x000055b917410f38 in bdrv_child_check_perm
(c=0x55b91921fcf0, q=0x0, perm=1, shared=21, ignore_children=0x55b918a57b20 = {...}, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2294
#5 0x000055b91741078c in bdrv_check_perm
(bs=0x55b918defd90, q=0x0, cumulative_perms=1, cumulative_shared_perms=21, ignore_children=0x0, tighten_restrictions=0x55b917b044f8 <abort_on_set_to_true>, errp=0x0) at ../block.c:2076
#6 0x000055b91741194e in bdrv_replace_child (child=0x55b919cf6200, new_bs=0x0) at ../block.c:2666
#7 0x000055b917411f1d in bdrv_detach_child (child=0x55b919cf6200) at ../block.c:2798
#8 0x000055b917411f5f in bdrv_root_unref_child (child=0x55b919cf6200) at ../block.c:2810
#9 0x000055b9173e4d88 in block_job_remove_all_bdrv (job=0x55b918f06a60) at ../blockjob.c:191
#10 0x000055b9173e4aaf in block_job_free (job=0x55b918f06a60) at ../blockjob.c:88
#11 0x000055b917437aca in job_unref (job=0x55b918f06a60) at ../job.c:380
#12 0x000055b917438c76 in job_exit (opaque=0x55b918f06a60) at ../job.c:894
#13 0x000055b917522a57 in aio_bh_call (bh=0x55b919a2b3b0) at ../util/async.c:136
#14 0x000055b917522b61 in aio_bh_poll (ctx=0x55b918a866f0) at ../util/async.c:164
#15 0x000055b9174f3bf7 in aio_dispatch (ctx=0x55b918a866f0) at ../util/aio-posix.c:381
#16 0x000055b917522f92 in aio_ctx_dispatch (source=0x55b918a866f0, callback=0x0, user_data=0x0)
at ../util/async.c:306
#17 0x00007f6dbc62decd in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#18 0x000055b9175029d9 in glib_pollfds_poll () at ../util/main-loop.c:221
#19 0x000055b917502a53 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:244
#20 0x000055b917502b5b in main_loop_wait (nonblocking=0) at ../util/main-loop.c:520
#21 0x000055b9172c5979 in qemu_main_loop () at ../softmmu/vl.c:1678
#22 0x000055b916e52046 in main (argc=20, argv=0x7fff7f81f208, envp=0x7fff7f81f2b0)
and the picture taken at the moment of the abort (it is the same as just before the bdrv_replace_child() call) is attached. So it looks like the graph is already corrupted: you can see that the backing permissions are not propagated to the node2-node0 child.

How the graph got corrupted is still the question..
--
Best regards,
Vladimir
[-- Attachment #2: abort.png --]
[-- Type: image/png, Size: 77874 bytes --]