From: bugzilla-daemon@kernel.org
To: kvm@vger.kernel.org
Subject: [Bug 199727] CPU freezes in KVM guests during high IO load on host
Date: Tue, 08 Mar 2022 08:01:19 +0000
Message-ID: <bug-199727-28872-ruBssut0qW@https.bugzilla.kernel.org/>
In-Reply-To: <bug-199727-28872@https.bugzilla.kernel.org/>
https://bugzilla.kernel.org/show_bug.cgi?id=199727
--- Comment #15 from Roland Kletzing (devzero@web.de) ---
Yes, I was using cache=none, and io_uring also caused issues.
> aio=threads avoids softlockups because the preadv(2)/pwritev(2)/fdatasync(2)
> syscalls run in worker threads that don't take the QEMU global mutex.
> Therefore vcpu threads can execute even when I/O is stuck in the kernel due
> to a lock.
Yes, it was a long search to arrive at this information and these parameters.
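For reference, the behaviour described above corresponds to QEMU's per-drive AIO setting. A minimal sketch of an invocation combining cache=none with the thread-pool backend (the disk path, memory size, and device model here are placeholders, not taken from this bug report):

```shell
# Illustrative only: select the thread-pool AIO backend for a virtio disk.
# /var/lib/images/guest.raw is a placeholder path.
qemu-system-x86_64 \
  -enable-kvm -m 4096 \
  -drive file=/var/lib/images/guest.raw,format=raw,if=virtio,cache=none,aio=threads
```

aio=native (Linux AIO) and aio=io_uring are the alternatives; with cache=none all three are accepted, but as the quoted comment explains, they differ in how a stuck kernel I/O path affects vCPU progress.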
Regarding io_uring: after Proxmox enabled it by default, the change was partially reverted once issues were reported. Have a look at:
https://github.com/proxmox/qemu-server/blob/master/debian/changelog
Maybe it's not ready for prime time yet!?
 -- Proxmox Support Team <support@proxmox.com>  Fri, 30 Jul 2021 16:53:44 +0200

qemu-server (7.0-11) bullseye; urgency=medium

<snip>
  * lvm: avoid the use of io_uring for now
<snip>

 -- Proxmox Support Team <support@proxmox.com>  Fri, 23 Jul 2021 11:08:48 +0200

qemu-server (7.0-10) bullseye; urgency=medium

<snip>
  * avoid using io_uring for drives backed by LVM and configured for
    write-back or write-through cache
<snip>

 -- Proxmox Support Team <support@proxmox.com>  Mon, 05 Jul 2021 20:49:50 +0200

qemu-server (7.0-6) bullseye; urgency=medium

<snip>
  * For now do not use io_uring for drives backed by Ceph RBD, with KRBD and
    write-back or write-through cache enabled, as in that case some polling/IO
    may hang in QEMU 6.0.
<snip>
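For anyone hit by this on Proxmox, the AIO backend can also be overridden per disk in the VM configuration rather than waiting for packaged defaults to change. A sketch using the qm CLI (the VM ID, bus/device, and volume name below are made up for illustration):

```shell
# Hypothetical VM 100 with a disk on storage "local-lvm":
# force the thread-pool backend instead of io_uring for that disk.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none,aio=threads
```

The aio= drive property accepts threads, native, or io_uring; the setting takes effect on the next full VM stop/start.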