qemu-devel.nongnu.org archive mirror
From: Paul Durrant <xadimgnik@gmail.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org, qemu-devel@nongnu.org
Subject: Re: [PATCH] xen-hvm: Avoid livelock while handling buffered ioreqs
Date: Mon, 8 Apr 2024 14:10:36 +0100	[thread overview]
Message-ID: <60c49e27-1275-411e-9cda-8405ca0ce010@xen.org> (raw)
In-Reply-To: <CAG7k0EqEwfS75n8FCncrr_VzPtJZk46KrLiYaMDfWfrZ6CEApg@mail.gmail.com>

On 08/04/2024 14:00, Ross Lagerwall wrote:
> On Sat, Apr 6, 2024 at 11:58 AM Durrant, Paul <xadimgnik@gmail.com> wrote:
>>
>> On 04/04/2024 15:08, Ross Lagerwall wrote:
>>> A malicious or buggy guest may generate buffered ioreqs faster than
>>> QEMU can process them in handle_buffered_iopage(). The result is a
>>> livelock - QEMU continuously processes ioreqs on the main thread without
>>> iterating through the main loop which prevents handling other events,
>>> processing timers, etc. Without QEMU handling other events, it often
>>> results in the guest becoming unstable and makes it difficult to stop the
>>> source of buffered ioreqs.
>>>
>>> To avoid this, if we process a full page of buffered ioreqs, stop and
>>> reschedule an immediate timer to continue processing them. This lets
>>> QEMU go back to the main loop and catch up.
>>>
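
A minimal sketch of the rescheduling approach described above, assuming QEMU's timer API and the existing XenIOState fields; this is not the actual patch, and handle_buffered_iopage() is assumed to return true when it stopped early after consuming a full page of buffered ioreqs:

#include "qemu/osdep.h"
#include "qemu/timer.h"

/*
 * Sketch only, not the actual patch. Assumes the existing XenIOState
 * with a buffered_io_timer field, and that handle_buffered_iopage()
 * returns true when it stopped early after a full page of ioreqs.
 */
static void handle_buffered_io(void *opaque)
{
    XenIOState *state = opaque;

    if (handle_buffered_iopage(state)) {
        /* Work may remain: arm an immediate timer instead of looping
         * here, so QEMU drops back to the main loop and can service
         * other events and timers before continuing. */
        timer_mod(state->buffered_io_timer,
                  qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
    } else {
        /* Ring drained: stop polling until the next event arrives. */
        timer_del(state->buffered_io_timer);
    }
}
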
>>
>> Do PV backends potentially cause the same scheduling issue (if not using
>> io threads)?
>>
> 
> From what I can tell:
> 
> xen-block: It reads req_prod / req_cons once before entering the loop
> so it should be fine, I think.
> 
> xen_console: Same as xen-block
> 
> xen_nic: It reads req_prod / req_cons once before entering the loop.
> However, once the loop ends it checks for more requests and if there
> are more requests it restarts from the beginning. It seems like this
> could be susceptible to the same issue.
> 
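
For comparison, a standalone sketch (hypothetical ring layout and handle_request(), not QEMU code) contrasting the two consumption patterns described above: reading req_prod once versus re-checking and restarting.

#include <stdio.h>

/* Hypothetical ring layout and handler, for illustration only. */
struct ring {
    unsigned int req_prod;   /* advanced by the frontend */
    unsigned int req_cons;   /* advanced by the backend */
};

static void handle_request(struct ring *r, unsigned int idx)
{
    (void)r;
    printf("handled request %u\n", idx);
}

/* xen-block / xen_console style: snapshot req_prod once, drain up to
 * that point, then return to the event loop; anything queued later is
 * handled on the next notification. */
static void process_once(struct ring *r)
{
    unsigned int prod = r->req_prod;

    while (r->req_cons != prod) {
        handle_request(r, r->req_cons++);
    }
}

/* xen_nic style: after draining, re-check req_prod and restart. A guest
 * that refills the ring fast enough can keep this loop from ever
 * returning to the main loop. */
static void process_until_empty(struct ring *r)
{
    while (r->req_cons != r->req_prod) {
        unsigned int prod = r->req_prod;

        while (r->req_cons != prod) {
            handle_request(r, r->req_cons++);
        }
    }
}

int main(void)
{
    struct ring r = { 3, 0 };

    process_once(&r);
    r.req_prod = 5;
    process_until_empty(&r);
    return 0;
}
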
> (These PV backends generally aren't used by XenServer's system QEMU
> so I didn't spend too much time looking into it.)
> 
> Thanks,

Ok. Thanks for checking.

   Paul




Thread overview: 9+ messages
2024-04-04 14:08 [PATCH] xen-hvm: Avoid livelock while handling buffered ioreqs Ross Lagerwall
2024-04-06 10:58 ` Durrant, Paul
2024-04-08 13:00   ` Ross Lagerwall
2024-04-08 13:10     ` Paul Durrant [this message]
2024-04-08 13:12 ` Paul Durrant
2024-04-09 10:20 ` Anthony PERARD
2024-04-09 14:19   ` Ross Lagerwall
2024-04-09 14:35     ` Peter Maydell
2024-05-23 11:00     ` Ross Lagerwall
