* [PULL 0/2] Xen queue
From: Anthony PERARD @ 2022-01-27 15:42 UTC
  To: qemu-devel; +Cc: Peter Maydell, Anthony PERARD

The following changes since commit 48302d4eb628ff0bea4d7e92cbf6b726410eb4c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20220126' into staging (2022-01-26 10:59:50 +0000)

are available in the Git repository at:

  https://xenbits.xen.org/git-http/people/aperard/qemu-dm.git tags/pull-xen-20220127

for you to fetch changes up to a021a2dd8b790437d27db95774969349632f856a:

  xen-mapcache: Avoid entry->lock overflow (2022-01-27 15:14:21 +0000)

----------------------------------------------------------------
Xen patches

- bug fixes for mapcache and ioreq handling

----------------------------------------------------------------
Jason Andryuk (1):
      xen-hvm: Allow disabling buffer_io_timer

Ross Lagerwall (1):
      xen-mapcache: Avoid entry->lock overflow

 hw/i386/xen/xen-hvm.c      | 6 ++++--
 hw/i386/xen/xen-mapcache.c | 8 +++++++-
 2 files changed, 11 insertions(+), 3 deletions(-)



* [PULL 1/2] xen-hvm: Allow disabling buffer_io_timer
From: Anthony PERARD @ 2022-01-27 15:42 UTC
  To: qemu-devel; +Cc: Peter Maydell, Jason Andryuk, Anthony PERARD

From: Jason Andryuk <jandryuk@gmail.com>

Commit f37f29d31488 ("xen: slightly simplify bufioreq handling")
hard-coded req.count = 1 during initial field setup, before the main
loop.  This missed a subtlety: previously, an early exit from the loop
when there were no ioreqs to process left req.count == 0 as the return
value, and handle_buffered_io() would then remove
state->buffered_io_timer.  Now handle_buffered_iopage() essentially
always returns a nonzero count, so handle_buffered_io() always re-arms
the timer.

Restore the disabling of the timer by introducing a new handled_ioreq
boolean and using it as the return value.  The named variable makes the
intent of the code clearer.
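
For reference, here is a sketch of how the caller consumes that return
value (an approximation of handle_buffered_io() from the same file,
shown only for context -- it is not part of this diff and the exact
body may differ from the tree):

    static void handle_buffered_io(void *opaque)
    {
        XenIOState *state = opaque;

        if (handle_buffered_iopage(state)) {
            /* An ioreq was handled: re-arm the timer and poll again soon. */
            timer_mod(state->buffered_io_timer,
                      BUFFER_IO_MAX_DELAY +
                      qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
        } else {
            /* Nothing handled: stop the timer until the event channel fires. */
            timer_del(state->buffered_io_timer);
            xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
        }
    }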

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-Id: <20211210193434.75566-1-jandryuk@gmail.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/i386/xen/xen-hvm.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 482be95415..cf8e500514 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1087,10 +1087,11 @@ static void handle_ioreq(XenIOState *state, ioreq_t *req)
     }
 }
 
-static int handle_buffered_iopage(XenIOState *state)
+static bool handle_buffered_iopage(XenIOState *state)
 {
     buffered_iopage_t *buf_page = state->buffered_io_page;
     buf_ioreq_t *buf_req = NULL;
+    bool handled_ioreq = false;
     ioreq_t req;
     int qw;
 
@@ -1144,9 +1145,10 @@ static int handle_buffered_iopage(XenIOState *state)
         assert(!req.data_is_ptr);
 
         qatomic_add(&buf_page->read_pointer, qw + 1);
+        handled_ioreq = true;
     }
 
-    return req.count;
+    return handled_ioreq;
 }
 
 static void handle_buffered_io(void *opaque)
-- 
Anthony PERARD




* [PULL 2/2] xen-mapcache: Avoid entry->lock overflow
From: Anthony PERARD @ 2022-01-27 15:42 UTC
  To: qemu-devel; +Cc: Peter Maydell, Ross Lagerwall, Anthony PERARD

From: Ross Lagerwall <ross.lagerwall@citrix.com>

In some cases, a particular mapcache entry may be mapped 256 times,
causing the uint8_t lock field to wrap to 0. For example, this may
happen when using emulated NVMe and the guest submits a large
scatter-gather write. At that point, the entry may be remapped, causing
QEMU to write the wrong data or crash (since the remap is not atomic).

Avoid this overflow by widening the lock field to a uint32_t, and also
detect overflow and abort rather than continuing regardless.
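
To illustrate the failure mode (a self-contained sketch, not QEMU code;
the lock variable only mirrors the mapcache entry field changed below):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t lock = 0;

        /* 256 nested map requests against the same entry... */
        for (int i = 0; i < 256; i++) {
            lock++;
        }

        /* ...and the counter has wrapped: the entry looks unlocked and
         * may be remapped while the guest still has I/O in flight. */
        printf("lock after 256 increments: %u\n", lock); /* prints 0 */
        return 0;
    }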

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Message-Id: <20220124104450.152481-1-ross.lagerwall@citrix.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/i386/xen/xen-mapcache.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
index bd47c3d672..f2ef977963 100644
--- a/hw/i386/xen/xen-mapcache.c
+++ b/hw/i386/xen/xen-mapcache.c
@@ -52,7 +52,7 @@ typedef struct MapCacheEntry {
     hwaddr paddr_index;
     uint8_t *vaddr_base;
     unsigned long *valid_mapping;
-    uint8_t lock;
+    uint32_t lock;
 #define XEN_MAPCACHE_ENTRY_DUMMY (1 << 0)
     uint8_t flags;
     hwaddr size;
@@ -355,6 +355,12 @@ static uint8_t *xen_map_cache_unlocked(hwaddr phys_addr, hwaddr size,
     if (lock) {
         MapCacheRev *reventry = g_malloc0(sizeof(MapCacheRev));
         entry->lock++;
+        if (entry->lock == 0) {
+            fprintf(stderr,
+                    "mapcache entry lock overflow: "TARGET_FMT_plx" -> %p\n",
+                    entry->paddr_index, entry->vaddr_base);
+            abort();
+        }
         reventry->dma = dma;
         reventry->vaddr_req = mapcache->last_entry->vaddr_base + address_offset;
         reventry->paddr_index = mapcache->last_entry->paddr_index;
-- 
Anthony PERARD




* Re: [PULL 0/2] Xen queue
From: Peter Maydell @ 2022-01-28 14:03 UTC
  To: Anthony PERARD; +Cc: qemu-devel

On Thu, 27 Jan 2022 at 15:43, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> The following changes since commit 48302d4eb628ff0bea4d7e92cbf6b726410eb4c3:
>
>   Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20220126' into staging (2022-01-26 10:59:50 +0000)
>
> are available in the Git repository at:
>
>   https://xenbits.xen.org/git-http/people/aperard/qemu-dm.git tags/pull-xen-20220127
>
> for you to fetch changes up to a021a2dd8b790437d27db95774969349632f856a:
>
>   xen-mapcache: Avoid entry->lock overflow (2022-01-27 15:14:21 +0000)
>
> ----------------------------------------------------------------
> Xen patches
>
> - bug fixes for mapcache and ioreq handling
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.0
for any user-visible changes.

-- PMM

