From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, pbonzini@redhat.com,
	mark.cave-ayland@ilande.co.uk, f4bug@amsat.org
Subject: [PATCH 14/15] softmmu/memory: Support some unaligned access
Date: Sat, 19 Jun 2021 10:26:25 -0700
Message-ID: <20210619172626.875885-15-richard.henderson@linaro.org>
In-Reply-To: <20210619172626.875885-1-richard.henderson@linaro.org>

Decline to support writes that cannot be covered by access_size_min.
Decline to support unaligned reads that require extraction from more
than two loads.

Do support an exact size match when the model supports unaligned access.
Do support reducing the operation size to match the alignment.
Do support reads that extract from one or two larger loads.

Diagnose anything that we do not handle via LOG_GUEST_ERROR, as any
such case is probably a model or guest error.
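
As an illustrative standalone sketch (not part of the patch; all values
are hypothetical), the single-larger-load extraction reduces to a load,
a shift, and a mask.  Here, a 1-byte read at offset 2 of a 4-byte
little-endian register:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical 4-byte little-endian device register. */
        const uint32_t reg = 0xddccbbaa;
        const unsigned size = 1, access_size = 4;
        const uint64_t addr = 2;                /* unaligned guest address */

        unsigned access_sh = addr & (access_size - 1);  /* byte offset: 2 */
        uint64_t value = reg;                   /* one full-width load */
        value >>= access_sh * 8;                /* the patch encodes this as
                                                   a negative shift count */
        value &= (1ull << (size * 8)) - 1;      /* like MAKE_64BIT_MASK(0, 8) */

        printf("0x%02" PRIx64 "\n", value);     /* prints 0xcc */
        return 0;
    }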

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/memory.c | 127 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 121 insertions(+), 6 deletions(-)
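
An aside on the alignment-reduction step in the patch (a standalone
sketch, not part of the patch): addr & -addr isolates the lowest set
bit of the address, which is the largest naturally aligned access size
usable at that address:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical unaligned addresses. */
        const uint64_t addrs[] = { 0x1001, 0x1002, 0x1004, 0x1008 };

        for (unsigned i = 0; i < 4; i++) {
            uint64_t lsb = addrs[i] & -addrs[i];   /* lowest set bit */
            printf("addr 0x%" PRIx64 " -> largest aligned size %" PRIu64 "\n",
                   addrs[i], lsb);
        }
        return 0;
    }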

diff --git a/softmmu/memory.c b/softmmu/memory.c
index 2fe237327d..baf8573f1b 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -529,7 +529,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
     MemoryRegionAccessFn *access_fn;
     uint64_t access_mask;
     unsigned access_size;
-    unsigned i;
+    signed access_sh;
     MemTxResult r = MEMTX_OK;
 
     if (!access_size_min) {
@@ -547,7 +547,6 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
                      : memory_region_read_with_attrs_accessor);
     }
 
-    /* FIXME: support unaligned access? */
     /*
      * Check for a small access.
      */
@@ -557,6 +556,10 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
          * cycle, and many mmio registers have side-effects on read.
          * In practice, this appears to be either (1) model error,
          * or (2) guest error via the fuzzer.
+         *
+         * TODO: Are all short reads also guest or model errors, because
+         * of said side effects?  Or is this a valid read-for-effect whose
+         * (complete) result is discarded via a narrow destination register?
          */
         if (write) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid short write: %s "
@@ -566,22 +569,134 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
                           access_size_min, access_size_max);
             return MEMTX_ERROR;
         }
+
+        /*
+         * If the original access is aligned, we can always extract
+         * from a single larger load.
+         */
+        access_size = access_size_min;
+        if (likely((addr & (size - 1)) == 0)) {
+            goto extract;
+        }
+
+        /*
+         * TODO: We could search for a larger load that happens to
+         * cover the unaligned load, but at some point we will always
+         * require two operations.  Extract from two loads.
+         */
+        goto extract2;
     }
 
+    /*
+     * Check for size in range.
+     */
+    if (likely(size <= access_size_max)) {
+        /*
+         * If the access is aligned or if the model supports
+         * unaligned accesses, use one operation directly.
+         */
+        if (likely((addr & (size - 1)) == 0) || mr->ops->impl.unaligned) {
+            access_size = size;
+            access_sh = 0;
+            goto direct;
+        }
+    }
+
+    /*
+     * It is certain that we require multiple operations.
+     * If the access is aligned (or the model supports unaligned),
+     * then we will perform N accesses which exactly cover the
+     * operation requested.
+     */
     access_size = MAX(MIN(size, access_size_max), access_size_min);
+    if (unlikely(addr & (access_size - 1))) {
+        unsigned lsb = addr & -addr;
+        if (lsb >= access_size_min) {
+            /*
+             * The model supports small enough loads that we can
+             * exactly match the operation requested.  For reads,
+             * this is preferable to touching more than requested.
+             * For writes, this is mandatory.
+             */
+            access_size = lsb;
+        } else if (write) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid unaligned write: %s "
+                          "hwaddr: 0x%" HWADDR_PRIx " size: %u "
+                          "min: %u max: %u\n", __func__,
+                          memory_region_name(mr), addr, size,
+                          access_size_min, access_size_max);
+            return MEMTX_ERROR;
+        } else if (size <= access_size_max) {
+            /* As per above, we can use two loads to implement. */
+            access_size = size;
+            goto extract2;
+        } else {
+            /*
+             * TODO: because access_size_max is small, this case requires
+             * more than two loads to assemble and extract.  Bail out.
+             */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Unhandled unaligned read: %s "
+                          "hwaddr: 0x%" HWADDR_PRIx " size: %u "
+                          "min: %u max: %u\n", __func__,
+                          memory_region_name(mr), addr, size,
+                          access_size_min, access_size_max);
+            return MEMTX_ERROR;
+        }
+    }
+
     access_mask = MAKE_64BIT_MASK(0, access_size * 8);
     if (memory_region_big_endian(mr)) {
-        for (i = 0; i < size; i += access_size) {
+        for (unsigned i = 0; i < size; i += access_size) {
             r |= access_fn(mr, addr + i, value, access_size,
-                        (size - access_size - i) * 8, access_mask, attrs);
+                           (size - access_size - i) * 8, access_mask, attrs);
         }
     } else {
-        for (i = 0; i < size; i += access_size) {
+        for (unsigned i = 0; i < size; i += access_size) {
             r |= access_fn(mr, addr + i, value, access_size, i * 8,
-                        access_mask, attrs);
+                           access_mask, attrs);
         }
     }
     return r;
+
+ extract2:
+    /*
+     * Extract from one or two loads to produce the result.
+     * Validate that we need two loads before performing them.
+     */
+    access_sh = addr & (access_size - 1);
+    if (access_sh + size > access_size) {
+        addr &= ~(access_size - 1);
+        if (memory_region_big_endian(mr)) {
+            access_sh = (access_size - access_sh) * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+            access_sh -= access_size * 8;
+            r |= access_fn(mr, addr + access_size, value, access_size, access_sh, -1, attrs);
+        } else {
+            access_sh = (access_sh - access_size) * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+            access_sh += access_size * 8;
+            r |= access_fn(mr, addr + access_size, value, access_size, access_sh, -1, attrs);
+        }
+        *value &= MAKE_64BIT_MASK(0, size * 8);
+        return r;
+    }
+
+ extract:
+    /*
+     * Extract from one larger load to produce the result.
+     */
+    access_sh = addr & (access_size - 1);
+    addr &= ~(access_size - 1);
+    if (memory_region_big_endian(mr)) {
+        access_sh = access_size - size - access_sh;
+    }
+    /* Note that with this interface, right shift is negative. */
+    access_sh *= -8;
+
+ direct:
+    access_mask = MAKE_64BIT_MASK(0, size * 8);
+    return access_fn(mr, addr, value, access_size, access_sh,
+                     access_mask, attrs);
 }
 
 static AddressSpace *memory_region_to_address_space(MemoryRegion *mr)
-- 
2.25.1
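
For reference, the two-load combination in the extract2 path can be
checked with a standalone sketch (not part of the patch; values are
hypothetical, and the second load is assumed to target the adjacent
register).  A 2-byte little-endian read at offset 1, with a model
limited to 2-byte accesses:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * Hypothetical pair of adjacent 2-byte little-endian registers,
         * byte image aa bb cc dd at offsets 0..3.
         */
        const uint16_t regs[2] = { 0xbbaa, 0xddcc };
        const unsigned size = 2, access_size = 2;
        uint64_t addr = 1;                       /* spans both registers */

        int access_sh = addr & (access_size - 1);        /* 1 */
        addr &= ~(uint64_t)(access_size - 1);            /* 0 */
        uint64_t value = 0;

        /* Little-endian: a negative count means shift right. */
        access_sh = (access_sh - access_size) * 8;       /* -8 */
        value |= (uint64_t)regs[addr / 2] >> -access_sh;
        access_sh += access_size * 8;                    /* +8 */
        value |= (uint64_t)regs[addr / 2 + 1] << access_sh;

        value &= (1ull << (size * 8)) - 1;               /* keep 16 bits */
        printf("0x%04" PRIx64 "\n", value);              /* prints 0xccbb */
        return 0;
    }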


