From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH 4/5] target/arm: Support more GM blocksizes
Date: Wed, 9 Aug 2023 19:35:47 -0700
Message-ID: <20230810023548.412310-5-richard.henderson@linaro.org>
In-Reply-To: <20230810023548.412310-1-richard.henderson@linaro.org>
Support all of the easy GM block sizes. Use direct memory
operations, since the pointers are aligned.

While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
an atomic store of one nibble. This is not difficult, but there
is also no point in supporting it until required.

Note that cortex-a710 sets GM blocksize to match its cacheline
size of 64 bytes. I expect many implementations will also
match the cacheline, which makes a 16-byte blocksize very unlikely.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/mte_helper.c | 61 ++++++++++++++++++++++++++++++++-----
1 file changed, 53 insertions(+), 8 deletions(-)
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 3640c6e57f..6faf4e42d5 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -428,6 +428,8 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
int gm_bs = env_archcpu(env)->gm_blocksize;
int gm_bs_bytes = 4 << gm_bs;
void *tag_mem;
+ uint64_t ret;
+ int shift;
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
@@ -443,16 +445,39 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
/*
* The ordering of elements within the word corresponds to
- * a little-endian operation.
+ * a little-endian operation. Computation of shift comes from
+ *
+ * index = address<LOG2_TAG_GRANULE+3:LOG2_TAG_GRANULE>
+ * data<index*4+3:index*4> = tag
+ *
+ * Because of the alignment of ptr above, BS=6 has shift=0.
+ * All memory operations are aligned.
*/
switch (gm_bs) {
- case 6:
- /* 256 bytes -> 16 tags -> 64 result bits */
- return ldq_le_p(tag_mem);
- default:
+ case 2:
/* cpu configured with unsupported gm blocksize. */
g_assert_not_reached();
+ case 3:
+ /* 32 bytes -> 2 tags -> 8 result bits */
+ ret = *(uint8_t *)tag_mem;
+ break;
+ case 4:
+ /* 64 bytes -> 4 tags -> 16 result bits */
+ ret = cpu_to_le16(*(uint16_t *)tag_mem);
+ break;
+ case 5:
+ /* 128 bytes -> 8 tags -> 32 result bits */
+ ret = cpu_to_le32(*(uint32_t *)tag_mem);
+ break;
+ case 6:
+ /* 256 bytes -> 16 tags -> 64 result bits */
+ return cpu_to_le64(*(uint64_t *)tag_mem);
+ default:
+ /* cpu configured with invalid gm blocksize. */
+ g_assert_not_reached();
}
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
+ return ret << shift;
}
void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
@@ -462,6 +487,7 @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
int gm_bs = env_archcpu(env)->gm_blocksize;
int gm_bs_bytes = 4 << gm_bs;
void *tag_mem;
+ int shift;
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
@@ -480,14 +506,33 @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
/*
* The ordering of elements within the word corresponds to
- * a little-endian operation.
+ * a little-endian operation. See LDGM for comments on shift.
+ * All memory operations are aligned.
*/
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
+ val >>= shift;
switch (gm_bs) {
+ case 2:
+ /* cpu configured with unsupported gm blocksize. */
+ g_assert_not_reached();
+ case 3:
+ /* 32 bytes -> 2 tags -> 8 result bits */
+ *(uint8_t *)tag_mem = val;
+ break;
+ case 4:
+ /* 64 bytes -> 4 tags -> 16 result bits */
+ *(uint16_t *)tag_mem = cpu_to_le16(val);
+ break;
+ case 5:
+ /* 128 bytes -> 8 tags -> 32 result bits */
+ *(uint32_t *)tag_mem = cpu_to_le32(val);
+ break;
case 6:
- stq_le_p(tag_mem, val);
+ /* 256 bytes -> 16 tags -> 64 result bits */
+ *(uint64_t *)tag_mem = cpu_to_le64(val);
break;
default:
- /* cpu configured with unsupported gm blocksize. */
+ /* cpu configured with invalid gm blocksize. */
g_assert_not_reached();
}
}
--
2.34.1
Thread overview: 17+ messages
2023-08-10 2:35 [PATCH for-8.2 0/5] target/arm: Implement cortex-a710 Richard Henderson
2023-08-10 2:35 ` [PATCH 1/5] target/arm: Disable FEAT_TRF in neoverse-v1 Richard Henderson
2023-08-10 9:16 ` Peter Maydell
2023-08-10 2:35 ` [PATCH 2/5] target/arm: Reduce dcz_blocksize to uint8_t Richard Henderson
2023-08-10 14:09 ` Peter Maydell
2023-08-10 19:02 ` Richard Henderson
2023-08-10 2:35 ` [PATCH 3/5] target/arm: Allow cpu to configure GM blocksize Richard Henderson
2023-08-10 14:13 ` Peter Maydell
2023-08-10 2:35 ` Richard Henderson [this message]
2023-08-10 14:23 ` [PATCH 4/5] target/arm: Support more GM blocksizes Peter Maydell
2023-08-10 19:10 ` Richard Henderson
2023-08-10 2:35 ` [PATCH 5/5] target/arm: Implement cortex-a710 Richard Henderson
2023-08-10 4:11 ` Richard Henderson
2023-08-10 15:49 ` Peter Maydell
2023-08-10 17:05 ` Richard Henderson
2023-08-10 17:12 ` Peter Maydell
2023-08-14 8:49 ` Marcin Juszkiewicz