From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: [Qemu-devel] [PULL 17/23] tcg/s390: enable dynamic TLB sizing
Date: Mon, 28 Jan 2019 07:59:01 -0800
Message-ID: <20190128155907.20607-18-richard.henderson@linaro.org>
In-Reply-To: <20190128155907.20607-1-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/s390/tcg-target.h | 2 +-
tcg/s390/tcg-target.inc.c | 45 +++++++++++++++++----------------------
2 files changed, 20 insertions(+), 27 deletions(-)
diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h
index 394b545369..357528dd97 100644
--- a/tcg/s390/tcg-target.h
+++ b/tcg/s390/tcg-target.h
@@ -27,7 +27,7 @@
#define TCG_TARGET_INSN_UNIT_SIZE 2
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 19
-#define TCG_TARGET_IMPLEMENTS_DYN_TLB 0
+#define TCG_TARGET_IMPLEMENTS_DYN_TLB 1
typedef enum TCGReg {
TCG_REG_R0 = 0,
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index 39ecf609a1..7db90b3bae 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1537,10 +1537,10 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
#if defined(CONFIG_SOFTMMU)
#include "tcg-ldst.inc.c"
-/* We're expecting to use a 20-bit signed offset on the tlb memory ops.
- Using the offset of the second entry in the last tlb table ensures
- that we can index all of the elements of the first entry. */
-QEMU_BUILD_BUG_ON(offsetof(CPUArchState, tlb_table[NB_MMU_MODES - 1][1])
+/* We're expecting to use a 20-bit signed offset on the tlb memory ops. */
+QEMU_BUILD_BUG_ON(offsetof(CPUArchState, tlb_mask[NB_MMU_MODES - 1])
+ > 0x7ffff);
+QEMU_BUILD_BUG_ON(offsetof(CPUArchState, tlb_table[NB_MMU_MODES - 1])
> 0x7ffff);
/* Load and compare a TLB entry, leaving the flags set. Loads the TLB
@@ -1552,48 +1552,41 @@ static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
unsigned a_bits = get_alignment_bits(opc);
unsigned s_mask = (1 << s_bits) - 1;
unsigned a_mask = (1 << a_bits) - 1;
+ int mask_off = offsetof(CPUArchState, tlb_mask[mem_index]);
+ int table_off = offsetof(CPUArchState, tlb_table[mem_index]);
int ofs, a_off;
uint64_t tlb_mask;
+ tcg_out_sh64(s, RSY_SRLG, TCG_REG_R2, addr_reg, TCG_REG_NONE,
+ TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
+ tcg_out_insn(s, RXY, NG, TCG_REG_R2, TCG_AREG0, TCG_REG_NONE, mask_off);
+ tcg_out_insn(s, RXY, AG, TCG_REG_R2, TCG_AREG0, TCG_REG_NONE, table_off);
+
/* For aligned accesses, we check the first byte and include the alignment
bits within the address. For unaligned access, we check that we don't
cross pages using the address of the last byte of the access. */
a_off = (a_bits >= s_bits ? 0 : s_mask - a_mask);
tlb_mask = (uint64_t)TARGET_PAGE_MASK | a_mask;
-
- if (s390_facilities & FACILITY_GEN_INST_EXT) {
- tcg_out_risbg(s, TCG_REG_R2, addr_reg,
- 64 - CPU_TLB_BITS - CPU_TLB_ENTRY_BITS,
- 63 - CPU_TLB_ENTRY_BITS,
- 64 + CPU_TLB_ENTRY_BITS - TARGET_PAGE_BITS, 1);
- if (a_off) {
- tcg_out_insn(s, RX, LA, TCG_REG_R3, addr_reg, TCG_REG_NONE, a_off);
- tgen_andi(s, TCG_TYPE_TL, TCG_REG_R3, tlb_mask);
- } else {
- tgen_andi_risbg(s, TCG_REG_R3, addr_reg, tlb_mask);
- }
+ if ((s390_facilities & FACILITY_GEN_INST_EXT) && a_off == 0) {
+ tgen_andi_risbg(s, TCG_REG_R3, addr_reg, tlb_mask);
} else {
- tcg_out_sh64(s, RSY_SRLG, TCG_REG_R2, addr_reg, TCG_REG_NONE,
- TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
tcg_out_insn(s, RX, LA, TCG_REG_R3, addr_reg, TCG_REG_NONE, a_off);
- tgen_andi(s, TCG_TYPE_I64, TCG_REG_R2,
- (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS);
tgen_andi(s, TCG_TYPE_TL, TCG_REG_R3, tlb_mask);
}
if (is_ld) {
- ofs = offsetof(CPUArchState, tlb_table[mem_index][0].addr_read);
+ ofs = offsetof(CPUTLBEntry, addr_read);
} else {
- ofs = offsetof(CPUArchState, tlb_table[mem_index][0].addr_write);
+ ofs = offsetof(CPUTLBEntry, addr_write);
}
if (TARGET_LONG_BITS == 32) {
- tcg_out_mem(s, RX_C, RXY_CY, TCG_REG_R3, TCG_REG_R2, TCG_AREG0, ofs);
+ tcg_out_insn(s, RX, C, TCG_REG_R3, TCG_REG_R2, TCG_REG_NONE, ofs);
} else {
- tcg_out_mem(s, 0, RXY_CG, TCG_REG_R3, TCG_REG_R2, TCG_AREG0, ofs);
+ tcg_out_insn(s, RXY, CG, TCG_REG_R3, TCG_REG_R2, TCG_REG_NONE, ofs);
}
- ofs = offsetof(CPUArchState, tlb_table[mem_index][0].addend);
- tcg_out_mem(s, 0, RXY_LG, TCG_REG_R2, TCG_REG_R2, TCG_AREG0, ofs);
+ tcg_out_insn(s, RXY, LG, TCG_REG_R2, TCG_REG_R2, TCG_REG_NONE,
+ offsetof(CPUTLBEntry, addend));
if (TARGET_LONG_BITS == 32) {
tgen_ext32u(s, TCG_REG_R3, addr_reg);
--
2.17.2