* [Qemu-devel] [PATCH v2 0/4] target/arm: Implement TBI for user-only
@ 2019-02-04 13:21 Richard Henderson
From: Richard Henderson @ 2019-02-04 13:21 UTC
To: qemu-devel; +Cc: peter.maydell
Based-on: <20190204131228.25949-1-richard.henderson@linaro.org>
aka v3 BTI.
This is a prerequisite to implementing v8.5-MemTag.
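For context, TBI ("top byte ignore") means the CPU ignores bits [63:56]
of a virtual address when translating it, so software can keep metadata
in a pointer's top byte. A rough plain-C illustration (not part of the
series itself):

    #include <stdint.h>

    /* With TBI active these two values address the same memory,
     * since translation uses only bits [55:0]. */
    uint64_t raw    = 0x0000004000802010ull;
    uint64_t tagged = raw | (0x5bull << 56);   /* arbitrary tag byte */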
r~
Richard Henderson (4):
target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore
target/arm: Clean TBI for data operations in the translator
target/arm: Compute TB_FLAGS for TBI for user-only
target/arm: Enable TBI for user-only
target/arm/cpu.h | 1 +
target/arm/internals.h | 21 ---
target/arm/translate.h | 3 +-
target/arm/cpu.c | 6 +
target/arm/helper.c | 14 +-
target/arm/translate-a64.c | 289 +++++++++++++++++++------------------
6 files changed, 168 insertions(+), 166 deletions(-)
--
2.17.2
* [Qemu-devel] [PATCH v2 1/4] target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore
From: Richard Henderson @ 2019-02-04 13:21 UTC
To: qemu-devel; +Cc: peter.maydell
Split out gen_top_byte_ignore in preparation for handling TBI on
data accesses; the new tbflags field is not yet honored.
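A rough plain-C sketch of what the helper computes (simplified from
the TCG code below; tbi here is the concatenated TBI1:TBI0 field, and
the EL2/EL3 case mirrors the existing code):

    #include <stdint.h>

    static uint64_t top_byte_ignore(uint64_t src, int tbi, int current_el)
    {
        if (tbi == 0) {
            return src;                          /* tag byte not ignored */
        } else if (current_el >= 2) {
            return src & 0x00ffffffffffffffull;  /* one TBI bit: clear [63:56] */
        } else {
            /* EL0/EL1: sign-extend from bit 55 ... */
            uint64_t ext = (uint64_t)((int64_t)(src << 8) >> 8);
            if (tbi == 3) {
                return ext;                      /* TBI0 and TBI1 both set */
            }
            /* ... but only for the half of the address space whose
             * TBI bit is set: tbi == 1 is TBI0 (bit 55 == 0),
             * tbi == 2 is TBI1 (bit 55 == 1). */
            int bit55 = (src >> 55) & 1;
            return ((tbi == 1) != bit55) ? ext : src;
        }
    }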
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Re-add some commentary regarding the TBI bits.
---
target/arm/cpu.h | 1 +
target/arm/translate.h | 3 +-
target/arm/helper.c | 1 +
target/arm/translate-a64.c | 72 +++++++++++++++++++-------------------
4 files changed, 40 insertions(+), 37 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 366ab97db3..029f6cd60c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3058,6 +3058,7 @@ FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
FIELD(TBFLAG_A64, BT, 9, 1)
FIELD(TBFLAG_A64, BTYPE, 10, 2)
+FIELD(TBFLAG_A64, TBID, 12, 2)
static inline bool bswap_code(bool sctlr_b)
{
diff --git a/target/arm/translate.h b/target/arm/translate.h
index f73939d7b4..17748ddfb9 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -26,7 +26,8 @@ typedef struct DisasContext {
int user;
#endif
ARMMMUIdx mmu_idx; /* MMU index to use for normal loads/stores */
- uint8_t tbii; /* TBI1|TBI0 for EL0/1 or TBI for EL2/3 */
+ uint8_t tbii; /* TBI1|TBI0 for insns */
+ uint8_t tbid; /* TBI1|TBI0 for data */
bool ns; /* Use non-secure CPREG bank on access */
int fp_excp_el; /* FP exception EL or 0 if enabled */
int sve_excp_el; /* SVE exception EL or 0 if enabled */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index be0ec7de2a..25d8ec38f8 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13767,6 +13767,7 @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
}
flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
+ flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
}
#endif
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 37077138e3..0b4a09ca1c 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -284,10 +284,10 @@ void gen_a64_set_pc_im(uint64_t val)
tcg_gen_movi_i64(cpu_pc, val);
}
-/* Load the PC from a generic TCG variable.
+/*
+ * Handle Top Byte Ignore (TBI) bits.
*
- * If address tagging is enabled via the TCR TBI bits, then loading
- * an address into the PC will clear out any tag in it:
+ * If address tagging is enabled via the TCR TBI bits:
* + for EL2 and EL3 there is only one TBI bit, and if it is set
* then the address is zero-extended, clearing bits [63:56]
* + for EL0 and EL1, TBI0 controls addresses with bit 55 == 0
@@ -295,45 +295,44 @@ void gen_a64_set_pc_im(uint64_t val)
* If the appropriate TBI bit is set for the address then
* the address is sign-extended from bit 55 into bits [63:56]
*
- * We can avoid doing this for relative-branches, because the
- * PC + offset can never overflow into the tag bits (assuming
- * that virtual addresses are less than 56 bits wide, as they
- * are currently), but we must handle it for branch-to-register.
* Here we have concatenated TBI{1,0} into tbi.
*/
-static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
+static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 dst,
+ TCGv_i64 src, int tbi)
{
- /* Note that TBII is TBI1:TBI0. */
- int tbi = s->tbii;
-
- if (s->current_el <= 1) {
- if (tbi != 0) {
- /* Sign-extend from bit 55. */
- tcg_gen_sextract_i64(cpu_pc, src, 0, 56);
-
- if (tbi != 3) {
- TCGv_i64 tcg_zero = tcg_const_i64(0);
-
- /*
- * The two TBI bits differ.
- * If tbi0, then !tbi1: only use the extension if positive.
- * if !tbi0, then tbi1: only use the extension if negative.
- */
- tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
- cpu_pc, cpu_pc, tcg_zero, cpu_pc, src);
- tcg_temp_free_i64(tcg_zero);
- }
- return;
- }
+ if (tbi == 0) {
+ /* Load unmodified address */
+ tcg_gen_mov_i64(dst, src);
+ } else if (s->current_el >= 2) {
+ /* FIXME: ARMv8.1-VHE S2 translation regime. */
+ /* Force tag byte to all zero */
+ tcg_gen_extract_i64(dst, src, 0, 56);
} else {
- if (tbi != 0) {
- /* Force tag byte to all zero */
- tcg_gen_extract_i64(cpu_pc, src, 0, 56);
- return;
+ /* Sign-extend from bit 55. */
+ tcg_gen_sextract_i64(dst, src, 0, 56);
+
+ if (tbi != 3) {
+ TCGv_i64 tcg_zero = tcg_const_i64(0);
+
+ /*
+ * The two TBI bits differ.
+ * If tbi0, then !tbi1: only use the extension if positive.
+ * if !tbi0, then tbi1: only use the extension if negative.
+ */
+ tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
+ dst, dst, tcg_zero, dst, src);
+ tcg_temp_free_i64(tcg_zero);
}
}
+}
- /* Load unmodified address */
- tcg_gen_mov_i64(cpu_pc, src);
+static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
+{
+ /*
+ * If address tagging is enabled for instructions via the TCR TBI bits,
+ * then loading an address into the PC will clear out any tag.
+ */
+ gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
}
typedef struct DisasCompare64 {
@@ -14018,6 +14017,7 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
dc->tbii = FIELD_EX32(tb_flags, TBFLAG_A64, TBII);
+ dc->tbid = FIELD_EX32(tb_flags, TBFLAG_A64, TBID);
dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
#if !defined(CONFIG_USER_ONLY)
dc->user = (dc->current_el == 0);
--
2.17.2
* [Qemu-devel] [PATCH v2 2/4] target/arm: Clean TBI for data operations in the translator
From: Richard Henderson @ 2019-02-04 13:21 UTC
To: qemu-devel; +Cc: peter.maydell
This will allow TBI to be used in user-only mode, and avoids
ping-ponging the softmmu TLB when TBI is in use. It will also
enable other ARMv8 extensions.
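The recurring shape of the conversion below, summarized (names as in
the patch; offset handling varies per addressing mode):

    dirty_addr = read_cpu_reg_sp(s, rn, 1);      /* full value, tag intact */
    tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
    clean_addr = clean_data_tbi(s, dirty_addr);  /* fresh temp, tag stripped */
    /* ... all loads and stores use clean_addr ... */
    if (wback) {
        /* writeback uses the dirty address, preserving the tag */
        tcg_gen_mov_i64(cpu_reg_sp(s, rn), dirty_addr);
    }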
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/translate-a64.c | 217 ++++++++++++++++++++-----------------
1 file changed, 116 insertions(+), 101 deletions(-)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0b4a09ca1c..27b90d5778 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -335,6 +335,18 @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
}
+/*
+ * Return a "clean" address for ADDR according to TBID.
+ * This is always a fresh temporary, as we need to be able to
+ * increment this independently of a dirty write-back address.
+ */
+static TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr)
+{
+ TCGv_i64 clean = new_tmp_a64(s);
+ gen_top_byte_ignore(s, clean, addr, s->tbid);
+ return clean;
+}
+
typedef struct DisasCompare64 {
TCGCond cond;
TCGv_i64 value;
@@ -2347,12 +2359,13 @@ static void gen_compare_and_swap(DisasContext *s, int rs, int rt,
TCGv_i64 tcg_rs = cpu_reg(s, rs);
TCGv_i64 tcg_rt = cpu_reg(s, rt);
int memidx = get_mem_index(s);
- TCGv_i64 addr = cpu_reg_sp(s, rn);
+ TCGv_i64 clean_addr;
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_gen_atomic_cmpxchg_i64(tcg_rs, addr, tcg_rs, tcg_rt, memidx,
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
+ tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt, memidx,
size | MO_ALIGN | s->be_data);
}
@@ -2363,12 +2376,13 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
TCGv_i64 s2 = cpu_reg(s, rs + 1);
TCGv_i64 t1 = cpu_reg(s, rt);
TCGv_i64 t2 = cpu_reg(s, rt + 1);
- TCGv_i64 addr = cpu_reg_sp(s, rn);
+ TCGv_i64 clean_addr;
int memidx = get_mem_index(s);
if (rn == 31) {
gen_check_sp_alignment(s);
}
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
if (size == 2) {
TCGv_i64 cmp = tcg_temp_new_i64();
@@ -2382,7 +2396,7 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
tcg_gen_concat32_i64(cmp, s2, s1);
}
- tcg_gen_atomic_cmpxchg_i64(cmp, addr, cmp, val, memidx,
+ tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx,
MO_64 | MO_ALIGN | s->be_data);
tcg_temp_free_i64(val);
@@ -2396,9 +2410,11 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
if (HAVE_CMPXCHG128) {
TCGv_i32 tcg_rs = tcg_const_i32(rs);
if (s->be_data == MO_LE) {
- gen_helper_casp_le_parallel(cpu_env, tcg_rs, addr, t1, t2);
+ gen_helper_casp_le_parallel(cpu_env, tcg_rs,
+ clean_addr, t1, t2);
} else {
- gen_helper_casp_be_parallel(cpu_env, tcg_rs, addr, t1, t2);
+ gen_helper_casp_be_parallel(cpu_env, tcg_rs,
+ clean_addr, t1, t2);
}
tcg_temp_free_i32(tcg_rs);
} else {
@@ -2414,10 +2430,10 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
TCGv_i64 zero = tcg_const_i64(0);
/* Load the two words, in memory order. */
- tcg_gen_qemu_ld_i64(d1, addr, memidx,
+ tcg_gen_qemu_ld_i64(d1, clean_addr, memidx,
MO_64 | MO_ALIGN_16 | s->be_data);
- tcg_gen_addi_i64(a2, addr, 8);
- tcg_gen_qemu_ld_i64(d2, addr, memidx, MO_64 | s->be_data);
+ tcg_gen_addi_i64(a2, clean_addr, 8);
+ tcg_gen_qemu_ld_i64(d2, clean_addr, memidx, MO_64 | s->be_data);
/* Compare the two words, also in memory order. */
tcg_gen_setcond_i64(TCG_COND_EQ, c1, d1, s1);
@@ -2427,7 +2443,7 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
/* If compare equal, write back new data, else write back old data. */
tcg_gen_movcond_i64(TCG_COND_NE, c1, c2, zero, t1, d1);
tcg_gen_movcond_i64(TCG_COND_NE, c2, c2, zero, t2, d2);
- tcg_gen_qemu_st_i64(c1, addr, memidx, MO_64 | s->be_data);
+ tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_64 | s->be_data);
tcg_gen_qemu_st_i64(c2, a2, memidx, MO_64 | s->be_data);
tcg_temp_free_i64(a2);
tcg_temp_free_i64(c1);
@@ -2480,7 +2496,7 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
int is_lasr = extract32(insn, 15, 1);
int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
int size = extract32(insn, 30, 2);
- TCGv_i64 tcg_addr;
+ TCGv_i64 clean_addr;
switch (o2_L_o1_o0) {
case 0x0: /* STXR */
@@ -2491,8 +2507,8 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
if (is_lasr) {
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
- gen_store_exclusive(s, rs, rt, rt2, tcg_addr, size, false);
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
+ gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, false);
return;
case 0x4: /* LDXR */
@@ -2500,9 +2516,9 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
s->is_ldex = true;
- gen_load_exclusive(s, rt, rt2, tcg_addr, size, false);
+ gen_load_exclusive(s, rt, rt2, clean_addr, size, false);
if (is_lasr) {
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
}
@@ -2520,8 +2536,8 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
gen_check_sp_alignment(s);
}
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
- do_gpr_st(s, cpu_reg(s, rt), tcg_addr, size, true, rt,
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, size, true, rt,
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
return;
@@ -2536,8 +2552,8 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
- do_gpr_ld(s, cpu_reg(s, rt), tcg_addr, size, false, false, true, rt,
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, false, true, rt,
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
return;
@@ -2550,8 +2566,8 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
if (is_lasr) {
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
- gen_store_exclusive(s, rs, rt, rt2, tcg_addr, size, true);
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
+ gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, true);
return;
}
if (rt2 == 31
@@ -2568,9 +2584,9 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
s->is_ldex = true;
- gen_load_exclusive(s, rt, rt2, tcg_addr, size, true);
+ gen_load_exclusive(s, rt, rt2, clean_addr, size, true);
if (is_lasr) {
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
}
@@ -2619,7 +2635,7 @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
int opc = extract32(insn, 30, 2);
bool is_signed = false;
int size = 2;
- TCGv_i64 tcg_rt, tcg_addr;
+ TCGv_i64 tcg_rt, clean_addr;
if (is_vector) {
if (opc == 3) {
@@ -2641,17 +2657,17 @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
tcg_rt = cpu_reg(s, rt);
- tcg_addr = tcg_const_i64((s->pc - 4) + imm);
+ clean_addr = tcg_const_i64((s->pc - 4) + imm);
if (is_vector) {
- do_fp_ld(s, rt, tcg_addr, size);
+ do_fp_ld(s, rt, clean_addr, size);
} else {
/* Only unsigned 32bit loads target 32bit registers. */
bool iss_sf = opc != 0;
- do_gpr_ld(s, tcg_rt, tcg_addr, size, is_signed, false,
+ do_gpr_ld(s, tcg_rt, clean_addr, size, is_signed, false,
true, rt, iss_sf, false);
}
- tcg_temp_free_i64(tcg_addr);
+ tcg_temp_free_i64(clean_addr);
}
/*
@@ -2697,7 +2713,8 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
bool postindex = false;
bool wback = false;
- TCGv_i64 tcg_addr; /* calculated address */
+ TCGv_i64 clean_addr, dirty_addr;
+
int size;
if (opc == 3) {
@@ -2753,23 +2770,23 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
-
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
if (!postindex) {
- tcg_gen_addi_i64(tcg_addr, tcg_addr, offset);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
}
+ clean_addr = clean_data_tbi(s, dirty_addr);
if (is_vector) {
if (is_load) {
- do_fp_ld(s, rt, tcg_addr, size);
+ do_fp_ld(s, rt, clean_addr, size);
} else {
- do_fp_st(s, rt, tcg_addr, size);
+ do_fp_st(s, rt, clean_addr, size);
}
- tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
+ tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
if (is_load) {
- do_fp_ld(s, rt2, tcg_addr, size);
+ do_fp_ld(s, rt2, clean_addr, size);
} else {
- do_fp_st(s, rt2, tcg_addr, size);
+ do_fp_st(s, rt2, clean_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
@@ -2781,30 +2798,28 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
/* Do not modify tcg_rt before recognizing any exception
* from the second load.
*/
- do_gpr_ld(s, tmp, tcg_addr, size, is_signed, false,
+ do_gpr_ld(s, tmp, clean_addr, size, is_signed, false,
false, 0, false, false);
- tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
- do_gpr_ld(s, tcg_rt2, tcg_addr, size, is_signed, false,
+ tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
+ do_gpr_ld(s, tcg_rt2, clean_addr, size, is_signed, false,
false, 0, false, false);
tcg_gen_mov_i64(tcg_rt, tmp);
tcg_temp_free_i64(tmp);
} else {
- do_gpr_st(s, tcg_rt, tcg_addr, size,
+ do_gpr_st(s, tcg_rt, clean_addr, size,
false, 0, false, false);
- tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
- do_gpr_st(s, tcg_rt2, tcg_addr, size,
+ tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
+ do_gpr_st(s, tcg_rt2, clean_addr, size,
false, 0, false, false);
}
}
if (wback) {
if (postindex) {
- tcg_gen_addi_i64(tcg_addr, tcg_addr, offset - (1 << size));
- } else {
- tcg_gen_subi_i64(tcg_addr, tcg_addr, 1 << size);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
}
- tcg_gen_mov_i64(cpu_reg_sp(s, rn), tcg_addr);
+ tcg_gen_mov_i64(cpu_reg_sp(s, rn), dirty_addr);
}
}
@@ -2841,7 +2856,7 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
bool post_index;
bool writeback;
- TCGv_i64 tcg_addr;
+ TCGv_i64 clean_addr, dirty_addr;
if (is_vector) {
size |= (opc & 2) << 1;
@@ -2892,17 +2907,18 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
if (!post_index) {
- tcg_gen_addi_i64(tcg_addr, tcg_addr, imm9);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, imm9);
}
+ clean_addr = clean_data_tbi(s, dirty_addr);
if (is_vector) {
if (is_store) {
- do_fp_st(s, rt, tcg_addr, size);
+ do_fp_st(s, rt, clean_addr, size);
} else {
- do_fp_ld(s, rt, tcg_addr, size);
+ do_fp_ld(s, rt, clean_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
@@ -2910,10 +2926,10 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
if (is_store) {
- do_gpr_st_memidx(s, tcg_rt, tcg_addr, size, memidx,
+ do_gpr_st_memidx(s, tcg_rt, clean_addr, size, memidx,
iss_valid, rt, iss_sf, false);
} else {
- do_gpr_ld_memidx(s, tcg_rt, tcg_addr, size,
+ do_gpr_ld_memidx(s, tcg_rt, clean_addr, size,
is_signed, is_extended, memidx,
iss_valid, rt, iss_sf, false);
}
@@ -2922,9 +2938,9 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
if (writeback) {
TCGv_i64 tcg_rn = cpu_reg_sp(s, rn);
if (post_index) {
- tcg_gen_addi_i64(tcg_addr, tcg_addr, imm9);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, imm9);
}
- tcg_gen_mov_i64(tcg_rn, tcg_addr);
+ tcg_gen_mov_i64(tcg_rn, dirty_addr);
}
}
@@ -2963,8 +2979,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
bool is_store = false;
bool is_extended = false;
- TCGv_i64 tcg_rm;
- TCGv_i64 tcg_addr;
+ TCGv_i64 tcg_rm, clean_addr, dirty_addr;
if (extract32(opt, 1, 1) == 0) {
unallocated_encoding(s);
@@ -2998,27 +3013,28 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
tcg_rm = read_cpu_reg(s, rm, 1);
ext_and_shift_reg(tcg_rm, tcg_rm, opt, shift ? size : 0);
- tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_rm);
+ tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
+ clean_addr = clean_data_tbi(s, dirty_addr);
if (is_vector) {
if (is_store) {
- do_fp_st(s, rt, tcg_addr, size);
+ do_fp_st(s, rt, clean_addr, size);
} else {
- do_fp_ld(s, rt, tcg_addr, size);
+ do_fp_ld(s, rt, clean_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
if (is_store) {
- do_gpr_st(s, tcg_rt, tcg_addr, size,
+ do_gpr_st(s, tcg_rt, clean_addr, size,
true, rt, iss_sf, false);
} else {
- do_gpr_ld(s, tcg_rt, tcg_addr, size,
+ do_gpr_ld(s, tcg_rt, clean_addr, size,
is_signed, is_extended,
true, rt, iss_sf, false);
}
@@ -3052,7 +3068,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
unsigned int imm12 = extract32(insn, 10, 12);
unsigned int offset;
- TCGv_i64 tcg_addr;
+ TCGv_i64 clean_addr, dirty_addr;
bool is_store;
bool is_signed = false;
@@ -3085,24 +3101,25 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
offset = imm12 << size;
- tcg_gen_addi_i64(tcg_addr, tcg_addr, offset);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
+ clean_addr = clean_data_tbi(s, dirty_addr);
if (is_vector) {
if (is_store) {
- do_fp_st(s, rt, tcg_addr, size);
+ do_fp_st(s, rt, clean_addr, size);
} else {
- do_fp_ld(s, rt, tcg_addr, size);
+ do_fp_ld(s, rt, clean_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
if (is_store) {
- do_gpr_st(s, tcg_rt, tcg_addr, size,
+ do_gpr_st(s, tcg_rt, clean_addr, size,
true, rt, iss_sf, false);
} else {
- do_gpr_ld(s, tcg_rt, tcg_addr, size, is_signed, is_extended,
+ do_gpr_ld(s, tcg_rt, clean_addr, size, is_signed, is_extended,
true, rt, iss_sf, false);
}
}
@@ -3128,7 +3145,7 @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
int rs = extract32(insn, 16, 5);
int rn = extract32(insn, 5, 5);
int o3_opc = extract32(insn, 12, 4);
- TCGv_i64 tcg_rn, tcg_rs;
+ TCGv_i64 tcg_rs, clean_addr;
AtomicThreeOpFn *fn;
if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
@@ -3171,7 +3188,7 @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_rn = cpu_reg_sp(s, rn);
+ clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
tcg_rs = read_cpu_reg(s, rs, true);
if (o3_opc == 1) { /* LDCLR */
@@ -3181,7 +3198,7 @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
/* The tcg atomic primitives are all full barriers. Therefore we
* can ignore the Acquire and Release bits of this instruction.
*/
- fn(cpu_reg(s, rt), tcg_rn, tcg_rs, get_mem_index(s),
+ fn(cpu_reg(s, rt), clean_addr, tcg_rs, get_mem_index(s),
s->be_data | size | MO_ALIGN);
}
@@ -3207,7 +3224,7 @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
bool is_wback = extract32(insn, 11, 1);
bool use_key_a = !extract32(insn, 23, 1);
int offset;
- TCGv_i64 tcg_addr, tcg_rt;
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt;
if (size != 3 || is_vector || !dc_isar_feature(aa64_pauth, s)) {
unallocated_encoding(s);
@@ -3217,29 +3234,31 @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
if (rn == 31) {
gen_check_sp_alignment(s);
}
- tcg_addr = read_cpu_reg_sp(s, rn, 1);
+ dirty_addr = read_cpu_reg_sp(s, rn, 1);
if (s->pauth_active) {
if (use_key_a) {
- gen_helper_autda(tcg_addr, cpu_env, tcg_addr, cpu_X[31]);
+ gen_helper_autda(dirty_addr, cpu_env, dirty_addr, cpu_X[31]);
} else {
- gen_helper_autdb(tcg_addr, cpu_env, tcg_addr, cpu_X[31]);
+ gen_helper_autdb(dirty_addr, cpu_env, dirty_addr, cpu_X[31]);
}
}
/* Form the 10-bit signed, scaled offset. */
offset = (extract32(insn, 22, 1) << 9) | extract32(insn, 12, 9);
offset = sextract32(offset << size, 0, 10 + size);
- tcg_gen_addi_i64(tcg_addr, tcg_addr, offset);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
+
+ /* Note that "clean" and "dirty" here refer to TBI not PAC. */
+ clean_addr = clean_data_tbi(s, dirty_addr);
tcg_rt = cpu_reg(s, rt);
-
- do_gpr_ld(s, tcg_rt, tcg_addr, size, /* is_signed */ false,
+ do_gpr_ld(s, tcg_rt, clean_addr, size, /* is_signed */ false,
/* extend */ false, /* iss_valid */ !is_wback,
/* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);
if (is_wback) {
- tcg_gen_mov_i64(cpu_reg_sp(s, rn), tcg_addr);
+ tcg_gen_mov_i64(cpu_reg_sp(s, rn), dirty_addr);
}
}
@@ -3308,7 +3327,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
bool is_store = !extract32(insn, 22, 1);
bool is_postidx = extract32(insn, 23, 1);
bool is_q = extract32(insn, 30, 1);
- TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
+ TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
TCGMemOp endian = s->be_data;
int ebytes; /* bytes per element */
@@ -3391,8 +3410,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
elements = (is_q ? 16 : 8) / ebytes;
tcg_rn = cpu_reg_sp(s, rn);
- tcg_addr = tcg_temp_new_i64();
- tcg_gen_mov_i64(tcg_addr, tcg_rn);
+ clean_addr = clean_data_tbi(s, tcg_rn);
tcg_ebytes = tcg_const_i64(ebytes);
for (r = 0; r < rpt; r++) {
@@ -3402,14 +3420,15 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
for (xs = 0; xs < selem; xs++) {
int tt = (rt + r + xs) % 32;
if (is_store) {
- do_vec_st(s, tt, e, tcg_addr, size, endian);
+ do_vec_st(s, tt, e, clean_addr, size, endian);
} else {
- do_vec_ld(s, tt, e, tcg_addr, size, endian);
+ do_vec_ld(s, tt, e, clean_addr, size, endian);
}
- tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
+ tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
}
}
}
+ tcg_temp_free_i64(tcg_ebytes);
if (!is_store) {
/* For non-quad operations, setting a slice of the low
@@ -3427,13 +3446,11 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
if (is_postidx) {
if (rm == 31) {
- tcg_gen_mov_i64(tcg_rn, tcg_addr);
+ tcg_gen_addi_i64(tcg_rn, tcg_rn, rpt * elements * selem * ebytes);
} else {
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
}
}
- tcg_temp_free_i64(tcg_ebytes);
- tcg_temp_free_i64(tcg_addr);
}
/* AdvSIMD load/store single structure
@@ -3476,7 +3493,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
bool replicate = false;
int index = is_q << 3 | S << 2 | size;
int ebytes, xs;
- TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
+ TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
if (extract32(insn, 31, 1)) {
unallocated_encoding(s);
@@ -3536,8 +3553,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
}
tcg_rn = cpu_reg_sp(s, rn);
- tcg_addr = tcg_temp_new_i64();
- tcg_gen_mov_i64(tcg_addr, tcg_rn);
+ clean_addr = clean_data_tbi(s, tcg_rn);
tcg_ebytes = tcg_const_i64(ebytes);
for (xs = 0; xs < selem; xs++) {
@@ -3545,7 +3561,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
/* Load and replicate to all elements */
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
+ tcg_gen_qemu_ld_i64(tcg_tmp, clean_addr,
get_mem_index(s), s->be_data + scale);
tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
(is_q + 1) * 8, vec_full_reg_size(s),
@@ -3554,24 +3570,23 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
} else {
/* Load/store one element per register */
if (is_load) {
- do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
+ do_vec_ld(s, rt, index, clean_addr, scale, s->be_data);
} else {
- do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
+ do_vec_st(s, rt, index, clean_addr, scale, s->be_data);
}
}
- tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
+ tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
rt = (rt + 1) % 32;
}
+ tcg_temp_free_i64(tcg_ebytes);
if (is_postidx) {
if (rm == 31) {
- tcg_gen_mov_i64(tcg_rn, tcg_addr);
+ tcg_gen_addi_i64(tcg_rn, tcg_rn, selem * ebytes);
} else {
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
}
}
- tcg_temp_free_i64(tcg_ebytes);
- tcg_temp_free_i64(tcg_addr);
}
/* Loads and stores */
--
2.17.2
* [Qemu-devel] [PATCH v2 3/4] target/arm: Compute TB_FLAGS for TBI for user-only
From: Richard Henderson @ 2019-02-04 13:21 UTC
To: qemu-devel; +Cc: peter.maydell
Enables, but does not turn on, TBI for CONFIG_USER_ONLY.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/internals.h | 21 ---------------------
target/arm/helper.c | 13 ++++++-------
2 files changed, 6 insertions(+), 28 deletions(-)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index d01a3f9f44..a4bd1becb7 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -963,30 +963,9 @@ typedef struct ARMVAParameters {
bool using64k : 1;
} ARMVAParameters;
-#ifdef CONFIG_USER_ONLY
-static inline ARMVAParameters aa64_va_parameters_both(CPUARMState *env,
- uint64_t va,
- ARMMMUIdx mmu_idx)
-{
- return (ARMVAParameters) {
- /* 48-bit address space */
- .tsz = 16,
- /* We can't handle tagged addresses properly in user-only mode */
- .tbi = false,
- };
-}
-
-static inline ARMVAParameters aa64_va_parameters(CPUARMState *env,
- uint64_t va,
- ARMMMUIdx mmu_idx, bool data)
-{
- return aa64_va_parameters_both(env, va, mmu_idx);
-}
-#else
ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx);
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data);
-#endif
#endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 25d8ec38f8..222253a3a3 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7197,7 +7197,7 @@ uint32_t HELPER(rbit)(uint32_t x)
return revbit32(x);
}
-#if defined(CONFIG_USER_ONLY)
+#ifdef CONFIG_USER_ONLY
/* These should probably raise undefined insn exceptions. */
void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
@@ -9571,6 +9571,7 @@ void arm_cpu_do_interrupt(CPUState *cs)
cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
}
}
+#endif /* !CONFIG_USER_ONLY */
/* Return the exception level which controls this address translation regime */
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
@@ -9732,6 +9733,7 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
}
}
+#ifndef CONFIG_USER_ONLY
/* Translate section/page access permissions to page
* R/W protection flags
*
@@ -10419,6 +10421,7 @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
}
+#endif /* !CONFIG_USER_ONLY */
ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx)
@@ -10490,6 +10493,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
return ret;
}
+#ifndef CONFIG_USER_ONLY
static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
ARMMMUIdx mmu_idx)
{
@@ -13746,11 +13750,7 @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
*pc = env->pc;
flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
-#ifndef CONFIG_USER_ONLY
- /*
- * Get control bits for tagged addresses. Note that the
- * translator only uses this for instruction addresses.
- */
+ /* Get control bits for tagged addresses. */
{
ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
@@ -13769,7 +13769,6 @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
}
-#endif
if (cpu_isar_feature(aa64_sve, cpu)) {
int sve_el = sve_exception_el(env, current_el);
--
2.17.2
* [Qemu-devel] [PATCH v2 4/4] target/arm: Enable TBI for user-only
From: Richard Henderson @ 2019-02-04 13:21 UTC
To: qemu-devel; +Cc: peter.maydell
This has been enabled in the Linux kernel since v3.11
(commit d50240a5f6cea, 2013-09-03,
"arm64: mm: permit use of tagged pointers at EL0").
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 3874dc9875..edf6e0e1f1 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -200,6 +200,12 @@ static void arm_cpu_reset(CPUState *s)
env->vfp.zcr_el[1] = cpu->sve_max_vq - 1;
env->vfp.zcr_el[2] = env->vfp.zcr_el[1];
env->vfp.zcr_el[3] = env->vfp.zcr_el[1];
+ /*
+ * Enable TBI0 and TBI1. While the real kernel only enables TBI0,
+ * turning on both here will produce smaller code and otherwise
+ * make no difference to the user-level emulation.
+ */
+ env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);
#else
/* Reset into the highest available EL */
if (arm_feature(env, ARM_FEATURE_EL3)) {
--
2.17.2
* Re: [Qemu-devel] [PATCH v2 0/4] target/arm: Implement TBI for user-only
From: Peter Maydell @ 2019-02-04 14:14 UTC
To: Richard Henderson; +Cc: QEMU Developers
On Mon, 4 Feb 2019 at 13:21, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Based-on: <20190204131228.25949-1-richard.henderson@linaro.org>
> aka v3 BTI.
>
> This is a prerequisite to implementing v8.5-MemTag.
>
>
> r~
>
Applied to target-arm.next, thanks.
-- PMM
* Re: [Qemu-devel] [PATCH v2 3/4] target/arm: Compute TB_FLAGS for TBI for user-only
From: Peter Maydell @ 2019-02-04 14:45 UTC
To: Richard Henderson; +Cc: QEMU Developers
On Mon, 4 Feb 2019 at 13:21, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Enables, but does not turn on, TBI for CONFIG_USER_ONLY.
>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
This patch turns out to break compilation of the linux-user
targets with clang, which is stricter than gcc in warning about
unused static functions.
I propose to squash in the following fixup patch:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index c99b69074a5..935c6f9bfa7 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9601,6 +9601,8 @@ static inline uint32_t regime_el(CPUARMState
*env, ARMMMUIdx mmu_idx)
}
}
+#ifndef CONFIG_USER_ONLY
+
/* Return the SCTLR value which controls this address translation regime */
static inline uint32_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
{
@@ -9656,6 +9658,22 @@ static inline bool
regime_translation_big_endian(CPUARMState *env,
return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0;
}
+/* Return the TTBR associated with this translation regime */
+static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
+ int ttbrn)
+{
+ if (mmu_idx == ARMMMUIdx_S2NS) {
+ return env->cp15.vttbr_el2;
+ }
+ if (ttbrn == 0) {
+ return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
+ } else {
+ return env->cp15.ttbr1_el[regime_el(env, mmu_idx)];
+ }
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
/* Return the TCR controlling this translation regime */
static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
{
@@ -9676,20 +9694,6 @@ static inline ARMMMUIdx
stage_1_mmu_idx(ARMMMUIdx mmu_idx)
return mmu_idx;
}
-/* Return the TTBR associated with this translation regime */
-static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
- int ttbrn)
-{
- if (mmu_idx == ARMMMUIdx_S2NS) {
- return env->cp15.vttbr_el2;
- }
- if (ttbrn == 0) {
- return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
- } else {
- return env->cp15.ttbr1_el[regime_el(env, mmu_idx)];
- }
-}
-
/* Return true if the translation regime is using LPAE format page tables */
static inline bool regime_using_lpae_format(CPUARMState *env,
ARMMMUIdx mmu_idx)
@@ -9715,6 +9719,7 @@ bool arm_s1_regime_using_lpae_format(CPUARMState
*env, ARMMMUIdx mmu_idx)
return regime_using_lpae_format(env, mmu_idx);
}
+#ifndef CONFIG_USER_ONLY
static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
{
switch (mmu_idx) {
@@ -9733,7 +9738,6 @@ static inline bool regime_is_user(CPUARMState
*env, ARMMMUIdx mmu_idx)
}
}
-#ifndef CONFIG_USER_ONLY
/* Translate section/page access permissions to page
* R/W protection flags
*
thanks
-- PMM
* Re: [Qemu-devel] [PATCH v2 3/4] target/arm: Compute TB_FLAGS for TBI for user-only
From: Richard Henderson @ 2019-02-04 14:53 UTC
To: Peter Maydell; +Cc: QEMU Developers
On 2/4/19 2:45 PM, Peter Maydell wrote:
> On Mon, 4 Feb 2019 at 13:21, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Enables, but does not turn on, TBI for CONFIG_USER_ONLY.
>>
>> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>
> This patch turns out to break compilation of the linux-user
> targets with clang, which is stricter than gcc about warning about
> unused static functions.
>
> I propose to squash in the following fixup patch:
Thanks, that works for me.
r~