* [PULL 0/7] target-arm queue
@ 2022-11-04 11:35 Peter Maydell
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel
Hi; this pull request has a collection of bug fixes for rc0.
The big one is the Trusted Firmware boot regression fix.
thanks
-- PMM
The following changes since commit ece5f8374d0416a339f0c0a9399faa2c42d4ad6f:
Merge tag 'linux-user-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging (2022-11-03 10:55:05 -0400)
are available in the Git repository at:
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221104
for you to fetch changes up to cead7fa4c06087c86c67c5ce815cc1ff0bfeac3a:
target/arm: Two fixes for secure ptw (2022-11-04 10:58:58 +0000)
----------------------------------------------------------------
target-arm queue:
* Fix regression booting Trusted Firmware
* Honor HCR_E2H and HCR_TGE in ats_write64()
* Copy the entire vector in DO_ZIP
* Fix Privileged Access Never (PAN) for aarch32
* Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
* Set SCR_EL3.HXEn when direct booting kernel
* Set SME and SVE EL3 vector lengths when direct booting kernel
----------------------------------------------------------------
Ake Koomsin (1):
target/arm: Honor HCR_E2H and HCR_TGE in ats_write64()
Peter Maydell (3):
hw/arm/boot: Set SME and SVE EL3 vector lengths when booting kernel
hw/arm/boot: Set SCR_EL3.HXEn when booting kernel
target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
Richard Henderson (2):
target/arm: Copy the entire vector in DO_ZIP
target/arm: Two fixes for secure ptw
Timofey Kutergin (1):
target/arm: Fix Privileged Access Never (PAN) for aarch32
hw/arm/boot.c | 5 ++++
target/arm/helper.c | 64 +++++++++++++++++++++++++++++--------------------
target/arm/ptw.c | 50 ++++++++++++++++++++++++++++----------
target/arm/sve_helper.c | 4 ++--
4 files changed, 83 insertions(+), 40 deletions(-)

* [PULL 1/7] hw/arm/boot: Set SME and SVE EL3 vector lengths when booting kernel
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

When we direct boot a kernel on a CPU which emulates EL3, we need to
set up the EL3 system registers as the Linux kernel documentation
specifies:
    https://www.kernel.org/doc/Documentation/arm64/booting.rst

For SVE and SME this includes:
 - ZCR_EL3.LEN must be initialised to the same value for all CPUs the
   kernel is executed on.
 - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
   kernel will execute on.

Although we are technically compliant with this, the "same value" we
currently use by default is the reset value of 0. This will end up
forcing the guest kernel's SVE and SME vector length to be only the
smallest supported length.

Initialize the vector length fields to their maximum possible value,
which is 0xf. If the implementation doesn't actually support that
vector length then the effective vector length will be constrained
down to the maximum supported value at point of use. This allows the
guest to use all the vector lengths the emulated CPU supports (by
programming the _EL2 and _EL1 versions of these registers.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221027140207.413084-2-peter.maydell@linaro.org
---
 hw/arm/boot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index b106f314685..17d38260faf 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -764,10 +764,12 @@ static void do_cpu_reset(void *opaque)
         }
         if (cpu_isar_feature(aa64_sve, cpu)) {
             env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
+            env->vfp.zcr_el[3] = 0xf;
         }
         if (cpu_isar_feature(aa64_sme, cpu)) {
             env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
             env->cp15.scr_el3 |= SCR_ENTP2;
+            env->vfp.smcr_el[3] = 0xf;
         }
         /* AArch64 kernels never boot in secure mode */
         assert(!info->secure_boot);
--
2.25.1
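
The "constrained down to the maximum supported value at point of use"
behaviour is what makes 0xf a safe default. As a rough standalone
illustration of the idea (plain C with invented names, not QEMU code;
real hardware additionally rounds down to a supported length), the
effective vector length is limited by every applicable ZCR_ELx.LEN
field and by the implementation maximum:

    #include <stdint.h>

    static uint32_t min_u32(uint32_t a, uint32_t b)
    {
        return a < b ? a : b;
    }

    /*
     * LEN fields are the low 4 bits of ZCR_EL1..ZCR_EL3; max_vq is the
     * largest vector size (in 128-bit quadwords) the CPU implements.
     */
    uint32_t effective_sve_vq(uint32_t zcr_el1_len, uint32_t zcr_el2_len,
                              uint32_t zcr_el3_len, uint32_t max_vq)
    {
        /* Requested size is LEN + 1 quadwords, limited by every applicable EL... */
        uint32_t vq = min_u32(zcr_el1_len,
                              min_u32(zcr_el2_len, zcr_el3_len)) + 1;
        /* ...and never more than the implementation actually supports. */
        return min_u32(vq, max_vq);
    }

With ZCR_EL3.LEN left at its reset value of 0 the minimum is always 0,
so the guest is pinned to 128-bit vectors regardless of what it writes
to ZCR_EL1 or ZCR_EL2; initialised to 0xf, the EL3 field never becomes
the limiting factor.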

* [PULL 2/7] hw/arm/boot: Set SCR_EL3.HXEn when booting kernel
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

When we direct boot a kernel on a CPU which emulates EL3, we need to
set up the EL3 system registers as the Linux kernel documentation
specifies:
    https://www.kernel.org/doc/Documentation/arm64/booting.rst

For CPUs with FEAT_HCX support this includes:
 - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

but we forgot to do this when implementing FEAT_HCX, which would mean
that a guest trying to access the HCRX_EL2 register would crash.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221027140207.413084-3-peter.maydell@linaro.org
---
 hw/arm/boot.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index 17d38260faf..15c2bf1867f 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -771,6 +771,9 @@ static void do_cpu_reset(void *opaque)
             env->cp15.scr_el3 |= SCR_ENTP2;
             env->vfp.smcr_el[3] = 0xf;
         }
+        if (cpu_isar_feature(aa64_hcx, cpu)) {
+            env->cp15.scr_el3 |= SCR_HXEN;
+        }
         /* AArch64 kernels never boot in secure mode */
         assert(!info->secure_boot);
         /* This hook is only supported for AArch32 currently:
--
2.25.1

* [PULL 3/7] target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

The HCR_EL2.TTLB bit is supposed to trap all EL1 execution of TLB
maintenance instructions. However we have added new TLB insns for
FEAT_TLBIOS and FEAT_TLBIRANGE, and forgot to set their accessfn to
access_ttlb. Add the missing accessfns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index b070a20f1ad..efbdc657a2d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6717,51 +6717,51 @@ static const ARMCPRegInfo pauth_reginfo[] = {
 static const ARMCPRegInfo tlbirange_reginfo[] = {
     { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1is_write },
     { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae1_write },
     { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
@@ -6832,27 +6832,27 @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
 static const ARMCPRegInfo tlbios_reginfo[] = {
     { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
-      .access = PL1_W, .type = ARM_CP_NO_RAW,
+      .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
--
2.25.1
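
For context, HCR_EL2.TTLB trapping for the pre-existing TLBI encodings
is implemented by an accessfn in target/arm/helper.c; its shape is
roughly the sketch below (approximate, reconstructed rather than quoted
from the tree): an EL1 access is redirected to EL2 whenever the
effective HCR_EL2.TTLB bit is set.

    /* Sketch of an access_ttlb-style check; not the exact QEMU source. */
    static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
    {
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }

Wiring that same .accessfn into the TLBIOS and TLBIRANGE entries is
sufficient, because QEMU consults a register's accessfn before
performing the access, so the new encodings now take the same trap
path as the older TLBI ops.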

* [PULL 4/7] target/arm: Fix Privileged Access Never (PAN) for aarch32
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

From: Timofey Kutergin <tkutergin@gmail.com>

When we implemented the PAN support we theoretically wanted to
support it for both AArch32 and AArch64, but in practice several bugs
made it essentially unusable with an AArch32 guest. Fix all those
problems:
 - Use CPSR.PAN to check for PAN state in aarch32 mode
 - throw permission fault during address translation when PAN is
   enabled and kernel tries to access a user-accessible page
 - ignore SCTLR_XP bit for armv7 and armv8 (conflicts with SCTLR_SPAN).

Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
[PMM: tweak commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 13 +++++++++++--
 target/arm/ptw.c    | 35 ++++++++++++++++++++++++++++++-----
 2 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index efbdc657a2d..077581187e7 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11003,6 +11003,15 @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 }
 #endif
 
+static bool arm_pan_enabled(CPUARMState *env)
+{
+    if (is_a64(env)) {
+        return env->pstate & PSTATE_PAN;
+    } else {
+        return env->uncached_cpsr & CPSR_PAN;
+    }
+}
+
 ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
 {
     ARMMMUIdx idx;
@@ -11023,7 +11032,7 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
         }
         break;
     case 1:
-        if (env->pstate & PSTATE_PAN) {
+        if (arm_pan_enabled(env)) {
             idx = ARMMMUIdx_E10_1_PAN;
         } else {
             idx = ARMMMUIdx_E10_1;
@@ -11032,7 +11041,7 @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
     case 2:
         /* Note that TGE does not apply at EL2.  */
         if (arm_hcr_el2_eff(env) & HCR_E2H) {
-            if (env->pstate & PSTATE_PAN) {
+            if (arm_pan_enabled(env)) {
                 idx = ARMMMUIdx_E20_2_PAN;
             } else {
                 idx = ARMMMUIdx_E20_2;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 58a7bbda505..e04dccff44f 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -503,12 +503,11 @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
  * @mmu_idx: MMU index indicating required translation regime
  * @ap: The 3-bit access permissions (AP[2:0])
  * @domain_prot: The 2-bit domain access permissions
+ * @is_user: TRUE if accessing from PL0
  */
-static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
-                         int ap, int domain_prot)
+static int ap_to_rw_prot_is_user(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                 int ap, int domain_prot, bool is_user)
 {
-    bool is_user = regime_is_user(env, mmu_idx);
-
     if (domain_prot == 3) {
         return PAGE_READ | PAGE_WRITE;
     }
@@ -552,6 +551,20 @@ static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
     }
 }
 
+/*
+ * Translate section/page access permissions to page R/W protection flags
+ * @env: CPUARMState
+ * @mmu_idx: MMU index indicating required translation regime
+ * @ap: The 3-bit access permissions (AP[2:0])
+ * @domain_prot: The 2-bit domain access permissions
+ */
+static int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx,
+                         int ap, int domain_prot)
+{
+    return ap_to_rw_prot_is_user(env, mmu_idx, ap, domain_prot,
+                                 regime_is_user(env, mmu_idx));
+}
+
 /*
  * Translate section/page access permissions to page R/W protection flags.
  * @ap: The 2-bit simple AP (AP[2:1])
@@ -720,6 +733,7 @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
     hwaddr phys_addr;
     uint32_t dacr;
     bool ns;
+    int user_prot;
 
     /* Pagetable walk. */
     /* Lookup l1 descriptor. */
@@ -831,8 +845,10 @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
                 goto do_fault;
             }
             result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
+            user_prot = simple_ap_to_rw_prot_is_user(ap >> 1, 1);
         } else {
             result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+            user_prot = ap_to_rw_prot_is_user(env, mmu_idx, ap, domain_prot, 1);
         }
         if (result->f.prot && !xn) {
             result->f.prot |= PAGE_EXEC;
@@ -842,6 +858,14 @@ static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
             fi->type = ARMFault_Permission;
             goto do_fault;
         }
+        if (regime_is_pan(env, mmu_idx) &&
+            !regime_is_user(env, mmu_idx) &&
+            user_prot &&
+            access_type != MMU_INST_FETCH) {
+            /* Privileged Access Never fault */
+            fi->type = ARMFault_Permission;
+            goto do_fault;
+        }
     }
     if (ns) {
         /* The NS bit will (as required by the architecture) have no effect if
@@ -2773,7 +2797,8 @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
     if (regime_using_lpae_format(env, mmu_idx)) {
         return get_phys_addr_lpae(env, ptw, address, access_type, false,
                                   result, fi);
-    } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
+    } else if (arm_feature(env, ARM_FEATURE_V7) ||
+               regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
     } else {
         return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
--
2.25.1

* [PULL 5/7] target/arm: Copy the entire vector in DO_ZIP
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

From: Richard Henderson <richard.henderson@linaro.org>

With odd_ofs set, we weren't copying enough data.

Fixes: 09eb6d7025d1 ("target/arm: Move sve zip high_ofs into simd_data")
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221031054144.3574-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 3d0d2987cd0..1afeadf9c85 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3366,10 +3366,10 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)            \
     /* We produce output faster than we consume input.                    \
        Therefore we must be mindful of possible overlap.  */              \
     if (unlikely((vn - vd) < (uintptr_t)oprsz)) {                         \
-        vn = memcpy(&tmp_n, vn, oprsz_2);                                 \
+        vn = memcpy(&tmp_n, vn, oprsz);                                   \
     }                                                                     \
     if (unlikely((vm - vd) < (uintptr_t)oprsz)) {                         \
-        vm = memcpy(&tmp_m, vm, oprsz_2);                                 \
+        vm = memcpy(&tmp_m, vm, oprsz);                                   \
     }                                                                     \
     for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {                         \
         *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + odd_ofs + H(i));    \
--
2.25.1
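
The one-word change matters because the "high" (odd_ofs != 0) forms of
ZIP read their inputs starting half way into the source vector, so the
overlap bounce buffer has to hold the whole vector. A small standalone
demonstration of the access pattern (plain C with invented names and
sizes, not QEMU code) is:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define OPRSZ 16u   /* hypothetical vector length in bytes */

    /* Interleave the high halves of vn and vm into vd (byte elements). */
    static void zip_high(uint8_t *vd, const uint8_t *vn, const uint8_t *vm)
    {
        uint8_t tmp_n[OPRSZ], tmp_m[OPRSZ];
        const unsigned oprsz_2 = OPRSZ / 2, odd_ofs = OPRSZ / 2;

        /* The sources may overlap the destination: bounce the *full*
         * vector, because the reads below go up to byte OPRSZ - 1. */
        if (vn == vd) {
            vn = memcpy(tmp_n, vn, OPRSZ);
        }
        if (vm == vd) {
            vm = memcpy(tmp_m, vm, OPRSZ);
        }
        for (unsigned i = 0; i < oprsz_2; i++) {
            vd[2 * i + 0] = vn[odd_ofs + i];
            vd[2 * i + 1] = vm[odd_ofs + i];
        }
    }

    int main(void)
    {
        uint8_t v[OPRSZ];
        for (unsigned i = 0; i < OPRSZ; i++) {
            v[i] = (uint8_t)i;
        }
        zip_high(v, v, v);   /* fully overlapping operands */
        for (unsigned i = 0; i < OPRSZ; i++) {
            printf("%u ", v[i]);
        }
        printf("\n");
        return 0;
    }

Copying only OPRSZ / 2 bytes into tmp_n and tmp_m would leave the upper
half of the bounce buffers uninitialised, which is exactly the "not
copying enough data" that this patch fixes.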

* [PULL 6/7] target/arm: Honor HCR_E2H and HCR_TGE in ats_write64()
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

From: Ake Koomsin <ake@igel.co.jp>

We need to check HCR_E2H and HCR_TGE to select the right MMU index
for the correct translation regime.

To check for EL2&0 translation regime:
 - For S1E0*, S1E1* and S12E* ops, check both HCR_E2H and HCR_TGE
 - For S1E2* ops, check only HCR_E2H

Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221101064250.12444-1-ake@igel.co.jp
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 077581187e7..d8c8223ec38 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3501,19 +3501,22 @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     ARMMMUIdx mmu_idx;
     int secure = arm_is_secure_below_el3(env);
+    uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+    bool regime_e20 = (hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);
 
     switch (ri->opc2 & 6) {
     case 0:
         switch (ri->opc1) {
         case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
             if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
-                mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
+                mmu_idx = regime_e20 ?
+                          ARMMMUIdx_E20_2_PAN : ARMMMUIdx_Stage1_E1_PAN;
             } else {
-                mmu_idx = ARMMMUIdx_Stage1_E1;
+                mmu_idx = regime_e20 ? ARMMMUIdx_E20_2 : ARMMMUIdx_Stage1_E1;
             }
             break;
         case 4: /* AT S1E2R, AT S1E2W */
-            mmu_idx = ARMMMUIdx_E2;
+            mmu_idx = hcr_el2 & HCR_E2H ? ARMMMUIdx_E20_2 : ARMMMUIdx_E2;
             break;
         case 6: /* AT S1E3R, AT S1E3W */
             mmu_idx = ARMMMUIdx_E3;
@@ -3524,13 +3527,13 @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         }
         break;
     case 2: /* AT S1E0R, AT S1E0W */
-        mmu_idx = ARMMMUIdx_Stage1_E0;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_0 : ARMMMUIdx_Stage1_E0;
         break;
     case 4: /* AT S12E1R, AT S12E1W */
-        mmu_idx = ARMMMUIdx_E10_1;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_2 : ARMMMUIdx_E10_1;
         break;
     case 6: /* AT S12E0R, AT S12E0W */
-        mmu_idx = ARMMMUIdx_E10_0;
+        mmu_idx = regime_e20 ? ARMMMUIdx_E20_0 : ARMMMUIdx_E10_0;
         break;
     default:
         g_assert_not_reached();
--
2.25.1

* [PULL 7/7] target/arm: Two fixes for secure ptw
From: Peter Maydell @ 2022-11-04 11:35 UTC
To: qemu-devel

From: Richard Henderson <richard.henderson@linaro.org>

Reversed the sense of non-secure in get_phys_addr_lpae, and failed to
initialize attrs.secure for ARMMMUIdx_Phys_S.

Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index e04dccff44f..3745ac97234 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -1381,7 +1381,7 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         descaddr |= (address >> (stride * (4 - level))) & indexmask;
         descaddr &= ~7ULL;
         nstable = extract32(tableattrs, 4, 1);
-        if (!nstable) {
+        if (nstable) {
             /*
              * Stage2_S -> Stage2 or Phys_S -> Phys_NS
              * Assert that the non-secure idx are even, and relative order.
@@ -2695,6 +2695,13 @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
     bool is_secure = ptw->in_secure;
     ARMMMUIdx s1_mmu_idx;
 
+    /*
+     * The page table entries may downgrade secure to non-secure, but
+     * cannot upgrade an non-secure translation regime's attributes
+     * to secure.
+     */
+    result->f.attrs.secure = is_secure;
+
     switch (mmu_idx) {
     case ARMMMUIdx_Phys_S:
     case ARMMMUIdx_Phys_NS:
@@ -2736,12 +2743,6 @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
         break;
     }
 
-    /*
-     * The page table entries may downgrade secure to non-secure, but
-     * cannot upgrade an non-secure translation regime's attributes
-     * to secure.
-     */
-    result->f.attrs.secure = is_secure;
     result->f.attrs.user = regime_is_user(env, mmu_idx);
 
     /*
--
2.25.1

* Re: [PULL 0/7] target-arm queue
From: Stefan Hajnoczi @ 2022-11-05 12:34 UTC
To: Peter Maydell; +Cc: qemu-devel

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.2
for any user-visible changes.