From: Richard Henderson <richard.henderson@linaro.org>
To: Peter Maydell <peter.maydell@linaro.org>,
	qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Andrew Jones <drjones@redhat.com>, Alexander Graf <agraf@csgraf.de>
Subject: Re: [PATCH 4/6] target/arm: Unindent unnecessary else-clause
Date: Sun, 6 Feb 2022 11:20:34 +1100
Message-ID: <0f2b9fe3-329c-d0de-aa37-a0e2242cbf6b@linaro.org>
In-Reply-To: <20220204165506.2846058-5-peter.maydell@linaro.org>

On 2/5/22 03:55, Peter Maydell wrote:
> Now that the if() branch of the condition in aarch64_max_initfn()
> returns early, we don't need to keep the rest of the code in
> the function inside an else block. Remove the else, unindenting
> that code.
> 
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>   target/arm/cpu64.c | 288 +++++++++++++++++++++++----------------------
>   1 file changed, 145 insertions(+), 143 deletions(-)
> 
> diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
> index ae2e431247f..bc25a2567bf 100644
> --- a/target/arm/cpu64.c
> +++ b/target/arm/cpu64.c
> @@ -707,176 +707,178 @@ static void aarch64_host_initfn(Object *obj)
>   static void aarch64_max_initfn(Object *obj)
>   {
>       ARMCPU *cpu = ARM_CPU(obj);
> +    uint64_t t;
> +    uint32_t u;
>   
>       if (kvm_enabled()) {
>           /* With KVM, '-cpu max' is identical to '-cpu host' */
>           aarch64_host_initfn(obj);
>           return;
> -    } else {
> -        uint64_t t;
> -        uint32_t u;
> -        aarch64_a57_initfn(obj);
> +    }


Could move the init of 'cpu' afterward; a rough sketch of what I mean is below.  It's a
runtime call to verify the QOM class, and we'll wind up doing that again inside
aarch64_host_initfn().  But either way,
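
Something like this, purely as an untested illustration (not part of this patch):

    static void aarch64_max_initfn(Object *obj)
    {
        ARMCPU *cpu;
        uint64_t t;
        uint32_t u;

        if (kvm_enabled()) {
            /* With KVM, '-cpu max' is identical to '-cpu host' */
            aarch64_host_initfn(obj);
            return;
        }

        /* '-cpu max' for TCG: we currently do this as "A57 with extra things" */
        cpu = ARM_CPU(obj);   /* QOM cast now only happens on the TCG path */
        aarch64_a57_initfn(obj);
        ...
    }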

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

>   
> -        /*
> -         * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real
> -         * one and try to apply errata workarounds or use impdef features we
> -         * don't provide.
> -         * An IMPLEMENTER field of 0 means "reserved for software use";
> -         * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers
> -         * to see which features are present";
> -         * the VARIANT, PARTNUM and REVISION fields are all implementation
> -         * defined and we choose to define PARTNUM just in case guest
> -         * code needs to distinguish this QEMU CPU from other software
> -         * implementations, though this shouldn't be needed.
> -         */
> -        t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0);
> -        t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf);
> -        t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q');
> -        t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0);
> -        t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
> -        cpu->midr = t;
> +    /* '-cpu max' for TCG: we currently do this as "A57 with extra things" */
>   
> -        t = cpu->isar.id_aa64isar0;
> -        t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
> -        t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
> -        t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
> -        t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
> -        t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
> -        cpu->isar.id_aa64isar0 = t;
> +    aarch64_a57_initfn(obj);
>   
> -        t = cpu->isar.id_aa64isar1;
> -        t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
> -        t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
> -        t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
> -        cpu->isar.id_aa64isar1 = t;
> +    /*
> +     * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real
> +     * one and try to apply errata workarounds or use impdef features we
> +     * don't provide.
> +     * An IMPLEMENTER field of 0 means "reserved for software use";
> +     * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers
> +     * to see which features are present";
> +     * the VARIANT, PARTNUM and REVISION fields are all implementation
> +     * defined and we choose to define PARTNUM just in case guest
> +     * code needs to distinguish this QEMU CPU from other software
> +     * implementations, though this shouldn't be needed.
> +     */
> +    t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0);
> +    t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf);
> +    t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q');
> +    t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0);
> +    t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
> +    cpu->midr = t;
>   
> -        t = cpu->isar.id_aa64pfr0;
> -        t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
> -        t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
> -        t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
> -        t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
> -        t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
> -        cpu->isar.id_aa64pfr0 = t;
> +    t = cpu->isar.id_aa64isar0;
> +    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
> +    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
> +    t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
> +    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
> +    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
> +    cpu->isar.id_aa64isar0 = t;
>   
> -        t = cpu->isar.id_aa64pfr1;
> -        t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
> -        t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
> -        /*
> -         * Begin with full support for MTE. This will be downgraded to MTE=0
> -         * during realize if the board provides no tag memory, much like
> -         * we do for EL2 with the virtualization=on property.
> -         */
> -        t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
> -        cpu->isar.id_aa64pfr1 = t;
> +    t = cpu->isar.id_aa64isar1;
> +    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
> +    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
> +    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
> +    cpu->isar.id_aa64isar1 = t;
>   
> -        t = cpu->isar.id_aa64mmfr0;
> -        t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 5); /* PARange: 48 bits */
> -        cpu->isar.id_aa64mmfr0 = t;
> +    t = cpu->isar.id_aa64pfr0;
> +    t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
> +    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
> +    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
> +    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
> +    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
> +    cpu->isar.id_aa64pfr0 = t;
>   
> -        t = cpu->isar.id_aa64mmfr1;
> -        t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
> -        t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
> -        t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
> -        t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
> -        t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
> -        t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
> -        cpu->isar.id_aa64mmfr1 = t;
> +    t = cpu->isar.id_aa64pfr1;
> +    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
> +    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
> +    /*
> +     * Begin with full support for MTE. This will be downgraded to MTE=0
> +     * during realize if the board provides no tag memory, much like
> +     * we do for EL2 with the virtualization=on property.
> +     */
> +    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
> +    cpu->isar.id_aa64pfr1 = t;
>   
> -        t = cpu->isar.id_aa64mmfr2;
> -        t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
> -        t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
> -        t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
> -        cpu->isar.id_aa64mmfr2 = t;
> +    t = cpu->isar.id_aa64mmfr0;
> +    t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 5); /* PARange: 48 bits */
> +    cpu->isar.id_aa64mmfr0 = t;
>   
> -        t = cpu->isar.id_aa64zfr0;
> -        t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2);  /* PMULL */
> -        t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
> -        t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
> -        cpu->isar.id_aa64zfr0 = t;
> +    t = cpu->isar.id_aa64mmfr1;
> +    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
> +    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
> +    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
> +    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
> +    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
> +    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
> +    cpu->isar.id_aa64mmfr1 = t;
>   
> -        /* Replicate the same data to the 32-bit id registers.  */
> -        u = cpu->isar.id_isar5;
> -        u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
> -        u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
> -        u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
> -        u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
> -        u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
> -        u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
> -        cpu->isar.id_isar5 = u;
> +    t = cpu->isar.id_aa64mmfr2;
> +    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
> +    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
> +    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
> +    cpu->isar.id_aa64mmfr2 = t;
>   
> -        u = cpu->isar.id_isar6;
> -        u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, DP, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, SB, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
> -        u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
> -        cpu->isar.id_isar6 = u;
> +    t = cpu->isar.id_aa64zfr0;
> +    t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2);  /* PMULL */
> +    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
> +    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
> +    cpu->isar.id_aa64zfr0 = t;
>   
> -        u = cpu->isar.id_pfr0;
> -        u = FIELD_DP32(u, ID_PFR0, DIT, 1);
> -        cpu->isar.id_pfr0 = u;
> +    /* Replicate the same data to the 32-bit id registers.  */
> +    u = cpu->isar.id_isar5;
> +    u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
> +    u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
> +    u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
> +    u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
> +    u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
> +    u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
> +    cpu->isar.id_isar5 = u;
>   
> -        u = cpu->isar.id_pfr2;
> -        u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
> -        cpu->isar.id_pfr2 = u;
> +    u = cpu->isar.id_isar6;
> +    u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, DP, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, SB, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
> +    u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
> +    cpu->isar.id_isar6 = u;
>   
> -        u = cpu->isar.id_mmfr3;
> -        u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
> -        cpu->isar.id_mmfr3 = u;
> +    u = cpu->isar.id_pfr0;
> +    u = FIELD_DP32(u, ID_PFR0, DIT, 1);
> +    cpu->isar.id_pfr0 = u;
>   
> -        u = cpu->isar.id_mmfr4;
> -        u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
> -        u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
> -        u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
> -        u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
> -        cpu->isar.id_mmfr4 = u;
> +    u = cpu->isar.id_pfr2;
> +    u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
> +    cpu->isar.id_pfr2 = u;
>   
> -        t = cpu->isar.id_aa64dfr0;
> -        t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
> -        cpu->isar.id_aa64dfr0 = t;
> +    u = cpu->isar.id_mmfr3;
> +    u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
> +    cpu->isar.id_mmfr3 = u;
>   
> -        u = cpu->isar.id_dfr0;
> -        u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
> -        cpu->isar.id_dfr0 = u;
> +    u = cpu->isar.id_mmfr4;
> +    u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
> +    u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
> +    u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
> +    u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
> +    cpu->isar.id_mmfr4 = u;
>   
> -        u = cpu->isar.mvfr1;
> -        u = FIELD_DP32(u, MVFR1, FPHP, 3);      /* v8.2-FP16 */
> -        u = FIELD_DP32(u, MVFR1, SIMDHP, 2);    /* v8.2-FP16 */
> -        cpu->isar.mvfr1 = u;
> +    t = cpu->isar.id_aa64dfr0;
> +    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
> +    cpu->isar.id_aa64dfr0 = t;
> +
> +    u = cpu->isar.id_dfr0;
> +    u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
> +    cpu->isar.id_dfr0 = u;
> +
> +    u = cpu->isar.mvfr1;
> +    u = FIELD_DP32(u, MVFR1, FPHP, 3);      /* v8.2-FP16 */
> +    u = FIELD_DP32(u, MVFR1, SIMDHP, 2);    /* v8.2-FP16 */
> +    cpu->isar.mvfr1 = u;
>   
>   #ifdef CONFIG_USER_ONLY
> -        /* For usermode -cpu max we can use a larger and more efficient DCZ
> -         * blocksize since we don't have to follow what the hardware does.
> -         */
> -        cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
> -        cpu->dcz_blocksize = 7; /*  512 bytes */
> +    /* For usermode -cpu max we can use a larger and more efficient DCZ
> +     * blocksize since we don't have to follow what the hardware does.
> +     */
> +    cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
> +    cpu->dcz_blocksize = 7; /*  512 bytes */
>   #endif
>   
> -        bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);
> -    }
> +    bitmap_fill(cpu->sve_vq_supported, ARM_MAX_VQ);
>   
>       aarch64_add_pauth_properties(obj);
>       aarch64_add_sve_properties(obj);


