* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 13:30 ` Marc Zyngier
@ 2017-01-13 13:46 ` Christoffer Dall
2017-01-13 13:57 ` Marc Zyngier
2017-01-13 14:42 ` Mark Rutland
2017-01-13 14:56 ` Suzuki K Poulose
2 siblings, 1 reply; 19+ messages in thread
From: Christoffer Dall @ 2017-01-13 13:46 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>
> On 13/01/17 12:36, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> >> From: Jintack Lim <jintack@cs.columbia.edu>
> >>
> >> The current KVM world switch code unintentionally sets the wrong bits in
> >> CNTHCTL_EL2 when E2H == 1, which may allow the guest OS to access the
> >> physical timer. The bit positions in CNTHCTL_EL2 change depending on the
> >> HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not
> >> set, but bits 11 and 10 respectively when E2H is set.
> >>
> >> In fact, on VHE we only need to set those bits once, not on every world
> >> switch, because the host kernel runs at EL2 with HCR_EL2.TGE == 1, which
> >> makes those bits have no effect on host kernel execution. So we just set
> >> those bits once for guests, and that's it.
> >>
> >> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >> arch/arm/include/asm/virt.h | 5 +++++
> >> arch/arm/kvm/arm.c | 3 +++
> >> arch/arm64/include/asm/virt.h | 9 +++++++++
> >> include/kvm/arm_arch_timer.h | 1 +
> >> virt/kvm/arm/arch_timer.c | 23 +++++++++++++++++++++++
> >> virt/kvm/arm/hyp/timer-sr.c | 33 +++++++++++++++++++++------------
> >> 6 files changed, 62 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
> >> index a2e75b8..6dae195 100644
> >> --- a/arch/arm/include/asm/virt.h
> >> +++ b/arch/arm/include/asm/virt.h
> >> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
> >> return false;
> >> }
> >>
> >> +static inline bool has_vhe(void)
> >> +{
> >> + return false;
> >> +}
> >> +
> >> /* The section containing the hypervisor idmap text */
> >> extern char __hyp_idmap_text_start[];
> >> extern char __hyp_idmap_text_end[];
> >> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >> index 1167678..9d74464 100644
> >> --- a/arch/arm/kvm/arm.c
> >> +++ b/arch/arm/kvm/arm.c
> >> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
> >> __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
> >> __cpu_init_stage2();
> >>
> >> + if (is_kernel_in_hyp_mode())
> >> + kvm_timer_init_vhe();
> >> +
> >> kvm_arm_init_debug();
> >> }
> >>
> >> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> >> index fea1073..439f6b5 100644
> >> --- a/arch/arm64/include/asm/virt.h
> >> +++ b/arch/arm64/include/asm/virt.h
> >> @@ -47,6 +47,7 @@
> >> #include <asm/ptrace.h>
> >> #include <asm/sections.h>
> >> #include <asm/sysreg.h>
> >> +#include <asm/cpufeature.h>
> >>
> >> /*
> >> * __boot_cpu_mode records what mode CPUs were booted in.
> >> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
> >> return read_sysreg(CurrentEL) == CurrentEL_EL2;
> >> }
> >>
> >> +static inline bool has_vhe(void)
> >> +{
> >> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >> + return true;
> >> +
> >> + return false;
> >> +}
> >> +
> >
> > I was experimenting with using has_vhe for some of the optimization code
> > I was writing, and I saw a hyp crash as a result. That made me wonder
> > if this is really safe in Hyp mode?
> >
> > Specifically, there is no guarantee that this will actually be inlined
> > in the caller, right? At least that's what I can gather from trying to
> > understand the semantics of the inline keyword in the GCC manual.
>
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.
>
> > Further, are we guaranteed that the static branch gets compiled into
> > something that doesn't actually look at cpu_hwcap_keys, which is not
> > mapped in hyp mode?
>
> Here's the disassembly:
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0: f9400001 ldr x1, [x0]
> ffff000008ad01d4: 9240bc21 and x1, x1, #0xffffffffffff
> ffff000008ad01d8: d503201f nop
> ffff000008ad01dc: d503201f nop
> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> ffff000008ad01e4: 927ef842 and x2, x2, #0xfffffffffffffffd
> ffff000008ad01e8: b2400042 orr x2, x2, #0x1
> ffff000008ad01ec: d51ce102 msr cnthctl_el2, x2
> ffff000008ad01f0: d2834002 mov x2, #0x1a00 // #6656
> ffff000008ad01f4: 8b020000 add x0, x0, x2
> ffff000008ad01f8: 91038002 add x2, x0, #0xe0
> ffff000008ad01fc: 39425443 ldrb w3, [x2,#149]
> ffff000008ad0200: 34000103 cbz w3, ffff000008ad0220 <__timer_restore_state+0x50>
> ffff000008ad0204: f945a821 ldr x1, [x1,#2896]
> ffff000008ad0208: d51ce061 msr cntvoff_el2, x1
> ffff000008ad020c: f9400441 ldr x1, [x2,#8]
> ffff000008ad0210: d51be341 msr cntv_cval_el0, x1
> ffff000008ad0214: d5033fdf isb
> ffff000008ad0218: b940e000 ldr w0, [x0,#224]
> ffff000008ad021c: d51be320 msr cntv_ctl_el0, x0
> ffff000008ad0220: d65f03c0 ret
>
> The static branch resolves as such when VHE is enabled (taken from
> a running model):
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0: f9400001 ldr x1, [x0]
> ffff000008ad01d4: 9240bc21 nop
> ffff000008ad01d8: d503201f nop
> ffff000008ad01dc: d503201f b ffff000008ad01f0
> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> [...]
>
> That's using a toolchain that supports the "asm goto" feature that is used
> to implement static branches (and that's guaranteed not to generate any
> memory access other than the code patching itself).
>
> Now, with a toolchain that doesn't support this, such as gcc 4.8:
Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
because I cannot seem to reproduce this at the moment.
>
> ffff000008aa5168 <__timer_restore_state>:
> ffff000008aa5168: f9400001 ldr x1, [x0]
> ffff000008aa516c: 9240bc21 and x1, x1, #0xffffffffffff
> ffff000008aa5170: d503201f nop
> ffff000008aa5174: f00038a2 adrp x2, ffff0000091bc000 <reset_devices>
> ffff000008aa5178: 9113e042 add x2, x2, #0x4f8
> ffff000008aa517c: b9402c42 ldr w2, [x2,#44]
> ffff000008aa5180: 6b1f005f cmp w2, wzr
> ffff000008aa5184: 540000ac b.gt ffff000008aa5198 <__timer_restore_state+0x30>
> ffff000008aa5188: d53ce102 mrs x2, cnthctl_el2
> ffff000008aa518c: 927ef842 and x2, x2, #0xfffffffffffffffd
> ffff000008aa5190: b2400042 orr x2, x2, #0x1
> ffff000008aa5194: d51ce102 msr cnthctl_el2, x2
> ffff000008aa5198: 91400402 add x2, x0, #0x1, lsl #12
> ffff000008aa519c: 396dd443 ldrb w3, [x2,#2933]
> ffff000008aa51a0: 34000103 cbz w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> ffff000008aa51a4: f945a821 ldr x1, [x1,#2896]
> ffff000008aa51a8: d51ce061 msr cntvoff_el2, x1
> ffff000008aa51ac: f9457441 ldr x1, [x2,#2792]
> ffff000008aa51b0: d51be341 msr cntv_cval_el0, x1
> ffff000008aa51b4: d5033fdf isb
> ffff000008aa51b8: b95ae000 ldr w0, [x0,#6880]
> ffff000008aa51bc: d51be320 msr cntv_ctl_el0, x0
> ffff000008aa51c0: d65f03c0 ret
>
> This is now controlled by some data located at FFFF0000091BC524:
>
> maz at approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>
> vmlinux: file format elf64-littleaarch64
>
> Sections:
> Idx Name Size VMA LMA File off Algn
> [...]
> 23 .bss 000da348 ffff0000091b8000 ffff0000091b8000 01147a00 2**12
> ALLOC
>
> That's the BSS, which we do map in HYP (fairly recent).
But we cannot map the BSS at the same address though, right? So
wouldn't this actually fail?
>
> But maybe we should have some stronger guarantees that we'll
> always get things inlined, and that the "const" side is enforced:
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
> }
>
> /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)
> {
> - if (num >= ARM64_NCAPS)
> - return false;
> + BUILD_BUG_ON(!__builtin_constant_p(num));
> + BUILD_BUG_ON(num >= ARM64_NCAPS);
> +
> return static_branch_unlikely(&cpu_hwcap_keys[num]);
> }
>
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
> return read_sysreg(CurrentEL) == CurrentEL_EL2;
> }
>
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
> {
> if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> return true;
>
>
> But that's probably another patch or two. Thoughts?
>
Yes, if something needs fixing there, it should be a separate patch.
Thanks,
-Christoffer
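
As background for the CNTHCTL_EL2 discussion above, here is a minimal
sketch of the one-time VHE timer setup described in the commit message.
This is a reconstruction for illustration, not the actual
virt/kvm/arm/arch_timer.c hunk (which is elided from the quote above);
the CNTHCTL_* mask names are assumed.

    /* When HCR_EL2.E2H == 1, the EL1 physical timer trap controls move
     * from CNTHCTL_EL2 bits 1:0 up to bits 11:10. */
    #define CNTHCTL_EL1PCTEN	(1 << 0)	/* assumed mask name */
    #define CNTHCTL_EL1PCEN	(1 << 1)	/* assumed mask name */

    void kvm_timer_init_vhe(void)
    {
    	u32 shift = 10;	/* E2H == 1: trap bits live at 11:10 */
    	u64 val;

    	val = read_sysreg(cnthctl_el2);
    	/* Disallow physical timer access for the guest... */
    	val &= ~(CNTHCTL_EL1PCEN << shift);
    	/* ...but allow physical counter access. */
    	val |= (CNTHCTL_EL1PCTEN << shift);
    	write_sysreg(val, cnthctl_el2);
    }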
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 13:46 ` Christoffer Dall
@ 2017-01-13 13:57 ` Marc Zyngier
2017-01-13 14:14 ` Christoffer Dall
0 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2017-01-13 13:57 UTC (permalink / raw)
To: linux-arm-kernel
On 13/01/17 13:46, Christoffer Dall wrote:
> On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>
>> On 13/01/17 12:36, Christoffer Dall wrote:
>>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>>
>>>> The current KVM world switch code unintentionally sets the wrong bits in
>>>> CNTHCTL_EL2 when E2H == 1, which may allow the guest OS to access the
>>>> physical timer. The bit positions in CNTHCTL_EL2 change depending on the
>>>> HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not
>>>> set, but bits 11 and 10 respectively when E2H is set.
>>>>
>>>> In fact, on VHE we only need to set those bits once, not on every world
>>>> switch, because the host kernel runs at EL2 with HCR_EL2.TGE == 1, which
>>>> makes those bits have no effect on host kernel execution. So we just set
>>>> those bits once for guests, and that's it.
>>>>
>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>> arch/arm/include/asm/virt.h | 5 +++++
>>>> arch/arm/kvm/arm.c | 3 +++
>>>> arch/arm64/include/asm/virt.h | 9 +++++++++
>>>> include/kvm/arm_arch_timer.h | 1 +
>>>> virt/kvm/arm/arch_timer.c | 23 +++++++++++++++++++++++
>>>> virt/kvm/arm/hyp/timer-sr.c | 33 +++++++++++++++++++++------------
>>>> 6 files changed, 62 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
>>>> index a2e75b8..6dae195 100644
>>>> --- a/arch/arm/include/asm/virt.h
>>>> +++ b/arch/arm/include/asm/virt.h
>>>> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>> return false;
>>>> }
>>>>
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> + return false;
>>>> +}
>>>> +
>>>> /* The section containing the hypervisor idmap text */
>>>> extern char __hyp_idmap_text_start[];
>>>> extern char __hyp_idmap_text_end[];
>>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>>> index 1167678..9d74464 100644
>>>> --- a/arch/arm/kvm/arm.c
>>>> +++ b/arch/arm/kvm/arm.c
>>>> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
>>>> __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
>>>> __cpu_init_stage2();
>>>>
>>>> + if (is_kernel_in_hyp_mode())
>>>> + kvm_timer_init_vhe();
>>>> +
>>>> kvm_arm_init_debug();
>>>> }
>>>>
>>>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
>>>> index fea1073..439f6b5 100644
>>>> --- a/arch/arm64/include/asm/virt.h
>>>> +++ b/arch/arm64/include/asm/virt.h
>>>> @@ -47,6 +47,7 @@
>>>> #include <asm/ptrace.h>
>>>> #include <asm/sections.h>
>>>> #include <asm/sysreg.h>
>>>> +#include <asm/cpufeature.h>
>>>>
>>>> /*
>>>> * __boot_cpu_mode records what mode CPUs were booted in.
>>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>> return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>>> }
>>>>
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>>> + return true;
>>>> +
>>>> + return false;
>>>> +}
>>>> +
>>>
>>> I was experimenting with using has_vhe for some of the optimization code
>>> I was writing, and I saw a hyp crash as a result. That made me wonder
>>> if this is really safe in Hyp mode?
>>>
>>> Specifically, there is no guarantee that this will actually be inlined
>>> in the caller, right? At least that's what I can gather from trying to
>>> understand the semantics of the inline keyword in the GCC manual.
>>
>> Indeed, there is no strict guarantee that this is enforced. We should
>> probably have __always_inline instead. But having checked the generated
>> code for __timer_restore_state, the function is definitely inlined
>> (gcc 6.2). Happy to queue an extra patch changing that.
>>
>>> Further, are we guaranteed that the static branch gets compiled into
>>> something that doesn't actually look at cpu_hwcap_keys, which is not
>>> mapped in hyp mode?
>>
>> Here's the disassembly:
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0: f9400001 ldr x1, [x0]
>> ffff000008ad01d4: 9240bc21 and x1, x1, #0xffffffffffff
>> ffff000008ad01d8: d503201f nop
>> ffff000008ad01dc: d503201f nop
>> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
>> ffff000008ad01e4: 927ef842 and x2, x2, #0xfffffffffffffffd
>> ffff000008ad01e8: b2400042 orr x2, x2, #0x1
>> ffff000008ad01ec: d51ce102 msr cnthctl_el2, x2
>> ffff000008ad01f0: d2834002 mov x2, #0x1a00 // #6656
>> ffff000008ad01f4: 8b020000 add x0, x0, x2
>> ffff000008ad01f8: 91038002 add x2, x0, #0xe0
>> ffff000008ad01fc: 39425443 ldrb w3, [x2,#149]
>> ffff000008ad0200: 34000103 cbz w3, ffff000008ad0220 <__timer_restore_state+0x50>
>> ffff000008ad0204: f945a821 ldr x1, [x1,#2896]
>> ffff000008ad0208: d51ce061 msr cntvoff_el2, x1
>> ffff000008ad020c: f9400441 ldr x1, [x2,#8]
>> ffff000008ad0210: d51be341 msr cntv_cval_el0, x1
>> ffff000008ad0214: d5033fdf isb
>> ffff000008ad0218: b940e000 ldr w0, [x0,#224]
>> ffff000008ad021c: d51be320 msr cntv_ctl_el0, x0
>> ffff000008ad0220: d65f03c0 ret
>>
>> The static branch resolves as such when VHE is enabled (taken from
>> a running model):
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0: f9400001 ldr x1, [x0]
>> ffff000008ad01d4: 9240bc21 nop
>> ffff000008ad01d8: d503201f nop
>> ffff000008ad01dc: d503201f b ffff000008ad01f0
>> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
>> [...]
>>
>> That's using a toolchain that supports the "asm goto" feature that is used
>> to implement static branches (and that's guaranteed not to generate any
>> memory access other than the code patching itself).
>>
>> Now, with a toolchain that doesn't support this, such as gcc 4.8:
>
> Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
> because I cannot seem to reproduce this at the moment.
>
>>
>> ffff000008aa5168 <__timer_restore_state>:
>> ffff000008aa5168: f9400001 ldr x1, [x0]
>> ffff000008aa516c: 9240bc21 and x1, x1, #0xffffffffffff
>> ffff000008aa5170: d503201f nop
>> ffff000008aa5174: f00038a2 adrp x2, ffff0000091bc000 <reset_devices>
>> ffff000008aa5178: 9113e042 add x2, x2, #0x4f8
>> ffff000008aa517c: b9402c42 ldr w2, [x2,#44]
>> ffff000008aa5180: 6b1f005f cmp w2, wzr
>> ffff000008aa5184: 540000ac b.gt ffff000008aa5198 <__timer_restore_state+0x30>
>> ffff000008aa5188: d53ce102 mrs x2, cnthctl_el2
>> ffff000008aa518c: 927ef842 and x2, x2, #0xfffffffffffffffd
>> ffff000008aa5190: b2400042 orr x2, x2, #0x1
>> ffff000008aa5194: d51ce102 msr cnthctl_el2, x2
>> ffff000008aa5198: 91400402 add x2, x0, #0x1, lsl #12
>> ffff000008aa519c: 396dd443 ldrb w3, [x2,#2933]
>> ffff000008aa51a0: 34000103 cbz w3, ffff000008aa51c0 <__timer_restore_state+0x58>
>> ffff000008aa51a4: f945a821 ldr x1, [x1,#2896]
>> ffff000008aa51a8: d51ce061 msr cntvoff_el2, x1
>> ffff000008aa51ac: f9457441 ldr x1, [x2,#2792]
>> ffff000008aa51b0: d51be341 msr cntv_cval_el0, x1
>> ffff000008aa51b4: d5033fdf isb
>> ffff000008aa51b8: b95ae000 ldr w0, [x0,#6880]
>> ffff000008aa51bc: d51be320 msr cntv_ctl_el0, x0
>> ffff000008aa51c0: d65f03c0 ret
>>
>> This is now controlled by some data located at FFFF0000091BC524:
>>
>> maz at approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>>
>> vmlinux: file format elf64-littleaarch64
>>
>> Sections:
>> Idx Name Size VMA LMA File off Algn
>> [...]
>> 23 .bss 000da348 ffff0000091b8000 ffff0000091b8000 01147a00 2**12
>> ALLOC
>>
>> That's the BSS, which we do map in HYP (fairly recent).
>
> But we cannot map the BSS at the same address though, right? So
> wouldn't this actually fail?
We map it at the same relative offset, and use adrp to get the base
address (PC relative). So whatever context we're in, we should be OK.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
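
The and x1, x1, #0xffffffffffff at the top of __timer_restore_state is
that offset conversion in action. A rough sketch of the idea, with an
illustrative mask assuming 48-bit VAs (not the exact kern_hyp_va()
definition):

    /* Sketch: the HYP alias of a kernel VA keeps the same low bits, so
     * offsets within a section are preserved and PC-relative adrp+add
     * sequences resolve correctly in either context. */
    #define HYP_PAGE_OFFSET_MASK	((1UL << 48) - 1)	/* assumption: VA_BITS == 48 */
    #define kern_hyp_va(v)		((unsigned long)(v) & HYP_PAGE_OFFSET_MASK)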
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 13:57 ` Marc Zyngier
@ 2017-01-13 14:14 ` Christoffer Dall
0 siblings, 0 replies; 19+ messages in thread
From: Christoffer Dall @ 2017-01-13 14:14 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jan 13, 2017 at 01:57:23PM +0000, Marc Zyngier wrote:
> On 13/01/17 13:46, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> >> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
> >>
> >> On 13/01/17 12:36, Christoffer Dall wrote:
> >>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> >>>> From: Jintack Lim <jintack@cs.columbia.edu>
> >>>>
> >>>> The current KVM world switch code unintentionally sets the wrong bits in
> >>>> CNTHCTL_EL2 when E2H == 1, which may allow the guest OS to access the
> >>>> physical timer. The bit positions in CNTHCTL_EL2 change depending on the
> >>>> HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not
> >>>> set, but bits 11 and 10 respectively when E2H is set.
> >>>>
> >>>> In fact, on VHE we only need to set those bits once, not on every world
> >>>> switch, because the host kernel runs at EL2 with HCR_EL2.TGE == 1, which
> >>>> makes those bits have no effect on host kernel execution. So we just set
> >>>> those bits once for guests, and that's it.
> >>>>
> >>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >>>> ---
> >>>> arch/arm/include/asm/virt.h | 5 +++++
> >>>> arch/arm/kvm/arm.c | 3 +++
> >>>> arch/arm64/include/asm/virt.h | 9 +++++++++
> >>>> include/kvm/arm_arch_timer.h | 1 +
> >>>> virt/kvm/arm/arch_timer.c | 23 +++++++++++++++++++++++
> >>>> virt/kvm/arm/hyp/timer-sr.c | 33 +++++++++++++++++++++------------
> >>>> 6 files changed, 62 insertions(+), 12 deletions(-)
> >>>>
> >>>> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
> >>>> index a2e75b8..6dae195 100644
> >>>> --- a/arch/arm/include/asm/virt.h
> >>>> +++ b/arch/arm/include/asm/virt.h
> >>>> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>>> return false;
> >>>> }
> >>>>
> >>>> +static inline bool has_vhe(void)
> >>>> +{
> >>>> + return false;
> >>>> +}
> >>>> +
> >>>> /* The section containing the hypervisor idmap text */
> >>>> extern char __hyp_idmap_text_start[];
> >>>> extern char __hyp_idmap_text_end[];
> >>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >>>> index 1167678..9d74464 100644
> >>>> --- a/arch/arm/kvm/arm.c
> >>>> +++ b/arch/arm/kvm/arm.c
> >>>> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
> >>>> __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
> >>>> __cpu_init_stage2();
> >>>>
> >>>> + if (is_kernel_in_hyp_mode())
> >>>> + kvm_timer_init_vhe();
> >>>> +
> >>>> kvm_arm_init_debug();
> >>>> }
> >>>>
> >>>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> >>>> index fea1073..439f6b5 100644
> >>>> --- a/arch/arm64/include/asm/virt.h
> >>>> +++ b/arch/arm64/include/asm/virt.h
> >>>> @@ -47,6 +47,7 @@
> >>>> #include <asm/ptrace.h>
> >>>> #include <asm/sections.h>
> >>>> #include <asm/sysreg.h>
> >>>> +#include <asm/cpufeature.h>
> >>>>
> >>>> /*
> >>>> * __boot_cpu_mode records what mode CPUs were booted in.
> >>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>>> return read_sysreg(CurrentEL) == CurrentEL_EL2;
> >>>> }
> >>>>
> >>>> +static inline bool has_vhe(void)
> >>>> +{
> >>>> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >>>> + return true;
> >>>> +
> >>>> + return false;
> >>>> +}
> >>>> +
> >>>
> >>> I was experimenting with using has_vhe for some of the optimization code
> >>> I was writing, and I saw a hyp crash as a result. That made me wonder
> >>> if this is really safe in Hyp mode?
> >>>
> >>> Specifically, there is no guarantee that this will actually be inlined
> >>> in the caller, right? At least that's what I can gather from trying to
> >>> understand the semantics of the inline keyword in the GCC manual.
> >>
> >> Indeed, there is no strict guarantee that this is enforced. We should
> >> probably have __always_inline instead. But having checked the generated
> >> code for __timer_restore_state, the function is definitely inlined
> >> (gcc 6.2). Happy to queue an extra patch changing that.
> >>
> >>> Further, are we guaranteed that the static branch gets compiled into
> >>> something that doesn't actually look at cpu_hwcap_keys, which is not
> >>> mapped in hyp mode?
> >>
> >> Here's the disassembly:
> >>
> >> ffff000008ad01d0 <__timer_restore_state>:
> >> ffff000008ad01d0: f9400001 ldr x1, [x0]
> >> ffff000008ad01d4: 9240bc21 and x1, x1, #0xffffffffffff
> >> ffff000008ad01d8: d503201f nop
> >> ffff000008ad01dc: d503201f nop
> >> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> >> ffff000008ad01e4: 927ef842 and x2, x2, #0xfffffffffffffffd
> >> ffff000008ad01e8: b2400042 orr x2, x2, #0x1
> >> ffff000008ad01ec: d51ce102 msr cnthctl_el2, x2
> >> ffff000008ad01f0: d2834002 mov x2, #0x1a00 // #6656
> >> ffff000008ad01f4: 8b020000 add x0, x0, x2
> >> ffff000008ad01f8: 91038002 add x2, x0, #0xe0
> >> ffff000008ad01fc: 39425443 ldrb w3, [x2,#149]
> >> ffff000008ad0200: 34000103 cbz w3, ffff000008ad0220 <__timer_restore_state+0x50>
> >> ffff000008ad0204: f945a821 ldr x1, [x1,#2896]
> >> ffff000008ad0208: d51ce061 msr cntvoff_el2, x1
> >> ffff000008ad020c: f9400441 ldr x1, [x2,#8]
> >> ffff000008ad0210: d51be341 msr cntv_cval_el0, x1
> >> ffff000008ad0214: d5033fdf isb
> >> ffff000008ad0218: b940e000 ldr w0, [x0,#224]
> >> ffff000008ad021c: d51be320 msr cntv_ctl_el0, x0
> >> ffff000008ad0220: d65f03c0 ret
> >>
> >> The static branch resolves as such when VHE is enabled (taken from
> >> a running model):
> >>
> >> ffff000008ad01d0 <__timer_restore_state>:
> >> ffff000008ad01d0: f9400001 ldr x1, [x0]
> >> ffff000008ad01d4: 9240bc21 nop
> >> ffff000008ad01d8: d503201f nop
> >> ffff000008ad01dc: d503201f b ffff000008ad01f0
> >> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> >> [...]
> >>
> >> That's using a toolchain that supports the "asm goto" feature that is used
> >> to implement static branches (and that's guaranteed not to generate any
> >> memory access other than the code patching itself).
> >>
> >> Now, with a toolchain that doesn't support this, such as gcc 4.8:
> >
> > Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
> > because I cannot seem to reproduce this at the moment.
> >
> >>
> >> ffff000008aa5168 <__timer_restore_state>:
> >> ffff000008aa5168: f9400001 ldr x1, [x0]
> >> ffff000008aa516c: 9240bc21 and x1, x1, #0xffffffffffff
> >> ffff000008aa5170: d503201f nop
> >> ffff000008aa5174: f00038a2 adrp x2, ffff0000091bc000 <reset_devices>
> >> ffff000008aa5178: 9113e042 add x2, x2, #0x4f8
> >> ffff000008aa517c: b9402c42 ldr w2, [x2,#44]
> >> ffff000008aa5180: 6b1f005f cmp w2, wzr
> >> ffff000008aa5184: 540000ac b.gt ffff000008aa5198 <__timer_restore_state+0x30>
> >> ffff000008aa5188: d53ce102 mrs x2, cnthctl_el2
> >> ffff000008aa518c: 927ef842 and x2, x2, #0xfffffffffffffffd
> >> ffff000008aa5190: b2400042 orr x2, x2, #0x1
> >> ffff000008aa5194: d51ce102 msr cnthctl_el2, x2
> >> ffff000008aa5198: 91400402 add x2, x0, #0x1, lsl #12
> >> ffff000008aa519c: 396dd443 ldrb w3, [x2,#2933]
> >> ffff000008aa51a0: 34000103 cbz w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> >> ffff000008aa51a4: f945a821 ldr x1, [x1,#2896]
> >> ffff000008aa51a8: d51ce061 msr cntvoff_el2, x1
> >> ffff000008aa51ac: f9457441 ldr x1, [x2,#2792]
> >> ffff000008aa51b0: d51be341 msr cntv_cval_el0, x1
> >> ffff000008aa51b4: d5033fdf isb
> >> ffff000008aa51b8: b95ae000 ldr w0, [x0,#6880]
> >> ffff000008aa51bc: d51be320 msr cntv_ctl_el0, x0
> >> ffff000008aa51c0: d65f03c0 ret
> >>
> >> This is now controlled by some data located at FFFF0000091BC524:
> >>
> >> maz at approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
> >>
> >> vmlinux: file format elf64-littleaarch64
> >>
> >> Sections:
> >> Idx Name Size VMA LMA File off Algn
> >> [...]
> >> 23 .bss 000da348 ffff0000091b8000 ffff0000091b8000 01147a00 2**12
> >> ALLOC
> >>
> >> That's the BSS, which we do map in HYP (fairly recent).
> >
> > But we cannot map the BSS at the same address though, right? So
> > wouldn't this actually fail?
>
> We map it at the same relative offset, and use adrp to get the base
> address (PC relative). So whatever context we're in, we should be OK.
>
Ah, right, I'll be shutting up now then.
(Will make a note to go back and carefully examine exactly why this
failed for me.)
Thanks,
-Christoffer
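
For completeness: the "asm goto" construct mentioned above is what
guarantees the static branch generates no memory access. A simplified
sketch of the arm64 jump-label helper (close to, but not exactly,
arch/arm64/include/asm/jump_label.h):

    static __always_inline bool arch_static_branch(struct static_key *key)
    {
    	/* The nop at label 1 gets patched into a branch at runtime;
    	 * the key address is only recorded in the __jump_table section
    	 * for the patching code, so no load is ever emitted here. */
    	asm goto("1:	nop\n\t"
    		 ".pushsection __jump_table, \"aw\"\n\t"
    		 ".align 3\n\t"
    		 ".quad 1b, %l[l_yes], %c0\n\t"
    		 ".popsection\n\t"
    		 : : "i"(key) : : l_yes);

    	return false;
    l_yes:
    	return true;
    }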
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 13:30 ` Marc Zyngier
2017-01-13 13:46 ` Christoffer Dall
@ 2017-01-13 14:42 ` Mark Rutland
2017-01-13 14:55 ` Mark Rutland
2017-01-13 14:56 ` Suzuki K Poulose
2 siblings, 1 reply; 19+ messages in thread
From: Mark Rutland @ 2017-01-13 14:42 UTC (permalink / raw)
To: linux-arm-kernel
Hi,
On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>
> On 13/01/17 12:36, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> >> +static inline bool has_vhe(void)
> >> +{
> >> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >> + return true;
> >> +
> >> + return false;
> >> +}
> >> +
> >
> > I was experimenting with using has_vhe for some of the optimization code
> > I was writing, and I saw a hyp crash as a result. That made me wonder
> > if this is really safe in Hyp mode?
> >
> > Specifically, there is no guarantee that this will actually be inlined
> > in the caller, right? At least that's what I can gather from trying to
> > understand the semantics of the inline keyword in the GCC manual.
>
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.
> > Further, are we guaranteed that the static branch gets compiled into
> > something that doesn't actually look at cpu_hwcap_keys, which is not
> > mapped in hyp mode?
If I disable CONFIG_JUMP_LABEL (which lives under "General setup", with
the title "Optimize very unlikely/likely branches"), I see adrp; add;
ldr sequences accessing cpu_hwcap_keys when using cpus_have_const_cap()
in hyp code, even with the patch below.
Do we have the whole kernel image mapped around hyp, so that this would
work by relative offset? Do we have a guarantee that adrp+add is used?
Thanks,
Mark.
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
> }
>
> /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)
> {
> - if (num >= ARM64_NCAPS)
> - return false;
> + BUILD_BUG_ON(!__builtin_constant_p(num));
> + BUILD_BUG_ON(num >= ARM64_NCAPS);
> +
> return static_branch_unlikely(&cpu_hwcap_keys[num]);
> }
>
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
> return read_sysreg(CurrentEL) == CurrentEL_EL2;
> }
>
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
> {
> if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> return true;
>
>
> But that's probably another patch or two. Thoughts?
>
> M.
> --
> Jazz is not dead. It just smells funny...
> _______________________________________________
> kvmarm mailing list
> kvmarm at lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
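
For context, when "asm goto" is unavailable (or CONFIG_JUMP_LABEL is
disabled), static_branch_unlikely() degenerates to an ordinary load of
the key, roughly like this simplified sketch (not the exact
include/linux/jump_label.h code; the function name is illustrative):

    /* Fallback sketch: the key is read from memory, which is why the
     * adrp; add; ldr sequence appears, and why the key must be mapped
     * wherever the caller runs. */
    static __always_inline bool static_branch_unlikely_fallback(struct static_key *key)
    {
    	return unlikely(atomic_read(&key->enabled) > 0);
    }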
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 14:42 ` Mark Rutland
@ 2017-01-13 14:55 ` Mark Rutland
0 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2017-01-13 14:55 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jan 13, 2017 at 02:42:04PM +0000, Mark Rutland wrote:
> On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> > On 13/01/17 12:36, Christoffer Dall wrote:
> > > Further, are we guaranteed that the static branch gets compiled into
> > > something that doesn't actually look at cpu_hwcap_keys, which is not
> > > mapped in hyp mode?
>
> If I disable CONFIG_JUMP_LABEL (which lives under "General setup", with
> the title "Optimize very unlikely/likely branches"), I see adrp; add;
> ldr sequences accessing cpu_hwcap_keys when using cpus_have_const_cap()
> in hyp code, even with the patch below.
Looking again, that's the same sequence Marc mentioned, as it falls in
the BSS. I just happened to be looking at the unlinked .o file rather
than the vmlinux.
Sorry for the noise.
Thanks,
Mark.
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 13:30 ` Marc Zyngier
2017-01-13 13:46 ` Christoffer Dall
2017-01-13 14:42 ` Mark Rutland
@ 2017-01-13 14:56 ` Suzuki K Poulose
2017-01-13 19:04 ` Jintack Lim
2017-01-16 13:30 ` Marc Zyngier
2 siblings, 2 replies; 19+ messages in thread
From: Suzuki K Poulose @ 2017-01-13 14:56 UTC (permalink / raw)
To: linux-arm-kernel
On 13/01/17 13:30, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>
> On 13/01/17 12:36, Christoffer Dall wrote:
>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>
...
>>> /*
>>> * __boot_cpu_mode records what mode CPUs were booted in.
>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>> return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>> }
>>>
>>> +static inline bool has_vhe(void)
>>> +{
>>> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>> + return true;
>>> +
>>> + return false;
>>> +}
>>> +
>>
>> I was experimenting with using has_vhe for some of the optimization code
>> I was writing, and I saw a hyp crash as a result. That made me wonder
>> if this is really safe in Hyp mode?
>>
>> Specifically, there is no guarantee that this will actually be inlined
>> in the caller, right? At least that's what I can gather from trying to
>> understand the semantics of the inline keyword in the GCC manual.
>
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.
>
>> Further, are we guaranteed that the static branch gets compiled into
>> something that doesn't actually look at cpu_hwcap_keys, which is not
>> mapped in hyp mode?
>
> Here's the disassembly:
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0: f9400001 ldr x1, [x0]
> ffff000008ad01d4: 9240bc21 and x1, x1, #0xffffffffffff
> ffff000008ad01d8: d503201f nop
> ffff000008ad01dc: d503201f nop
> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> ffff000008ad01e4: 927ef842 and x2, x2, #0xfffffffffffffffd
> ffff000008ad01e8: b2400042 orr x2, x2, #0x1
> ffff000008ad01ec: d51ce102 msr cnthctl_el2, x2
> ffff000008ad01f0: d2834002 mov x2, #0x1a00 // #6656
> ffff000008ad01f4: 8b020000 add x0, x0, x2
> ffff000008ad01f8: 91038002 add x2, x0, #0xe0
> ffff000008ad01fc: 39425443 ldrb w3, [x2,#149]
> ffff000008ad0200: 34000103 cbz w3, ffff000008ad0220 <__timer_restore_state+0x50>
> ffff000008ad0204: f945a821 ldr x1, [x1,#2896]
> ffff000008ad0208: d51ce061 msr cntvoff_el2, x1
> ffff000008ad020c: f9400441 ldr x1, [x2,#8]
> ffff000008ad0210: d51be341 msr cntv_cval_el0, x1
> ffff000008ad0214: d5033fdf isb
> ffff000008ad0218: b940e000 ldr w0, [x0,#224]
> ffff000008ad021c: d51be320 msr cntv_ctl_el0, x0
> ffff000008ad0220: d65f03c0 ret
>
> The static branch resolves as such when VHE is enabled (taken from
> a running model):
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0: f9400001 ldr x1, [x0]
> ffff000008ad01d4: 9240bc21 nop
> ffff000008ad01d8: d503201f nop
> ffff000008ad01dc: d503201f b ffff000008ad01f0
> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
> [...]
>
> That's using a toolchain that supports the "asm goto" feature that is used
> to implement static branches (and that's guaranteed not to generate any
> memory access other than the code patching itself).
>
> Now, with a toolchain that doesn't support this, such as gcc 4.8:
>
> ffff000008aa5168 <__timer_restore_state>:
> ffff000008aa5168: f9400001 ldr x1, [x0]
> ffff000008aa516c: 9240bc21 and x1, x1, #0xffffffffffff
> ffff000008aa5170: d503201f nop
> ffff000008aa5174: f00038a2 adrp x2, ffff0000091bc000 <reset_devices>
> ffff000008aa5178: 9113e042 add x2, x2, #0x4f8
> ffff000008aa517c: b9402c42 ldr w2, [x2,#44]
> ffff000008aa5180: 6b1f005f cmp w2, wzr
> ffff000008aa5184: 540000ac b.gt ffff000008aa5198 <__timer_restore_state+0x30>
> ffff000008aa5188: d53ce102 mrs x2, cnthctl_el2
> ffff000008aa518c: 927ef842 and x2, x2, #0xfffffffffffffffd
> ffff000008aa5190: b2400042 orr x2, x2, #0x1
> ffff000008aa5194: d51ce102 msr cnthctl_el2, x2
> ffff000008aa5198: 91400402 add x2, x0, #0x1, lsl #12
> ffff000008aa519c: 396dd443 ldrb w3, [x2,#2933]
> ffff000008aa51a0: 34000103 cbz w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> ffff000008aa51a4: f945a821 ldr x1, [x1,#2896]
> ffff000008aa51a8: d51ce061 msr cntvoff_el2, x1
> ffff000008aa51ac: f9457441 ldr x1, [x2,#2792]
> ffff000008aa51b0: d51be341 msr cntv_cval_el0, x1
> ffff000008aa51b4: d5033fdf isb
> ffff000008aa51b8: b95ae000 ldr w0, [x0,#6880]
> ffff000008aa51bc: d51be320 msr cntv_ctl_el0, x0
> ffff000008aa51c0: d65f03c0 ret
>
> This is now controlled by some data located at FFFF0000091BC524:
>
> maz at approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>
> vmlinux: file format elf64-littleaarch64
>
> Sections:
> Idx Name Size VMA LMA File off Algn
> [...]
> 23 .bss 000da348 ffff0000091b8000 ffff0000091b8000 01147a00 2**12
> ALLOC
>
> That's the BSS, which we do map in HYP (fairly recent).
>
> But maybe we should have some stronger guarantees that we'll
> always get things inlined, and that the "const" side is enforced:
Agreed.
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
> }
>
> /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)
I think we should have the above change and make it inline always.
> {
> - if (num >= ARM64_NCAPS)
> - return false;
> + BUILD_BUG_ON(!__builtin_constant_p(num));
This is not needed, as the compilation would fail if num is not a constant with
static key code.
> + BUILD_BUG_ON(num >= ARM64_NCAPS);
> +
Also, I think it would be good to return false for caps > the ARM64_NCAPS, in sync
with the non-const version.
> return static_branch_unlikely(&cpu_hwcap_keys[num]);
> }
>
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
> return read_sysreg(CurrentEL) == CurrentEL_EL2;
> }
>
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
> {
> if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> return true;
>
>
> But that's probably another patch or two. Thoughts?
With the above changes, please feel free to add :
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 14:56 ` Suzuki K Poulose
@ 2017-01-13 19:04 ` Jintack Lim
2017-01-16 13:30 ` Marc Zyngier
1 sibling, 0 replies; 19+ messages in thread
From: Jintack Lim @ 2017-01-13 19:04 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jan 13, 2017 at 9:56 AM, Suzuki K Poulose
<Suzuki.Poulose@arm.com> wrote:
> On 13/01/17 13:30, Marc Zyngier wrote:
>>
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>
>> On 13/01/17 12:36, Christoffer Dall wrote:
>>>
>>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>>>
>>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>>
> ...
>
>
>>>> /*
>>>> * __boot_cpu_mode records what mode CPUs were booted in.
>>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>> return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>>> }
>>>>
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> + if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>>> + return true;
>>>> +
>>>> + return false;
>>>> +}
>>>> +
>>>
>>>
>>> I was experimenting with using has_vhe for some of the optimization code
>>> I was writing, and I saw a hyp crash as a result. That made me wonder
>>> if this is really safe in Hyp mode?
>>>
>>> Specifically, there is no guarantee that this will actually be inlined
>>> in the caller, right? At least that's what I can gather from trying to
>>> understand the semantics of the inline keyword in the GCC manual.
>>
>>
>> Indeed, there is no strict guarantee that this is enforced. We should
>> probably have __always_inline instead. But having checked the generated
>> code for __timer_restore_state, the function is definitely inlined
>> (gcc 6.2). Happy to queue an extra patch changing that.
>>
>>> Further, are we guaranteed that the static branch gets compiled into
>>> something that doesn't actually look at cpu_hwcap_keys, which is not
>>> mapped in hyp mode?
>>
>>
>> Here's the disassembly:
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0: f9400001 ldr x1, [x0]
>> ffff000008ad01d4: 9240bc21 and x1, x1, #0xffffffffffff
>> ffff000008ad01d8: d503201f nop
>> ffff000008ad01dc: d503201f nop
>> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
>> ffff000008ad01e4: 927ef842 and x2, x2, #0xfffffffffffffffd
>> ffff000008ad01e8: b2400042 orr x2, x2, #0x1
>> ffff000008ad01ec: d51ce102 msr cnthctl_el2, x2
>> ffff000008ad01f0: d2834002 mov x2, #0x1a00 // #6656
>> ffff000008ad01f4: 8b020000 add x0, x0, x2
>> ffff000008ad01f8: 91038002 add x2, x0, #0xe0
>> ffff000008ad01fc: 39425443 ldrb w3, [x2,#149]
>> ffff000008ad0200: 34000103 cbz w3, ffff000008ad0220 <__timer_restore_state+0x50>
>> ffff000008ad0204: f945a821 ldr x1, [x1,#2896]
>> ffff000008ad0208: d51ce061 msr cntvoff_el2, x1
>> ffff000008ad020c: f9400441 ldr x1, [x2,#8]
>> ffff000008ad0210: d51be341 msr cntv_cval_el0, x1
>> ffff000008ad0214: d5033fdf isb
>> ffff000008ad0218: b940e000 ldr w0, [x0,#224]
>> ffff000008ad021c: d51be320 msr cntv_ctl_el0, x0
>> ffff000008ad0220: d65f03c0 ret
>>
>> The static branch resolves as such when VHE is enabled (taken from
>> a running model):
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0: f9400001 ldr x1, [x0]
>> ffff000008ad01d4: 9240bc21 nop
>> ffff000008ad01d8: d503201f nop
>> ffff000008ad01dc: d503201f b ffff000008ad01f0
>> ffff000008ad01e0: d53ce102 mrs x2, cnthctl_el2
>> [...]
>>
>> That's using a toolchain that supports the "asm goto" feature that is used
>> to implement static branches (and that's guaranteed not to generate any
>> memory access other than the code patching itself).
>>
>> Now, with a toolchain that doesn't support this, such as gcc 4.8:
>>
>> ffff000008aa5168 <__timer_restore_state>:
>> ffff000008aa5168: f9400001 ldr x1, [x0]
>> ffff000008aa516c: 9240bc21 and x1, x1, #0xffffffffffff
>> ffff000008aa5170: d503201f nop
>> ffff000008aa5174: f00038a2 adrp x2, ffff0000091bc000 <reset_devices>
>> ffff000008aa5178: 9113e042 add x2, x2, #0x4f8
>> ffff000008aa517c: b9402c42 ldr w2, [x2,#44]
>> ffff000008aa5180: 6b1f005f cmp w2, wzr
>> ffff000008aa5184: 540000ac b.gt ffff000008aa5198 <__timer_restore_state+0x30>
>> ffff000008aa5188: d53ce102 mrs x2, cnthctl_el2
>> ffff000008aa518c: 927ef842 and x2, x2, #0xfffffffffffffffd
>> ffff000008aa5190: b2400042 orr x2, x2, #0x1
>> ffff000008aa5194: d51ce102 msr cnthctl_el2, x2
>> ffff000008aa5198: 91400402 add x2, x0, #0x1, lsl #12
>> ffff000008aa519c: 396dd443 ldrb w3, [x2,#2933]
>> ffff000008aa51a0: 34000103 cbz w3, ffff000008aa51c0 <__timer_restore_state+0x58>
>> ffff000008aa51a4: f945a821 ldr x1, [x1,#2896]
>> ffff000008aa51a8: d51ce061 msr cntvoff_el2, x1
>> ffff000008aa51ac: f9457441 ldr x1, [x2,#2792]
>> ffff000008aa51b0: d51be341 msr cntv_cval_el0, x1
>> ffff000008aa51b4: d5033fdf isb
>> ffff000008aa51b8: b95ae000 ldr w0, [x0,#6880]
>> ffff000008aa51bc: d51be320 msr cntv_ctl_el0, x0
>> ffff000008aa51c0: d65f03c0 ret
>>
>> This is now controlled by some data located at FFFF0000091BC524:
>>
>> maz at approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>>
>> vmlinux: file format elf64-littleaarch64
>>
>> Sections:
>> Idx Name Size VMA LMA File off Algn
>> [...]
>> 23 .bss 000da348 ffff0000091b8000 ffff0000091b8000 01147a00 2**12
>> ALLOC
>>
>> That's the BSS, which we do map in HYP (fairly recent).
>>
>> But maybe we should have some stronger guarantees that we'll
>> always get things inlined, and that the "const" side is enforced:
>
>
> Agreed.
>
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index b4989df..4710469 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>> }
>>
>> /* System capability check for constant caps */
>> -static inline bool cpus_have_const_cap(int num)
>> +static __always_inline bool cpus_have_const_cap(int num)
>
>
> I think we should have the above change and make it inline always.
>
>> {
>> - if (num >= ARM64_NCAPS)
>> - return false;
>> + BUILD_BUG_ON(!__builtin_constant_p(num));
>
>
> This is not needed, as the compilation would fail if num is not a constant
> with static key code.
>
>> + BUILD_BUG_ON(num >= ARM64_NCAPS);
>> +
>
>
> Also, I think it would be good to return false for caps > the ARM64_NCAPS,
> in sync with the non-const version.
>
>
>> return static_branch_unlikely(&cpu_hwcap_keys[num]);
>> }
>>
>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
>> index 439f6b5..1257701 100644
>> --- a/arch/arm64/include/asm/virt.h
>> +++ b/arch/arm64/include/asm/virt.h
>> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
>> return read_sysreg(CurrentEL) == CurrentEL_EL2;
>> }
>>
>> -static inline bool has_vhe(void)
>> +static __always_inline bool has_vhe(void)
>> {
>> if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>> return true;
>>
I'm fine with the above change.
Thanks,
Jintack
>>
>> But that's probably another patch or two. Thoughts?
>
>
> With the above changes, please feel free to add :
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>
>
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-13 14:56 ` Suzuki K Poulose
2017-01-13 19:04 ` Jintack Lim
@ 2017-01-16 13:30 ` Marc Zyngier
2017-01-16 14:11 ` Marc Zyngier
1 sibling, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2017-01-16 13:30 UTC (permalink / raw)
To: linux-arm-kernel
On 13/01/17 14:56, Suzuki K Poulose wrote:
> On 13/01/17 13:30, Marc Zyngier wrote:
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>
[...]
>> But maybe we should have some stronger guarantees that we'll
>> always get things inlined, and that the "const" side is enforced:
>
> Agreed.
>
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index b4989df..4710469 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>> }
>>
>> /* System capability check for constant caps */
>> -static inline bool cpus_have_const_cap(int num)
>> +static __always_inline bool cpus_have_const_cap(int num)
>
> I think we should have the above change and make it inline always.
>
>> {
>> - if (num >= ARM64_NCAPS)
>> - return false;
>> + BUILD_BUG_ON(!__builtin_constant_p(num));
>
> This is not needed, as the compilation would fail if num is not a constant with
> static key code.
>
>> + BUILD_BUG_ON(num >= ARM64_NCAPS);
>> +
>
> Also, I think it would be good to return false for caps > the ARM64_NCAPS, in sync
> with the non-const version.
But what's the semantic? It means we're accessing a capability that
doesn't exist, which looks like a major bug in my book. Is there any
valid use case for this?
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-16 13:30 ` Marc Zyngier
@ 2017-01-16 14:11 ` Marc Zyngier
2017-01-16 14:19 ` Suzuki K Poulose
0 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2017-01-16 14:11 UTC (permalink / raw)
To: linux-arm-kernel
On 16/01/17 13:30, Marc Zyngier wrote:
> On 13/01/17 14:56, Suzuki K Poulose wrote:
>> On 13/01/17 13:30, Marc Zyngier wrote:
>>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>>
>
> [...]
>
>>> But maybe we should have some stronger guarantees that we'll
>>> always get things inlined, and that the "const" side is enforced:
>>
>> Agreed.
>>
>>>
>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>> index b4989df..4710469 100644
>>> --- a/arch/arm64/include/asm/cpufeature.h
>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>> }
>>>
>>> /* System capability check for constant caps */
>>> -static inline bool cpus_have_const_cap(int num)
>>> +static __always_inline bool cpus_have_const_cap(int num)
>>
>> I think we should have the above change and make it inline always.
>>
>>> {
>>> - if (num >= ARM64_NCAPS)
>>> - return false;
>>> + BUILD_BUG_ON(!__builtin_constant_p(num));
>>
>> This is not needed, as the compilation would fail if num is not a constant with
>> static key code.
I also just checked this, and it doesn't fail if the compiler doesn't
directly support jump labels (we then fall back to the static key being
a standard memory access).
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/3] KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems
2017-01-16 14:11 ` Marc Zyngier
@ 2017-01-16 14:19 ` Suzuki K Poulose
0 siblings, 0 replies; 19+ messages in thread
From: Suzuki K Poulose @ 2017-01-16 14:19 UTC (permalink / raw)
To: linux-arm-kernel
On 16/01/17 14:11, Marc Zyngier wrote:
> On 16/01/17 13:30, Marc Zyngier wrote:
>> On 13/01/17 14:56, Suzuki K Poulose wrote:
>>> On 13/01/17 13:30, Marc Zyngier wrote:
>>>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>>>
>>
>> [...]
>>
>>>> But maybe we should have some stronger guarantees that we'll
>>>> always get things inlined, and that the "const" side is enforced:
>>>
>>> Agreed.
>>>
>>>>
>>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>>> index b4989df..4710469 100644
>>>> --- a/arch/arm64/include/asm/cpufeature.h
>>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>>> }
>>>>
>>>> /* System capability check for constant caps */
>>>> -static inline bool cpus_have_const_cap(int num)
>>>> +static __always_inline bool cpus_have_const_cap(int num)
>>>
>>> I think we should have the above change and make it inline always.
>>>
>>>> {
>>>> - if (num >= ARM64_NCAPS)
>>>> - return false;
>>>> + BUILD_BUG_ON(!__builtin_constant_p(num));
>>>
>>> This is not needed, as the compilation would fail if num is not a constant with
>>> static key code.
>
> I also just checked this, and it doesn't fail if the compiler doesn't
> directly support jump labels (we then fall back to the static key being
> a standard memory access).
Ah, I missed that part of the story. Sorry about that. Please go ahead with the
changes. I had a similar check in my first version, and it was dropped later
after a similar review comment. We hadn't considered older toolchains.
Suzuki
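
Putting the thread's conclusions together, the hardened helper would
look roughly like this sketch (illustrative only, not the patch that
was eventually queued):

    /* Always inlined; the index must be a compile-time constant even on
     * toolchains without "asm goto", and out-of-range caps return false
     * to stay in sync with the non-const version. */
    static __always_inline bool cpus_have_const_cap(int num)
    {
    	BUILD_BUG_ON(!__builtin_constant_p(num));
    	if (num >= ARM64_NCAPS)
    		return false;
    	return static_branch_unlikely(&cpu_hwcap_keys[num]);
    }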
^ permalink raw reply [flat|nested] 19+ messages in thread