From: Gavin Shan <gshan@redhat.com>
To: Marc Zyngier <maz@kernel.org>, pbonzini@redhat.com
Cc: Mark Rutland <mark.rutland@arm.com>,
aarcange@redhat.com, drjones@redhat.com, suzuki.poulose@arm.com,
catalin.marinas@arm.com, linux-kernel@vger.kernel.org,
eric.auger@redhat.com, james.morse@arm.com, shan.gavin@gmail.com,
will@kernel.org, kvmarm@lists.cs.columbia.edu,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH RFCv2 7/9] kvm/arm64: Support async page fault
Date: Thu, 28 May 2020 16:32:09 +1000 [thread overview]
Message-ID: <924ee966-7412-f9ff-c2b0-598e4abbb05c@redhat.com> (raw)
In-Reply-To: <28c74819f42306e66370ddaf88f16918@kernel.org>
Hi Marc,
On 5/27/20 5:37 PM, Marc Zyngier wrote:
> On 2020-05-27 05:05, Gavin Shan wrote:
[...]
>>>> +struct kvm_vcpu_pv_apf_data {
>>>> + __u32 reason;
>>>> + __u8 pad[60];
>>>> + __u32 enabled;
>>>> +};
>>>
>>> What's all the padding for?
>>>
>>
>> The padding is to ensure that @reason and @enabled land in different
>> cache lines. @reason is shared by host and guest, while @enabled is
>> almost exclusively owned by the guest.
>
> So you are assuming that a cache line is at most 64 bytes.
> It is actually implementation defined, and you can probe for it
> by looking at the CTR_EL0 register. There are implementations
> ranging from 32 to 256 bytes in the wild, and let's not mention
> broken big-little implementations here.
>
> [...]
>
Ok, thanks for the comments and hints.
>>>> +bool kvm_arch_can_inject_async_page_not_present(struct kvm_vcpu *vcpu)
>>>> +{
>>>> + u64 vbar, pc;
>>>> + u32 val;
>>>> + int ret;
>>>> +
>>>> + if (!(vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED))
>>>> + return false;
>>>> +
>>>> + if (vcpu->arch.apf.send_user_only && vcpu_mode_priv(vcpu))
>>>> + return false;
>>>> +
>>>> + /* Pending page fault, which isn't acknowledged by the guest */
>>>> + ret = kvm_async_pf_read_cache(vcpu, &val);
>>>> + if (ret || val)
>>>> + return false;
>>>> +
>>>> + /*
>>>> + * Events can't be injected through data abort because it's
>>>> + * going to clobber ELR_EL1, which might not be consumed (or
>>>> + * saved) by the guest yet.
>>>> + */
>>>> + vbar = vcpu_read_sys_reg(vcpu, VBAR_EL1);
>>>> + pc = *vcpu_pc(vcpu);
>>>> + if (pc >= vbar && pc < (vbar + vcpu->arch.apf.no_fault_inst_range))
>>>> + return false;
>>>
>>> Ah, so that's when this `no_fault_inst_range` is for.
>>>
>>> As-is this is not sufficient, and we'll need t be extremely careful
>>> here.
>>>
>>> The vectors themselves typically only have a small amount of stub code,
>>> and the bulk of the non-reentrant exception entry work happens
>>> elsewhere, in a mixture of assembly and C code that isn't even virtually
>>> contiguous with either the vectors or itself.
>>>
>>> It's possible in theory that code in modules (or perhaps in eBPF JIT'd
>>> code) that isn't safe to take a fault from, so even having a contiguous
>>> range controlled by the kernel isn't ideal.
>>>
>>> How does this work on x86?
>>>
>>
>> Yeah, here we just provide a mechanism to forbid injecting data abort. The
>> range is fed by the guest through a HVC call, so I think it's a guest-side
>> issue. You had more comments about this in PATCH[9]. I will explain a bit
>> more there.
>>
>> x86 basically relies on the EFLAGS.IF flag. The async page fault can be
>> injected if it's set; otherwise it's forbidden. That's workable because,
>> if I'm correct, exceptions are a kind of special interrupt on x86:
>>
>>         return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
>>                !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
>>                  (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
>
> I really wish this was relying on an architected exception delivery
> mechanism that can be blocked by the guest itself (PSTATE.{I,F,A}).
> Trying to guess based on the PC won't fly. But these signals are
> pretty hard to multiplex with anything else. Like any form of
> non-architected exception injection, I don't see a good path forward
> unless we start considering something like SDEI.
>
> M.
As Paolo mentioned in another reply, there are two types of notifications
sent from the host to the guest: page_not_present and page_ready. The
page_not_present notification should be delivered synchronously, while
page_ready can be delivered asynchronously. He also suggested reserving an
ESR (or DFSC) subclass for page_not_present. page_ready can then be
delivered by an interrupt, such as a PPI. x86 is changing its code to
deliver page_ready by interrupt, where it was previously done by exception.
Once we use an interrupt, instead of an exception, for page_ready, we won't
need the game of guessing the PC.
I assume you prefer to use SDEI for page_not_present, correct? In that case,
what's the current status of SDEI? I mean, has it been fully or partially
supported, or do we need to develop it from scratch? :)
Thanks,
Gavin
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel