public inbox for kvmarm@lists.cs.columbia.edu
From: Mario Smarduch <m.smarduch@samsung.com>
To: Marc Zyngier <marc.zyngier@arm.com>
Cc: "kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>
Subject: Re: Advice on HYP interface for AsyncPF
Date: Fri, 10 Apr 2015 16:45:10 -0700	[thread overview]
Message-ID: <55286086.3040007@samsung.com> (raw)
In-Reply-To: <55278FA1.8040502@arm.com>

On 04/10/2015 01:53 AM, Marc Zyngier wrote:
> On 10/04/15 03:36, Mario Smarduch wrote:
>> On 04/09/2015 12:57 AM, Marc Zyngier wrote:
>>> On Thu, 9 Apr 2015 02:46:54 +0100
>>> Mario Smarduch <m.smarduch@samsung.com> wrote:
>>>
>>> Hi Mario,
>>>
>>>> I'm working with AsyncPF, and I'm currently using a
>>>> hyp call to communicate the guest GFN for the host to inject
>>>> a virtual abort (page not available/page available).
>>>>
>>>> Currently only PSCI makes use of that interface
>>>> (handle_hvc()); can we overload the interface with additional
>>>> hyp calls, in this case to pass the guest GFN? Set arg0
>>>> to some range outside of PSCI use.
>>>
>>> I can't see a reason why we wouldn't open handle_hvc() to other
>>> paravirtualized services. But this has to be done with extreme caution:
>>>
>>> - This becomes an ABI between host and guest
>>> - We need a discovery protocol
>>> - We need to make sure other hypervisors don't reuse the same function
>>>   number for other purposes
>>>
>>> Maybe we should adopt Xen's idea of a hypervisor node in DT where we
>>> would describe the various services? How will that work with ACPI?
>>>
>>> Coming back to AsyncPF, and purely out of curiosity: why do you need a
>>> HYP entry point? From what I remember, AsyncPF works by injecting a
>>> fault in the guest when the page is found not present or made
>>> available, with the GFN being stored in a per-vcpu memory location.
>>>
>>> Am I missing something obvious? Or have I just displayed my ignorance on
>>> this subject? ;-)
>> Hi Marc,
>>
>> Or it might be me :)
>>
>> But I'm thinking the guest and host need to agree on some per-vcpu
>> guest memory for KVM to write the PV-fault type into, and for the
>> guest to read and ack it. Having the guest allocate the per-vcpu
>> PV-fault memory and inform KVM of its GPA via a hyp call is one
>> approach I was thinking of.
> 
> Ah, I see what you mean. I was only looking at the runtime aspect of
> things, and didn't consider the (all important) setup stage.
> 
>> I was looking through x86; it's based on CPUID extended with
>> PV feature support. In the guest, if the ASYNC PF feature is enabled,
>> it writes the GPA to the ASYNC PF MSR, which is resolved in KVM (x86
>> folks can correct me if I'm off here).
>>
>> I'm wondering if we could build on this concept, maybe with PV ID_*
>> registers, to discover the existence of the ASYNC PF feature?
> 
> I suppose we could do something similar with the ImpDef encoding space
> (i.e. what is trapped using HCR_EL2.TIDCP). The main issue with that is
> to be able to safely carve out a range that will never be used by any HW
> implementation, ever. I can't really see how we enforce this.

I was thinking of a virtual ID register, populated with PV
features when the vcpu is initialized. A PV guest would discover
PV features via an MMIO read of the PV ID register. This could
have its own range of features, independent of HW.

> 
> Also, it will have the exact same cost as a hypercall, so maybe it is
> a bit of a moot point. Anyway, this is "just" a matter of being able to
> describe the feature to the guest (and it seems like this is the real
> controversial aspect)...

Yes, I don't quite see the dividing line between the
hyp call and the CPU ID scheme. This needs a lot more thought.

Thanks,
  Mario


> 
> Thanks,
> 
> 	M.
> 

Thread overview: 21+ messages
2015-04-09  1:46 Advice on HYP interface for AsyncPF Mario Smarduch
2015-04-09  7:57 ` Marc Zyngier
2015-04-09 12:06   ` Andrew Jones
2015-04-09 12:48     ` Mark Rutland
2015-04-09 13:43       ` Andrew Jones
2015-04-09 14:00         ` Mark Rutland
2015-04-09 14:22           ` Andrew Jones
2015-04-09 14:37             ` Mark Rutland
2015-04-09 14:54               ` Andrew Jones
2015-04-09 15:20                 ` Mark Rutland
2015-04-09 19:01                   ` Andrew Jones
2015-04-13 10:46                     ` Mark Rutland
2015-04-13 12:52                       ` Andrew Jones
2015-04-09 14:48             ` Mark Rutland
2015-04-09 13:35     ` Christoffer Dall
2015-04-09 13:59       ` Andrew Jones
2015-04-09 14:22         ` Christoffer Dall
2015-04-09 14:42           ` Andrew Jones
2015-04-10  2:36   ` Mario Smarduch
2015-04-10  8:53     ` Marc Zyngier
2015-04-10 23:45       ` Mario Smarduch [this message]
