kvm.vger.kernel.org archive mirror
From: Andre Przywara <andre.przywara@arm.com>
To: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>,
	Eric Auger <eric.auger@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>,
	Russell King <linux@arm.linux.org.uk>,
	"open list:KERNEL VIRTUAL MA..." <kvm@vger.kernel.org>,
	open list <linux-kernel@vger.kernel.org>,
	Gleb Natapov <gleb@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	VirtualOpenSystems Technical Team <tech@virtualopensystems.com>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>
Subject: Re: [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
Date: Thu, 13 Nov 2014 15:31:00 +0000	[thread overview]
Message-ID: <5464CEB4.8010607@arm.com> (raw)
In-Reply-To: <CADDJ2=OLkE36N_QyW==qL5oWu=t4YY0BZ1ekUPv_-Y7gga7xVw@mail.gmail.com>



On 13/11/14 15:02, Nikolay Nikolaev wrote:
> On Thu, Nov 13, 2014 at 4:23 PM, Eric Auger <eric.auger@linaro.org> wrote:
>> On 11/13/2014 03:16 PM, Eric Auger wrote:
>>> On 11/13/2014 11:45 AM, Nikolay Nikolaev wrote:
>>>> On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
>>>> <christoffer.dall@linaro.org> wrote:
>>>>> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>>>>>> Hello,
>>>>>>
>>>>>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>>>>>> <christoffer.dall@linaro.org> wrote:
>>>>>>>
>>>>>>> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>>>>>>>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>>>>>>>> handle the MMIO access through any registered read/write callbacks. This
>>>>>>>> is a dependency for eventfd support (ioeventfd and irqfd).
>>>>>>>>
>>>>>>>> However, accesses to the VGIC are still left implemented independently,
>>>>>>>> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>>>>>>>>
>>>>>>>> Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
>>>>>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>>>>>>>> ---
>>>>>>>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>>>>>>>  virt/kvm/arm/vgic.c |  5 ++++-
>>>>>>>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>>>>>> index 4cb5a93..1d17831 100644
>>>>>>>> --- a/arch/arm/kvm/mmio.c
>>>>>>>> +++ b/arch/arm/kvm/mmio.c
>>>>>>>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>>>>       return 0;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +/**
>>>>>>>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>>>>>>>> + * @vcpu:    pointer to the vcpu performing the access
>>>>>>>> + * @run:     pointer to the kvm_run structure
>>>>>>>> + * @mmio:    pointer to the data describing the access
>>>>>>>> + *
>>>>>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>>>>>> + * and false if it needs to be emulated in user space.
>>>>>>>> + */
>>>>>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>> +             struct kvm_exit_mmio *mmio)
>>>>>>>> +{
>>>>>>>> +     int ret;
>>>>>>>> +     if (mmio->is_write) {
>>>>>>>> +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>>>> +                             mmio->len, &mmio->data);
>>>>>>>> +
>>>>>>>> +     } else {
>>>>>>>> +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>>>> +                             mmio->len, &mmio->data);
>>>>>>>> +     }
>>>>>>>> +     if (!ret) {
>>>>>>>> +             kvm_prepare_mmio(run, mmio);
>>>>>>>> +             kvm_handle_mmio_return(vcpu, run);
>>>>>>>> +     }
>>>>>>>> +
>>>>>>>> +     return !ret;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>>                phys_addr_t fault_ipa)
>>>>>>>>  {
>>>>>>>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>>>       if (vgic_handle_mmio(vcpu, run, &mmio))
>>>>>>>>               return 1;
>>>>>>>>
>>>>>>>> +     if (handle_kernel_mmio(vcpu, run, &mmio))
>>>>>>>> +             return 1;
>>>>>>>> +
>>>>>>
>>>>>>
>>>>>> We're reconsidering ioeventfds patchseries and we tried to evaluate
>>>>>> what you suggested here.
>>>>>>
>>>>>>>
>>>>>>> this special-casing of the vgic is now really terrible.  Is there
>>>>>>> anything holding you back from doing the necessary restructure of the
>>>>>>> kvm_bus_io_*() API instead?
>>>>>>
>>>>>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>>>>>> it), but it is not directly related to these patches.
>>>>>> Of course it can be justified if we do it in the context of removing
>>>>>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>>>>>
>>>>>>>
>>>>>>> That would allow us to get rid of the ugly
>>>>>>> "Fix it!" in the vgic driver as well.
>>>>>>
>>>>>> Going through the vgic_handle_mmio we see that it will require large
>>>>>> refactoring:
>>>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>>>> registered as a separate device
>> Re-correcting Andre's address, sorry:
>> Hi Nikolay, Andre,
>>
>> What mandates registering 15 devices? Isn't it possible to register a
>> single kvm_io_device covering the whole distributor range [base, base +
>> KVM_VGIC_V2_DIST_SIZE] (current code) and, in the associated
>> kvm_io_device_ops read/write, locate the addressed range and do the same
>> as what is done in the current vgic_handle_mmio? Isn't it done that way for
> 
> Well, then we'll actually get slower mmio processing. Instead of calling
> vgic_handle_mmio in io_mem_abort, we'll be calling kvm_io_bus_write.
> This just adds another level of translation (i.e. finding the kvm_io_device)
> and the underlying vgic code will remain almost the same.

Agreed. That was one possibility I came across as well, but I think it
defeats the purpose of the rework, which is mostly to get rid of the
GIC's private MMIO dispatching code, right?

But honestly I would happily sacrifice "performance" for easier VGIC
code - especially if one thinks about security for instance. Though I
think that another memory reference doesn't really matter in this
context ;-)

>> the ioapic? what do I miss?
> I looked quickly at the ioapic code, and if I get it right there are no "ranges"
> like what we have with the GIC. They have this regselect/regwindow concept
> and they seem to have far fewer "registers" to handle. The GIC seems a lot more
> complex in terms of its MMIO interface.

Right, that was my impression, too. IOAPIC isn't really comparable to
the GIC in this respect. That's why I was shying away from this rework,
since I thought that the kvm_io_bus API wasn't really meant for such
beasts as the GIC.

Cheers,
Andre.

> 
> regards,
> Nikolay Nikolaev
> 
>>
>> Thanks
>>
>> Best Regards
>>
>> Eric
>>>>>>  - the handler of each range should be split into read and write
>>>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>>>>>
>>>>>> To sum up - if we do this refactoring of vgic's MMIO handling, plus
>>>>>> the kvm_io_bus_ API getting a 'vcpu' argument, we'll get much cleaner
>>>>>> vgic code and, as a bonus, ioeventfd capabilities.
>>>>>>
>>>>>> We have 3 questions:
>>>>>>  - is the kvm_io_bus_ API gaining a 'vcpu' argument acceptable to the
>>>>>> other architectures too?
>>>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>>> touches a lot of code)?
>>>>>>  - is there a way that ioeventfd can be accepted leaving vgic.c in its
>>>>>> current state?
>>>>>>
>>>>> Not sure how the latter question is relevant to this, but check with
>>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>>> only sane thing was to remove that comment for the gic.
>>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>>> anything specific?
>>>>>
>>>>> I don't recall the details of what you were trying to accomplish here
>>>>> (it's been 8 months or so), but surely the vgic handling code should
>>>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>>>> suggested), unless you come back and tell me that that would involve a
>>>>> complete rewrite of the vgic code.
>>>> I'm experimenting now - it's not exactly a rewrite of the whole vgic code,
>>>> but it will touch a lot of it - all MMIO access handlers and the
>>>> supporting functions.
>>>> We're ready to spend the effort. My question is - is this acceptable?
>>>>
>>>> regards,
>>>> Nikolay Nikolaev
>>>> Virtual Open Systems
>>>>>
>>>>> -Christoffer
>>>> _______________________________________________
>>>> kvmarm mailing list
>>>> kvmarm@lists.cs.columbia.edu
>>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
>>>>
>>>
>>
> 

Thread overview: 21+ messages
     [not found] <1394726249-1547-1-git-send-email-a.motakis@virtualopensystems.com>
2014-03-13 15:57 ` [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus Antonios Motakis
2014-03-28 19:09   ` Christoffer Dall
2014-03-29 17:34     ` Paolo Bonzini
2014-11-10 15:09     ` Nikolay Nikolaev
2014-11-10 16:27       ` Christoffer Dall
2014-11-13 10:45         ` Nikolay Nikolaev
2014-11-13 11:20           ` Christoffer Dall
2014-11-13 11:20             ` Christoffer Dall
2014-11-13 11:37             ` Marc Zyngier
2014-11-13 11:52               ` Andre Przywara
2014-11-13 12:29                 ` Nikolay Nikolaev
2014-11-13 12:52                   ` Andre Przywara
2014-11-13 14:16           ` Eric Auger
2014-11-13 14:23             ` Eric Auger
2014-11-13 15:02               ` Nikolay Nikolaev
2014-11-13 15:13                 ` Christoffer Dall
2014-11-13 15:31                 ` Andre Przywara [this message]
2014-11-13 16:07                   ` Eric Auger
2014-03-13 15:57 ` [RFC PATCH 2/4] KVM: irqfd should depend on CONFIG_HAVE_KVM_IRQ_ROUTING Antonios Motakis
2014-03-13 15:57 ` [RFC PATCH 3/4] ARM: KVM: enable linking against eventfd Antonios Motakis
2014-03-13 15:57 ` [RFC PATCH 4/4] ARM: KVM: enable KVM_CAP_IOEVENTFD Antonios Motakis
