From: andre.przywara@arm.com (Andre Przywara)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
Date: Thu, 13 Nov 2014 11:52:58 +0000
Message-ID: <54649B9A.20605@arm.com>
In-Reply-To: <546497E4.8040100@arm.com>
Hi Nikolay,
On 13/11/14 11:37, Marc Zyngier wrote:
> [fixing Andre's email address]
>
> On 13/11/14 11:20, Christoffer Dall wrote:
>> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
>>
>> [...]
>>
>>>>>
>>>>> Going through the vgic_handle_mmio we see that it will require large
>>>>> refactoring:
>>>>> - there are 15 MMIO ranges for the vgic now - each should be
>>>>> registered as a separate device
>>>>> - the handler of each range should be split into read and write
>>>>> - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>>>>
>>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>>> kvm_io_bus_ API getting a 'vcpu' argument we'll get a 'much' cleaner
>>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>>
>>>>> We have 3 questions:
>>>>> - is the kvm_io_bus_ API getting a 'vcpu' argument acceptable for the other
>>>>> architectures too?
>>>>> - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>> touches a lot of code)?
>>>>> - is there a way to get ioeventfd accepted while leaving vgic.c in its
>>>>> current state?
>>>>>
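Just for reference on the first question: the change being discussed would
roughly turn the kvm_io_bus_ entry points into something like the following.
This is only an untested sketch of the idea, not actual patches, and the
final signatures may well end up looking different:

int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
		    gpa_t addr, int len, void *val);
int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
		     gpa_t addr, int len, const void *val);

(Today both take a struct kvm * instead of the vcpu, so every caller and
the kvm_io_device_ops read/write callbacks would need touching as well.)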
>>>> Not sure how the latter question is relevant to this, but check with
>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>> only sane thing was to remove that comment for the gic.
>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>> anything specific?
>>>>
>>>> I don't recall the details of what you were trying to accomplish here
>>>> (it's been 8 months or so) but surely the vgic handling code should
>>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>>> suggested), unless you come back and tell me that that would involve a
>>>> complete rewrite of the vgic code.
>>> I'm experimenting now - it's not exactly a rewrite of the whole vgic
>>> code, but it will touch a lot of it - all MMIO access handlers and the
>>> supporting functions.
>>> We're ready to spend the effort. My question is - is this acceptable?
>>>
>> I certainly appreciate the offer to do this work, but it's hard to say
>> at this point if it is worth it.
>>
>> What I was trying to say above is that Andre looked at this, and came to
>> the conclusion that it is not worth it.
>>
>> Marc, what are your thoughts?
>
> Same here, I rely on Andre's view that it was not very useful. Now, it
> would be good to see a mock-up of the patches and find out:
Seconded - can you send a pointer to the VGIC rework patches you mentioned?
> - if it is a major improvement for the general quality of the code
> - if that allows us to *delete* a lot of code (if it is just churn, I'm
> not really interested)
> - if it helps or hinders further developments that are currently in the
> pipeline
>
> Andre, can you please share your findings? I don't remember the
> specifics of the discussion we had a few months ago...
1) Given the date in the reply, I gather that your patches are from March
this year or earlier. So they are based on VGIC code from March, which
predates Marc's vgic_dyn changes that just went into 3.18-rc1. Those
patches introduced another member in struct mmio_range (.bits_per_irq) to
check the validity of accesses when a reduced number of SPIs is supported.
Is this covered in your rework?
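For reference, the range descriptor in vgic.c now looks roughly like this
(reconstructed from memory, so the exact layout may differ from the tree):

struct mmio_range {
	phys_addr_t	base;
	unsigned long	len;
	int		bits_per_irq;	/* new: bounds the access when fewer
					 * SPIs than the maximum are configured */
	bool		(*handle_mmio)(struct kvm_vcpu *vcpu,
				       struct kvm_exit_mmio *mmio,
				       phys_addr_t offset);
};

Any rework that replaces this table with kvm_io_bus devices would have to
carry that check over somehow.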
2)
>>> - there are 15 MMIO ranges for the vgic now - each should be
>>> registered as a separate device
Well, the GICv3 emulation adds 41 new ranges on top of that, so I am not
sure this approach still scales.
I found this fact a show-stopper when looking at this a month ago.
Somehow it feels wrong to register a bunch of pseudo-devices. I could go
with registering a small number of regions (one distributor, two
redistributor regions for instance), but not handling every single one of
the 41 + 15 register "groups" as a device.
Also I wasn't sure if we had to expose some of the vGIC structures to
the other KVM code layers.
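To illustrate what I mean by a small number of regions, here is a rough and
untested sketch. It assumes the proposed vcpu parameter in the
kvm_io_device_ops callbacks, and vgic_dispatch_mmio plus the dist->dev
member are made up for illustration - the point is that the dispatch into
the existing range table stays private to the vgic:

static int vgic_dist_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
			  gpa_t addr, int len, void *val)
{
	/* look up the offset in the existing range table internally */
	return vgic_dispatch_mmio(vcpu, addr, len, val, false);
}

static int vgic_dist_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
			   gpa_t addr, int len, const void *val)
{
	return vgic_dispatch_mmio(vcpu, addr, len, (void *)val, true);
}

static const struct kvm_io_device_ops vgic_dist_ops = {
	.read	= vgic_dist_read,
	.write	= vgic_dist_write,
};

	/* at vgic init time, with kvm->slots_lock held */
	kvm_iodevice_init(&dist->dev, &vgic_dist_ops);
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
				      dist->vgic_dist_base,
				      KVM_VGIC_V2_DIST_SIZE, &dist->dev);

The redistributor regions would be registered the same way, so we would end
up with a handful of devices instead of 41 + 15 of them.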
But I am open to any suggestions (as long as they go in _after_ my
vGICv3 series ;-) - so I am looking forward to some repo to see what it
looks like.
Cheers,
Andre.