From: Arianna Avanzini <avanzini.arianna@gmail.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
Ian Campbell <Ian.Campbell@citrix.com>
Cc: paolo.valente@unimore.it, Keir Fraser <keir@xen.org>,
stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
xen-devel@lists.xen.org, julien.grall@citrix.com,
etrudeau@broadcom.com, Jan Beulich <JBeulich@suse.com>,
viktor.kleinik@globallogic.com
Subject: Re: [RFC PATCH v2 3/3] tools, libxl: handle the iomem parameter with the memory_mapping hcall
Date: Fri, 14 Mar 2014 13:39:04 +0100
Message-ID: <5322F868.2060803@gmail.com>
In-Reply-To: <1394799318.4159.183.camel@Solace>
On 03/14/2014 01:15 PM, Dario Faggioli wrote:
> On Fri, 2014-03-14 at 09:46 +0000, Ian Campbell wrote:
>> On Thu, 2014-03-13 at 19:37 +0100, Dario Faggioli wrote:
>
>>> Exactly, that may very well be the case. It may not, in Arianna's case,
>>
>> Does Arianna's OS support device tree then? I had thought not.
>>
> It does not. The "it's not the case" was referring to the hardcoded
> addresses of the I/O registers, as she is using the 1:1 mapping for now.
> But as I said, that can change, and that's why I was arguing in favor of
> your solution of being able to specify a PFN too.
>
>> If it
>> doesn't support device tree then it is necessarily tied to a specific
>> version of Xen, since in the absence of DT it must hardcode some
>> addresses.
>>
> That's fine, at least for now, for that OS, as well as, I'm quite sure,
> for many embedded OSes, should they want to run on Xen.
>
>>> but it may well be true for others, or even for her, in the future.
>>>
>>> Therefore, I keep failing to see why we should prevent this from being the case.
>>
>> Prevent what to be the case?
>>
> Julien was saying he does not want the extended "iomem=[MFN,NR@PFN]"
> syntax. In particular, he does not want the "@PFN" part, which I happen
> to like. :-)
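
Just to check that I am reading the proposal correctly: with the extended
syntax, a config fragment like the following (addresses made up purely for
illustration) would both grant access to the machine pages and choose where
they show up in the guest's address space:

    # grant access to 1 page at MFN 0x2f000, mapped at guest PFN 0x3e000
    iomem = [ "2f000,1@3e000" ]

while omitting the "@3e000" part would fall back to the 1:1 mapping (guest
PFN == MFN). Please correct me if I have misread it.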
>
>>> In Arianna's case, I think it would be more than fine to implement it
>>> that way, and call it from within the OS. Isn't this the case, Arianna?
>>
>> It's certainly an option, and it would make a lot of the toolstack-side
>> issues moot, but I'm not at all sure it is the right answer. In
>> particular, although it might be easy to bodge a mapping into many OSes, I
>> can imagine getting such a thing into something generic like Linux might
>> be more tricky -- in which case perhaps the toolstack should be taking
>> care of it, and that does have a certain appeal from the simplicity of
>> the guest interface side of things.
>>
> If the toolstack needs to support iomem, yes.
>
> A way to see it could be that, as of now, we have embedded OSes asking
> for it already, and, at the same time, it's unlikely that more generic
> systems such as Linux would want something similar, not least because
> they'll have full DT/ACPI support for this.
>
> The fact that, if what you say below is true, "iomem" does not work at
> all, and no one from the Linux world has complained so far, seems to me to
> confirm this.
>
>>> One thing I don't see right now is, in the in-kernel case, what we
>>> should do when finding the "iomem=[]" option in a config file.
>>
>> Even for an x86 HVM guest with iomem there is no call to
>> xc_domain_memory_mapping (even from qemu); it is called only for PCI
>> passthrough. I've no idea if/how iomem=[] works for x86 HVM guests.
>> Based on a cursory glance it looks to me like it wouldn't, and if it did
>> work it would have the same problems wrt where to map it as we have
>> today with the ARM guests, except that perhaps on a PC the sorts of things
>> you would pass through with this can be done 1:1 because they are legacy
>> PC peripherals etc.
>>
> Aha, interesting! As I said to Julien, I tried to look for how these iomem
> regions get to xc_domain_memory_mapping in QEMU, and I also found nothing
> (except for PCI passthrough stuff)... I was assuming I was missing
> something... apparently, I wasn't. :-)
>
> I can try (even just out of curiosity!) to dig into the tree and see what
> happened with this feature...
>
> BTW, doesn't this make this discussion even more relevant? I mean, if
> it's there but it's not working, shouldn't we make it work (for some
> definition of "work"), if we decide it's useful, or kill it, if not?
>
>> I think we just don't know what the right answer is yet and I'm unlikely
>> to be able to say directly what the right answer is. I'm hoping that
>> people who want to use this low level functionality can provide a
>> consistent story for how it should work (a "design" if you like) to
>> which I can say "yes, that seems sensible" or "hmm, that seems odd
>> because of X". At the moment X is "the 1:1 mapping seems undesirable to
>> me". There have been some suggestions for how to fix that, someone with
>> a horse in the race should have a think about it and provide an
>> iteration on the design until we are happy with it.
>>
> Again, I'll look more into the history of this feature.
>
> In the meantime, the man page entry says:
>
> "Allow guest to access specific hardware I/O memory pages.
> B<IOMEM_START> is a physical page number. B<NUM_PAGES> is the number of
> pages beginning with B<START_PAGE> to allow access. Both values must be
> given in hexadecimal.
>
> It is recommended to use this option only for trusted VMs under
> administrator control."
>
> For one, the "Allow guest to access" there leaves a lot of room for
> interpretation, I think. It doesn't say anything about mapping, so one
> can interpret it as 'this only grants you mapping permission; it's up to
> you to map'. However, it says "access", so one can interpret it as 'if I
> can access it, it's mapped already'.
>
> Wherever this discussion lands, I think that, if we keep this
> option, we should make the documentation less ambiguous.
>
> That being said, allow me to play, in the rest of this e-mail, the role
> of someone who expects the mapping to actually be established by just
> specifying "iomem=[]" in the config file, which is (correct me guys if
> I'm wrong) what Eric, Viktor and Arianna thought when reading the entry
> above.
>
> So:
> 1. it says nothing about where the mapping ends up in the guest's memory,
> 2. it looks quite clear (to me?) that this is a raw/low-level feature,
> to be used under controlled conditions (which an embedded product
> often provides)
>
> To me, a legitimate use case is this: I want to run version X of my
> non-DT-capable OS on version Z of Xen, on release K of board B. In such a
> configuration, the GPIO controller is at MFN 0x000abcd, and I want only
> VM V to have direct access to it (board B does not have an IOMMU).
>
> I would also assume that one is in full control of the guest address
> space, so it is OK to hardcode the addresses of registers somewhere.
> Of course, that may be an issue when putting Xen underneath, as Xen's
> mangling of that address space can cause trouble.
>
> Arianna, can you confirm that this is pretty much the case for ERIKA, or
> amend whatever I got wrong?
>
I confirm that this is the case of ERIKA Enterprise, whose domU port is intended
to have exclusive access to some peripherals' I/O memory ranges. I also confirm
that, as you wrote, ERIKA doesn't currently support DT parsing and only relies
on hardcoded I/O memory addresses.
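
To give an idea of why the guest-side address matters for us: ERIKA's drivers
reach their peripherals through hardcoded physical addresses, roughly along
the lines of the fragment below (register layout and offsets made up for
illustration; this is not ERIKA's actual code, just the general pattern,
using the MFN from Dario's example):

    /* Illustrative only: a bare-metal driver poking a memory-mapped
     * register at a hardcoded address. With Xen underneath, this is a
     * guest "physical" address, so the page must be mapped 1:1 -- or at
     * least at a PFN that is known at build time. */
    #define GPIO_BASE  0xabcd000UL                          /* MFN 0xabcd */
    #define GPIO_DATA  (*(volatile unsigned int *)(GPIO_BASE + 0x04))

    static inline void gpio_set(unsigned int bits)
    {
        GPIO_DATA = bits;   /* direct MMIO write, no ioremap()-like step */
    }

So either the 1:1 default or an explicit "@PFN" would work for us, as long
as the resulting guest address is known in advance.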
> I certainly don't claim to have the right answer but, in the described
> scenario, either:
> 1) the combination of "iomem=[MFN,NR@PFN]", defaulting to 1:1 if
> "@PFN" is missing, and e820_host
> 2) calling (the equivalent of) XEN_DOMCTL_memory_mapping from the guest
> kernel
>
> would be good solutions, to the point that I think we could even support
> both. The main point being that, I think, even in the worst case, any
> erroneous usage of either would "just" destroy the guest, and that's
> acceptable.
>
> Actually, if going for 1), I think (when both the pieces are there)
> things should be pretty safe. Furthermore, implementing 1) seems to me
> the only way of having the "iomem=[]" parameter cause both permission
> granting and mapping. The downside is that the (libxl side of the)
> implementation indeed looks cumbersome.
>
> If going for _only_ 2), then "iomem=[]" would just be there to ensure
> that the future mapping operation will be successful, i.e., for granting
> mapping rights, as it does right now. It would be up to the guest
> kernel to make sure the MFNs it is trying to map are consistent with what
> was specified in "iomem=[]". Given the use case we're talking about, I
> don't think this is an unreasonable request, as long as we make the iomem
> man entry state this more clearly.
>
> Again, Arianna, do you confirm that both solutions are possible for you?
>
Yes, I believe both solutions are applicable as far as my use case is concerned.
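
For what it's worth, my understanding of what solution 1) would amount to on
the toolstack side is roughly the sketch below. It is only a sketch: it
assumes the DOMCTL is made to work on ARM (which is what this series is
about), it builds on the existing libxc wrappers, and it leaves out error
handling and the gfn-selection policy, which is the open issue:

    /* Sketch only: grant access and instate the mapping for one
     * "iomem=[MFN,NR@PFN]" entry; gfn == mfn when "@PFN" is omitted
     * (the 1:1 default). */
    #include <xenctrl.h>

    static int map_iomem_range(xc_interface *xch, uint32_t domid,
                               unsigned long mfn, unsigned long nr_pages,
                               unsigned long gfn)
    {
        int rc;

        /* Allow the domain to access the machine frames... */
        rc = xc_domain_iomem_permission(xch, domid, mfn, nr_pages, 1);
        if (rc < 0)
            return rc;

        /* ...and add the p2m entries via XEN_DOMCTL_memory_mapping. */
        return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr_pages,
                                        DPCI_ADD_MAPPING);
    }

Solution 2) would instead keep the toolstack doing only the permission
granting, as today, and leave the second step to the guest kernel, which is
also fine for ERIKA since it knows the addresses it needs.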
> I certainly agree that the thread could benefit from some opinion from
> people actually wanting to use this. In addition to Arianna, I have
> pinged Eric and Viktor repeatedly... let's see if they have time to let
> us all know something more about their own needs and requirements wrt
> this.
>
>>> Also, just trying to recap, for Arianna's sake: moving the
>>> implementation of the DOMCTL to common code (and implementing the
>>> missing bits to make it work properly, of course) is still something we
>>> want, right?
>>
>> *If* we go the route of having the kernel make the mapping then there is
>> no need, is there?
>>
> Yeah, well, me not being sure is the reason I asked... And Julien said
> he thinks we still want it... :-P
>
> As I was saying above, I think there is room for both, but I don't mind
> picking one. However, if we want to fix iomem=[] and go as far as
> having it do the mapping, then I think we all agree we need the
> DOMCTL.
>
> So, looks like the discussion resolves to something like:
> - do we need the DOMCTL for other purposes than iomem=[] ?
> - if no, what do we want to do with iomem=[] ?
>
> Sorry to have brought you all deep down into this can of worms! :-/
>
> Regards,
> Dario
>
--
/*
* Arianna Avanzini
* avanzini.arianna@gmail.com
* 73628@studenti.unimore.it
*/