xen-devel.lists.xenproject.org archive mirror
From: Julien Grall <julien.grall@arm.com>
To: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Peng Fan <van.freenix@gmail.com>
Cc: Peng Fan <peng.fan@nxp.com>,
	sstabellini@kernel.org, xen-devel@lists.xen.org
Subject: Re: [PATCH V1] xen/arm: domain_build: introduce dom0_lowmem bootargs
Date: Thu, 15 Sep 2016 09:50:31 +0100	[thread overview]
Message-ID: <5b278cc7-5038-914c-b826-262b44e91c49@arm.com> (raw)
In-Reply-To: <20160915082646.GR16305@toto>

Hi,

On 15/09/2016 09:26, Edgar E. Iglesias wrote:
> On Thu, Sep 15, 2016 at 08:20:33AM +0800, Peng Fan wrote:
>> Hi Edgar,
>> On Wed, Sep 14, 2016 at 04:16:58PM +0200, Edgar E. Iglesias wrote:
>>> On Wed, Sep 14, 2016 at 08:40:09PM +0800, Peng Fan wrote:
>>>> On Wed, Sep 14, 2016 at 01:34:10PM +0100, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 14/09/16 13:18, Peng Fan wrote:
>>>>>> Hello Julien,
>>>>>>
>>>>>> On Wed, Sep 14, 2016 at 01:06:01PM +0100, Julien Grall wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 14/09/16 13:03, Peng Fan wrote:
>>>>>>>> Hello Julien,
>>>>>>>
>>>>>>> Hello Peng,
>>>>>>>
>>>>>>>> On Wed, Sep 14, 2016 at 11:47:10AM +0100, Julien Grall wrote:
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> On 14/09/16 08:41, Peng Fan wrote:
>>>>>>>>>> On Wed, Sep 14, 2016 at 08:23:24AM +0100, Julien Grall wrote:
>>>>>>>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>>>>>>>> index 35ab08d..cc71e6f 100644
>>>>>>>>>> --- a/xen/arch/arm/domain_build.c
>>>>>>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>>>>>>> @@ -28,6 +28,8 @@
>>>>>>>>>>
>>>>>>>>>> static unsigned int __initdata opt_dom0_max_vcpus;
>>>>>>>>>> integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>>>>>>>>>> +static bool_t __initdata opt_dom0_use_lowmem;
>>>>>>>>>> +boolean_param("dom0_use_lowmem", opt_dom0_use_lowmem);
>>>>>>>>>>
>>>>>>>>>> int dom0_11_mapping = 1;
>>>>>>>>>>
>>>>>>>>>> @@ -244,7 +246,7 @@ static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>>>>>>>>>>     unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
>>>>>>>>>>     int i;
>>>>>>>>>>
>>>>>>>>>> -    bool_t lowmem = is_32bit_domain(d);
>>>>>>>>>> +    bool_t lowmem = is_32bit_domain(d) || opt_dom0_use_lowmem;
>>>>>>>>>>   unsigned int bits;
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Pass "dom0_use_lowmem=1" to xen to allocate lowmem as much as possible.
>>>>>>>>>
>>>>>>>>> Again, what is the benefit to have a command line option for that?
>>>>>>>>
>>>>>>>> Then do you prefer to directly change "bool_t lowmem = is_32bit_domain(d);" to "bool_t lowmem = true;"?
>>>>>>>> I just want to give the user a choice.
>>>>>>>
>>>>>>> We don't add new command line parameters just because they look cool to have.
>>>>>>> So far, you have not explained why it would be good to leave the choice to
>>>>>>> the user, or how it could be used.
>>>>>>
>>>>>> I have not tried the case where there is no lowmem.
>>>>>>
>>>>>> I have not looked into alloc_domheap_pages, and I am not sure whether such
>>>>>> a platform exists, but suppose there is an SoC whose DRAM starts at 4GB,
>>>>>> with no DRAM below 4GB. If we can still get memory when lowmem is true,
>>>>>> I am OK with directly assigning lowmem the value true. Anyway, I have not
>>>>>> looked into the internals of the domheap and am not sure whether such a
>>>>>> no-lowmem platform exists (:-
>>>>>
>>>>> We cannot exclude this possibility. However, the only reason that Xen
>>>>> requires allocating a bank below 4GB for a 32-bit domain is to handle
>>>>> non-LPAE kernels.
>>>>
>>>> Now we also need to handle devices that have DMA limitations -:)
>>>
>>> Hi Peng,
>>>
>>> Doesn't your platform have an IOMMU/SMMU?
>>
>> We have an SMMU, but this is not related to the SMMU. Dom0 uses a 1:1
>> mapping and no SMMU is involved. The physical memory assigned to Dom0
>> may be higher than 4GB, but some IPs in Dom0 only support 32-bit DMA.
>> In Linux, assigning a 64-bit DMA address to a device that only supports
>> 32 bits will hang the device, or worse.
>
> Well, I think it is somewhat related to the IOMMU.
>
> If your SMMU supports S1 + S2 translations, although that is not supported
> by Xen/ARM today, we could support nested SMMUs so that Dom0 could
> get its own private S1 portion of the SMMU.
>
> Another option is to perhaps join into the efforts of PV-IOMMU
> and try to see if it would work for Xen on ARM:
> https://lists.xen.org/archives/html/xen-devel/2016-02/msg01428.html
>
> For platforms that have an IOMMU, I think both of these options may
> solve the use-case of dom0 using 32bit DMA devs without any lowmem.
> In addition to that, these features would be very nice as they also
> enable DMA API isolation and VFIO user-space drivers in dom0.
> Spending time on these kind of options seems worthwhile to me.

There is a third option. The direct memory mapping for DOM0 is a 
workaround for platforms where not all DMA-capable devices are protected 
by an SMMU.

If you know the platform does not have such devices, it would be 
possible to remove the direct memory mapping.

In this case, Xen will allocate the RAM anywhere in physical memory, 
but it will be mapped into DOM0 using the memory regions found in the 
device tree (e.g. if the first bank is below 4GB, DOM0 will have memory 
mapped below 4GB).

You would also have to remove the specific xen_dma_ops (the swiotlb-xen 
callbacks), as the devices will be fully protected (and the workaround 
is no longer necessary).

I suggested a generic way a few years ago [1] that is a step towards 
removing the direct mapping, but I never had the time to continue it.

>
> With 32-bit DMA devices and no IOMMU, lowmem becomes critical, but
> such systems are not really secure, as has already been mentioned.
> I'm not sure it's worth introducing workarounds/hacks for such
> systems.

Well, Dom0 (aka the hardware domain) is a trusted domain, as it owns 
most of the devices, and there are many other ways to screw up the 
platform from it. So it would be fine to allocate lowmem for that 
domain.

The SMMU is only mandatory for device passthrough to all the other 
domains.

Regards,

[1] https://lists.xen.org/archives/html/xen-devel/2014-02/msg01897.html

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

Thread overview: 14+ messages
2016-09-14  5:12 [PATCH V1] xen/arm: domain_build: introduce dom0_lowmem bootargs Peng Fan
2016-09-14  7:23 ` Julien Grall
2016-09-14  7:41   ` Peng Fan
2016-09-14 10:47     ` Julien Grall
2016-09-14 12:03       ` Peng Fan
2016-09-14 12:06         ` Julien Grall
2016-09-14 12:18           ` Peng Fan
2016-09-14 12:34             ` Julien Grall
2016-09-14 12:40               ` Peng Fan
2016-09-14 14:16                 ` Edgar E. Iglesias
2016-09-15  0:20                   ` Peng Fan
2016-09-15  8:26                     ` Edgar E. Iglesias
2016-09-15  8:50                       ` Julien Grall [this message]
2016-09-15 11:12                       ` Peng Fan
