From: "Chen, Tiejun" <tiejun.chen@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	JBeulich@suse.com, ian.jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
	yang.z.zhang@intel.com, kevin.tian@intel.com
Cc: xen-devel@lists.xen.org
Subject: Re: [v4][PATCH 2/9] xen:x86: define a new hypercall to get RMRR mappings
Date: Tue, 26 Aug 2014 11:12:14 +0800
Message-ID: <53FBFB0E.8030305@intel.com>
In-Reply-To: <53FB2715.8050303@citrix.com>

On 2014/8/25 20:07, Andrew Cooper wrote:
> On 25/08/14 12:21, Chen, Tiejun wrote:
>> On 2014/8/22 18:53, Andrew Cooper wrote:
>>> On 22/08/14 11:09, Tiejun Chen wrote:
>>>> We need this new hypercall to get RMRR mapping for VM.
>>>>
>>>> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
>>>> ---
>>>>    xen/arch/x86/mm.c           | 71 +++++++++++++++++++++++++++++++++++++++++++++
>>>>    xen/include/public/memory.h | 37 ++++++++++++++++++++++-
>>>>    2 files changed, 107 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>>> index d23cb3f..e0d6650 100644
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -123,6 +123,7 @@
>>>>    #include <asm/setup.h>
>>>>    #include <asm/fixmap.h>
>>>>    #include <asm/pci.h>
>>>> +#include <asm/acpi.h>
>>>>
>>>>    /* Mapping of the fixmap space needed early. */
>>>>    l1_pgentry_t __attribute__ ((__section__ (".bss.page_aligned")))
>>>> @@ -4842,6 +4843,76 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>            return rc;
>>>>        }
>>>>
>>>> +    case XENMEM_reserved_device_memory_map:
>>>> +    {
>>>> +        struct xen_reserved_device_memory_map map;
>>>> +        XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
>>>> +        XEN_GUEST_HANDLE_PARAM(xen_reserved_device_memory_t) buffer_param;
>>>> +        unsigned int i = 0;
>>>> +        static unsigned int nr_entries = 0;
>>>> +        static struct xen_reserved_device_memory *rmrr_map;
>>>
>>> Absolutely not.  This hypercall can easy be run concurrently.
>>>
>>>> +        struct acpi_rmrr_unit *rmrr;
>>>> +
>>>> +        if ( copy_from_guest(&map, arg, 1) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        if ( !nr_entries )
>>>> +            /* Currently we just need to cover RMRR. */
>>>> +            list_for_each_entry( rmrr, &acpi_rmrr_units, list )
>>>> +                nr_entries++;
>>>
>>> Maintain a global count as entries are added/removed from this list.  It
>>> is a waste of time recounting this list for each hypercall.
>>
>> Are you saying push this 'nr_entries' as a global count somewhere? I
>> guess I can set this when we first construct acpi_rmrr_units in ACPI
>> stuff.
>
> Not named "nr_entries", but yes.  It is constant after boot.
>
>>
>>>
>>>> +
>>>> +        if ( !nr_entries )
>>>> +                return -ENOENT;
>>>> +        else
>>>> +        {
>>>> +            if ( rmrr_map == NULL )
>>>> +            {
>>>> +                rmrr_map = xmalloc_array(xen_reserved_device_memory_t,
>>>> +                                         nr_entries);
>>>
>>> You can do all of this without any memory allocation.
>>

How would you do this without any memory allocation? Do you mean I should
predefine a static array here? But then how would we determine its size in
advance?

Or do you mean I should work with a single rmrr_map on the stack, like this:

	struct xen_mem_reserved_device_memory rmrr_map;

	list_for_each_entry( rmrr, &acpi_rmrr_units, list )
	{
		rmrr_map.start_pfn = ...;
		rmrr_map.nr_pages = ...;
		if ( copy_to_guest_offset(buffer, i, &rmrr_map, 1) )
			return -EFAULT;
		i++;
	}
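
Spelled out a bit more, I imagine the allocation-free version would look
roughly like the sketch below. This is only my own sketch: it reuses the
locals already declared in this hypercall body (map, buffer, i, rmrr) and
the pfn/count computation from the current patch, so please read it as an
illustration of the approach rather than tested code:

	xen_reserved_device_memory_t rdm;

	i = 0;
	list_for_each_entry( rmrr, &acpi_rmrr_units, list )
	{
		/* Stop once the guest-supplied array is full. */
		if ( i >= map.nr_entries )
			break;

		/* Build one entry on the stack and copy it straight out. */
		rdm.pfn = rmrr->base_address >> PAGE_SHIFT;
		rdm.count = PAGE_ALIGN(rmrr->end_address -
				       rmrr->base_address) / PAGE_SIZE;

		if ( copy_to_guest_offset(buffer, i, &rdm, 1) )
			return -EFAULT;

		i++;
	}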


>> I will check this.
>
> It is easy...
>
>>
>>>
>>>> +                if ( rmrr_map == NULL )
>>>> +                {
>>>> +                    return -ENOMEM;
>>>> +                }
>>>> +
>>>> +                list_for_each_entry( rmrr, &acpi_rmrr_units, list )
>>>> +                {
>>>> +                    rmrr_map[i].pfn = rmrr->base_address >> PAGE_SHIFT;
>>>> +                    rmrr_map[i].count = PAGE_ALIGN(rmrr->end_address -
>>>> +                                                   rmrr->base_address) /
>>>> +                                                   PAGE_SIZE;
>>>> +                    i++;
>
> In this loop, construct one on the stack and copy_to_guest, breaking if
> there is a fault or you exceed the guests array.

It's not possible to exceed the guest's array, since the caller always
checks for that error, -ENOBUFS, and if it occurs, the caller can
reallocate an appropriately sized array and retry.
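
Just to make that protocol explicit from the caller's side, the flow I
have in mind is roughly the following. do_memory_op() here is only a
stand-in for whatever hypercall wrapper the caller actually uses, and the
buffer handling is simplified (a real toolstack caller would need to
bounce/lock the buffer), so treat this purely as pseudocode for the
resize-and-retry sequence:

	struct xen_reserved_device_memory_map map = { .nr_entries = 0 };
	xen_reserved_device_memory_t *entries;
	int rc;

	/* First call with an empty buffer just to learn the required size. */
	rc = do_memory_op(XENMEM_reserved_device_memory_map, &map);
	if ( rc == -ENOBUFS )
	{
		/* Xen wrote the required count back into map.nr_entries. */
		entries = malloc(map.nr_entries * sizeof(*entries));
		if ( !entries )
			return -ENOMEM;

		set_xen_guest_handle(map.buffer, entries);
		rc = do_memory_op(XENMEM_reserved_device_memory_map, &map);
	}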

Thanks
Tiejun

>
>>>> +                }
>>>> +            }
>>>> +        }
>>>> +
>>>> +        if ( map.nr_entries < nr_entries )
>>>> +        {
>>>> +            map.nr_entries =  nr_entries;
>>>> +            if ( copy_to_guest(arg, &map, 1) )
>>>> +                return -EFAULT;
>>>> +            return -ENOBUFS;
>>>> +        }
>>>> +
>>>> +        map.nr_entries =  nr_entries;
>>>> +        buffer_param = guest_handle_cast(map.buffer,
>>>> +                                         xen_reserved_device_memory_t);
>>>> +        buffer = guest_handle_from_param(buffer_param,
>>>> +                                         xen_reserved_device_memory_t);
>>>> +        if ( !guest_handle_okay(buffer, map.nr_entries) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        for ( i = 0; i < map.nr_entries; ++i )
>>>> +        {
>>>> +            if ( copy_to_guest_offset(buffer, i, rmrr_map + i, 1) )
>>>> +                return -EFAULT;
>>>> +        }
>>>> +
>>>> +        if ( copy_to_guest(arg, &map, 1) )
>>>> +                return -EFAULT;
>>>> +
>>>> +        return 0;
>>>> +    }
>>>> +
>>>>        default:
>>>>            return subarch_memory_op(cmd, arg);
>>>>        }
>>>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>>>> index 2c57aa0..8481843 100644
>>>> --- a/xen/include/public/memory.h
>>>> +++ b/xen/include/public/memory.h
>>>> @@ -523,7 +523,42 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>>>>
>>>>    #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>
>>>> -/* Next available subop number is 26 */
>>>> +/*
>>>> + * Some devices may reserve some range.
>>>
>>> "range" is not a useful unit of measure.
>>
>> So what about this?
>>
>> The regions of memory used for some devices should be reserved.
>
> No - it is not a case of "should", it is a case of "must".
>
> "For legacy reasons, some devices must be configured with special memory
> regions to function correctly.  The guest must avoid using any of these
> regions."
>
>>
>>>
>>>> + *
>>>> + * Currently we just have RMRR
>>>> + * - Reserved memory Region Reporting Structure,
>>>> + * So returns the RMRR memory map as it was when the domain
>>>> + * was started.
>>>> + */
>>>> +#define XENMEM_reserved_device_memory_map   26
>>>> +struct xen_reserved_device_memory {
>>>
>>> xen_mem_ to match the prevailing style
>>
>> Okay.
>>
>>>
>>>> +    /* PFN of the current mapping of the page. */
>>>> +    xen_pfn_t pfn;
>>>> +    /* Number of the current mapping pages. */
>>>> +    xen_ulong_t count;
>>>> +};
>>>
>>> This struct marks a range, but the fields don't make it clear.  I would
>>> suggest "start" and "nr_frames" as names.
>>>
>>
>> I will prefer Jan's suggestion.
>
> As do I,
>
> ~Andrew
>

Thread overview: 30+ messages
2014-08-22 10:09 [v4][PATCH 0/9] xen: reserve RMRR to avoid conflicting MMIO/RAM Tiejun Chen
2014-08-22 10:09 ` [v4][PATCH 1/9] xen:vtd:rmrr: export acpi_rmrr_units Tiejun Chen
2014-08-22 10:09 ` [v4][PATCH 2/9] xen:x86: define a new hypercall to get RMRR mappings Tiejun Chen
2014-08-22 10:53   ` Andrew Cooper
2014-08-22 11:36     ` Jan Beulich
2014-08-25 11:03       ` Chen, Tiejun
2014-08-25 11:21     ` Chen, Tiejun
2014-08-25 12:07       ` Andrew Cooper
2014-08-26  3:12         ` Chen, Tiejun [this message]
2014-08-26  9:25           ` Andrew Cooper
2014-08-22 10:09 ` [v4][PATCH 3/9] tools:libxc: introduce hypercall for xc_reserved_device_memory_map Tiejun Chen
2014-08-22 10:55   ` Andrew Cooper
2014-08-25 11:11     ` Chen, Tiejun
2014-08-25 11:58       ` Andrew Cooper
2014-08-22 10:09 ` [v4][PATCH 4/9] tools:libxc: check if mmio BAR is out of RMRR mappings Tiejun Chen
2014-08-26 20:36   ` Ian Campbell
2014-08-27  1:46     ` Chen, Tiejun
2014-08-27  2:20       ` Ian Campbell
2014-08-27  2:40         ` Chen, Tiejun
2014-08-27  2:47           ` Chen, Tiejun
2014-08-22 10:09 ` [v4][PATCH 5/9] hvm_info_table: introduce nr_reserved_device_memory_map Tiejun Chen
2014-08-26 20:38   ` Ian Campbell
2014-08-27  1:54     ` Chen, Tiejun
2014-08-27  1:57       ` Chen, Tiejun
2014-08-27  2:21       ` Ian Campbell
2014-08-27  2:28         ` Chen, Tiejun
2014-08-22 10:09 ` [v4][PATCH 6/9] xen:x86:: support xc_reserved_device_memory_map in compat case Tiejun Chen
2014-08-22 10:09 ` [v4][PATCH 7/9] tools:firmware:hvmloader: introduce hypercall for xc_reserved_device_memory_map Tiejun Chen
2014-08-22 10:09 ` [v4][PATCH 8/9] tools:firmware:hvmloader: check to reserve RMRR mappings in e820 Tiejun Chen
2014-08-22 10:09 ` [v4][PATCH 9/9] xen:vtd: make USB RMRR mapping safe Tiejun Chen
