From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v2] xen: fix setup of PVH Dom0 memory map
Date: Fri, 23 May 2014 17:25:38 +0200
Message-ID: <537F6872.4090701@citrix.com>
In-Reply-To: <537F11460200007800015345@mail.emea.novell.com>

On 23/05/14 09:13, Jan Beulich wrote:
>>>> On 22.05.14 at 19:01, <roger.pau@citrix.com> wrote:
>> @@ -369,6 +375,100 @@ static __init void pvh_map_all_iomem(struct domain *d)
>>          nump = end_pfn - start_pfn;
>>          pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
>>      }
>> +
>> +    /*
>> +     * Add the memory removed by the holes at the end of the
>> +     * memory map.
>> +     */
>> +    for ( i = 0, entry = e820.map; i < e820.nr_map && nr_holes > 0;
>> +          i++, entry++ )
>> +    {
>> +        if ( entry->type != E820_RAM )
>> +            continue;
>> +
>> +        end_pfn = PFN_UP(entry->addr + entry->size);
>> +        if ( end_pfn <= nr_pages )
>> +            continue;
>> +
>> +        navail = end_pfn - nr_pages;
>> +        nmap = navail > nr_holes ? nr_holes : navail;
> 
> min()?
> 
>> +        nr_holes -= nmap;
>> +        start_pfn = PFN_DOWN(entry->addr) < nr_pages ?
>> +                        nr_pages : PFN_DOWN(entry->addr);
> 
> max_t()?

Fixed both.
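
For reference, the adjusted lines will look roughly like this in v3 (a
sketch, assuming the usual min()/max_t() helpers from xen/kernel.h):

    nmap = min(navail, nr_holes);
    start_pfn = max_t(unsigned long, PFN_DOWN(entry->addr), nr_pages);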

>> +        /*
>> +         * Populate this memory region using the pages
>> +         * previously removed by the MMIO holes.
>> +         */
>> +        page_list_for_each ( page, &d->page_list )
>> +        {
>> +            mfn = page_to_mfn(page);
>> +            if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
>> +                continue;
> 
> This being inside another loop, this is going to be unnecessarily
> expensive on big memory systems without dom0_mem=, as you're
> going to repeatedly skip a huge (and increasing) number of pages
> here. So apart from needing to add a process_pending_softirqs()
> call somewhere, I think you should have a cursor into the list
> (which isn't going to change under your feet).
> 
> Talking of which, Mukesh, I think the lack of periodic calls to
> process_pending_softirqs() is also a latent problem in the function
> without Roger's changes. Furthermore now that I look at it again I
> wonder whether this whole thing shouldn't be SRAT based (when
> there is an SRAT) in order to avoid mapping hotplug memory
> regions as MMIO.

I've added a call to process_pending_softirqs() in pvh_add_mem_mapping()
and another one here, in case we are dealing with big holes. I will
leave the SRAT work for later, since it's probably a much wider change.
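
Concretely, something along these lines inside the population loop (a
sketch only; the counter name and the threshold are placeholders, not
the exact v3 code):

    unsigned int counter = 0;  /* hypothetical local */

    page_list_for_each ( page, &d->page_list )
    {
        /* Periodically yield so softirqs aren't starved on huge lists. */
        if ( !(++counter & 0xfffU) )
            process_pending_softirqs();
        ...
    }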

>> +
>> +            rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
>> +            if ( rc != 0 )
>> +                panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap",
>> +                      start_pfn, page_to_mfn(page));
> 
> Please print rc here too, to help diagnosing eventual problems. Also
> please use "mfn" rather than "page_to_mfn(page)".

Done.
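
The panic now reports the return value and uses the cached mfn, roughly:

    rc = guest_physmap_add_page(d, start_pfn, mfn, 0);
    if ( rc != 0 )
        panic("Unable to add gpfn %#lx mfn %#lx (rc %d) to Dom0 physmap",
              start_pfn, mfn, rc);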

>> +            start_pfn++;
>> +            if ( --nmap == 0 )
>> +                break;
>> +        }
>> +        ASSERT(nmap == 0);
>> +    }
>> +
>> +    ASSERT(nr_holes == 0);
>> +}
>> +
>> +static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
>> +{
>> +    struct e820entry *entry, *entry_guest;
>> +    unsigned int i;
>> +    unsigned long pages, cur_pages = 0;
>> +
>> +    /*
>> +     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
>> +     */
>> +    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
>> +    if ( !d->arch.e820 )
>> +        panic("Unable to allocate memory for Dom0 e820 map");
>> +    entry_guest = d->arch.e820;
>> +
>> +    /* Clamp e820 memory map to match the memory assigned to Dom0 */
>> +    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
>> +    {
>> +        if ( entry->type != E820_RAM )
>> +        {
>> +            *entry_guest = *entry;
>> +            goto next;
>> +        }
>> +
>> +        if ( nr_pages == cur_pages )
>> +        {
>> +            /*
>> +             * We already have all the assigned memory,
>> +             * skip this entry
>> +             */
>> +            continue;
>> +        }
>> +
>> +        *entry_guest = *entry;
>> +        pages = (entry_guest->size + PAGE_SIZE-1) >> PAGE_SHIFT;
> 
> PFN_UP()?
> 
>> +        if ( (cur_pages + pages) > nr_pages )
>> +        {
>> +            /* Truncate region */
>> +            entry_guest->size = (nr_pages - cur_pages) << PAGE_SHIFT;
>> +            cur_pages = nr_pages;
>> +        }
>> +        else
>> +        {
>> +            cur_pages += pages;
>> +        }
>> +next:
> 
> Labels indented by at least one space please (avoiding diff's -p option
> picking this up as hunk context in future patches).

Both of the above comments are fixed; I'm going to send v3 now.
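
For reference, the clamping code will read roughly as follows in v3
(a sketch; PFN_UP() expands to the same round-up computation, and the
label gains a leading space):

        *entry_guest = *entry;
        pages = PFN_UP(entry_guest->size);
        if ( (cur_pages + pages) > nr_pages )
        {
            /* Truncate region */
            entry_guest->size = (nr_pages - cur_pages) << PAGE_SHIFT;
            cur_pages = nr_pages;
        }
        else
            cur_pages += pages;
     next: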

Roger.
