From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Malcolm Crossley <malcolm.crossley@citrix.com>,
Keir Fraser <keir@xen.org>
Subject: Re: [PATCH] x86: don't blindly create L3 tables for the direct map
Date: Mon, 30 Sep 2013 13:36:40 +0100
Message-ID: <52497058.6060800@citrix.com>
In-Reply-To: <52498C0B02000078000F8020@nat28.tlf.novell.com>
On 30/09/13 13:34, Jan Beulich wrote:
> Now that the direct map area can extend all the way up to almost the
> end of address space, this is wasteful.
>
> Also fold two almost redundant messages in SRAT parsing into one.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Malcolm Crossley <malcolm.crossley@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -137,7 +137,7 @@ l1_pgentry_t __attribute__ ((__section__
> #define PTE_UPDATE_WITH_CMPXCHG
> #endif
>
> -bool_t __read_mostly mem_hotplug = 0;
> +paddr_t __read_mostly mem_hotplug;
>
> /* Private domain structs for DOMID_XEN and DOMID_IO. */
> struct domain *dom_xen, *dom_io, *dom_cow;
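
The type change above is the heart of the patch: mem_hotplug stops being
a yes/no flag and becomes the (exclusive) physical upper bound of
hot-pluggable memory, with 0 still meaning "none".  A minimal standalone
sketch of the new convention (types simplified; this is not the Xen
source itself):

    #include <stdint.h>

    typedef uint64_t paddr_t;

    static paddr_t mem_hotplug;  /* 0 => no hot-pluggable memory */

    /* SRAT parsing: record the highest end of any hotplug range. */
    static void note_hotplug_range(paddr_t end)
    {
        if (end > mem_hotplug)
            mem_hotplug = end;
    }

    /* Existing boolean-style tests keep working unchanged... */
    static int have_hotplug(void)
    {
        return mem_hotplug != 0;
    }

Because a non-zero address still tests true, existing
"if ( mem_hotplug )" consumers need no churn, which keeps the patch small.
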
> --- a/xen/arch/x86/srat.c
> +++ b/xen/arch/x86/srat.c
> @@ -113,6 +113,7 @@ static __init void bad_srat(void)
> apicid_to_node[i] = NUMA_NO_NODE;
> for (i = 0; i < ARRAY_SIZE(pxm2node); i++)
> pxm2node[i] = NUMA_NO_NODE;
> + mem_hotplug = 0;
> }
>
> /*
> @@ -257,13 +258,6 @@ acpi_numa_memory_affinity_init(struct ac
> return;
> }
> /* It is fine to add this area to the nodes data it will be used later*/
> - if (ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)
> - {
> - printk(KERN_INFO "SRAT: hot plug zone found %"PRIx64" - %"PRIx64" \n",
> - start, end);
> - mem_hotplug = 1;
> - }
> -
> i = conflicting_memblks(start, end);
> if (i == node) {
> printk(KERN_WARNING
> @@ -287,8 +281,11 @@ acpi_numa_memory_affinity_init(struct ac
> if (nd->end < end)
> nd->end = end;
> }
> - printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIx64"-%"PRIx64"\n", node, pxm,
> - start, end);
> + if ((ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) && end > mem_hotplug)
> + mem_hotplug = end;
> + printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIx64"-%"PRIx64"%s\n",
> + node, pxm, start, end,
> + ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
>
> node_memblk_range[num_node_memblks].start = start;
> node_memblk_range[num_node_memblks].end = end;
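
With the two messages folded, a hot-pluggable range now shows up as a
suffix on the per-node line instead of getting its own message.  For a
hypothetical hotplug range on node 0 / PXM 0 the new format string would
produce something like:

    SRAT: Node 0 PXM 0 100000000-200000000 (hotplug)

Also worth noting: the new "mem_hotplug = 0" in bad_srat() ensures a
rejected table cannot leave a stale upper bound behind.
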
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -559,25 +559,20 @@ void __init paging_init(void)
> * We setup the L3s for 1:1 mapping if host support memory hotplug
> * to avoid sync the 1:1 mapping on page fault handler
> */
> - if ( mem_hotplug )
> + for ( va = DIRECTMAP_VIRT_START;
> + va < DIRECTMAP_VIRT_END && (void *)va < __va(mem_hotplug);
> + va += (1UL << L4_PAGETABLE_SHIFT) )
> {
> - unsigned long va;
> -
> - for ( va = DIRECTMAP_VIRT_START;
> - va < DIRECTMAP_VIRT_END;
> - va += (1UL << L4_PAGETABLE_SHIFT) )
> + if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
> + _PAGE_PRESENT) )
> {
> - if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
> - _PAGE_PRESENT) )
> - {
> - l3_pg = alloc_domheap_page(NULL, 0);
> - if ( !l3_pg )
> - goto nomem;
> - l3_ro_mpt = page_to_virt(l3_pg);
> - clear_page(l3_ro_mpt);
> - l4e_write(&idle_pg_table[l4_table_offset(va)],
> - l4e_from_page(l3_pg, __PAGE_HYPERVISOR));
> - }
> + l3_pg = alloc_domheap_page(NULL, 0);
> + if ( !l3_pg )
> + goto nomem;
> + l3_ro_mpt = page_to_virt(l3_pg);
> + clear_page(l3_ro_mpt);
> + l4e_write(&idle_pg_table[l4_table_offset(va)],
> + l4e_from_page(l3_pg, __PAGE_HYPERVISOR));
> }
> }
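
The restructuring above is mostly the hoisted loop; the functional
change is the added bound "(void *)va < __va(mem_hotplug)", so L3 pages
are only allocated for the L4 slots that hot-pluggable memory can
actually reach.  Each L4 slot spans 1UL << L4_PAGETABLE_SHIFT bytes
(512GiB on x86-64), which makes the saving easy to quantify.  A
standalone sketch of the arithmetic (the helper name is mine, not Xen's):

    #include <stdint.h>
    #include <stdio.h>

    #define L4_PAGETABLE_SHIFT 39  /* one L4 entry maps 512 GiB */

    /* Number of L3 pages the bounded loop allocates for a given
     * hotplug ceiling, rounding up to whole 512 GiB slots. */
    static unsigned int l3_pages_for(uint64_t mem_hotplug_end)
    {
        uint64_t slot = 1ULL << L4_PAGETABLE_SHIFT;
        return (unsigned int)((mem_hotplug_end + slot - 1) / slot);
    }

    int main(void)
    {
        /* Hotplug memory ending at 1 TiB: two L3 pages, instead of
         * one per direct-map L4 slot as the unbounded loop created. */
        printf("%u\n", l3_pages_for(1ULL << 40));  /* prints 2 */
        return 0;
    }
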
>
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -399,7 +399,7 @@ static inline int get_page_and_type(stru
> int check_descriptor(const struct domain *, struct desc_struct *d);
>
> extern bool_t opt_allow_superpage;
> -extern bool_t mem_hotplug;
> +extern paddr_t mem_hotplug;
>
> /******************************************************************************
> * With shadow pagetables, the different kinds of address start
>