From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH v2] add locking around certain calls to map_pages_to_xen()
Date: Fri, 12 Jul 2013 10:48:39 +0100
Message-ID: <51DFD0F7.6020203@citrix.com>
In-Reply-To: <51DFD7A002000078000E45E3@nat28.tlf.novell.com>

On 12/07/13 09:17, Jan Beulich wrote:
> While boot time calls don't need this, run time uses of the function
> which may result in L2 page tables getting populated need to be
> serialized to avoid two CPUs populating the same L2 (or L3) entry,
> overwriting each other's results.
>
> This fixes what appears to be a regression from commit b0581b92
> ("x86: make map_domain_page_global() a simple wrapper around vmap()"),
> albeit that change only made the pre-existing issue more readily
> visible.
>
> The __init addition to memguard_init(), while seemingly unrelated,
> helps make it obvious that this function's use of map_pages_to_xen()
> is a boot-time-only one.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> v2: Latch system_state into a local variable, as suggested by Andrew
> Cooper (in the hope that this might accelerate getting the patch in
> to overcome the staging tester failures).
>
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5758,13 +5758,34 @@ void destroy_xen_mappings(unsigned long
> void __set_fixmap(
> enum fixed_addresses idx, unsigned long mfn, unsigned long flags)
> {
> - BUG_ON(idx >= __end_of_fixed_addresses);
> + /*
> + * As the fixmap area requires more than a single L2 entry, we need a lock
> + * to avoid races of map_pages_to_xen() populating those. There's no need
> + * for this at boot time, and not doing so avoids needing to disable IRQs
> + * while holding the lock.
> + */
> + static DEFINE_SPINLOCK(lock);
> + bool_t locking = system_state > SYS_STATE_boot;
> +
> + if ( locking )
> + spin_lock(&lock);
> map_pages_to_xen(fix_to_virt(idx), mfn, 1, flags);
> + if ( locking )
> + spin_unlock(&lock);
> }
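
For anyone re-deriving the reasoning here: this is the usual "only start
serializing once other CPUs can actually race us" pattern, with system_state
latched into a local so the lock and unlock decisions stay consistent. A
minimal user-space sketch of just that pattern (assuming a hypothetical
populate_table() standing in for map_pages_to_xen() and a plain 'booted'
flag standing in for system_state > SYS_STATE_boot -- not the real Xen
code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool booted;            /* set once further callers may race */
static int table[16];          /* stands in for a shared page-table level */

/* Unserialized check-then-populate, like filling an empty L2/L3 slot. */
static void populate_table(int idx, int val)
{
    if ( table[idx] == 0 )
        table[idx] = val;
}

static void set_entry(int idx, int val)
{
    bool locking = booted;     /* latch once, as in the patch */

    if ( locking )
        pthread_mutex_lock(&lock);
    populate_table(idx, val);
    if ( locking )
        pthread_mutex_unlock(&lock);
}

int main(void)
{
    set_entry(0, 1);           /* "boot time": single caller, no lock taken */
    booted = true;             /* later callers go through the lock */
    set_entry(1, 2);
    printf("%d %d\n", table[0], table[1]);
    return 0;
}

Latching into 'locking' also keeps the lock/unlock pair balanced even if the
flag were to change between the two tests, which is the point of the v2 change.
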
>
> void *__init arch_vmap_virt_end(void)
> {
> - return (void *)fix_to_virt(__end_of_fixed_addresses);
> + /*
> + * If the fixmap area required more than a single L3 entry, we'd need to
> + * lock against vmap()'s calls to map_pages_to_xen(). Pre-populate the
> + * L3 entry possibly shared with vmap(), and make sure we don't share an
> + * L2 entry with it.
> + */
> + BUILD_BUG_ON(((FIXADDR_TOP - 1) ^ FIXADDR_START) >> L3_PAGETABLE_SHIFT);
> + map_pages_to_xen(fix_to_virt(__end_of_fixed_addresses), 0, 1, _PAGE_AVAIL);
> + return (void *)(fix_to_virt(__end_of_fixed_addresses) &
> + (~0L << L2_PAGETABLE_SHIFT));
> }
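
The BUILD_BUG_ON() encodes the usual XOR trick: two addresses fall under the
same L3 entry exactly when XORing them leaves no bits set at or above
L3_PAGETABLE_SHIFT. A tiny stand-alone sketch with made-up example addresses
(not the real FIXADDR_* values) to illustrate the check:

#include <stdint.h>
#include <stdio.h>

#define L3_PAGETABLE_SHIFT 30  /* x86-64: each L3 entry covers 1GiB */

/* Non-zero iff both addresses are covered by the same L3 entry. */
static int same_l3_entry(uint64_t a, uint64_t b)
{
    return ((a ^ b) >> L3_PAGETABLE_SHIFT) == 0;
}

int main(void)
{
    uint64_t start = 0xffff82c000000000ULL;  /* example region start */
    uint64_t last  = start + 0x3fffffffULL;  /* last byte of that 1GiB */

    printf("same L3 entry: %d\n", same_l3_entry(start, last));
    printf("crosses L3:    %d\n", !same_l3_entry(start, last + 1));
    return 0;
}

The dummy _PAGE_AVAIL mapping then pre-populates that shared L3 entry at boot
time, and rounding the returned address down to an L2 boundary keeps vmap()
out of the fixmap's L2 entries.
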
>
> void __iomem *ioremap(paddr_t pa, size_t len)
> @@ -6012,7 +6033,7 @@ void free_perdomain_mappings(struct doma
>
> #ifdef MEMORY_GUARD
>
> -void memguard_init(void)
> +void __init memguard_init(void)
> {
> unsigned long start = max_t(unsigned long, xen_phys_start, 1UL << 20);
> map_pages_to_xen(
> --- a/xen/arch/x86/x86_64/mmconfig_64.c
> +++ b/xen/arch/x86/x86_64/mmconfig_64.c
> @@ -118,6 +118,8 @@ static void __iomem *mcfg_ioremap(const
> unsigned long idx, unsigned int prot)
> {
> unsigned long virt, size;
> + int rc;
> + static DEFINE_SPINLOCK(lock);
>
> virt = PCI_MCFG_VIRT_START + (idx << mmcfg_pci_segment_shift) +
> (cfg->start_bus_number << 20);
> @@ -125,10 +127,13 @@ static void __iomem *mcfg_ioremap(const
> if (virt + size < virt || virt + size > PCI_MCFG_VIRT_END)
> return NULL;
>
> - if (map_pages_to_xen(virt,
> - (cfg->address >> PAGE_SHIFT) +
> - (cfg->start_bus_number << (20 - PAGE_SHIFT)),
> - size >> PAGE_SHIFT, prot))
> + spin_lock(&lock);
> + rc = map_pages_to_xen(virt,
> + (cfg->address >> PAGE_SHIFT) +
> + (cfg->start_bus_number << (20 - PAGE_SHIFT)),
> + size >> PAGE_SHIFT, prot);
> + spin_unlock(&lock);
> + if ( rc )
> return NULL;
>
> return (void __iomem *) virt;
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -179,7 +179,12 @@ void *__vmap(const unsigned long *mfn, u
>
> for ( ; va && nr--; ++mfn, cur += PAGE_SIZE * granularity )
> {
> - if ( map_pages_to_xen(cur, *mfn, granularity, flags) )
> + int rc;
> +
> + spin_lock(&vm_lock);
> + rc = map_pages_to_xen(cur, *mfn, granularity, flags);
> + spin_unlock(&vm_lock);
> + if ( rc )
> {
> vunmap(va);
> va = NULL;
> @@ -198,7 +203,9 @@ void vunmap(const void *va)
> {
> unsigned long addr = (unsigned long)va;
>
> + spin_lock(&vm_lock);
> destroy_xen_mappings(addr, addr + PAGE_SIZE * vm_size(va));
> + spin_unlock(&vm_lock);
> vm_free(va);
> }
> #endif
>
>
Thread overview: 13+ messages
2013-07-11 11:30 [PATCH] add locking around certain calls to map_pages_to_xen() Jan Beulich
2013-07-11 11:37 ` Andrew Cooper
2013-07-11 11:56 ` Jan Beulich
2013-07-12 8:17 ` [PATCH v2] " Jan Beulich
2013-07-12 9:48 ` Andrew Cooper [this message]
2013-07-12 12:15 ` Keir Fraser
2013-07-12 12:44 ` Jan Beulich
2013-07-12 13:37 ` Keir Fraser
2013-07-12 13:41 ` Jan Beulich
2013-07-12 14:01 ` Keir Fraser
2013-07-12 14:30 ` Jan Beulich
2013-07-15 8:24 ` Jan Beulich
2013-07-15 8:36 ` Keir Fraser