From: Gleb Natapov <gleb@redhat.com>
To: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Rik van Riel <riel@redhat.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	mtosatti@redhat.com
Subject: Re: [RFCv2][PATCH 1/3] create slow_virt_to_phys()
Date: Sun, 9 Dec 2012 16:06:53 +0200
Message-ID: <20121209140653.GA29003@redhat.com>
In-Reply-To: <20121207213023.AA3AFF11@kernel.stglabs.ibm.com>

Copying more people. Is this approach good? The alternative would be to
allocate NR_CPUS-sized arrays in KVM.
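To spell that alternative out: instead of percpu data that later needs a
virt->phys conversion, KVM could keep statically sized arrays indexed by
cpu, roughly like this (using the steal-time record purely as an
illustration; names and details approximate):

        static struct kvm_steal_time steal_time[NR_CPUS] __aligned(64);
        ...
        wrmsrl(MSR_KVM_STEAL_TIME, (__pa(&steal_time[cpu]) | KVM_MSR_ENABLED));

A static array lives in the kernel linear mapping, where plain __pa() is
always valid, but we would pay for NR_CPUS entries even on small machines.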

On Fri, Dec 07, 2012 at 04:30:23PM -0500, Dave Hansen wrote:
> 
> This is necessary because __pa() does not work on some kinds of
> memory, like vmalloc() or the alloc_remap() areas on 32-bit
> NUMA systems.  We have some functions to do conversions _like_
> this in the vmalloc() code (like vmalloc_to_page()), but they
> do not work on sizes other than 4k pages.  We would potentially
> need to be able to handle all the page sizes that we use for
> the kernel linear mapping (4k, 2M, 1G).
> 
> In practice, on 32-bit NUMA systems, the percpu areas get stuck
> in the alloc_remap() area.  Any __pa() call on them will break
> and basically return garbage.
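
To make the failure mode concrete: __pa() essentially just undoes the
linear-map offset, so for anything outside the direct map it silently
produces a bogus physical address. A rough sketch, with a hypothetical
percpu variable purely for illustration:

        static DEFINE_PER_CPU(struct foo, some_percpu_thing);
        ...
        void *p = this_cpu_ptr(&some_percpu_thing);
        phys_addr_t pa = __pa(p);       /* fine in the direct map, garbage
                                         * if the percpu area was placed in
                                         * alloc_remap()/vmalloc space */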
> 
> This patch introduces a new function slow_virt_to_phys(), which
> walks the kernel page tables on x86 and should do precisely
> the same logical thing as __pa(), but actually work on a wider
> range of memory.  It should work on the normal linear mapping,
> vmalloc(), kmap(), etc...
> 
> 
> Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
> ---
> 
>  linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h |    1 
>  linux-2.6.git-dave/arch/x86/mm/pageattr.c               |   47 ++++++++++++++++
>  2 files changed, 48 insertions(+)
> 
> diff -puN arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys arch/x86/include/asm/pgtable_types.h
> --- linux-2.6.git/arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys	2012-12-07 16:25:16.317592189 -0500
> +++ linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h	2012-12-07 16:25:16.321592224 -0500
> @@ -332,6 +332,7 @@ static inline void update_page_count(int
>   * as a pte too.
>   */
>  extern pte_t *lookup_address(unsigned long address, unsigned int *level);
> +extern phys_addr_t slow_virt_to_phys(void *__address);
>  
>  #endif	/* !__ASSEMBLY__ */
>  
> diff -puN arch/x86/mm/pageattr.c~create-slow_virt_to_phys arch/x86/mm/pageattr.c
> --- linux-2.6.git/arch/x86/mm/pageattr.c~create-slow_virt_to_phys	2012-12-07 16:25:16.317592189 -0500
> +++ linux-2.6.git-dave/arch/x86/mm/pageattr.c	2012-12-07 16:28:20.675189758 -0500
> @@ -364,6 +364,53 @@ pte_t *lookup_address(unsigned long addr
>  EXPORT_SYMBOL_GPL(lookup_address);
>  
>  /*
> + * This is necessary because __pa() does not work on some
> + * kinds of memory, like vmalloc() or the alloc_remap()
> + * areas on 32-bit NUMA systems.  The percpu areas can
> + * end up in this kind of memory, for instance.
> + *
> + * This could be optimized, but it is only intended to be
> + * used at initialization time, and keeping it
> + * unoptimized should increase the testing coverage for
> + * the more obscure platforms.
> + */
> +phys_addr_t slow_virt_to_phys(void *__virt_addr)
> +{
> +	unsigned long virt_addr = (unsigned long)__virt_addr;
> +	phys_addr_t phys_addr;
> +	unsigned long offset;
> +	unsigned int level = -1;
> +	unsigned long psize = 0;
> +	unsigned long pmask = 0;
> +	pte_t *pte;
> +
> +	pte = lookup_address(virt_addr, &level);
> +	BUG_ON(!pte);
> +	switch (level) {
> +	case PG_LEVEL_4K:
> +		psize = PAGE_SIZE;
> +		pmask = PAGE_MASK;
> +		break;
> +	case PG_LEVEL_2M:
> +		psize = PMD_PAGE_SIZE;
> +		pmask = PMD_PAGE_MASK;
> +		break;
> +#ifdef CONFIG_X86_64
> +	case PG_LEVEL_1G:
> +		psize = PUD_PAGE_SIZE;
> +		pmask = PUD_PAGE_MASK;
> +		break;
> +#endif
> +	default:
> +		BUG();
> +	}
> +	offset = virt_addr & ~pmask;
> +	phys_addr = pte_pfn(*pte) << PAGE_SHIFT;
> +	return (phys_addr | offset);
> +}
> +EXPORT_SYMBOL_GPL(slow_virt_to_phys);
> +
> +/*
>   * Set the new pmd in all the pgds we know about:
>   */
>  static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
> _
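
For reference, the consumer side (patch 2/3) then becomes a mostly
mechanical substitution on the percpu users, along these lines
(illustrative only; see that patch for the actual conversion):

        struct kvm_steal_time *st = &per_cpu(steal_time, cpu);

        /* before: breaks when the percpu area is outside the direct map */
        wrmsrl(MSR_KVM_STEAL_TIME, (__pa(st) | KVM_MSR_ENABLED));
        /* after: walks the page tables, so any mapped kernel address works */
        wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));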

--
			Gleb.
