From: Dave Hansen <dave@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Gleb Natapov <gleb@redhat.com>,
"H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, Marcelo Tosatti <mtosatti@redhat.com>,
Rik van Riel <riel@redhat.com>,
Dave Hansen <dave@linux.vnet.ibm.com>
Subject: [PATCH 4/5] create slow_virt_to_phys()
Date: Mon, 21 Jan 2013 09:52:49 -0800 [thread overview]
Message-ID: <20130121175249.AFE9EAD7@kernel.stglabs.ibm.com> (raw)
In-Reply-To: <20130121175244.E5839E06@kernel.stglabs.ibm.com>

This is necessary because __pa() does not work on some kinds of
memory, like vmalloc() or the alloc_remap() areas on 32-bit
NUMA systems. We have some functions to do conversions _like_
this in the vmalloc() code (like vmalloc_to_page()), but they
do not work on sizes other than 4k pages. We would potentially
need to be able to handle all the page sizes that we use for
the kernel linear mapping (4k, 2M, 1G).
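
For reference, a minimal sketch of why this is so (illustration
only, not the kernel's actual definition; the real 32-bit
__phys_addr() also has a DEBUG_VIRTUAL variant, and 64-bit
additionally handles the __START_KERNEL_map region):

	/*
	 * Illustration only: __pa() boils down to linear-map
	 * arithmetic, so it is only meaningful for addresses
	 * inside the direct mapping.
	 */
	static inline unsigned long pa_linear_only(unsigned long virt)
	{
		return virt - PAGE_OFFSET; /* garbage for vmalloc()/alloc_remap() */
	}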

In practice, on 32-bit NUMA systems, the percpu areas get stuck
in the alloc_remap() area. Any __pa() call on them will break
and basically return garbage.

This patch introduces a new function, slow_virt_to_phys(), which
walks the kernel page tables on x86. It does the same logical
thing as __pa(), but works on a wider range of memory: the
normal linear mapping, vmalloc(), kmap(), and so on.
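
As a usage sketch (hypothetical caller; "some_percpu_struct" is
just a stand-in for a percpu variable such as the KVM steal-time
area fixed up later in this series):

	/* Hypothetical percpu variable that may end up in the
	 * alloc_remap() area on a 32-bit NUMA box. */
	void *vaddr = this_cpu_ptr(&some_percpu_struct);
	phys_addr_t pa;

	pa = __pa(vaddr);              /* can return garbage here         */
	pa = slow_virt_to_phys(vaddr); /* walks the page tables; handles
	                                  the linear map, vmalloc(), ...  */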
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h |    1 
 linux-2.6.git-dave/arch/x86/mm/pageattr.c               |   31 ++++++++++++++++
 2 files changed, 32 insertions(+)
diff -puN arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys arch/x86/include/asm/pgtable_types.h
--- linux-2.6.git/arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys 2013-01-17 10:22:26.590434129 -0800
+++ linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h 2013-01-17 10:22:26.598434199 -0800
@@ -352,6 +352,7 @@ static inline void update_page_count(int
* as a pte too.
*/
extern pte_t *lookup_address(unsigned long address, unsigned int *level);
+extern phys_addr_t slow_virt_to_phys(void *__address);
#endif /* !__ASSEMBLY__ */
diff -puN arch/x86/mm/pageattr.c~create-slow_virt_to_phys arch/x86/mm/pageattr.c
--- linux-2.6.git/arch/x86/mm/pageattr.c~create-slow_virt_to_phys 2013-01-17 10:22:26.594434163 -0800
+++ linux-2.6.git-dave/arch/x86/mm/pageattr.c 2013-01-17 10:22:26.598434199 -0800
@@ -364,6 +364,37 @@ pte_t *lookup_address(unsigned long addr
EXPORT_SYMBOL_GPL(lookup_address);
/*
+ * This is necessary because __pa() does not work on some
+ * kinds of memory, like vmalloc() or the alloc_remap()
+ * areas on 32-bit NUMA systems. The percpu areas can
+ * end up in this kind of memory, for instance.
+ *
+ * This could be optimized, but it is only intended to be
+ * used at initialization time, and keeping it
+ * unoptimized should increase the testing coverage for
+ * the more obscure platforms.
+ */
+phys_addr_t slow_virt_to_phys(void *__virt_addr)
+{
+	unsigned long virt_addr = (unsigned long)__virt_addr;
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	unsigned int level = -1;
+	unsigned long psize = 0;
+	unsigned long pmask = 0;
+	pte_t *pte;
+
+	pte = lookup_address(virt_addr, &level);
+	BUG_ON(!pte);
+	psize = page_level_size(level);
+	pmask = page_level_mask(level);
+	offset = virt_addr & ~pmask;
+	phys_addr = pte_pfn(*pte) << PAGE_SHIFT;
+	return (phys_addr | offset);
+}
+EXPORT_SYMBOL_GPL(slow_virt_to_phys);
+
+/*
* Set the new pmd in all the pgds we know about:
*/
static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
_