linux-mm.kvack.org archive mirror
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-mm@kvack.org,
	akpm@linux-foundation.org, nios2-dev@lists.rocketboards.org,
	lftan@altera.com, jonas@southpole.se, linux@lists.openrisc.net
Cc: mark.rutland@arm.com, steve.capper@linaro.org,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 9/9] mm: replace open coded page to virt conversion with page_to_virt()
Date: Wed, 30 Mar 2016 16:46:04 +0200	[thread overview]
Message-ID: <1459349164-27175-10-git-send-email-ard.biesheuvel@linaro.org> (raw)
In-Reply-To: <1459349164-27175-1-git-send-email-ard.biesheuvel@linaro.org>

The open coded conversion from struct page address to virtual address in
lowmem_page_address() involves an intermediate conversion step to pfn
number/physical address. Since the placement of the struct page array
relative to the linear mapping may be completely independent from the
placement of physical RAM (as is the case for arm64 after commit
dfd55ad85e 'arm64: vmemmap: use virtual projection of linear region'),
the conversion to physical address and back again should factor out of
the equation. Unfortunately, the shifting and pointer arithmetic
involved prevent this from happening: the resulting calculation
essentially subtracts the address of the start of physical memory and
then adds it back, in a way the compiler cannot optimize away.
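
To make the round trip concrete, the sketch below shows roughly what the
open coded expression expands to on arm64 with SPARSEMEM_VMEMMAP, where
PHYS_OFFSET resolves to the runtime variable memstart_addr (illustrative
only, not part of this patch, and the helper name is made up):

  static inline void *open_coded_page_address(const struct page *page)
  {
          /* page_to_pfn(): vmemmap itself is defined in terms of memstart_addr */
          unsigned long pfn = page - vmemmap;
          /* PFN_PHYS(): shift up to a physical address, so memstart_addr goes back in */
          phys_addr_t phys = (phys_addr_t)pfn << PAGE_SHIFT;
          /* __va(): subtract PHYS_OFFSET (== memstart_addr) and add PAGE_OFFSET */
          return (void *)((unsigned long)phys - PHYS_OFFSET + PAGE_OFFSET);
  }

Since memstart_addr is only known at boot, the add and the subtract of
it cannot be folded away at compile time.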

Since the start of physical memory is not a build time constant on arm64,
the resulting conversion involves an unnecessary memory access, which
we would like to get rid of. So replace the open coded conversion with
a call to page_to_virt(), and use the open coded conversion as its
default definition, to be overridden by the architecture if desired.
The existing arch specific definitions of page_to_virt are all equivalent
to this default definition, so by itself this patch is a no-op.
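
As a point of reference, an architecture whose vmemmap region is a
virtual projection of the linear region could later override the default
with a direct conversion along the following lines (hypothetical sketch,
not taken from this series; it assumes the struct page describing the
page at PAGE_OFFSET lives at VMEMMAP_START):

  #define page_to_virt(page)	\
  	((void *)(PAGE_OFFSET +	\
  	 (((unsigned long)(page) - VMEMMAP_START) / sizeof(struct page)) * PAGE_SIZE))

Such a definition never references memstart_addr at all, which is the
point of making the default overridable here.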

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/mm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed6407d1b7b5..474c4625756e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -72,6 +72,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
 #endif
 
+#ifndef page_to_virt
+#define page_to_virt(x)	__va(PFN_PHYS(page_to_pfn(x)))
+#endif
+
 /*
  * To prevent common memory management code establishing
  * a zero page mapping on a read fault.
@@ -948,7 +952,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 
 static __always_inline void *lowmem_page_address(const struct page *page)
 {
-	return __va(PFN_PHYS(page_to_pfn(page)));
+	return page_to_virt(page);
 }
 
 #if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
-- 
2.5.0


Thread overview: 13+ messages
2016-03-30 14:45 [PATCH v2 0/9] arm64: optimize virt_to_page and page_address Ard Biesheuvel
2016-03-30 14:45 ` [PATCH v2 1/9] arm64: vdso: avoid virt_to_page() translations on kernel symbols Ard Biesheuvel
2016-03-30 14:45 ` [PATCH v2 2/9] arm64: mm: free __init memory via the linear mapping Ard Biesheuvel
2016-03-30 14:45 ` [PATCH v2 3/9] arm64: mm: avoid virt_to_page() translation for the zero page Ard Biesheuvel
2016-03-30 14:45 ` [PATCH v2 4/9] arm64: insn: avoid virt_to_page() translations on core kernel symbols Ard Biesheuvel
2016-03-30 14:46 ` [PATCH v2 5/9] arm64: mm: move vmemmap region right below the linear region Ard Biesheuvel
2016-03-30 14:46 ` [PATCH v2 6/9] arm64: mm: restrict virt_to_page() to the linear mapping Ard Biesheuvel
2016-03-30 14:46 ` [PATCH v2 7/9] nios2: use correct void* return type for page_to_virt() Ard Biesheuvel
2016-03-30 14:46 ` [PATCH v2 8/9] openrisc: drop wrongly typed definition of page_to_virt() Ard Biesheuvel
2016-03-30 14:46 ` Ard Biesheuvel [this message]
2016-04-14 15:25   ` [PATCH v2 9/9] mm: replace open coded page to virt conversion with page_to_virt() Will Deacon
2016-04-14 15:33     ` Ard Biesheuvel
