linux-mm.kvack.org archive mirror
* [patch 1/6] mm: introduce VM_MIXEDMAP
From: npiggin, Jared Hulbert @ 2008-01-18  4:56 UTC
  To: Linus Torvalds, Andrew Morton
  Cc: Hugh Dickins, Jared Hulbert, Carsten Otte, Martin Schwidefsky,
	Heiko Carstens, linux-mm

[-- Attachment #1: vm-mixedmap.patch --]
[-- Type: text/plain, Size: 6992 bytes --]

Introduce a new type of mapping, VM_MIXEDMAP. Unlike VM_PFNMAP, it can
support COW mappings of arbitrary ranges, including ranges without struct
page (VM_PFNMAP can only support COW in those cases where the un-COW-ed
translations are mapped linearly into the virtual address space).

VM_MIXEDMAP achieves this by refcounting all pfn_valid pages, and not
refcounting !pfn_valid pages (which is not an option for VM_PFNMAP, because
it needs to avoid refcounting pfn_valid pages, e.g. for /dev/mem mappings).

(Needs Jared's SOB)
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-mm@kvack.org
---
 include/linux/mm.h |    1 
 mm/memory.c        |   79 ++++++++++++++++++++++++++++++++++++++---------------
 2 files changed, 59 insertions(+), 21 deletions(-)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -106,6 +106,7 @@ extern unsigned int kobjsize(const void 
 #define VM_ALWAYSDUMP	0x04000000	/* Always include in core dumps */
 
 #define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
+#define VM_MIXEDMAP	0x10000000	/* Can contain "struct page" and pure PFN pages */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -361,35 +361,65 @@ static inline int is_cow_mapping(unsigne
 }
 
 /*
- * This function gets the "struct page" associated with a pte.
+ * This function gets the "struct page" associated with a pte or returns
+ * NULL if no "struct page" is associated with the pte.
  *
- * NOTE! Some mappings do not have "struct pages". A raw PFN mapping
- * will have each page table entry just pointing to a raw page frame
- * number, and as far as the VM layer is concerned, those do not have
- * pages associated with them - even if the PFN might point to memory
+ * A raw VM_PFNMAP mapping (ie. one that is not COWed) may not have any "struct
+ * page" backing, and even if they do, they are not refcounted. COWed pages of
+ * a VM_PFNMAP do always have a struct page, and they are normally refcounted
+ * (they are _normal_ pages).
+ *
+ * So a raw PFNMAP mapping will have each page table entry just pointing
+ * to a page frame number, and as far as the VM layer is concerned, those do
+ * not have pages associated with them - even if the PFN might point to memory
  * that otherwise is perfectly fine and has a "struct page".
  *
- * The way we recognize those mappings is through the rules set up
- * by "remap_pfn_range()": the vma will have the VM_PFNMAP bit set,
- * and the vm_pgoff will point to the first PFN mapped: thus every
+ * The way we recognize COWed pages within VM_PFNMAP mappings is through the
+ * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
+ * set, and the vm_pgoff will point to the first PFN mapped: thus every
  * page that is a raw mapping will always honor the rule
  *
  *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
  *
- * and if that isn't true, the page has been COW'ed (in which case it
- * _does_ have a "struct page" associated with it even if it is in a
- * VM_PFNMAP range).
+ * A call to vm_normal_page() will return NULL for such a page.
+ *
+ * If the page doesn't follow the "remap_pfn_range()" rule in a VM_PFNMAP
+ * then the page has been COW'ed.  A COW'ed page _does_ have a "struct page"
+ * associated with it even if it is in a VM_PFNMAP range.  Calling
+ * vm_normal_page() on such a page will therefore return the "struct page".
+ *
+ *
+ * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
+ * page" backing, however the difference is that _all_ pages with a struct
+ * page (that is, those where pfn_valid is true) are refcounted and considered
+ * normal pages by the VM. The disadvantage is that pages are refcounted
+ * (which can be slower and simply not an option for some PFNMAP users). The
+ * advantage is that we don't have to follow the strict linearity rule of
+ * PFNMAP mappings in order to support COWable mappings.
+ *
+ * A call to vm_normal_page() with a VM_MIXEDMAP mapping will return the
+ * associated "struct page" or NULL for memory not backed by a "struct page".
+ *
+ *
+ * All other mappings should have a valid struct page, which will be
+ * returned by a call to vm_normal_page().
  */
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
 
-	if (unlikely(vma->vm_flags & VM_PFNMAP)) {
-		unsigned long off = (addr - vma->vm_start) >> PAGE_SHIFT;
-		if (pfn == vma->vm_pgoff + off)
-			return NULL;
-		if (!is_cow_mapping(vma->vm_flags))
-			return NULL;
+	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+		if (vma->vm_flags & VM_MIXEDMAP) {
+			if (!pfn_valid(pfn))
+				return NULL;
+			goto out;
+		} else {
+			unsigned long off = (addr-vma->vm_start) >> PAGE_SHIFT;
+			if (pfn == vma->vm_pgoff + off)
+				return NULL;
+			if (!is_cow_mapping(vma->vm_flags))
+				return NULL;
+		}
 	}
 
 #ifdef CONFIG_DEBUG_VM
@@ -412,6 +442,7 @@ struct page *vm_normal_page(struct vm_ar
 	 * The PAGE_ZERO() pages and various VDSO mappings can
 	 * cause them to exist.
 	 */
+out:
 	return pfn_to_page(pfn);
 }
 
@@ -1213,8 +1244,11 @@ int vm_insert_pfn(struct vm_area_struct 
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
+	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
+	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
+						(VM_PFNMAP|VM_MIXEDMAP));
+	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
 
 	retval = -ENOMEM;
 	pte = get_locked_pte(mm, addr, &ptl);
@@ -2388,10 +2422,13 @@ static noinline int do_no_pfn(struct mm_
 	unsigned long pfn;
 
 	pte_unmap(page_table);
-	BUG_ON(!(vma->vm_flags & VM_PFNMAP));
-	BUG_ON(is_cow_mapping(vma->vm_flags));
+	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
+	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
 
 	pfn = vma->vm_ops->nopfn(vma, address & PAGE_MASK);
+
+	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
+
 	if (unlikely(pfn == NOPFN_OOM))
 		return VM_FAULT_OOM;
 	else if (unlikely(pfn == NOPFN_SIGBUS))

-- 


* [patch 2/6] mm: introduce pte_special pte bit
From: npiggin @ 2008-01-18  4:56 UTC
  To: Linus Torvalds, Andrew Morton
  Cc: Hugh Dickins, Jared Hulbert, Carsten Otte, Martin Schwidefsky,
	Heiko Carstens, linux-arch, linux-mm

[-- Attachment #1: mm-normal-pte-bit.patch --]
[-- Type: text/plain, Size: 31079 bytes --]

s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to its
memory model (which is more dynamic than most). Instead, they had proposed
implementing it with an additional path through vm_normal_page(), using a
bit in the pte to determine whether or not the page should be refcounted:

vm_normal_page()
{
	...
        if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                if (vma->vm_flags & VM_MIXEDMAP) {
#ifdef s390
			if (!mixedmap_refcount_pte(pte))
				return NULL;
#else
                        if (!pfn_valid(pfn))
                                return NULL;
#endif
                        goto out;
                }
	...
}

This is fine; however, if we are allowed to use a bit in the pte to
determine refcountedness, we can use that to _completely_ replace all the
vma-based schemes. So instead of adding more cases to the already complex
vma-based scheme, we can have a clearly separate and simple pte-based scheme
(and get slightly better code generation in the process):

vm_normal_page()
{
#ifdef s390
	if (!mixedmap_refcount_pte(pte))
		return NULL;
	return pte_page(pte);
#else
	...
#endif
}

Finally, we would rather make this concept usable by any architecture than
make it s390-only, so implement a new type of pte state for this.
Unfortunately the old vma-based code must stay, because some architectures
may not be able to spare pte bits. This makes vm_normal_page a little
uglier than we would like, but the two cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is
currently a no-op for all architectures, so this doesn't actually result
in any compiled code changes to mm/memory.o.
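
For reference, an architecture that does have a spare pte bit would wire
the new state up roughly as follows. This is a hedged sketch: the
_PAGE_SPECIAL name and the availability of a free software bit are
assumptions, not taken from any architecture converted here:

        #define __HAVE_ARCH_PTE_SPECIAL

        static inline int pte_special(pte_t pte)
        {
                return pte_val(pte) & _PAGE_SPECIAL;    /* assumed spare sw bit */
        }

        static inline pte_t pte_mkspecial(pte_t pte)
        {
                pte_val(pte) |= _PAGE_SPECIAL;
                return pte;
        }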

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/asm-alpha/pgtable.h         |    2 
 include/asm-avr32/pgtable.h         |    8 ++
 include/asm-cris/pgtable.h          |    2 
 include/asm-frv/pgtable.h           |    2 
 include/asm-ia64/pgtable.h          |    3 +
 include/asm-m32r/pgtable.h          |   10 +++
 include/asm-m68k/motorola_pgtable.h |    2 
 include/asm-m68k/sun3_pgtable.h     |    2 
 include/asm-mips/pgtable.h          |    2 
 include/asm-parisc/pgtable.h        |    2 
 include/asm-powerpc/pgtable-ppc32.h |    3 +
 include/asm-powerpc/pgtable-ppc64.h |    3 +
 include/asm-ppc/pgtable.h           |    3 +
 include/asm-s390/pgtable.h          |   10 +++
 include/asm-sh64/pgtable.h          |    2 
 include/asm-sparc/pgtable.h         |    7 ++
 include/asm-sparc64/pgtable.h       |   10 +++
 include/asm-um/pgtable.h            |   10 +++
 include/asm-x86/pgtable_32.h        |    2 
 include/asm-x86/pgtable_64.h        |    2 
 include/asm-xtensa/pgtable.h        |    4 +
 include/linux/mm.h                  |    3 -
 mm/memory.c                         |   98 +++++++++++++++++++-----------------
 23 files changed, 147 insertions(+), 45 deletions(-)

Index: linux-2.6/include/asm-um/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-um/pgtable.h
+++ linux-2.6/include/asm-um/pgtable.h
@@ -220,6 +220,11 @@ static inline int pte_newprot(pte_t pte)
 	return(pte_present(pte) && (pte_get_bits(pte, _PAGE_NEWPROT)));
 }
 
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
+
 /*
  * =================================
  * Flags setting section.
@@ -288,6 +293,11 @@ static inline pte_t pte_mknewpage(pte_t 
 	return(pte);
 }
 
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return(pte);
+}
+
 static inline void set_pte(pte_t *pteptr, pte_t pteval)
 {
 	pte_copy(*pteptr, pteval);
Index: linux-2.6/include/asm-x86/pgtable_32.h
===================================================================
--- linux-2.6.orig/include/asm-x86/pgtable_32.h
+++ linux-2.6/include/asm-x86/pgtable_32.h
@@ -219,6 +219,7 @@ static inline int pte_dirty(pte_t pte)		
 static inline int pte_young(pte_t pte)		{ return (pte).pte_low & _PAGE_ACCESSED; }
 static inline int pte_write(pte_t pte)		{ return (pte).pte_low & _PAGE_RW; }
 static inline int pte_huge(pte_t pte)		{ return (pte).pte_low & _PAGE_PSE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 /*
  * The following only works if pte_present() is not true.
@@ -232,6 +233,7 @@ static inline pte_t pte_mkdirty(pte_t pt
 static inline pte_t pte_mkyoung(pte_t pte)	{ (pte).pte_low |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkwrite(pte_t pte)	{ (pte).pte_low |= _PAGE_RW; return pte; }
 static inline pte_t pte_mkhuge(pte_t pte)	{ (pte).pte_low |= _PAGE_PSE; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 #ifdef CONFIG_X86_PAE
 # include <asm/pgtable-3level.h>
Index: linux-2.6/include/asm-x86/pgtable_64.h
===================================================================
--- linux-2.6.orig/include/asm-x86/pgtable_64.h
+++ linux-2.6/include/asm-x86/pgtable_64.h
@@ -272,6 +272,7 @@ static inline int pte_young(pte_t pte)		
 static inline int pte_write(pte_t pte)		{ return pte_val(pte) & _PAGE_RW; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
 static inline int pte_huge(pte_t pte)		{ return pte_val(pte) & _PAGE_PSE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_mkclean(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
 static inline pte_t pte_mkold(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
@@ -282,6 +283,7 @@ static inline pte_t pte_mkyoung(pte_t pt
 static inline pte_t pte_mkwrite(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) | _PAGE_RW)); return pte; }
 static inline pte_t pte_mkhuge(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) | _PAGE_PSE)); return pte; }
 static inline pte_t pte_clrhuge(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_PSE)); return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 struct vm_area_struct;
 
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -699,7 +699,8 @@ struct zap_details {
 	unsigned long truncate_count;		/* Compare vm_truncate_count */
 };
 
-struct page *vm_normal_page(struct vm_area_struct *, unsigned long, pte_t);
+struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte);
+
 unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size, struct zap_details *);
 unsigned long unmap_vmas(struct mmu_gather **tlb,
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -361,34 +361,38 @@ static inline int is_cow_mapping(unsigne
 }
 
 /*
- * This function gets the "struct page" associated with a pte or returns
- * NULL if no "struct page" is associated with the pte.
+ * vm_normal_page -- This function gets the "struct page" associated with a pte.
  *
- * A raw VM_PFNMAP mapping (ie. one that is not COWed) may not have any "struct
- * page" backing, and even if they do, they are not refcounted. COWed pages of
- * a VM_PFNMAP do always have a struct page, and they are normally refcounted
- * (they are _normal_ pages).
- *
- * So a raw PFNMAP mapping will have each page table entry just pointing
- * to a page frame number, and as far as the VM layer is concerned, those do
- * not have pages associated with them - even if the PFN might point to memory
- * that otherwise is perfectly fine and has a "struct page".
+ * "Special" mappings do not wish to be associated with a "struct page" (either
+ * it doesn't exist, or it exists but they don't want to touch it). In this
+ * case, NULL is returned here. "Normal" mappings do have a struct page.
+ *
+ * There are 2 broad cases. Firstly, an architecture may define a pte_special()
+ * pte bit, in which case this function is trivial. Secondly, an architecture
+ * may not have a spare pte bit, which requires a more complicated scheme,
+ * described below.
+ *
+ * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
+ * special mapping (even if there are underlying and valid "struct pages").
+ * COWed pages of a VM_PFNMAP are always normal.
  *
  * The way we recognize COWed pages within VM_PFNMAP mappings is through the
  * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
- * set, and the vm_pgoff will point to the first PFN mapped: thus every
- * page that is a raw mapping will always honor the rule
+ * set, and the vm_pgoff will point to the first PFN mapped: thus every special
+ * mapping will always honor the rule
  *
  *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
  *
- * A call to vm_normal_page() will return NULL for such a page.
+ * And for normal mappings this is false.
  *
- * If the page doesn't follow the "remap_pfn_range()" rule in a VM_PFNMAP
- * then the page has been COW'ed.  A COW'ed page _does_ have a "struct page"
- * associated with it even if it is in a VM_PFNMAP range.  Calling
- * vm_normal_page() on such a page will therefore return the "struct page".
+ * This restricts such mappings to be a linear translation from virtual address
+ * to pfn. To get around this restriction, we allow arbitrary mappings so long
+ * as the vma is not a COW mapping; in that case, we know that all ptes are
+ * special (because none can have been COWed).
  *
  *
+ * In order to support COW of arbitrary special mappings, we have VM_MIXEDMAP.
+ *
  * VM_MIXEDMAP mappings can likewise contain memory with or without "struct
  * page" backing, however the difference is that _all_ pages with a struct
  * page (that is, those where pfn_valid is true) are refcounted and considered
@@ -397,16 +401,28 @@ static inline int is_cow_mapping(unsigne
  * advantage is that we don't have to follow the strict linearity rule of
  * PFNMAP mappings in order to support COWable mappings.
  *
- * A call to vm_normal_page() with a VM_MIXEDMAP mapping will return the
- * associated "struct page" or NULL for memory not backed by a "struct page".
- *
- *
- * All other mappings should have a valid struct page, which will be
- * returned by a call to vm_normal_page().
  */
+#ifdef __HAVE_ARCH_PTE_SPECIAL
+# define HAVE_PTE_SPECIAL 1
+#else
+# define HAVE_PTE_SPECIAL 0
+#endif
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
 {
-	unsigned long pfn = pte_pfn(pte);
+	unsigned long pfn;
+
+	if (HAVE_PTE_SPECIAL) {
+		if (likely(!pte_special(pte))) {
+			VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+			return pte_page(pte);
+		}
+		VM_BUG_ON(!(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
+		return NULL;
+	}
+
+	/* !HAVE_PTE_SPECIAL case follows: */
+
+	pfn = pte_pfn(pte);
 
 	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
 		if (vma->vm_flags & VM_MIXEDMAP) {
@@ -414,7 +430,8 @@ struct page *vm_normal_page(struct vm_ar
 				return NULL;
 			goto out;
 		} else {
-			unsigned long off = (addr-vma->vm_start) >> PAGE_SHIFT;
+			unsigned long off;
+			off = (addr - vma->vm_start) >> PAGE_SHIFT;
 			if (pfn == vma->vm_pgoff + off)
 				return NULL;
 			if (!is_cow_mapping(vma->vm_flags))
@@ -422,25 +439,12 @@ struct page *vm_normal_page(struct vm_ar
 		}
 	}
 
-#ifdef CONFIG_DEBUG_VM
-	/*
-	 * Add some anal sanity checks for now. Eventually,
-	 * we should just do "return pfn_to_page(pfn)", but
-	 * in the meantime we check that we get a valid pfn,
-	 * and that the resulting page looks ok.
-	 */
-	if (unlikely(!pfn_valid(pfn))) {
-		print_bad_pte(vma, pte, addr);
-		return NULL;
-	}
-#endif
+	VM_BUG_ON(!pfn_valid(pfn));
 
 	/*
-	 * NOTE! We still have PageReserved() pages in the page 
-	 * tables. 
+	 * NOTE! We still have PageReserved() pages in the page tables.
 	 *
-	 * The PAGE_ZERO() pages and various VDSO mappings can
-	 * cause them to exist.
+	 * eg. VDSO mappings can cause them to exist.
 	 */
 out:
 	return pfn_to_page(pfn);
@@ -1244,6 +1248,12 @@ int vm_insert_pfn(struct vm_area_struct 
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
+	/*
+	 * Technically, architectures with pte_special can avoid all these
+	 * restrictions (same for remap_pfn_range).  However we would like
+	 * consistency in testing and feature parity among all, so we should
+	 * try to keep these invariants in place for everybody.
+	 */
 	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 						(VM_PFNMAP|VM_MIXEDMAP));
@@ -1259,7 +1269,7 @@ int vm_insert_pfn(struct vm_area_struct 
 		goto out_unlock;
 
 	/* Ok, finally just insert the thing.. */
-	entry = pfn_pte(pfn, vma->vm_page_prot);
+	entry = pte_mkspecial(pfn_pte(pfn, vma->vm_page_prot));
 	set_pte_at(mm, addr, pte, entry);
 	update_mmu_cache(vma, addr, entry);
 
@@ -1290,7 +1300,7 @@ static int remap_pte_range(struct mm_str
 	arch_enter_lazy_mmu_mode();
 	do {
 		BUG_ON(!pte_none(*pte));
-		set_pte_at(mm, addr, pte, pfn_pte(pfn, prot));
+		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
Index: linux-2.6/include/asm-alpha/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-alpha/pgtable.h
+++ linux-2.6/include/asm-alpha/pgtable.h
@@ -268,6 +268,7 @@ extern inline int pte_write(pte_t pte)		
 extern inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 extern inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 extern inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+extern inline int pte_special(pte_t pte)	{ return 0; }
 
 extern inline pte_t pte_wrprotect(pte_t pte)	{ pte_val(pte) |= _PAGE_FOW; return pte; }
 extern inline pte_t pte_mkclean(pte_t pte)	{ pte_val(pte) &= ~(__DIRTY_BITS); return pte; }
@@ -275,6 +276,7 @@ extern inline pte_t pte_mkold(pte_t pte)
 extern inline pte_t pte_mkwrite(pte_t pte)	{ pte_val(pte) &= ~_PAGE_FOW; return pte; }
 extern inline pte_t pte_mkdirty(pte_t pte)	{ pte_val(pte) |= __DIRTY_BITS; return pte; }
 extern inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= __ACCESS_BITS; return pte; }
+extern inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 #define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
 
Index: linux-2.6/include/asm-avr32/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-avr32/pgtable.h
+++ linux-2.6/include/asm-avr32/pgtable.h
@@ -211,6 +211,10 @@ static inline int pte_young(pte_t pte)
 {
 	return pte_val(pte) & _PAGE_ACCESSED;
 }
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
 
 /*
  * The following only work if pte_present() is not true.
@@ -251,6 +255,10 @@ static inline pte_t pte_mkyoung(pte_t pt
 	set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED));
 	return pte;
 }
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return pte;
+}
 
 #define pmd_none(x)	(!pmd_val(x))
 #define pmd_present(x)	(pmd_val(x) & _PAGE_PRESENT)
Index: linux-2.6/include/asm-cris/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-cris/pgtable.h
+++ linux-2.6/include/asm-cris/pgtable.h
@@ -115,6 +115,7 @@ static inline int pte_write(pte_t pte)  
 static inline int pte_dirty(pte_t pte)          { return pte_val(pte) & _PAGE_MODIFIED; }
 static inline int pte_young(pte_t pte)          { return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)           { return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
@@ -162,6 +163,7 @@ static inline pte_t pte_mkyoung(pte_t pt
         }
         return pte;
 }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 /*
  * Conversion functions: convert a page and protection to a page entry,
Index: linux-2.6/include/asm-frv/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-frv/pgtable.h
+++ linux-2.6/include/asm-frv/pgtable.h
@@ -380,6 +380,7 @@ static inline pmd_t *pmd_offset(pud_t *d
 static inline int pte_dirty(pte_t pte)		{ return (pte).pte & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte)		{ return (pte).pte & _PAGE_ACCESSED; }
 static inline int pte_write(pte_t pte)		{ return !((pte).pte & _PAGE_WP); }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_mkclean(pte_t pte)	{ (pte).pte &= ~_PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkold(pte_t pte)	{ (pte).pte &= ~_PAGE_ACCESSED; return pte; }
@@ -387,6 +388,7 @@ static inline pte_t pte_wrprotect(pte_t 
 static inline pte_t pte_mkdirty(pte_t pte)	{ (pte).pte |= _PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkyoung(pte_t pte)	{ (pte).pte |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkwrite(pte_t pte)	{ (pte).pte &= ~_PAGE_WP; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
Index: linux-2.6/include/asm-ia64/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/pgtable.h
+++ linux-2.6/include/asm-ia64/pgtable.h
@@ -302,6 +302,8 @@ ia64_phys_addr_valid (unsigned long addr
 #define pte_dirty(pte)		((pte_val(pte) & _PAGE_D) != 0)
 #define pte_young(pte)		((pte_val(pte) & _PAGE_A) != 0)
 #define pte_file(pte)		((pte_val(pte) & _PAGE_FILE) != 0)
+#define pte_special(pte)	0
+
 /*
  * Note: we convert AR_RWX to AR_RX and AR_RW to AR_R by clearing the 2nd bit in the
  * access rights:
@@ -313,6 +315,7 @@ ia64_phys_addr_valid (unsigned long addr
 #define pte_mkclean(pte)	(__pte(pte_val(pte) & ~_PAGE_D))
 #define pte_mkdirty(pte)	(__pte(pte_val(pte) | _PAGE_D))
 #define pte_mkhuge(pte)		(__pte(pte_val(pte)))
+#define pte_mkspecial(pte)	(pte)
 
 /*
  * Because ia64's Icache and Dcache is not coherent (on a cpu), we need to
Index: linux-2.6/include/asm-m32r/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-m32r/pgtable.h
+++ linux-2.6/include/asm-m32r/pgtable.h
@@ -214,6 +214,11 @@ static inline int pte_file(pte_t pte)
 	return pte_val(pte) & _PAGE_FILE;
 }
 
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
+
 static inline pte_t pte_mkclean(pte_t pte)
 {
 	pte_val(pte) &= ~_PAGE_DIRTY;
@@ -250,6 +255,11 @@ static inline pte_t pte_mkwrite(pte_t pt
 	return pte;
 }
 
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return pte;
+}
+
 static inline  int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
 	return test_and_clear_bit(_PAGE_BIT_ACCESSED, ptep);
Index: linux-2.6/include/asm-m68k/motorola_pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-m68k/motorola_pgtable.h
+++ linux-2.6/include/asm-m68k/motorola_pgtable.h
@@ -168,6 +168,7 @@ static inline int pte_write(pte_t pte)		
 static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_wrprotect(pte_t pte)	{ pte_val(pte) |= _PAGE_RONLY; return pte; }
 static inline pte_t pte_mkclean(pte_t pte)	{ pte_val(pte) &= ~_PAGE_DIRTY; return pte; }
@@ -185,6 +186,7 @@ static inline pte_t pte_mkcache(pte_t pt
 	pte_val(pte) = (pte_val(pte) & _CACHEMASK040) | m68k_supervisor_cachemode;
 	return pte;
 }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 #define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
 
Index: linux-2.6/include/asm-m68k/sun3_pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-m68k/sun3_pgtable.h
+++ linux-2.6/include/asm-m68k/sun3_pgtable.h
@@ -169,6 +169,7 @@ static inline int pte_write(pte_t pte)		
 static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & SUN3_PAGE_MODIFIED; }
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & SUN3_PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & SUN3_PAGE_ACCESSED; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_wrprotect(pte_t pte)	{ pte_val(pte) &= ~SUN3_PAGE_WRITEABLE; return pte; }
 static inline pte_t pte_mkclean(pte_t pte)	{ pte_val(pte) &= ~SUN3_PAGE_MODIFIED; return pte; }
@@ -181,6 +182,7 @@ static inline pte_t pte_mknocache(pte_t 
 //static inline pte_t pte_mkcache(pte_t pte)	{ pte_val(pte) &= SUN3_PAGE_NOCACHE; return pte; }
 // until then, use:
 static inline pte_t pte_mkcache(pte_t pte)	{ return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t kernel_pg_dir[PTRS_PER_PGD];
Index: linux-2.6/include/asm-mips/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-mips/pgtable.h
+++ linux-2.6/include/asm-mips/pgtable.h
@@ -285,6 +285,8 @@ static inline pte_t pte_mkyoung(pte_t pt
 	return pte;
 }
 #endif
+static inline int pte_special(pte_t pte)	{ return 0; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 /*
  * Macro to make mark a page protection value as "uncacheable".  Note
Index: linux-2.6/include/asm-parisc/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-parisc/pgtable.h
+++ linux-2.6/include/asm-parisc/pgtable.h
@@ -331,6 +331,7 @@ static inline int pte_dirty(pte_t pte)		
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_write(pte_t pte)		{ return pte_val(pte) & _PAGE_WRITE; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_mkclean(pte_t pte)	{ pte_val(pte) &= ~_PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkold(pte_t pte)	{ pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
@@ -338,6 +339,7 @@ static inline pte_t pte_wrprotect(pte_t 
 static inline pte_t pte_mkdirty(pte_t pte)	{ pte_val(pte) |= _PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkwrite(pte_t pte)	{ pte_val(pte) |= _PAGE_WRITE; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 /*
  * Conversion functions: convert a page and protection to a page entry,
Index: linux-2.6/include/asm-powerpc/pgtable-ppc32.h
===================================================================
--- linux-2.6.orig/include/asm-powerpc/pgtable-ppc32.h
+++ linux-2.6/include/asm-powerpc/pgtable-ppc32.h
@@ -514,6 +514,7 @@ static inline int pte_write(pte_t pte)		
 static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline void pte_uncache(pte_t pte)       { pte_val(pte) |= _PAGE_NO_CACHE; }
 static inline void pte_cache(pte_t pte)         { pte_val(pte) &= ~_PAGE_NO_CACHE; }
@@ -531,6 +532,8 @@ static inline pte_t pte_mkdirty(pte_t pt
 	pte_val(pte) |= _PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkyoung(pte_t pte) {
 	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte) {
+	return pte; }
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
Index: linux-2.6/include/asm-powerpc/pgtable-ppc64.h
===================================================================
--- linux-2.6.orig/include/asm-powerpc/pgtable-ppc64.h
+++ linux-2.6/include/asm-powerpc/pgtable-ppc64.h
@@ -239,6 +239,7 @@ static inline int pte_write(pte_t pte) {
 static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY;}
 static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED;}
 static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE;}
+static inline int pte_special(pte_t pte) { return 0; }
 
 static inline void pte_uncache(pte_t pte) { pte_val(pte) |= _PAGE_NO_CACHE; }
 static inline void pte_cache(pte_t pte)   { pte_val(pte) &= ~_PAGE_NO_CACHE; }
@@ -257,6 +258,8 @@ static inline pte_t pte_mkyoung(pte_t pt
 	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkhuge(pte_t pte) {
 	return pte; }
+static inline pte_t pte_mkspecial(pte_t pte) {
+	return pte; }
 
 /* Atomic PTE updates */
 static inline unsigned long pte_update(struct mm_struct *mm,
Index: linux-2.6/include/asm-ppc/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-ppc/pgtable.h
+++ linux-2.6/include/asm-ppc/pgtable.h
@@ -537,6 +537,7 @@ static inline int pte_write(pte_t pte)		
 static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline void pte_uncache(pte_t pte)       { pte_val(pte) |= _PAGE_NO_CACHE; }
 static inline void pte_cache(pte_t pte)         { pte_val(pte) &= ~_PAGE_NO_CACHE; }
@@ -554,6 +555,8 @@ static inline pte_t pte_mkdirty(pte_t pt
 	pte_val(pte) |= _PAGE_DIRTY; return pte; }
 static inline pte_t pte_mkyoung(pte_t pte) {
 	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte) {
+	return pte; }
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
Index: linux-2.6/include/asm-s390/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-s390/pgtable.h
+++ linux-2.6/include/asm-s390/pgtable.h
@@ -504,6 +504,11 @@ static inline int pte_file(pte_t pte)
 	return (pte_val(pte) & mask) == _PAGE_TYPE_FILE;
 }
 
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
+
 #define __HAVE_ARCH_PTE_SAME
 #define pte_same(a,b)  (pte_val(a) == pte_val(b))
 
@@ -654,6 +659,11 @@ static inline pte_t pte_mkyoung(pte_t pt
 	return pte;
 }
 
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return pte;
+}
+
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long addr, pte_t *ptep)
Index: linux-2.6/include/asm-sh64/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-sh64/pgtable.h
+++ linux-2.6/include/asm-sh64/pgtable.h
@@ -419,6 +419,7 @@ static inline int pte_dirty(pte_t pte){ 
 static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
 static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+static inline int pte_special(pte_t pte)	{ return 0; }
 
 static inline pte_t pte_wrprotect(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
 static inline pte_t pte_mkclean(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
@@ -427,6 +428,7 @@ static inline pte_t pte_mkwrite(pte_t pt
 static inline pte_t pte_mkdirty(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
 static inline pte_t pte_mkyoung(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
 static inline pte_t pte_mkhuge(pte_t pte)	{ set_pte(&pte, __pte(pte_val(pte) | _PAGE_SZHUGE)); return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)	{ return pte; }
 
 
 /*
Index: linux-2.6/include/asm-sparc/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-sparc/pgtable.h
+++ linux-2.6/include/asm-sparc/pgtable.h
@@ -219,6 +219,11 @@ static inline int pte_file(pte_t pte)
 	return pte_val(pte) & BTFIXUP_HALF(pte_filei);
 }
 
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
+
 /*
  */
 BTFIXUPDEF_HALF(pte_wrprotecti)
@@ -251,6 +256,8 @@ BTFIXUPDEF_CALL_CONST(pte_t, pte_mkyoung
 #define pte_mkdirty(pte) BTFIXUP_CALL(pte_mkdirty)(pte)
 #define pte_mkyoung(pte) BTFIXUP_CALL(pte_mkyoung)(pte)
 
+#define pte_mkspecial(pte)    (pte)
+
 #define pfn_pte(pfn, prot)		mk_pte(pfn_to_page(pfn), prot)
 
 BTFIXUPDEF_CALL(unsigned long,	 pte_pfn, pte_t)
Index: linux-2.6/include/asm-sparc64/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-sparc64/pgtable.h
+++ linux-2.6/include/asm-sparc64/pgtable.h
@@ -506,6 +506,11 @@ static inline pte_t pte_mkyoung(pte_t pt
 	return __pte(pte_val(pte) | mask);
 }
 
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+	return pte;
+}
+
 static inline unsigned long pte_young(pte_t pte)
 {
 	unsigned long mask;
@@ -608,6 +613,11 @@ static inline unsigned long pte_present(
 	return val;
 }
 
+static inline int pte_special(pte_t pte)
+{
+	return 0;
+}
+
 #define pmd_set(pmdp, ptep)	\
 	(pmd_val(*(pmdp)) = (__pa((unsigned long) (ptep)) >> 11UL))
 #define pud_set(pudp, pmdp)	\
Index: linux-2.6/include/asm-xtensa/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-xtensa/pgtable.h
+++ linux-2.6/include/asm-xtensa/pgtable.h
@@ -212,6 +212,8 @@ static inline int pte_write(pte_t pte) {
 static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)  { return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte) { return 0; }
+
 static inline pte_t pte_wrprotect(pte_t pte)	
 	{ pte_val(pte) &= ~(_PAGE_WRITABLE | _PAGE_HW_WRITE); return pte; }
 static inline pte_t pte_mkclean(pte_t pte)
@@ -224,6 +226,8 @@ static inline pte_t pte_mkyoung(pte_t pt
 	{ pte_val(pte) |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkwrite(pte_t pte)
 	{ pte_val(pte) |= _PAGE_WRITABLE; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte)
+	{ return pte; }
 
 /*
  * Conversion functions: convert a page and protection to a page entry,

-- 


* [patch 3/6] mm: add vm_insert_mixed
From: npiggin @ 2008-01-18  4:56 UTC
  To: Linus Torvalds, Andrew Morton
  Cc: Jared Hulbert, Carsten Otte, Martin Schwidefsky, Heiko Carstens,
	linux-mm

[-- Attachment #1: mm-insert_mixed.patch --]
[-- Type: text/plain, Size: 5171 bytes --]

vm_insert_mixed will insert either a raw pfn or a refcounted struct page
into the page tables, depending on whether vm_normal_page() would return
the page or not. With the introduction of the new pte bit, this is now
too tricky for drivers to be doing themselves.

filemap_xip uses this in a subsequent patch.
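
For a driver, the expected usage looks like the following sketch of a
->fault handler; the mydev_* names and the pgoff-to-pfn helper are
hypothetical, and the error handling mirrors what the xip conversion in
this series does:

        static int mydev_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
                /* hypothetical helper translating file offset to device pfn */
                unsigned long pfn = mydev_pgoff_to_pfn(vmf->pgoff);
                int err;

                err = vm_insert_mixed(vma,
                                (unsigned long)vmf->virtual_address, pfn);
                if (err == -ENOMEM)
                        return VM_FAULT_OOM;
                BUG_ON(err);    /* vma must be VM_MIXEDMAP, set at mmap time */
                return VM_FAULT_NOPAGE;
        }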

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-mm@kvack.org
---
 include/linux/mm.h |    2 +
 mm/memory.c        |   79 ++++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 57 insertions(+), 24 deletions(-)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -1097,6 +1097,8 @@ int remap_pfn_range(struct vm_area_struc
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
+int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn);
 
 struct page *follow_page(struct vm_area_struct *, unsigned long address,
 			unsigned int foll_flags);
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1164,8 +1164,9 @@ pte_t * fastcall get_locked_pte(struct m
  * old drivers should use this, and they needed to mark their
  * pages reserved for the old functions anyway.
  */
-static int insert_page(struct mm_struct *mm, unsigned long addr, struct page *page, pgprot_t prot)
+static int insert_page(struct vm_area_struct *vma, unsigned long addr, struct page *page, pgprot_t prot)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	int retval;
 	pte_t *pte;
 	spinlock_t *ptl;  
@@ -1224,10 +1225,37 @@ int vm_insert_page(struct vm_area_struct
 	if (!page_count(page))
 		return -EINVAL;
 	vma->vm_flags |= VM_INSERTPAGE;
-	return insert_page(vma->vm_mm, addr, page, vma->vm_page_prot);
+	return insert_page(vma, addr, page, vma->vm_page_prot);
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+static int insert_pfn(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn, pgprot_t prot)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+
+	retval = -ENOMEM;
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	/* Ok, finally just insert the thing.. */
+	entry = pte_mkspecial(pfn_pte(pfn, prot));
+	set_pte_at(mm, addr, pte, entry);
+	update_mmu_cache(vma, addr, entry); /* XXX: why not for insert_page? */
+
+	retval = 0;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+out:
+	return retval;
+}
+
 /**
  * vm_insert_pfn - insert single pfn into user vma
  * @vma: user vma to map to
@@ -1243,11 +1271,6 @@ EXPORT_SYMBOL(vm_insert_page);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 		unsigned long pfn)
 {
-	struct mm_struct *mm = vma->vm_mm;
-	int retval;
-	pte_t *pte, entry;
-	spinlock_t *ptl;
-
 	/*
 	 * Technically, architectures with pte_special can avoid all these
 	 * restrictions (same for remap_pfn_range).  However we would like
@@ -1260,27 +1283,35 @@ int vm_insert_pfn(struct vm_area_struct 
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
 	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
 
-	retval = -ENOMEM;
-	pte = get_locked_pte(mm, addr, &ptl);
-	if (!pte)
-		goto out;
-	retval = -EBUSY;
-	if (!pte_none(*pte))
-		goto out_unlock;
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return -EFAULT;
+	return insert_pfn(vma, addr, pfn, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(vm_insert_pfn);
 
-	/* Ok, finally just insert the thing.. */
-	entry = pte_mkspecial(pfn_pte(pfn, vma->vm_page_prot));
-	set_pte_at(mm, addr, pte, entry);
-	update_mmu_cache(vma, addr, entry);
+int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn)
+{
+	BUG_ON(!(vma->vm_flags & VM_MIXEDMAP));
 
-	retval = 0;
-out_unlock:
-	pte_unmap_unlock(pte, ptl);
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return -EFAULT;
 
-out:
-	return retval;
+	/*
+	 * If we don't have pte special, then we have to use the pfn_valid()
+	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+	 * refcount the page if pfn_valid is true (hence insert_page rather
+	 * than insert_pfn).
+	 */
+	if (!HAVE_PTE_SPECIAL && pfn_valid(pfn)) {
+		struct page *page;
+
+		page = pfn_to_page(pfn);
+		return insert_page(vma, addr, page, vma->vm_page_prot);
+	}
+	return insert_pfn(vma, addr, pfn, vma->vm_page_prot);
 }
-EXPORT_SYMBOL(vm_insert_pfn);
+EXPORT_SYMBOL(vm_insert_mixed);
 
 /*
  * maps a range of physical memory into the requested pages. the old

-- 


* [patch 4/6] xip: support non-struct page backed memory
From: npiggin @ 2008-01-18  4:56 UTC
  To: Linus Torvalds, Andrew Morton
  Cc: Jared Hulbert, Carsten Otte, Martin Schwidefsky, Heiko Carstens,
	linux-mm, linux-fsdevel

[-- Attachment #1: xip-get_xip_addr.patch --]
[-- Type: text/plain, Size: 17570 bytes --]

Convert XIP to support non-struct page backed memory, using VM_MIXEDMAP
for the user mappings.

This requires the get_xip_page API to be changed to an address-based one.
Improve the API layering a little bit too, while we're here.

(The kaddr->pfn conversion may not be quite right for all architectures or
XIP memory mappings, and cache flushing may need to be added for some
archs.)

This scheme has been tested and works with Jared's work-in-progress
filesystem, with s390's xip, and with the new brd driver. It is required
in order to have XIP filesystems on memory that isn't backed by struct
page.
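
The contract of the new hook, as this patch establishes it, is sketched
below for a hypothetical filesystem (the myfs_* names and the block-lookup
helper are assumptions; the hole convention follows the ext2 conversion in
this patch):

        /*
         * Return the kernel virtual address backing page 'pgoff' of the
         * file, ERR_PTR(-ENODATA) for a sparse block (hole), or another
         * ERR_PTR() value on failure.
         */
        static void *myfs_get_xip_address(struct address_space *mapping,
                                          pgoff_t pgoff, int create)
        {
                unsigned long kaddr;
                int rc;

                /* hypothetical helper: map pgoff to a directly addressable block */
                rc = myfs_block_to_kaddr(mapping->host, pgoff, create, &kaddr);
                if (rc)
                        return ERR_PTR(rc);
                return (void *)kaddr;
        }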

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
---
 fs/ext2/inode.c    |    2 
 fs/ext2/xip.c      |   36 ++++-----
 fs/ext2/xip.h      |    8 +-
 fs/open.c          |    2 
 include/linux/fs.h |    3 
 mm/fadvise.c       |    2 
 mm/filemap_xip.c   |  191 ++++++++++++++++++++++++++---------------------------
 mm/madvise.c       |    2 
 8 files changed, 122 insertions(+), 124 deletions(-)

Index: linux-2.6/fs/ext2/inode.c
===================================================================
--- linux-2.6.orig/fs/ext2/inode.c
+++ linux-2.6/fs/ext2/inode.c
@@ -800,7 +800,7 @@ const struct address_space_operations ex
 
 const struct address_space_operations ext2_aops_xip = {
 	.bmap			= ext2_bmap,
-	.get_xip_page		= ext2_get_xip_page,
+	.get_xip_address	= ext2_get_xip_address,
 };
 
 const struct address_space_operations ext2_nobh_aops = {
Index: linux-2.6/fs/ext2/xip.c
===================================================================
--- linux-2.6.orig/fs/ext2/xip.c
+++ linux-2.6/fs/ext2/xip.c
@@ -15,24 +15,25 @@
 #include "xip.h"
 
 static inline int
-__inode_direct_access(struct inode *inode, sector_t sector,
-		      unsigned long *data)
+__inode_direct_access(struct inode *inode, sector_t block, unsigned long *data)
 {
+	sector_t sector;
 	BUG_ON(!inode->i_sb->s_bdev->bd_disk->fops->direct_access);
+
+	sector = block * (PAGE_SIZE / 512); /* ext2 block to bdev sector */
 	return inode->i_sb->s_bdev->bd_disk->fops
-		->direct_access(inode->i_sb->s_bdev,sector,data);
+		->direct_access(inode->i_sb->s_bdev, sector, data);
 }
 
 static inline int
-__ext2_get_sector(struct inode *inode, sector_t offset, int create,
+__ext2_get_block(struct inode *inode, pgoff_t pgoff, int create,
 		   sector_t *result)
 {
 	struct buffer_head tmp;
 	int rc;
 
 	memset(&tmp, 0, sizeof(struct buffer_head));
-	rc = ext2_get_block(inode, offset/ (PAGE_SIZE/512), &tmp,
-			    create);
+	rc = ext2_get_block(inode, pgoff, &tmp, create);
 	*result = tmp.b_blocknr;
 
 	/* did we get a sparse block (hole in the file)? */
@@ -45,13 +46,12 @@ __ext2_get_sector(struct inode *inode, s
 }
 
 int
-ext2_clear_xip_target(struct inode *inode, int block)
+ext2_clear_xip_target(struct inode *inode, sector_t block)
 {
-	sector_t sector = block * (PAGE_SIZE/512);
 	unsigned long data;
 	int rc;
 
-	rc = __inode_direct_access(inode, sector, &data);
+	rc = __inode_direct_access(inode, block, &data);
 	if (!rc)
 		clear_page((void*)data);
 	return rc;
@@ -69,24 +69,24 @@ void ext2_xip_verify_sb(struct super_blo
 	}
 }
 
-struct page *
-ext2_get_xip_page(struct address_space *mapping, sector_t offset,
-		   int create)
+void *
+ext2_get_xip_address(struct address_space *mapping, pgoff_t pgoff, int create)
 {
 	int rc;
 	unsigned long data;
-	sector_t sector;
+	sector_t block;
 
 	/* first, retrieve the sector number */
-	rc = __ext2_get_sector(mapping->host, offset, create, &sector);
+	rc = __ext2_get_block(mapping->host, pgoff, create, &block);
 	if (rc)
 		goto error;
 
 	/* retrieve address of the target data */
-	rc = __inode_direct_access
-		(mapping->host, sector * (PAGE_SIZE/512), &data);
-	if (!rc)
-		return virt_to_page(data);
+	rc = __inode_direct_access(mapping->host, block, &data);
+	if (rc)
+		goto error;
+
+	return (void *)data;
 
  error:
 	return ERR_PTR(rc);
Index: linux-2.6/fs/ext2/xip.h
===================================================================
--- linux-2.6.orig/fs/ext2/xip.h
+++ linux-2.6/fs/ext2/xip.h
@@ -7,19 +7,19 @@
 
 #ifdef CONFIG_EXT2_FS_XIP
 extern void ext2_xip_verify_sb (struct super_block *);
-extern int ext2_clear_xip_target (struct inode *, int);
+extern int ext2_clear_xip_target (struct inode *, sector_t);
 
 static inline int ext2_use_xip (struct super_block *sb)
 {
 	struct ext2_sb_info *sbi = EXT2_SB(sb);
 	return (sbi->s_mount_opt & EXT2_MOUNT_XIP);
 }
-struct page* ext2_get_xip_page (struct address_space *, sector_t, int);
-#define mapping_is_xip(map) unlikely(map->a_ops->get_xip_page)
+void *ext2_get_xip_address(struct address_space *, sector_t, int);
+#define mapping_is_xip(map) unlikely(map->a_ops->get_xip_address)
 #else
 #define mapping_is_xip(map)			0
 #define ext2_xip_verify_sb(sb)			do { } while (0)
 #define ext2_use_xip(sb)			0
 #define ext2_clear_xip_target(inode, chain)	0
-#define ext2_get_xip_page			NULL
+#define ext2_get_xip_address			NULL
 #endif
Index: linux-2.6/fs/open.c
===================================================================
--- linux-2.6.orig/fs/open.c
+++ linux-2.6/fs/open.c
@@ -778,7 +778,7 @@ static struct file *__dentry_open(struct
 	if (f->f_flags & O_DIRECT) {
 		if (!f->f_mapping->a_ops ||
 		    ((!f->f_mapping->a_ops->direct_IO) &&
-		    (!f->f_mapping->a_ops->get_xip_page))) {
+		    (!f->f_mapping->a_ops->get_xip_address))) {
 			fput(f);
 			f = ERR_PTR(-EINVAL);
 		}
Index: linux-2.6/include/linux/fs.h
===================================================================
--- linux-2.6.orig/include/linux/fs.h
+++ linux-2.6/include/linux/fs.h
@@ -473,8 +473,7 @@ struct address_space_operations {
 	int (*releasepage) (struct page *, gfp_t);
 	ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
 			loff_t offset, unsigned long nr_segs);
-	struct page* (*get_xip_page)(struct address_space *, sector_t,
-			int);
+	void * (*get_xip_address)(struct address_space *, pgoff_t, int);
 	/* migrate the contents of a page to the specified target */
 	int (*migratepage) (struct address_space *,
 			struct page *, struct page *);
Index: linux-2.6/mm/fadvise.c
===================================================================
--- linux-2.6.orig/mm/fadvise.c
+++ linux-2.6/mm/fadvise.c
@@ -49,7 +49,7 @@ asmlinkage long sys_fadvise64_64(int fd,
 		goto out;
 	}
 
-	if (mapping->a_ops->get_xip_page)
+	if (mapping->a_ops->get_xip_address)
 		/* no bad return value, but ignore advice */
 		goto out;
 
Index: linux-2.6/mm/filemap_xip.c
===================================================================
--- linux-2.6.orig/mm/filemap_xip.c
+++ linux-2.6/mm/filemap_xip.c
@@ -15,6 +15,7 @@
 #include <linux/rmap.h>
 #include <linux/sched.h>
 #include <asm/tlbflush.h>
+#include <asm/io.h>
 
 /*
  * We do use our own empty page to avoid interference with other users
@@ -42,36 +43,39 @@ static struct page *xip_sparse_page(void
 
 /*
  * This is a file read routine for execute in place files, and uses
- * the mapping->a_ops->get_xip_page() function for the actual low-level
+ * the mapping->a_ops->get_xip_address() function for the actual low-level
  * stuff.
  *
  * Note the struct file* is not used at all.  It may be NULL.
  */
-static void
+static ssize_t
 do_xip_mapping_read(struct address_space *mapping,
 		    struct file_ra_state *_ra,
 		    struct file *filp,
-		    loff_t *ppos,
-		    read_descriptor_t *desc,
-		    read_actor_t actor)
+		    char __user *buf,
+		    size_t len,
+		    loff_t *ppos)
 {
 	struct inode *inode = mapping->host;
 	unsigned long index, end_index, offset;
-	loff_t isize;
+	loff_t isize, pos;
+	size_t copied = 0, error = 0;
 
-	BUG_ON(!mapping->a_ops->get_xip_page);
+	BUG_ON(!mapping->a_ops->get_xip_address);
 
-	index = *ppos >> PAGE_CACHE_SHIFT;
-	offset = *ppos & ~PAGE_CACHE_MASK;
+	pos = *ppos;
+	index = pos >> PAGE_CACHE_SHIFT;
+	offset = pos & ~PAGE_CACHE_MASK;
 
 	isize = i_size_read(inode);
 	if (!isize)
 		goto out;
 
 	end_index = (isize - 1) >> PAGE_CACHE_SHIFT;
-	for (;;) {
-		struct page *page;
-		unsigned long nr, ret;
+	do {
+		unsigned long nr, left;
+		void *xip_mem;
+		int zero = 0;
 
 		/* nr is the maximum number of bytes to copy from this page */
 		nr = PAGE_CACHE_SIZE;
@@ -84,17 +88,20 @@ do_xip_mapping_read(struct address_space
 			}
 		}
 		nr = nr - offset;
+		if (nr > len)
+			nr = len;
 
-		page = mapping->a_ops->get_xip_page(mapping,
-			index*(PAGE_SIZE/512), 0);
-		if (!page)
-			goto no_xip_page;
-		if (unlikely(IS_ERR(page))) {
-			if (PTR_ERR(page) == -ENODATA) {
+		xip_mem = mapping->a_ops->get_xip_address(mapping, index, 0);
+		if (!xip_mem) {
+			error = -EIO;
+			goto out;
+		}
+		if (unlikely(IS_ERR(xip_mem))) {
+			if (PTR_ERR(xip_mem) == -ENODATA) {
 				/* sparse */
-				page = ZERO_PAGE(0);
+				zero = 1;
 			} else {
-				desc->error = PTR_ERR(page);
+				error = PTR_ERR(xip_mem);
 				goto out;
 			}
 		}
@@ -104,10 +111,10 @@ do_xip_mapping_read(struct address_space
 		 * before reading the page on the kernel side.
 		 */
 		if (mapping_writably_mapped(mapping))
-			flush_dcache_page(page);
+			/* address based flush */ ;
 
 		/*
-		 * Ok, we have the page, so now we can copy it to user space...
+		 * Ok, we have the mem, so now we can copy it to user space...
 		 *
 		 * The actor routine returns how many bytes were actually used..
 		 * NOTE! This may not be the same as how much of a user buffer
@@ -115,47 +122,38 @@ do_xip_mapping_read(struct address_space
 		 * "pos" here (the actor routine has to update the user buffer
 		 * pointers and the remaining count).
 		 */
-		ret = actor(desc, page, offset, nr);
-		offset += ret;
-		index += offset >> PAGE_CACHE_SHIFT;
-		offset &= ~PAGE_CACHE_MASK;
+		if (!zero)
+			left = __copy_to_user(buf+copied, xip_mem+offset, nr);
+		else
+			left = __clear_user(buf + copied, nr);
 
-		if (ret == nr && desc->count)
-			continue;
-		goto out;
+		if (left) {
+			error = -EFAULT;
+			goto out;
+		}
 
-no_xip_page:
-		/* Did not get the page. Report it */
-		desc->error = -EIO;
-		goto out;
-	}
+		copied += (nr - left);
+		offset += (nr - left);
+		index += offset >> PAGE_CACHE_SHIFT;
+		offset &= ~PAGE_CACHE_MASK;
+	} while (copied < len);
 
 out:
-	*ppos = ((loff_t) index << PAGE_CACHE_SHIFT) + offset;
+	*ppos = pos + copied;
 	if (filp)
 		file_accessed(filp);
+
+	return (copied ? copied : error);
 }
 
 ssize_t
 xip_file_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos)
 {
-	read_descriptor_t desc;
-
 	if (!access_ok(VERIFY_WRITE, buf, len))
 		return -EFAULT;
 
-	desc.written = 0;
-	desc.arg.buf = buf;
-	desc.count = len;
-	desc.error = 0;
-
-	do_xip_mapping_read(filp->f_mapping, &filp->f_ra, filp,
-			    ppos, &desc, file_read_actor);
-
-	if (desc.written)
-		return desc.written;
-	else
-		return desc.error;
+	return do_xip_mapping_read(filp->f_mapping, &filp->f_ra, filp,
+			    buf, len, ppos);
 }
 EXPORT_SYMBOL_GPL(xip_file_read);
 
@@ -210,13 +208,14 @@ __xip_unmap (struct address_space * mapp
  *
  * This function is derived from filemap_fault, but used for execute in place
  */
-static int xip_file_fault(struct vm_area_struct *area, struct vm_fault *vmf)
+static int xip_file_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
-	struct file *file = area->vm_file;
+	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = mapping->host;
-	struct page *page;
 	pgoff_t size;
+	void *xip_mem;
+	struct page *page;
 
 	/* XXX: are VM_FAULT_ codes OK? */
 
@@ -224,35 +223,43 @@ static int xip_file_fault(struct vm_area
 	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 
-	page = mapping->a_ops->get_xip_page(mapping,
-					vmf->pgoff*(PAGE_SIZE/512), 0);
-	if (!IS_ERR(page))
-		goto out;
-	if (PTR_ERR(page) != -ENODATA)
+	xip_mem = mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0);
+	if (!IS_ERR(xip_mem))
+		goto found;
+	if (PTR_ERR(xip_mem) != -ENODATA)
 		return VM_FAULT_OOM;
 
 	/* sparse block */
-	if ((area->vm_flags & (VM_WRITE | VM_MAYWRITE)) &&
-	    (area->vm_flags & (VM_SHARED| VM_MAYSHARE)) &&
+	if ((vma->vm_flags & (VM_WRITE | VM_MAYWRITE)) &&
+	    (vma->vm_flags & (VM_SHARED| VM_MAYSHARE)) &&
 	    (!(mapping->host->i_sb->s_flags & MS_RDONLY))) {
+		unsigned long pfn;
+		int err;
+
 		/* maybe shared writable, allocate new block */
-		page = mapping->a_ops->get_xip_page(mapping,
-					vmf->pgoff*(PAGE_SIZE/512), 1);
-		if (IS_ERR(page))
+		xip_mem = mapping->a_ops->get_xip_address(mapping,vmf->pgoff,1);
+		if (IS_ERR(xip_mem))
 			return VM_FAULT_SIGBUS;
-		/* unmap page at pgoff from all other vmas */
+		/* unmap sparse mappings at pgoff from all other vmas */
 		__xip_unmap(mapping, vmf->pgoff);
+
+found:
+		pfn = virt_to_phys(xip_mem) >> PAGE_SHIFT;
+		err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
+		if (err == -ENOMEM)
+			return VM_FAULT_OOM;
+		BUG_ON(err);
+		return VM_FAULT_NOPAGE;
 	} else {
 		/* not shared and writable, use xip_sparse_page() */
 		page = xip_sparse_page();
 		if (!page)
 			return VM_FAULT_OOM;
-	}
 
-out:
-	page_cache_get(page);
-	vmf->page = page;
-	return 0;
+		page_cache_get(page);
+		vmf->page = page;
+		return 0;
+	}
 }
 
 static struct vm_operations_struct xip_file_vm_ops = {
@@ -261,11 +268,11 @@ static struct vm_operations_struct xip_f
 
 int xip_file_mmap(struct file * file, struct vm_area_struct * vma)
 {
-	BUG_ON(!file->f_mapping->a_ops->get_xip_page);
+	BUG_ON(!file->f_mapping->a_ops->get_xip_address);
 
 	file_accessed(file);
 	vma->vm_ops = &xip_file_vm_ops;
-	vma->vm_flags |= VM_CAN_NONLINEAR;
+	vma->vm_flags |= VM_CAN_NONLINEAR | VM_MIXEDMAP;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xip_file_mmap);
@@ -278,17 +285,16 @@ __xip_file_write(struct file *filp, cons
 	const struct address_space_operations *a_ops = mapping->a_ops;
 	struct inode 	*inode = mapping->host;
 	long		status = 0;
-	struct page	*page;
 	size_t		bytes;
 	ssize_t		written = 0;
 
-	BUG_ON(!mapping->a_ops->get_xip_page);
+	BUG_ON(!mapping->a_ops->get_xip_address);
 
 	do {
 		unsigned long index;
 		unsigned long offset;
 		size_t copied;
-		char *kaddr;
+		void *xip_mem;
 
 		offset = (pos & (PAGE_CACHE_SIZE -1)); /* Within page */
 		index = pos >> PAGE_CACHE_SHIFT;
@@ -296,28 +302,22 @@ __xip_file_write(struct file *filp, cons
 		if (bytes > count)
 			bytes = count;
 
-		page = a_ops->get_xip_page(mapping,
-					   index*(PAGE_SIZE/512), 0);
-		if (IS_ERR(page) && (PTR_ERR(page) == -ENODATA)) {
+		xip_mem = a_ops->get_xip_address(mapping, index, 0);
+		if (IS_ERR(xip_mem) && (PTR_ERR(xip_mem) == -ENODATA)) {
 			/* we allocate a new page unmap it */
-			page = a_ops->get_xip_page(mapping,
-						   index*(PAGE_SIZE/512), 1);
-			if (!IS_ERR(page))
+			xip_mem = a_ops->get_xip_address(mapping, index, 1);
+			if (!IS_ERR(xip_mem))
 				/* unmap page at pgoff from all other vmas */
 				__xip_unmap(mapping, index);
 		}
 
-		if (IS_ERR(page)) {
-			status = PTR_ERR(page);
+		if (IS_ERR(xip_mem)) {
+			status = PTR_ERR(xip_mem);
 			break;
 		}
 
-		fault_in_pages_readable(buf, bytes);
-		kaddr = kmap_atomic(page, KM_USER0);
 		copied = bytes -
-			__copy_from_user_inatomic_nocache(kaddr + offset, buf, bytes);
-		kunmap_atomic(kaddr, KM_USER0);
-		flush_dcache_page(page);
+			__copy_from_user_nocache(xip_mem + offset, buf, bytes);
 
 		if (likely(copied > 0)) {
 			status = copied;
@@ -397,7 +397,7 @@ EXPORT_SYMBOL_GPL(xip_file_write);
 
 /*
  * truncate a page used for execute in place
- * functionality is analog to block_truncate_page but does use get_xip_page
+ * functionality is analogous to block_truncate_page but uses get_xip_address
  * to get the page instead of page cache
  */
 int
@@ -407,9 +407,9 @@ xip_truncate_page(struct address_space *
 	unsigned offset = from & (PAGE_CACHE_SIZE-1);
 	unsigned blocksize;
 	unsigned length;
-	struct page *page;
+	void *xip_mem;
 
-	BUG_ON(!mapping->a_ops->get_xip_page);
+	BUG_ON(!mapping->a_ops->get_xip_address);
 
 	blocksize = 1 << mapping->host->i_blkbits;
 	length = offset & (blocksize - 1);
@@ -420,18 +420,17 @@ xip_truncate_page(struct address_space *
 
 	length = blocksize - length;
 
-	page = mapping->a_ops->get_xip_page(mapping,
-					    index*(PAGE_SIZE/512), 0);
-	if (!page)
+	xip_mem = mapping->a_ops->get_xip_address(mapping, index, 0);
+	if (!xip_mem)
 		return -ENOMEM;
-	if (unlikely(IS_ERR(page))) {
-		if (PTR_ERR(page) == -ENODATA)
+	if (unlikely(IS_ERR(xip_mem))) {
+		if (PTR_ERR(xip_mem) == -ENODATA)
 			/* Hole? No need to truncate */
 			return 0;
 		else
-			return PTR_ERR(page);
+			return PTR_ERR(xip_mem);
 	}
-	zero_user_page(page, offset, length, KM_USER0);
+	memset(xip_mem + offset, 0, length);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xip_truncate_page);
Index: linux-2.6/mm/madvise.c
===================================================================
--- linux-2.6.orig/mm/madvise.c
+++ linux-2.6/mm/madvise.c
@@ -112,7 +112,7 @@ static long madvise_willneed(struct vm_a
 	if (!file)
 		return -EBADF;
 
-	if (file->f_mapping->a_ops->get_xip_page) {
+	if (file->f_mapping->a_ops->get_xip_address) {
 		/* no bad return value, but ignore advice */
 		return 0;
 	}

-- 


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18  4:56 ` [patch 2/6] mm: introduce pte_special pte bit npiggin
@ 2008-01-18 16:41   ` Linus Torvalds
  2008-01-18 18:04     ` Sam Ravnborg
  2008-01-18 22:46     ` Nick Piggin
  0 siblings, 2 replies; 25+ messages in thread
From: Linus Torvalds @ 2008-01-18 16:41 UTC (permalink / raw)
  To: npiggin
  Cc: Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm


On Fri, 18 Jan 2008, npiggin@suse.de wrote:
>   */
> +#ifdef __HAVE_ARCH_PTE_SPECIAL
> +# define HAVE_PTE_SPECIAL 1
> +#else
> +# define HAVE_PTE_SPECIAL 0
> +#endif
>  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
>  {
> -	unsigned long pfn = pte_pfn(pte);
> +	unsigned long pfn;
> +
> +	if (HAVE_PTE_SPECIAL) {

I really don't think this is *any* different from "#ifdefs in code".

#ifdef's in code is not about syntax, it's about abstraction. This is 
still the exact same thing as having an #ifdef around it, and in many ways 
it is *worse*, because now it's just made to look somewhat different with 
a particularly ugly #ifdef.

IOW, this didn't abstract the issue away, it just massaged it to look 
different.

I suspect that the nicest abstraction would be to simply make the whole 
function be a per-architecture thing. Not exposing a "pte_special()" bit 
at all, but instead having the interface simply be:

 - create special entries:
	pte_t pte_mkspecial(pte_t pte)

 - check if an entry is special:
	struct page *vm_normal_page(vma, addr, pte)

and now, while the naming is a bit odd (for historical reasons), 
at least it is properly *abstracted* and you don't have any #ifdef's in 
code (and we'd probably need to extend that abstraction then for the 
"locklessly look up page" case eventually).

[ To make it slightly more regular, we could make "pte_mkspecial()" take 
  the vma/addr thing too, even though it would never really use it except 
  to perhaps have a VM_BUG_ON() that it only happens within XIP/PFNMAP 
  vma's.

  The "pte_mkspecial()" definitely has more to do with "vm_normal_page()"
  than with the other "pte_mkxyzzy()" functions, so it really might make
  sense to instead make the thing

	void set_special_page(vma, addr, pte_t *, pfn, pgprot) 

  because it is never acceptable to do "pte_mkspecial()" on any existent 
  PTE *anyway*, so we might as well make the interface reflect that: it's 
  not that you make a pte "special", it's that you insert a special page 
  into the VM.

  So the operation really conceptually has more to do with "set_pte()" 
  than with "pte_mkxxx()", no? ]

Then, just have a library version of the long form, and make architectures 
that don't support it just use that (just so that you don't have to 
duplicate that silly thing). So an architecture that supports special page 
flags would do something like

	#define set_special_page(vma,addr,ptep,pfn,prot) \
		set_pte_at(vma, addr, ptep, mk_special_pte(pfn,prot))
	#define vm_normal_page(vma,addr,pte) \
		(pte_special(pte) ? NULL : pte_page(pte))

and other architectures would just do

	#define set_special_page(vma,addr,ptep,pfn,prot) \
		set_pte_at(vma, addr, ptep, mk_pte(pfn,prot))
	#define vm_normal_page(vma,addr,pte) \
		generic_vm_normal_page(vma,addr,pte)

or something.

THAT is what I mean by "no #ifdef's in code" - that the selection is done 
at a higher level, the same way we have good interfaces with clear 
*conceptual* meaning for all the other PTE accessing stuff, rather than 
have conditionals in the architecture-independent code.

It's not about syntax - it's about having good conceptual abstractions.

			Linus


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 16:41   ` Linus Torvalds
@ 2008-01-18 18:04     ` Sam Ravnborg
  2008-01-18 18:28       ` Linus Torvalds
  2008-01-18 22:46     ` Nick Piggin
  1 sibling, 1 reply; 25+ messages in thread
From: Sam Ravnborg @ 2008-01-18 18:04 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: npiggin, Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm

On Fri, Jan 18, 2008 at 08:41:22AM -0800, Linus Torvalds wrote:
> 
> 
> On Fri, 18 Jan 2008, npiggin@suse.de wrote:
> >   */
> > +#ifdef __HAVE_ARCH_PTE_SPECIAL
> > +# define HAVE_PTE_SPECIAL 1
> > +#else
> > +# define HAVE_PTE_SPECIAL 0
> > +#endif
> >  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> >  {
> > -	unsigned long pfn = pte_pfn(pte);
> > +	unsigned long pfn;
> > +
> > +	if (HAVE_PTE_SPECIAL) {
> 
> I really don't think this is *any* different from "#ifdefs in code".

One fundamental difference is that with the above syntax we always
compile both versions of the code - so we do not end up with one
version that builds and another version that doesn't.

This has always struck me as a good reason to do the above, and
I think it is busybox that does so with success.
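
As a minimal illustration of the point (HAVE_FOO, foo_fast_path and
foo_slow_path are made-up names, not anything in the tree):

	#define HAVE_FOO 0	/* an arch providing foo would define it to 1 */

	if (HAVE_FOO)
		foo_fast_path();	/* still parsed and type-checked */
	else
		foo_slow_path();	/* the dead branch is folded away */

Even with HAVE_FOO at 0, a typo in foo_fast_path() is a build error on
every configuration, which is exactly what #ifdef does not give you.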

	Sam


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 18:04     ` Sam Ravnborg
@ 2008-01-18 18:28       ` Linus Torvalds
  2008-01-18 18:53         ` Sam Ravnborg
  0 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2008-01-18 18:28 UTC (permalink / raw)
  To: Sam Ravnborg
  Cc: npiggin, Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm


On Fri, 18 Jan 2008, Sam Ravnborg wrote:
> 
> One fundamental difference is that with the above syntax we always
> compile both versions of the code - so we do not end up with one
> version that builds and another version that doesn't.

Yes, in that sense it tends to be better to use C language constructs over 
preprocessor constructs, since error diagnostics and syntax checking are 
improved.

So yeah, I'll give you that it can be an improvement. It's just not what I 
was really hoping for.

		Linus


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 18:28       ` Linus Torvalds
@ 2008-01-18 18:53         ` Sam Ravnborg
  0 siblings, 0 replies; 25+ messages in thread
From: Sam Ravnborg @ 2008-01-18 18:53 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: npiggin, Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm

On Fri, Jan 18, 2008 at 10:28:39AM -0800, Linus Torvalds wrote:
> 
> 
> On Fri, 18 Jan 2008, Sam Ravnborg wrote:
> > 
> > One fundamental difference is that with the above syntax we always
> > compile both versions of the code - so we do not end up with one
> > version that builds and another version that doesn't.
> 
> Yes, in that sense it tends to be better to use C language constructs over 
> preprocessor constructs, since error diagnostics and syntax checking is 
> improved.
> 
> So yeah, I'll give you that it can be an improvement. It's just not what I 
> was really hoping for.

Just to clarify - my comment was solely related to the usage
of if (HAVE_*) versus #ifdef.
It had nothing to do with the actual discussion, which I do not try to follow.

	Sam


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 16:41   ` Linus Torvalds
  2008-01-18 18:04     ` Sam Ravnborg
@ 2008-01-18 22:46     ` Nick Piggin
  2008-01-18 23:03       ` Linus Torvalds
  2008-01-21  9:43       ` Nick Piggin
  1 sibling, 2 replies; 25+ messages in thread
From: Nick Piggin @ 2008-01-18 22:46 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm

On Fri, Jan 18, 2008 at 08:41:22AM -0800, Linus Torvalds wrote:
> 
> 
> On Fri, 18 Jan 2008, npiggin@suse.de wrote:
> >   */
> > +#ifdef __HAVE_ARCH_PTE_SPECIAL
> > +# define HAVE_PTE_SPECIAL 1
> > +#else
> > +# define HAVE_PTE_SPECIAL 0
> > +#endif
> >  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> >  {
> > -	unsigned long pfn = pte_pfn(pte);
> > +	unsigned long pfn;
> > +
> > +	if (HAVE_PTE_SPECIAL) {
> 
> I really don't think this is *any* different from "#ifdefs in code".
> 
> #ifdef's in code is not about syntax, it's about abstraction. This is 
> still the exact same thing as having an #ifdef around it, and in many ways 
> it is *worse*, because now it's just made to look somewhat different with 
> a particularly ugly #ifdef.
> 
> IOW, this didn't abstract the issue away, it just massaged it to look 
> different.

Yes, the if () is just to please Andrew, not you ;)

I thought in your last mail on the subject, that you had conceded the
vma-based scheme should stay, so I might have misunderstood that to mean
you would, reluctantly, go with the scheme. I guess I need to try a bit
harder ;)

 
> I suspect that the nicest abstraction would be to simply make the whole 
> function be a per-architecture thing. Not exposing a "pte_special()" bit 
> at all, but instead having the interface simply be:
> 
>  - create special entries:
> 	pte_t pte_mkspecial(pte_t pte)
> 
>  - check if an entry is special:
> 	struct page *vm_normal_page(vma, addr, pte)
> 
> and now it's not while the naming is a bit odd (for historical reasons), 
> at least it is properly *abstracted* and you don't have any #ifdef's in 
> code (and we'd probably need to extend that abstraction then for the 
> "locklessly look up page" case eventually).

Now I would have done this in a flash, except that the existing vm_normal_page
code is too large and complex to duplicate in every architecture.

 
> [ To make it slightly more regular, we could make "pte_mkspecial()" take 
>   the vma/addr thing too, even though it would never really use it except 
>   to perhaps have a VM_BUG_ON() that it only happens within XIP/PFNMAP 
>   vma's.
> 
>   The "pte_mkspecial()" definitely has more to do with "vm_normal_page()"
>   than with the other "pte_mkxyzzy()" functions, so it really might make
>   sense to instead make the thing
> 
> 	void set_special_page(vma, addr, pte_t *, pfn, pgprot) 
> 
>   because it is never acceptable to do "pte_mkspecial()" on any existent 
>   PTE *anyway*, so we might as well make the interface reflect that: it's 
>   not that you make a pte "special", it's that you insert a special page 
>   into the VM.
> 
>   So the operation really conceptually has more to do with "set_pte()" 
>   than with "pte_mkxxx()", no? ]

Possibly, although I think going that far is hiding things from mm/ a bit
much. If you have a look at the places that call pte_mkspecial, it isn't
too much I think...

 
> Then, just have a library version of the long form, and make architectures 
> that don't support it just use that (just so that you don't have to 
> duplicate that silly thing). So an architecture that supports special page 
> flags would do something like
> 
> 	#define set_special_page(vma,addr,ptep,pfn,prot) \
> 		set_pte_at(vma, addr, ptep, mk_special_pte(pfn,prot))
> 	#define vm_normal_page(vma,addr,pte) \
> 		(pte_special(pte) ? NULL : pte_page(pte))
> 
> and other architectures would just do
> 
> 	#define set_special_page(vma,addr,ptep,pfn,prot) \
> 		set_pte_at(vma, addr, ptep, mk_pte(pfn,prot))
> 	#define vm_normal_page(vma,addr,pte) \
> 		generic_vm_normal_page(vma,addr,pte)
> 
> or something.
> 
> THAT is what I mean by "no #ifdef's in code" - that the selection is done 
> at a higher level, the same way we have good interfaces with clear 
> *conceptual* meaning for all the other PTE accessing stuff, rather than 
> have conditionals in the architecture-independent code.

OK, that gets around the "duplicate vm_normal_page everywhere" issue I
had. I'm still not quite happy with it ;)

How about taking a different approach. How about also having a pte_normal()
function. Each architecture that has a pte special bit would make this
!pte_special, and those that don't would return 0. They return 0 from both
pte_special and pte_normal because they don't know whether the pte is
special or normal.

Then vm_normal_page would become:

    if (pte_special(pte))
        return NULL;
    else if (pte_normal(pte))
        return pte_page(pte);

    ... /* vma based scheme */
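
For concreteness, the two arch-side halves might look roughly like this
(illustrative only; _PAGE_SPECIAL is a made-up bit name, and the two
pairs are alternatives, one per architecture):

	/* arch with a spare bit in the hardware pte: */
	static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
	static inline int pte_normal(pte_t pte)  { return !pte_special(pte); }

	/* arch without one -- it cannot tell, so both say "don't know"
	 * and vm_normal_page() falls through to the vma based scheme: */
	static inline int pte_special(pte_t pte) { return 0; }
	static inline int pte_normal(pte_t pte)  { return 0; }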


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 22:46     ` Nick Piggin
@ 2008-01-18 23:03       ` Linus Torvalds
  2008-01-19  5:07         ` Nick Piggin
  2008-01-21  9:43       ` Nick Piggin
  1 sibling, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2008-01-18 23:03 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm


On Fri, 18 Jan 2008, Nick Piggin wrote:
> 
> I thought in your last mail on the subject, that you had conceded the
> vma-based scheme should stay, so I might have misunderstood that to mean
> you would, reluctantly, go with the scheme. I guess I need to try a bit
> harder ;)

Yes, I did concede that apparently we cannot just mandate "let's just use 
a bit in the pte".

So I do agree that we seem to be forced to have two different 
implementations: one for architectures where we can make use of a marker 
on the PTE itself (or perhaps some *other* way to distinguish things 
automatically), and one for the ones where we need to just be able 
to distinguish purely based on our own data structures.

I just then didn't like the lack of abstraction.

> How about taking a different approach. How about also having a pte_normal()
> function.

Well, one reason I'd prefer not to, is that I can well imagine an 
architecture that doesn't actually put the "normal" bit in the PTE itself, 
but in a separate data structure.

In particular, let's say that you decide that

 - the architecture really doesn't have any space in the hw page tables
 - but for various reasons you *really* don't want to use the tricky 
   "page->offset" logic etc
 - ..and you realize that PFNMAP and MIXEDMAP are actually very rare

so..

 - you just associate each PFNMAP/MIXEDMAP vma with a simple bitmap that 
   contains the "special" bit.

It's actually not that hard to do. If you have an architecture-specific 
interface like

	struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte);

then it wouldn't be too hard at all to create a hash lookup on the VMA (or 
perhaps on a "vma, 256-page-aligned(addr)" tuple) to look up a bitmap, and 
then use the address to see if it was marked special or not.

But yes, then you'd also need to have that extended

	set_special_pte_at(vma, addr, pfn, prot);

interface to set that bit in that bitmap.

See? 

Is it better than what we already have for the generic case? Possibly not. 
But I like abstractions that aren't tied to *one* particular 
implementation.
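
To sketch the shape of it (special_map_lookup() and the locking are
entirely hand-waved here, and none of these names exist anywhere):

	struct vma_special_map {
		struct vm_area_struct	*vma;		/* hash key */
		unsigned long		*bitmap;	/* one bit per page in the vma */
	};

	void set_special_pte_at(struct vm_area_struct *vma, unsigned long addr,
				unsigned long pfn, pgprot_t prot)
	{
		struct vma_special_map *m = special_map_lookup(vma);

		set_bit((addr - vma->vm_start) >> PAGE_SHIFT, m->bitmap);
		/* ... then build pfn_pte(pfn, prot) and install it as usual */
	}

	struct page *vm_normal_page(struct vm_area_struct *vma,
				    unsigned long addr, pte_t pte)
	{
		struct vma_special_map *m = special_map_lookup(vma);

		if (m && test_bit((addr - vma->vm_start) >> PAGE_SHIFT, m->bitmap))
			return NULL;	/* special: no struct page to refcount */
		return pte_page(pte);
	}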

			Linus


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 23:03       ` Linus Torvalds
@ 2008-01-19  5:07         ` Nick Piggin
  0 siblings, 0 replies; 25+ messages in thread
From: Nick Piggin @ 2008-01-19  5:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm

On Fri, Jan 18, 2008 at 03:03:03PM -0800, Linus Torvalds wrote:
> 
> 
> On Fri, 18 Jan 2008, Nick Piggin wrote:
> > 
> > I thought in your last mail on the subject, that you had conceded the
> > vma-based scheme should stay, so I might have misunderstood that to mean
> > you would, reluctantly, go with the scheme. I guess I need to try a bit
> > harder ;)
> 
> Yes, I did concede that apparently we cannot just mandate "let's just use 
> a bit in the pte".
> 
> So I do agree that we seem to be forced to have two different 
> implementations: one for architectures where we can make use of a marker 
> on the PTE itself (or perhaps some *other* way to distinguish things 
> automatically), and one for the ones where we need to just be able 
> to distinguish purely based on our own data structures.

Yep, thanks for the clarification.


> I just then didn't like the lack of abstraction.
> 
> > How about taking a different approach. How about also having a pte_normal()
> > function.
> 
> Well, one reason I'd prefer not to, is that I can well imagine an 
> architecture that doesn't actually put the "normal" bit in the PTE itself, 
> but in a separate data structure.
> 
> In particular, let's say that you decide that
> 
>  - the architecture really doesn't have any space in the hw page tables
>  - but for various reasons you *really* don't want to use the tricky 
>    "page->offset" logic etc
> >  - ..and you realize that PFNMAP and MIXEDMAP are actually very rare
> 
> so..
> 
> >  - you just associate each PFNMAP/MIXEDMAP vma with a simple bitmap that 
>    contains the "special" bit.
> 
> It's actually not that hard to do. If you have an architecture-specific 
> interface like
> 
> 	struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte);
> 
> then it wouldn't be too hard at all to create a hash lookup on the VMA (or 
> perhaps on a "vma, 256-page-aligned(addr)" tuple) to look up a bitmap, and 
> then use the address to see if it was marked special or not.
> 
> But yes, then you'd also need to have that extended
> 
> 	set_special_pte_at(vma, addr, pfn, prot);
> 
> interface to set that bit in that bitmap.
> 
> See? 
> 
> Is it better than what we already have for the generic case? Possibly not. 
> But I like abstractions that aren't tied to *one* particular 
> implementation.

Well that's all true, but I would be a bit worried about architectures
inventing their own ways of doing things. I mean, _every_ implementation
needs to be understood by core mm/ developers; and conversely, none of
the architecture maintainers need to care a single bit about any of the
implementations if they provide some basic low level things to us.

So I'd argue that if someone really needed to invent another scheme, then
that should also be somehow folded into mm/ code if possible rather than
let the arch do it...


* Re: [patch 2/6] mm: introduce pte_special pte bit
  2008-01-18 22:46     ` Nick Piggin
  2008-01-18 23:03       ` Linus Torvalds
@ 2008-01-21  9:43       ` Nick Piggin
  1 sibling, 0 replies; 25+ messages in thread
From: Nick Piggin @ 2008-01-21  9:43 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Hugh Dickins, Jared Hulbert, Carsten Otte,
	Martin Schwidefsky, Heiko Carstens, linux-arch, linux-mm

On Fri, Jan 18, 2008 at 11:46:22PM +0100, Nick Piggin wrote:
> On Fri, Jan 18, 2008 at 08:41:22AM -0800, Linus Torvalds wrote:
>  
> > Then, just have a library version of the long form, and make architectures 
> > that don't support it just use that (just so that you don't have to 
> > duplicate that silly thing). So an architecture that supports special page 
> > flags would do something like
> > 
> > 	#define set_special_page(vma,addr,ptep,pfn,prot) \
> > 		set_pte_at(vma, addr, ptep, mk_special_pte(pfn,prot))
> > 	#define vm_normal_page(vma,addr,pte) \
> > 		(pte_special(pte) ? NULL : pte_page(pte))
> > 
> > and other architectures would just do
> > 
> > 	#define set_special_page(vma,addr,ptep,pfn,prot) \
> > 		set_pte_at(vma, addr, ptep, mk_pte(pfn,prot))
> > 	#define vm_normal_page(vma,addr,pte) \
> > 		generic_vm_normal_page(vma,addr,pte)
> > 
> > or something.
> > 
> > THAT is what I mean by "no #ifdef's in code" - that the selection is done 
> > at a higher level, the same way we have good interfaces with clear 
> > *conceptual* meaning for all the other PTE accessing stuff, rather than 
> > have conditionals in the architecture-independent code.
> 
> OK, that gets around the "duplicate vm_normal_page everywhere" issue I
> had. I'm still not quite happy with it ;)
> 
> How about taking a different approach. How about also having a pte_normal()
> function. Each architecture that has a pte special bit would make this
> !pte_special, and those that don't would return 0. They return 0 from both
> pte_special and pte_normal because they don't know whether the pte is
> special or normal.
> 
> Then vm_normal_page would become:
> 
>     if (pte_special(pte))
>         return NULL;
>     else if (pte_normal(pte))
>         return pte_page(pte);
> 
>     ... /* vma based scheme */

Hmm, it's not *quite* as trivial as that for one important case:
vm_insert_mixed. We don't actually have a pte yet, so we can't
easily reuse insert_page / insert_pfn; rather, we have to build the pte
first and then check it (patch attached, but I think it is a step
backwards)...

Really, I don't think either of my two approaches or your approach is
really a fundamentally different _abstraction_. It basically just has
to accommodate 2 different code paths no matter how you look at it. I
don't know how this is different to, say, conditionally compiling eg.
the FLATMEM/SPARSEMEM memory model code, or rwsem code, depending on
whether an architecture has defined some symbol. It happens all over
the kernel.

Actually, I'd argue it is _better_ than that, because the logic stays
in one place (one screenful, even), and away from abuse or divergence
by arch code.

If one actually came up with a new API that handles both cases better,
I'd say that is a different abstraction. Or if you could come up with
some different arch functions which would allow vm_normal_page to be
streamlined to read more like a regular C function, that should be a
different abstraction...

I'm still keen on my first patch. I know it isn't beautiful, but I
think it is better than the alternatives.

---

mm: add vm_insert_mixed

vm_insert_mixed will insert either a raw pfn or a refcounted struct page
into the page tables, depending on whether vm_normal_page() will return
the page or not. With the introduction of the new pte bit, this is now
a bit too tricky for drivers to be doing themselves.

filemap_xip uses this in a subsequent patch.

---
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -1097,6 +1097,8 @@ int remap_pfn_range(struct vm_area_struc
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
+int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn);
 
 struct page *follow_page(struct vm_area_struct *, unsigned long address,
 			unsigned int foll_flags);
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1282,6 +1282,53 @@ out:
 }
 EXPORT_SYMBOL(vm_insert_pfn);
 
+int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+
+	BUG_ON(!(vma->vm_flags & VM_MIXEDMAP));
+
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return -EFAULT;
+
+	retval = -ENOMEM;
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	entry = pte_mkspecial(pfn_pte(pfn, vma->vm_page_prot));
+	/*
+	 * If we don't have pte special, then we have to use the pfn_valid()
+	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+	 * refcount the page if pfn_valid is true. Otherwise we can *always*
+	 * avoid refcounting the page if we have pte_special.
+	 */
+	if (!pte_special(entry) && pfn_valid(pfn)) {
+		struct page *page;
+
+		page = pfn_to_page(pfn);
+		get_page(page);
+		inc_mm_counter(mm, file_rss);
+		page_add_file_rmap(page);
+	}
+	/* Ok, finally just insert the thing.. */
+	set_pte_at(mm, addr, pte, entry);
+
+	retval = 0;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+out:
+	return retval;
+}
+EXPORT_SYMBOL(vm_insert_mixed);
+
 /*
  * maps a range of physical memory into the requested pages. the old
  * mappings are removed. any references to nonexistent pages results


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-01-18  4:56 ` [patch 4/6] xip: support non-struct page backed memory npiggin
@ 2008-03-01  8:14   ` Jared Hulbert
  2008-03-03  5:29     ` Nick Piggin
  2008-03-03  8:18     ` Carsten Otte
  0 siblings, 2 replies; 25+ messages in thread
From: Jared Hulbert @ 2008-03-01  8:14 UTC (permalink / raw)
  To: npiggin
  Cc: Linus Torvalds, Andrew Morton, Carsten Otte, Martin Schwidefsky,
	Heiko Carstens, linux-mm, linux-fsdevel

>  (The kaddr->pfn conversion may not be quite right for all architectures or XIP
>  memory mappings, and the cacheflushing may need to be added for some archs).
>
>  This scheme has been tested and works for Jared's work-in-progress filesystem,

Oops.  I screwed up testing this.  It doesn't work with MTD devices and ARM....

The problem is that virt_to_phys() gives a bogus answer for an
mtd->point()'ed address.  It's an ioremap()'ed address, which doesn't
work with the ARM virt_to_phys().  I can get a physical address from
mtd->point() with a patch I dropped a little while back.

So I was thinking how about instead of:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
void * get_xip_address(struct address_space *mapping, pgoff_t pgoff,
int create);

xip_mem = mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0);
pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Could we do?

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
int get_xip_address(struct address_space *mapping, pgoff_t pgoff, int
create, unsigned long *address);

if(mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &xip_mem)){
     /* virtual address */
     pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
} else {
     /* physical address */
     pfn = xip_mem >> PAGE_SHIFT;
}
err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Or maybe like...

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
unsigned long get_xip_address(struct address_space *mapping, pgoff_t
pgoff, int create, int *is_virt);

xip_mem = mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &is_virt);
if (is_virt) {
     /* virtual address */
     pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
} else {
     /* physical address */
     pfn = xip_mem >> PAGE_SHIFT;
}
err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Or...

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
void get_xip_address(struct address_space *mapping, pgoff_t pgoff, int
create, unsigned long *phys, void **virt);

mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &phys, &virt);
if(phys){
     /* physical address */
     pfn = phys >> PAGE_SHIFT;
} else {
     /* virtual address */
     pfn = virt_to_phys(virt) >> PAGE_SHIFT;
}
err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-01  8:14   ` Jared Hulbert
@ 2008-03-03  5:29     ` Nick Piggin
  2008-03-03  8:30       ` Carsten Otte
  2008-03-03 15:59       ` Jared Hulbert
  2008-03-03  8:18     ` Carsten Otte
  1 sibling, 2 replies; 25+ messages in thread
From: Nick Piggin @ 2008-03-03  5:29 UTC (permalink / raw)
  To: Jared Hulbert
  Cc: Linus Torvalds, Andrew Morton, Carsten Otte, Martin Schwidefsky,
	Heiko Carstens, linux-mm, linux-fsdevel

On Sat, Mar 01, 2008 at 12:14:35AM -0800, Jared Hulbert wrote:
> >  (The kaddr->pfn conversion may not be quite right for all architectures or XIP
> >  memory mappings, and the cacheflushing may need to be added for some archs).
> >
> >  This scheme has been tested and works for Jared's work-in-progress filesystem,
> 
> Oops.  I screwed up testing this.  It doesn't work with MTD devices and ARM....
> 
> The problem is that virt_to_phys() gives a bogus answer for an
> mtd->point()'ed address.  It's an ioremap()'ed address, which doesn't
> work with the ARM virt_to_phys().  I can get a physical address from
> mtd->point() with a patch I dropped a little while back.

Yeah, I thought that virt_to_phys was going to be problematic...

 
> So I was thinking how about instead of:
> 
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> void * get_xip_address(struct address_space *mapping, pgoff_t pgoff,
> int create);
> 
> xip_mem = mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0);
> pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
> err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
> 
> Could we do?
> 
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> int get_xip_address(struct address_space *mapping, pgoff_t pgoff, int
> create, unsigned long *address);
> 
> if(mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &xip_mem)){
>      /* virtual address */
>      pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
> } else {
>      /* physical address */
>      pfn = xip_mem >> PAGE_SHIFT;
> }
> err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
> Or maybe like...
> 
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> unsigned long get_xip_address(struct address_space *mapping, pgoff_t
> pgoff, int create, int *is_virt);
> 
> xip_mem = mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &is_virt);
> if (is_virt) {
>      /* virtual address */
>      pfn = virt_to_phys((void *)xip_mem) >> PAGE_SHIFT;
> } else {
>      /* physical address */
>      pfn = xip_mem >> PAGE_SHIFT;
> }
> err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
> 
> Or...
> 
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> void get_xip_address(struct address_space *mapping, pgoff_t pgoff, int
> create, unsigned long *phys, void **virt);
> 
> mapping->a_ops->get_xip_address(mapping, vmf->pgoff, 0, &phys, &virt);
> if(phys){
>      /* physical address */
>      pfn = phys >> PAGE_SHIFT;
> } else {
>      /* virtual address */
>      pfn = virt_to_phys(virt) >> PAGE_SHIFT;
> }
> err = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

OK right... one problem is that we need an address for the kernel to
manipulate the memory with, but we also need a pfn to insert into user
page tables. So I like your last suggestion, but I think we always
need both address and pfn.

What about

int get_xip_mem(mapping, pgoff, create, void **kaddr, unsigned long *pfn)

get_xip_mem(mapping, pgoff, create, &kaddr, &pfn);
if (pagefault)
    vm_insert_mixed(vma, vaddr, pfn);
else if (read/write)
    memcpy(kaddr, buf, len);

My simple brd driver can easily do
 *kaddr = page_address(page);
 *pfn = page_to_pfn(page);
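
Spelled out under that signature (a sketch only: brd_find_device() and
brd_lookup_page() stand in for however the driver finds its backing
page, and the -ENODATA convention is assumed from get_xip_page):

	static int brd_get_xip_mem(struct address_space *mapping, pgoff_t pgoff,
				   int create, void **kaddr, unsigned long *pfn)
	{
		struct brd_device *brd = brd_find_device(mapping);
		struct page *page = brd_lookup_page(brd, pgoff);

		if (!page)
			return -ENODATA;	/* sparse: nothing allocated here yet */

		*kaddr = page_address(page);
		*pfn = page_to_pfn(page);
		return 0;
	}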

This should work for you too?

Thanks,
Nick


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-01  8:14   ` Jared Hulbert
  2008-03-03  5:29     ` Nick Piggin
@ 2008-03-03  8:18     ` Carsten Otte
  2008-03-03 15:44       ` Jared Hulbert
  2008-03-03 18:40       ` Linus Torvalds
  1 sibling, 2 replies; 25+ messages in thread
From: Carsten Otte @ 2008-03-03  8:18 UTC (permalink / raw)
  To: Jared Hulbert
  Cc: npiggin, Linus Torvalds, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel

Jared Hulbert wrote:
> The problem is that virt_to_phys() gives a bogus answer for an
> mtd->point()'ed address.  It's an ioremap()'ed address, which doesn't
> work with the ARM virt_to_phys().  I can get a physical address from
> mtd->point() with a patch I dropped a little while back.
Is there a chance virt_to_phys() can be fixed on arm? It looks like a 
simple page table walk to me. If not, I would prefer to have 
get_xip_address return a physical address over having to split the 
code path here. S390 has a 1:1 mapping for xip mappings, thus it 
wouldn't be a big change for us.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03  5:29     ` Nick Piggin
@ 2008-03-03  8:30       ` Carsten Otte
  2008-03-03 15:59       ` Jared Hulbert
  1 sibling, 0 replies; 25+ messages in thread
From: Carsten Otte @ 2008-03-03  8:30 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Jared Hulbert, Linus Torvalds, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel

Nick Piggin wrote:
> What about 
> int get_xip_mem(mapping, pgoff, create, void **kaddr, unsigned long *pfn)
> 
> get_xip_mem(mapping, pgoff, create, &kaddr, &pfn);
> if (pagefault)
>     vm_insert_mixed(vma, vaddr, pfn);
> else if (read/write)
>     memcpy(kaddr, buf, len);
> 
> My simple brd driver can easily do
>  *kaddr = page_address(page);
>  *pfn = page_to_pfn(page);
> 
> This should work for you too?
Looks good to me. Otoh, if there is an easy way to fix virt_to_phys() 
I would like that better.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03  8:18     ` Carsten Otte
@ 2008-03-03 15:44       ` Jared Hulbert
  2008-03-03 18:40       ` Linus Torvalds
  1 sibling, 0 replies; 25+ messages in thread
From: Jared Hulbert @ 2008-03-03 15:44 UTC (permalink / raw)
  To: carsteno
  Cc: npiggin, Linus Torvalds, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel

>  Is there a chance virt_to_phys() can be fixed on arm? It looks like a
>  simple page table walk to me.

Are there functions already available for doing a page table walk?  If
so, it could be done.  I'd like that.  If somebody could point me in the
right direction I'd appreciate it.

It might be a problem because today the simple case for virt_to_phys()
just subtracts 0x20000000 to go from 0xCXXXXXXX to 0xAXXXXXXX.  So it
could have a negative performance impact if we complicate it.
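
(For reference, that simple case is just an offset of the shape

	/* the cheap linear-map case: an offset, not a page table walk */
	#define virt_to_phys(x)	((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)

give or take the per-platform constants.)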

Is it possible that it might be easier to fix this if we changed
ioremap()?  I got the impression that ioremap() on ARM ends up placing
ioremap()'ed memory in the middle of the 0xCXXXXXXX range that is
valid for RAM.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03  5:29     ` Nick Piggin
  2008-03-03  8:30       ` Carsten Otte
@ 2008-03-03 15:59       ` Jared Hulbert
  1 sibling, 0 replies; 25+ messages in thread
From: Jared Hulbert @ 2008-03-03 15:59 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Linus Torvalds, Andrew Morton, Carsten Otte, Martin Schwidefsky,
	Heiko Carstens, linux-mm, linux-fsdevel

>  OK right... one problem is that we need an address for the kernel to
>  manipulate the memory with, but we also need a pfn to insert into user
>  page tables. So I like your last suggestion, but I think we always
>  need both address and pfn.

Right, I forgot about the xip_file_read() path.

I like to use UML kernels with the file-to-iomem interface for
testing.  I forget how UML kernels deal with physical addresses like
this.  I remember there are some caveats.  I'll look into it.  But I
think I can do for UML:

*kaddr = (void *)(start_virt_addr + offset);
*pfn = virt_to_phys(*kaddr) >> PAGE_SHIFT;

And the ARM + MTD I can do:

*kaddr = (void *)(start_virt_addr + offset);
*pfn = (start_phys_addr + offset) >> PAGE_SHIFT;

>  This should work for you too?

I think so.  But like Carsten I'd prefer a virtual only solution.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03  8:18     ` Carsten Otte
  2008-03-03 15:44       ` Jared Hulbert
@ 2008-03-03 18:40       ` Linus Torvalds
  2008-03-03 19:38         ` Jared Hulbert
  1 sibling, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2008-03-03 18:40 UTC (permalink / raw)
  To: carsteno
  Cc: Jared Hulbert, npiggin, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel


On Mon, 3 Mar 2008, Carsten Otte wrote:
>
> Jared Hulbert wrote:
> > The problem is that virt_to_phys() gives a bogus answer for an
> > mtd->point()'ed address.  It's an ioremap()'ed address, which doesn't
> > work with the ARM virt_to_phys().  I can get a physical address from
> > mtd->point() with a patch I dropped a little while back.
>
> Is there a chance virt_to_phys() can be fixed on arm?

NO!

"virt_to_phys()" is about kernel 1:1-mapped virtual addresses, and 
"fixing" it would be totally wrong. We don't do crap like following page 
tables, and we shouldn't encourage anybody to even think that we do.

If somebody needs to follow page table pointers, they had better do it 
themselves and open-code the fact that they are doing something stupid and 
expensive, not make it easy for everybody else to do that mistake without 
even realising.

A lot of the kernel architecture is all about making it really hard to do 
stupid things by mistake.

		Linus


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 18:40       ` Linus Torvalds
@ 2008-03-03 19:38         ` Jared Hulbert
  2008-03-03 20:04           ` Linus Torvalds
  0 siblings, 1 reply; 25+ messages in thread
From: Jared Hulbert @ 2008-03-03 19:38 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: carsteno, npiggin, Andrew Morton, mschwid2, heicars2, linux-mm,
	linux-fsdevel

>  NO!
>
>  "virt_to_phys()" is about kernel 1:1-mapped virtual addresses, and
>  "fixing" it would be totally wrong.

By 1:1 you mean virtual + offset == physical + offset right?


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 19:38         ` Jared Hulbert
@ 2008-03-03 20:04           ` Linus Torvalds
  2008-03-03 20:32             ` Nick Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2008-03-03 20:04 UTC (permalink / raw)
  To: Jared Hulbert
  Cc: carsteno, npiggin, Andrew Morton, mschwid2, heicars2, linux-mm,
	linux-fsdevel


On Mon, 3 Mar 2008, Jared Hulbert wrote:
> 
> By 1:1 you mean virtual + offset == physical + offset right?

Right. It's a special case, and it's an important special case because 
it's the only one that is fast to do.

It's not very common, but it's common enough that it's worth doing.

That said, xip should probably never have used virt_to_phys() in the first 
place. It should be limited to purely architecture-specific memory 
management routines.

[ There's a number of drivers that need "physical" addresses for DMA, and 
  that use virt_to_phys, but they should use the DMA interfaces 
  that do this right, and even for legacy things that don't use the proper 
  DMA allocator, virt_to_phys is wrong, because it's about _bus_ 
  addresses, not CPU physical addresses. Only architecture code can know 
  when the two actually mean the same thing ]

Quite frankly, I think it's totally wrong to use kernel-virtual addresses 
in those interfaces in the first place. Either you use "struct page *" or you 
use a pfn number. Nothing else is simply valid.

			Linus


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 20:04           ` Linus Torvalds
@ 2008-03-03 20:32             ` Nick Piggin
  2008-03-03 22:21               ` Linus Torvalds
  0 siblings, 1 reply; 25+ messages in thread
From: Nick Piggin @ 2008-03-03 20:32 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Jared Hulbert, carsteno, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel

On Mon, Mar 03, 2008 at 12:04:37PM -0800, Linus Torvalds wrote:
> 
> 
> On Mon, 3 Mar 2008, Jared Hulbert wrote:
> > 
> > By 1:1 you mean virtual + offset == physical + offset right?
> 
> Right. It's a special case, and it's an important special case because 
> it's the only one that is fast to do.
> 
> It's not very common, but it's common enough that it's worth doing.
> 
> That said, xip should probably never have used virt_to_phys() in the first 
> place. It should be limited to purely architecture-specific memory 
> management routines.

Actually, xip in your kernel doesn't; it was just a patch I proposed.
Basically I wanted to get a pfn from a kva; however, that kva might be
ioremapped, which I didn't actually worry about because I was only
testing a plain RAM-backed system.

 
> [ There's a number of drivers that need "physical" addresses for DMA, and 
>   that use virt_to_phys, but they should use the DMA interfaces 
>   that do this right, and even for legacy things that don't use the proper 
>   DMA allocator, virt_to_phys is wrong, because it's about _bus_ 
>   addresses, not CPU physical addresses. Only architecture code can know 
>   when the two actually mean the same thing ]
> 
> Quite frankly, I think it's totally wrong to use kernel-virtual addresses 
> in those interfaces in the first place. Either you use "struct page *" or you 
> use a pfn number. Nothing else is simply valid.

Although they were already using kernel-virtual addresses before I got
there, we want to remove the requirement to have a struct page, and
there are no good accessors to kmap a pfn (AFAIK); otherwise we could
indeed just use a pfn.

We'll scrap the virt_to_phys idea and make the interface return both
the kaddr and the pfn, I think.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 20:32             ` Nick Piggin
@ 2008-03-03 22:21               ` Linus Torvalds
  2008-03-03 23:25                 ` Jared Hulbert
  2008-03-04  9:06                 ` Carsten Otte
  0 siblings, 2 replies; 25+ messages in thread
From: Linus Torvalds @ 2008-03-03 22:21 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Jared Hulbert, carsteno, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel


On Mon, 3 Mar 2008, Nick Piggin wrote:
> 
> Although they were already using kernel-virtual addresses before I got
> there, we want to remove the requirement to have a struct page, and
> there are no good accessors to kmap a pfn (AFAIK) otherwise we could
> indeed just use a pfn.

Implementing a kmap_pfn() sounds like a perfectly sane idea. But why does 
it need to even be mapped into kernel space? Is it for the ELF header 
reading or something (not having looked at the patch, just reacting to the 
wrongness of using virt_to_phys())?
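
Purely as a sketch (ignoring the kunmap pairing and the __iomem
distinction), such a kmap_pfn() might reduce to:

	static inline void *kmap_pfn(unsigned long pfn)
	{
		if (pfn_valid(pfn))	/* ordinary RAM: has a struct page */
			return kmap(pfn_to_page(pfn));
		/* no struct page: fall back to an ioremap-style mapping */
		return (void __force *)ioremap(PFN_PHYS(pfn), PAGE_SIZE);
	}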

		Linus


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 22:21               ` Linus Torvalds
@ 2008-03-03 23:25                 ` Jared Hulbert
  2008-03-04  9:06                 ` Carsten Otte
  1 sibling, 0 replies; 25+ messages in thread
From: Jared Hulbert @ 2008-03-03 23:25 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Nick Piggin, carsteno, Andrew Morton, mschwid2, heicars2,
	linux-mm, linux-fsdevel

>  Implementing a kmap_pfn() sounds like a perfectly sane idea. But why does
>  it need to even be mapped into kernel space? Is it for the ELF header
>  reading or something (not having looked at the patch, just reacting to the
>  wrongness of using virt_to_phys())?

Right.

My AXFS prefers the filesystem image to be in memory like Flash.  So
it also uses the kaddr to read its data structures and to fetch data
for the readpage().  In fact, the MTD doesn't provide access to the
physical address of a given partition without a patch.


* Re: [patch 4/6] xip: support non-struct page backed memory
  2008-03-03 22:21               ` Linus Torvalds
  2008-03-03 23:25                 ` Jared Hulbert
@ 2008-03-04  9:06                 ` Carsten Otte
  1 sibling, 0 replies; 25+ messages in thread
From: Carsten Otte @ 2008-03-04  9:06 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Nick Piggin, Jared Hulbert, carsteno, Andrew Morton, mschwid2,
	heicars2, linux-mm, linux-fsdevel

Linus Torvalds wrote:
> Implementing a kmap_pfn() sounds like a perfectly sane idea. But why does 
> it need to even be mapped into kernel space? Is it for the ELF header 
> reading or something (not having looked at the patch, just reacting to the 
> wrongness of using virt_to_phys())?
It needs to be mapped into kernel space to do regular file operations 
other than mmap. In mm/filemap_xip.c we do access the xip memory from 
the kernel to fulfill sys_read/sys_write and friends [copy_from/to_user to 
the user's buffer].

so long,
Carsten


end of thread

Thread overview: 25+ messages
     [not found] <20080118045649.334391000@suse.de>
2008-01-18  4:56 ` [patch 1/6] mm: introduce VM_MIXEDMAP npiggin, Jared Hulbert
2008-01-18  4:56 ` [patch 2/6] mm: introduce pte_special pte bit npiggin
2008-01-18 16:41   ` Linus Torvalds
2008-01-18 18:04     ` Sam Ravnborg
2008-01-18 18:28       ` Linus Torvalds
2008-01-18 18:53         ` Sam Ravnborg
2008-01-18 22:46     ` Nick Piggin
2008-01-18 23:03       ` Linus Torvalds
2008-01-19  5:07         ` Nick Piggin
2008-01-21  9:43       ` Nick Piggin
2008-01-18  4:56 ` [patch 3/6] mm: add vm_insert_mixed npiggin
2008-01-18  4:56 ` [patch 4/6] xip: support non-struct page backed memory npiggin
2008-03-01  8:14   ` Jared Hulbert
2008-03-03  5:29     ` Nick Piggin
2008-03-03  8:30       ` Carsten Otte
2008-03-03 15:59       ` Jared Hulbert
2008-03-03  8:18     ` Carsten Otte
2008-03-03 15:44       ` Jared Hulbert
2008-03-03 18:40       ` Linus Torvalds
2008-03-03 19:38         ` Jared Hulbert
2008-03-03 20:04           ` Linus Torvalds
2008-03-03 20:32             ` Nick Piggin
2008-03-03 22:21               ` Linus Torvalds
2008-03-03 23:25                 ` Jared Hulbert
2008-03-04  9:06                 ` Carsten Otte
