public inbox for linux-kernel@vger.kernel.org
* [patch 0/5] access_process_vm device memory access
@ 2008-05-15 17:53 Rik van Riel
  2008-05-15 17:53 ` [patch 1/5] access_process_vm device memory infrastructure Rik van Riel
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

In order to be able to debug things like the X server and programs
using the PPC Cell SPUs, the debugger needs to be able to access
device memory through ptrace and /proc/pid/mem.

Changelog:
 - use CONFIG_HAVE_IOREMAP_PROT
 - do not use generic_access_phys as a fallback but define it in
   the ->vm_ops->access for appropriate VMAs
 - other code cleanups suggested by Andrew Morton and Sam Ravnborg

-- 
All Rights Reversed


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [patch 1/5] access_process_vm device memory infrastructure
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
@ 2008-05-15 17:53 ` Rik van Riel
  2008-05-16  8:20   ` Peter Zijlstra
  2008-05-15 17:53 ` [patch 2/5] use generic_access_phys for /dev/mem mappings Rik van Riel
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

[-- Attachment #1: 01-access_process_vm-device-memory.patch --]
[-- Type: text/plain, Size: 9019 bytes --]

Add the generic_access_phys access function and put the hooks in place
to allow access_process_vm to access device or PPC Cell SPU memory.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

--- 

 arch/x86/mm/ioremap.c   |    8 ++
 include/asm-x86/io_32.h |    3 +
 include/asm-x86/io_64.h |    3 +
 include/linux/mm.h      |    6 ++
 mm/memory.c             |  134 +++++++++++++++++++++++++++++++++++++++++-------
 5 files changed, 136 insertions(+), 18 deletions(-)

Index: ptrace-2.6.26-rc2-mm1/mm/memory.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/mm/memory.c	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/mm/memory.c	2008-05-15 13:50:15.000000000 -0400
@@ -2668,6 +2668,86 @@ int in_gate_area_no_task(unsigned long a
 
 #endif	/* __HAVE_ARCH_GATE_AREA */
 
+#ifdef CONFIG_HAVE_IOREMAP_PROT
+static resource_size_t follow_phys(struct vm_area_struct *vma,
+			unsigned long address, unsigned int flags,
+			unsigned long *prot)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *ptep, pte;
+	spinlock_t *ptl;
+	resource_size_t phys_addr = 0;
+	struct mm_struct *mm = vma->vm_mm;
+
+	VM_BUG_ON(!(vma->vm_flags & (VM_IO | VM_PFNMAP)));
+
+	pgd = pgd_offset(mm, address);
+	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		goto no_page_table;
+
+	pud = pud_offset(pgd, address);
+	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
+		goto no_page_table;
+
+	pmd = pmd_offset(pud, address);
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
+		goto no_page_table;
+
+	/* We cannot handle huge page PFN maps. Luckily they don't exist. */
+	if (pmd_huge(*pmd))
+		goto no_page_table;
+
+	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (!ptep)
+		goto out;
+
+	pte = *ptep;
+	if (!pte_present(pte))
+		goto unlock;
+	if ((flags & FOLL_WRITE) && !pte_write(pte))
+		goto unlock;
+	phys_addr = pte_pfn(pte);
+	phys_addr <<= PAGE_SHIFT; /* Shift here to avoid overflow on PAE */
+
+	*prot = pgprot_val(pte_pgprot(pte));
+
+unlock:
+	pte_unmap_unlock(ptep, ptl);
+out:
+	return phys_addr;
+no_page_table:
+	return 0;
+}
+
+int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
+			void *buf, int len, int write)
+{
+	resource_size_t phys_addr;
+	unsigned long prot = 0;
+	void *maddr;
+	int offset = addr & (PAGE_SIZE-1);
+
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		return -EINVAL;
+
+	phys_addr = follow_phys(vma, addr, write ? FOLL_WRITE : 0, &prot);
+
+	if (!phys_addr)
+		return -EINVAL;
+
+	maddr = ioremap_prot(phys_addr, PAGE_SIZE, prot);
+	if (!maddr)
+		return -ENOMEM;
+	if (write)
+		memcpy_toio(maddr + offset, buf, len);
+	else
+		memcpy_fromio(buf, maddr + offset, len);
+	iounmap(maddr);
+
+	return len;
+}
+#endif
+
 /*
  * Access another process' address space.
  * Source/target buffer must be kernel space,
@@ -2677,7 +2757,6 @@ int access_process_vm(struct task_struct
 {
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
-	struct page *page;
 	void *old_buf = buf;
 
 	mm = get_task_mm(tsk);
@@ -2689,28 +2768,44 @@ int access_process_vm(struct task_struct
 	while (len) {
 		int bytes, ret, offset;
 		void *maddr;
+		struct page *page = NULL;
 
 		ret = get_user_pages(tsk, mm, addr, 1,
 				write, 1, &page, &vma);
-		if (ret <= 0)
-			break;
-
-		bytes = len;
-		offset = addr & (PAGE_SIZE-1);
-		if (bytes > PAGE_SIZE-offset)
-			bytes = PAGE_SIZE-offset;
-
-		maddr = kmap(page);
-		if (write) {
-			copy_to_user_page(vma, page, addr,
-					  maddr + offset, buf, bytes);
-			set_page_dirty_lock(page);
+		if (ret <= 0) {
+			/*
+			 * Check if this is a VM_IO | VM_PFNMAP VMA, which
+			 * we can access using slightly different code.
+			 */
+#ifdef CONFIG_HAVE_IOREMAP_PROT
+			vma = find_vma(mm, addr);
+			if (!vma)
+				break;
+			if (vma->vm_ops && vma->vm_ops->access)
+				ret = vma->vm_ops->access(vma, addr, buf,
+							  len, write);
+			if (ret <= 0)
+#endif
+				break;
+			bytes = ret;
 		} else {
-			copy_from_user_page(vma, page, addr,
-					    buf, maddr + offset, bytes);
+			bytes = len;
+			offset = addr & (PAGE_SIZE-1);
+			if (bytes > PAGE_SIZE-offset)
+				bytes = PAGE_SIZE-offset;
+
+			maddr = kmap(page);
+			if (write) {
+				copy_to_user_page(vma, page, addr,
+						  maddr + offset, buf, bytes);
+				set_page_dirty_lock(page);
+			} else {
+				copy_from_user_page(vma, page, addr,
+						    buf, maddr + offset, bytes);
+			}
+			kunmap(page);
+			page_cache_release(page);
 		}
-		kunmap(page);
-		page_cache_release(page);
 		len -= bytes;
 		buf += bytes;
 		addr += bytes;
Index: ptrace-2.6.26-rc2-mm1/arch/x86/mm/ioremap.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/x86/mm/ioremap.c	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/x86/mm/ioremap.c	2008-05-15 13:50:15.000000000 -0400
@@ -307,6 +307,14 @@ void __iomem *ioremap_cache(resource_siz
 }
 EXPORT_SYMBOL(ioremap_cache);
 
+void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
+				unsigned long prot_val)
+{
+	return __ioremap_caller(phys_addr, size, (prot_val & _PAGE_CACHE_MASK),
+				__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_prot);
+
 /**
  * iounmap - Free a IO remapping
  * @addr: virtual address from ioremap_*
Index: ptrace-2.6.26-rc2-mm1/include/asm-x86/io_32.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-x86/io_32.h	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-x86/io_32.h	2008-05-15 13:50:15.000000000 -0400
@@ -110,6 +110,8 @@ static inline void *phys_to_virt(unsigne
  */
 extern void __iomem *ioremap_nocache(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
+extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size,
+				unsigned long prot_val);
 
 /*
  * The default ioremap() behavior is non-cached:
Index: ptrace-2.6.26-rc2-mm1/include/asm-x86/io_64.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-x86/io_64.h	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-x86/io_64.h	2008-05-15 13:50:15.000000000 -0400
@@ -175,6 +175,8 @@ extern void early_iounmap(void *addr, un
  */
 extern void __iomem *ioremap_nocache(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
+extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size,
+				unsigned long prot_val);
 
 /*
  * The default ioremap() behavior is non-cached:
Index: ptrace-2.6.26-rc2-mm1/include/linux/mm.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/linux/mm.h	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/linux/mm.h	2008-05-15 13:50:18.000000000 -0400
@@ -169,6 +169,12 @@ struct vm_operations_struct {
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
 	int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);
+
+	/* called by access_process_vm when get_user_pages() fails, typically
+	 * for use by special VMAs that can switch between memory and hardware
+	 */
+	int (*access)(struct vm_area_struct *vma, unsigned long addr,
+		      void *buf, int len, int write);
 #ifdef CONFIG_NUMA
 	/*
 	 * set_policy() op must add a reference to any non-NULL @new mempolicy
@@ -769,6 +775,8 @@ int copy_page_range(struct mm_struct *ds
 			struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
+int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
+			void *buf, int len, int write);
 
 static inline void unmap_shared_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen)
Index: ptrace-2.6.26-rc2-mm1/arch/Kconfig
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/Kconfig	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/Kconfig	2008-05-15 13:50:15.000000000 -0400
@@ -40,6 +40,9 @@ config HAVE_KRETPROBES
 config HAVE_DMA_ATTRS
 	def_bool n
 
+config HAVE_IOREMAP_PROT
+	def_bool n
+
 config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	def_bool n
 	help
Index: ptrace-2.6.26-rc2-mm1/arch/x86/Kconfig
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/x86/Kconfig	2008-05-15 13:50:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/x86/Kconfig	2008-05-15 13:50:15.000000000 -0400
@@ -27,6 +27,7 @@ config X86
 	select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64)
 	select HAVE_ARCH_KGDB if !X86_VOYAGER
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select HAVE_IOREMAP_PROT
 
 config DEFCONFIG_LIST
 	string

-- 
All Rights Reversed



* [patch 2/5] use generic_access_phys for /dev/mem mappings
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
  2008-05-15 17:53 ` [patch 1/5] access_process_vm device memory infrastructure Rik van Riel
@ 2008-05-15 17:53 ` Rik van Riel
  2008-05-15 17:54 ` [patch 3/5] use generic_access_phys for pci mmap on x86 Rik van Riel
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

[-- Attachment #1: 02-devmem-access.patch --]
[-- Type: text/plain, Size: 794 bytes --]

Use generic_access_phys as the access_process_vm access
function for /dev/mem mappings. This makes it possible to
debug the X server.

Signed-off-by: Rik van Riel <riel@redhat.com>

Index: ptrace-2.6.26-rc2-mm1/drivers/char/mem.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/drivers/char/mem.c	2008-05-15 13:28:13.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/drivers/char/mem.c	2008-05-15 13:28:41.000000000 -0400
@@ -326,7 +326,8 @@ static void mmap_mem_close(struct vm_are
 
 static struct vm_operations_struct mmap_mem_ops = {
 	.open  = mmap_mem_open,
-	.close = mmap_mem_close
+	.close = mmap_mem_close,
+	.access = generic_access_phys
 };
 
 static int mmap_mem(struct file * file, struct vm_area_struct * vma)

-- 
All Rights Reversed



* [patch 3/5] use generic_access_phys for pci mmap on x86
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
  2008-05-15 17:53 ` [patch 1/5] access_process_vm device memory infrastructure Rik van Riel
  2008-05-15 17:53 ` [patch 2/5] use generic_access_phys for /dev/mem mappings Rik van Riel
@ 2008-05-15 17:54 ` Rik van Riel
  2008-05-15 17:54 ` [patch 4/5] powerpc ioremap_prot Rik van Riel
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

[-- Attachment #1: 03-x86-pci.patch --]
[-- Type: text/plain, Size: 794 bytes --]

Use generic_access_phys for ptrace and /proc/pid/mem access to
PCI device memory on x86.  This makes it possible to debug the
X server.

Signed-off-by: Rik van Riel <riel@redhat.com>

Index: ptrace-2.6.26-rc2-mm1/arch/x86/pci/i386.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/x86/pci/i386.c	2008-05-15 12:46:41.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/x86/pci/i386.c	2008-05-15 13:34:36.000000000 -0400
@@ -280,6 +280,7 @@ static void pci_track_mmap_page_range(st
 static struct vm_operations_struct pci_mmap_ops = {
 	.open  = pci_track_mmap_page_range,
 	.close = pci_unmap_page_range,
+	.access = generic_access_phys,
 };
 
 int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,

-- 
All Rights Reversed



* [patch 4/5] powerpc ioremap_prot
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
                   ` (2 preceding siblings ...)
  2008-05-15 17:54 ` [patch 3/5] use generic_access_phys for pci mmap on x86 Rik van Riel
@ 2008-05-15 17:54 ` Rik van Riel
  2008-05-15 17:54 ` [patch 5/5] spufs use the new vm_ops->access Rik van Riel
  2008-05-23  5:58 ` [patch 0/5] access_process_vm device memory access Andrew Morton
  5 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

[-- Attachment #1: 04-powerpc-ioremap_prot.patch --]
[-- Type: text/plain, Size: 8527 bytes --]

This adds ioremap_prot() and pte_pgprot(), so that the protection
bits can be extracted from a PTE and passed to ioremap_prot() (in
order to support ptrace of VM_IO | VM_PFNMAP mappings, as per Rik's
patch).

This moves a couple of flag checks around in the ioremap
implementations of arch/powerpc. A side effect is that non-cacheable,
non-guarded mappings become possible on ppc32, which previously always
set _PAGE_GUARDED whenever _PAGE_NO_CACHE was set.

(Standard ioremap will still set _PAGE_GUARDED, but ioremap_prot
will be capable of creating such a non-guarded mapping.)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
---

Index: ptrace-2.6.26-rc2-mm1/arch/powerpc/mm/pgtable_32.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/powerpc/mm/pgtable_32.c	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/powerpc/mm/pgtable_32.c	2008-05-15 13:17:21.000000000 -0400
@@ -145,13 +145,20 @@ void pte_free(struct mm_struct *mm, pgta
 void __iomem *
 ioremap(phys_addr_t addr, unsigned long size)
 {
-	return __ioremap(addr, size, _PAGE_NO_CACHE);
+	return __ioremap(addr, size, _PAGE_NO_CACHE | _PAGE_GUARDED);
 }
 EXPORT_SYMBOL(ioremap);
 
 void __iomem *
 ioremap_flags(phys_addr_t addr, unsigned long size, unsigned long flags)
 {
+	/* writeable implies dirty for kernel addresses */
+	if (flags & _PAGE_RW)
+		flags |= _PAGE_DIRTY | _PAGE_HWWRITE;
+
+	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
+	flags &= ~(_PAGE_USER | _PAGE_EXEC | _PAGE_HWEXEC);
+
 	return __ioremap(addr, size, flags);
 }
 EXPORT_SYMBOL(ioremap_flags);
@@ -163,6 +170,14 @@ __ioremap(phys_addr_t addr, unsigned lon
 	phys_addr_t p;
 	int err;
 
+	/* Make sure we have the base flags */
+	if ((flags & _PAGE_PRESENT) == 0)
+		flags |= _PAGE_KERNEL;
+
+	/* Non-cacheable page cannot be coherent */
+	if (flags & _PAGE_NO_CACHE)
+		flags &= ~_PAGE_COHERENT;
+
 	/*
 	 * Choose an address to map it to.
 	 * Once the vmalloc system is running, we use it.
@@ -219,11 +234,6 @@ __ioremap(phys_addr_t addr, unsigned lon
 		v = (ioremap_bot -= size);
 	}
 
-	if ((flags & _PAGE_PRESENT) == 0)
-		flags |= _PAGE_KERNEL;
-	if (flags & _PAGE_NO_CACHE)
-		flags |= _PAGE_GUARDED;
-
 	/*
 	 * Should check if it is a candidate for a BAT mapping
 	 */
Index: ptrace-2.6.26-rc2-mm1/arch/powerpc/mm/pgtable_64.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/powerpc/mm/pgtable_64.c	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/powerpc/mm/pgtable_64.c	2008-05-15 13:17:21.000000000 -0400
@@ -107,9 +107,18 @@ void __iomem * __ioremap_at(phys_addr_t 
 {
 	unsigned long i;
 
+	/* Make sure we have the base flags */
 	if ((flags & _PAGE_PRESENT) == 0)
 		flags |= pgprot_val(PAGE_KERNEL);
 
+	/* Non-cacheable page cannot be coherent */
+	if (flags & _PAGE_NO_CACHE)
+		flags &= ~_PAGE_COHERENT;
+
+	/* We don't support the 4K PFN hack with ioremap */
+	if (flags & _PAGE_4K_PFN)
+		return NULL;
+
 	WARN_ON(pa & ~PAGE_MASK);
 	WARN_ON(((unsigned long)ea) & ~PAGE_MASK);
 	WARN_ON(size & ~PAGE_MASK);
@@ -190,6 +199,13 @@ void __iomem * ioremap(phys_addr_t addr,
 void __iomem * ioremap_flags(phys_addr_t addr, unsigned long size,
 			     unsigned long flags)
 {
+	/* writeable implies dirty for kernel addresses */
+	if (flags & _PAGE_RW)
+		flags |= _PAGE_DIRTY;
+
+	/* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
+	flags &= ~(_PAGE_USER | _PAGE_EXEC);
+
 	if (ppc_md.ioremap)
 		return ppc_md.ioremap(addr, size, flags);
 	return __ioremap(addr, size, flags);
Index: ptrace-2.6.26-rc2-mm1/include/asm-powerpc/io.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-powerpc/io.h	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-powerpc/io.h	2008-05-15 13:17:21.000000000 -0400
@@ -588,7 +588,8 @@ static inline void iosync(void)
  *   and can be hooked by the platform via ppc_md
  *
  * * ioremap_flags allows to specify the page flags as an argument and can
- *   also be hooked by the platform via ppc_md
+ *   also be hooked by the platform via ppc_md. ioremap_prot is the exact
+ *   same thing as ioremap_flags.
  *
  * * ioremap_nocache is identical to ioremap
  *
@@ -610,6 +611,8 @@ extern void __iomem *ioremap(phys_addr_t
 extern void __iomem *ioremap_flags(phys_addr_t address, unsigned long size,
 				   unsigned long flags);
 #define ioremap_nocache(addr, size)	ioremap((addr), (size))
+#define ioremap_prot(addr, size, prot)	ioremap_flags((addr), (size), (prot))
+
 extern void iounmap(volatile void __iomem *addr);
 
 extern void __iomem *__ioremap(phys_addr_t, unsigned long size,
Index: ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-ppc32.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-powerpc/pgtable-ppc32.h	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-ppc32.h	2008-05-15 13:21:47.000000000 -0400
@@ -388,6 +388,12 @@ extern int icache_44x_need_flush;
 #ifndef _PAGE_EXEC
 #define _PAGE_EXEC	0
 #endif
+#ifndef _PAGE_ENDIAN
+#define _PAGE_ENDIAN	0
+#endif
+#ifndef _PAGE_COHERENT
+#define _PAGE_COHERENT	0
+#endif
 #ifndef _PMD_PRESENT_MASK
 #define _PMD_PRESENT_MASK	_PMD_PRESENT
 #endif
@@ -398,6 +404,12 @@ extern int icache_44x_need_flush;
 
 #define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
 
+
+#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
+				 _PAGE_WRITETHRU | _PAGE_ENDIAN | \
+				 _PAGE_USER | _PAGE_ACCESSED | \
+				 _PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | \
+				 _PAGE_EXEC | _PAGE_HWEXEC)
 /*
  * Note: the _PAGE_COHERENT bit automatically gets set in the hardware
  * PTE if CONFIG_SMP is defined (hash_page does this); there is no need
@@ -531,6 +543,10 @@ static inline pte_t pte_mkyoung(pte_t pt
 	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
 static inline pte_t pte_mkspecial(pte_t pte) {
 	return pte; }
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+	return __pgprot(pte_val(pte) & PAGE_PROT_BITS);
+}
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
Index: ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-ppc64.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-powerpc/pgtable-ppc64.h	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-ppc64.h	2008-05-15 13:21:59.000000000 -0400
@@ -115,6 +115,10 @@
 #define PAGE_AGP	__pgprot(_PAGE_BASE | _PAGE_WRENABLE | _PAGE_NO_CACHE)
 #define HAVE_PAGE_AGP
 
+#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
+				 _PAGE_NO_CACHE | _PAGE_WRITETHRU | \
+				 _PAGE_4K_PFN | _PAGE_RW | _PAGE_USER | \
+				 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
 /* PTEIDX nibble */
 #define _PTEIDX_SECONDARY	0x8
 #define _PTEIDX_GROUP_IX	0x7
@@ -260,6 +264,10 @@ static inline pte_t pte_mkhuge(pte_t pte
 	return pte; }
 static inline pte_t pte_mkspecial(pte_t pte) {
 	return pte; }
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+	return __pgprot(pte_val(pte) & PAGE_PROT_BITS);
+}
 
 /* Atomic PTE updates */
 static inline unsigned long pte_update(struct mm_struct *mm,
Index: ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-4k.h
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/include/asm-powerpc/pgtable-4k.h	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/include/asm-powerpc/pgtable-4k.h	2008-05-15 13:17:21.000000000 -0400
@@ -50,6 +50,9 @@
 #define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
 			 _PAGE_SECONDARY | _PAGE_GROUP_IX)
 
+/* There is no 4K PFN hack on 4K pages */
+#define _PAGE_4K_PFN	0
+
 /* PAGE_MASK gives the right answer below, but only by accident */
 /* It should be preserving the high 48 bits and then specifically */
 /* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */
Index: ptrace-2.6.26-rc2-mm1/arch/powerpc/Kconfig
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/powerpc/Kconfig	2008-05-15 13:16:51.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/powerpc/Kconfig	2008-05-15 13:17:21.000000000 -0400
@@ -111,6 +111,7 @@ config PPC
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_LMB
+	select HAVE_IOREMAP_PROT
 
 config EARLY_PRINTK
 	bool

-- 
All Rights Reversed



* [patch 5/5] spufs use the new vm_ops->access
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
                   ` (3 preceding siblings ...)
  2008-05-15 17:54 ` [patch 4/5] powerpc ioremap_prot Rik van Riel
@ 2008-05-15 17:54 ` Rik van Riel
  2008-05-23  5:58 ` [patch 0/5] access_process_vm device memory access Andrew Morton
  5 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-15 17:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, ajackson, airlied, benh

[-- Attachment #1: 05-spufs-use-vm_ops-access.patch --]
[-- Type: text/plain, Size: 1684 bytes --]

spufs: use new vm_ops->access to allow local state access from gdb

This uses the new vm_ops->access to allow gdb to access the SPU
local store. We currently prevent access to the problem-state
registers; that can be added later if really needed, but it is
safer not to.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
---

Index: ptrace-2.6.26-rc2-mm1/arch/powerpc/platforms/cell/spufs/file.c
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/arch/powerpc/platforms/cell/spufs/file.c	2008-05-15 13:35:39.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/arch/powerpc/platforms/cell/spufs/file.c	2008-05-15 13:42:26.000000000 -0400
@@ -287,9 +287,32 @@ spufs_mem_mmap_fault(struct vm_area_stru
 	return VM_FAULT_NOPAGE;
 }
 
+static int spufs_mem_mmap_access(struct vm_area_struct *vma,
+				unsigned long address,
+				void *buf, int len, int write)
+{
+	struct spu_context *ctx = vma->vm_file->private_data;
+	unsigned long offset = address - vma->vm_start;
+	char *local_store;
+
+	if (write && !(vma->vm_flags & VM_WRITE))
+		return -EACCES;
+	if (spu_acquire(ctx))
+		return -EINTR;
+	if ((offset + len) > (vma->vm_end - vma->vm_start))
+		len = (vma->vm_end - vma->vm_start) - offset;
+	local_store = ctx->ops->get_ls(ctx);
+	if (write)
+		memcpy_toio(local_store + offset, buf, len);
+	else
+		memcpy_fromio(buf, local_store + offset, len);
+	spu_release(ctx);
+	return len;
+}

 static struct vm_operations_struct spufs_mem_mmap_vmops = {
 	.fault = spufs_mem_mmap_fault,
+	.access = spufs_mem_mmap_access,
 };
 
 static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)

-- 
All Rights Reversed



* Re: [patch 1/5] access_process_vm device memory infrastructure
  2008-05-15 17:53 ` [patch 1/5] access_process_vm device memory infrastructure Rik van Riel
@ 2008-05-16  8:20   ` Peter Zijlstra
  2008-05-16 20:27     ` [PATCH 6/5] Add documentation for the vm_ops->access function Rik van Riel
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2008-05-16  8:20 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel, akpm, ajackson, airlied, benh

On Thu, 2008-05-15 at 13:53 -0400, Rik van Riel wrote:
> plain text document attachment
> (01-access_process_vm-device-memory.patch)
> Add the generic_access_phys access function and put the hooks in place
> to allow access_process_vm to access device or PPC Cell SPU memory.
> 
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> 

> Index: ptrace-2.6.26-rc2-mm1/include/linux/mm.h
> ===================================================================
> --- ptrace-2.6.26-rc2-mm1.orig/include/linux/mm.h	2008-05-15 13:50:13.000000000 -0400
> +++ ptrace-2.6.26-rc2-mm1/include/linux/mm.h	2008-05-15 13:50:18.000000000 -0400
> @@ -169,6 +169,12 @@ struct vm_operations_struct {
>  	/* notification that a previously read-only page is about to become
>  	 * writable, if an error is returned it will cause a SIGBUS */
>  	int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);
> +
> +	/* called by access_process_vm when get_user_pages() fails, typically
> +	 * for use by special VMAs that can switch between memory and hardware
> +	 */
> +	int (*access)(struct vm_area_struct *vma, unsigned long addr,
> +		      void *buf, int len, int write);


This bit misses corresponding Documentation/ changes.

Other than that it looks good.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>



* [PATCH 6/5] Add documentation for the vm_ops->access function.
  2008-05-16  8:20   ` Peter Zijlstra
@ 2008-05-16 20:27     ` Rik van Riel
  0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2008-05-16 20:27 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, akpm, ajackson, airlied, benh

From: Rik van Riel <riel@redhat.com>

Add documentation for the vm_ops->access function.

Signed-off-by: Rik van Riel <riel@redhat.com>

Index: ptrace-2.6.26-rc2-mm1/Documentation/filesystems/Locking
===================================================================
--- ptrace-2.6.26-rc2-mm1.orig/Documentation/filesystems/Locking	2008-05-15 12:46:37.000000000 -0400
+++ ptrace-2.6.26-rc2-mm1/Documentation/filesystems/Locking	2008-05-16 16:25:10.000000000 -0400
@@ -510,6 +510,7 @@ prototypes:
 	void (*close)(struct vm_area_struct*);
 	int (*fault)(struct vm_area_struct*, struct vm_fault *);
 	int (*page_mkwrite)(struct vm_area_struct *, struct page *);
+	int (*access)(struct vm_area_struct *, unsigned long, void*, int, int);
 
 locking rules:
 		BKL	mmap_sem	PageLocked(page)
@@ -517,6 +518,7 @@ open:		no	yes
 close:		no	yes
 fault:		no	yes
 page_mkwrite:	no	yes		no
+access:		no	yes
 
 	->page_mkwrite() is called when a previously read-only page is
 about to become writeable. The file system is responsible for
@@ -525,6 +527,11 @@ taking to lock out truncate, the page ra
 within i_size. The page mapping should also be checked that it is not
 NULL.
 
+	->access() is called when get_user_pages() fails in
+access_process_vm(), typically when debugging a process through
+/proc/pid/mem or ptrace.  This function is needed only for
+VM_IO | VM_PFNMAP VMAs.
+
 ================================================================================
 			Dubious stuff
 


* Re: [patch 0/5] access_process_vm device memory access
  2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
                   ` (4 preceding siblings ...)
  2008-05-15 17:54 ` [patch 5/5] spufs use the new vm_ops->access Rik van Riel
@ 2008-05-23  5:58 ` Andrew Morton
  2008-05-23  6:01   ` Andrew Morton
  5 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2008-05-23  5:58 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel, ajackson, airlied, benh

On Thu, 15 May 2008 13:53:57 -0400 Rik van Riel <riel@redhat.com> wrote:

> In order to be able to debug things like the X server and programs
> using the PPC Cell SPUs, the debugger needs to be able to access
> device memory through ptrace and /proc/pid/mem.

My alpha allmodconfig build is unwell

drivers/built-in.o:(.data+0x4600): undefined reference to `generic_access_phys'

and I think you might be involved.


* Re: [patch 0/5] access_process_vm device memory access
  2008-05-23  5:58 ` [patch 0/5] access_process_vm device memory access Andrew Morton
@ 2008-05-23  6:01   ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2008-05-23  6:01 UTC (permalink / raw)
  To: Rik van Riel, linux-kernel, ajackson, airlied, benh

On Thu, 22 May 2008 22:58:57 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:

> On Thu, 15 May 2008 13:53:57 -0400 Rik van Riel <riel@redhat.com> wrote:
> 
> > In order to be able to debug things like the X server and programs
> > using the PPC Cell SPUs, the debugger needs to be able to access
> > device memory through ptrace and /proc/pid/mem.
> 
> My alpha allmodconfig build is unwell
> 
> drivers/built-in.o:(.data+0x4600): undefined reference to `generic_access_phys'
> 
> and I think you might be involved.

Ditto arch/arm/configs/iop32x_defconfig


Thread overview: 10+ messages
2008-05-15 17:53 [patch 0/5] access_process_vm device memory access Rik van Riel
2008-05-15 17:53 ` [patch 1/5] access_process_vm device memory infrastructure Rik van Riel
2008-05-16  8:20   ` Peter Zijlstra
2008-05-16 20:27     ` [PATCH 6/5] Add documentation for the vm_ops->access function Rik van Riel
2008-05-15 17:53 ` [patch 2/5] use generic_access_phys for /dev/mem mappings Rik van Riel
2008-05-15 17:54 ` [patch 3/5] use generic_access_phys for pci mmap on x86 Rik van Riel
2008-05-15 17:54 ` [patch 4/5] powerpc ioremap_prot Rik van Riel
2008-05-15 17:54 ` [patch 5/5] spufs use the new vm_ops->access Rik van Riel
2008-05-23  5:58 ` [patch 0/5] access_process_vm device memory access Andrew Morton
2008-05-23  6:01   ` Andrew Morton
