Subject: mlock hanging on non-ram with VM_IO set
From: Andrea Arcangeli @ 2004-11-16 17:33 UTC
  To: Andrew Morton, Hugh Dickins; +Cc: linux-kernel

I've got a report of mlock deadlocking on non-ram mappings (precisely
on an mmap of /dev/mem outside the mem_map array range).
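
For reference, the mapping in question gets created roughly like this
(a simplified sketch based on the 2.6 /dev/mem mmap path in
drivers/char/mem.c, with the range/permission checks trimmed);
remap_page_range() fills the ptes up front and tags the vma VM_IO:

static int mmap_mem(struct file *file, struct vm_area_struct *vma)
{
        unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;

        /* the ptes for the whole range are instantiated here, at
         * mmap time; remap_page_range() also sets VM_IO on the vma,
         * so a later fault on this range can never produce a valid
         * ram page */
        if (remap_page_range(vma, vma->vm_start, offset,
                             vma->vm_end - vma->vm_start,
                             vma->vm_page_prot))
                return -EAGAIN;
        return 0;
}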

I believe this should fix it. I verified that mlock is the only caller
passing down pages == NULL, and in fact mlock is the only one hanging
on VM_IO vmas, since the pfn isn't valid there and handle_mm_fault
sure can't instantiate a valid pfn when the ptes are filled
synchronously at mmap time. Comments? (I don't see any reason why
mlock should be allowed to trigger page faults on VM_IO regions.)
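
To make the hang concrete, here's roughly the path mlock takes (a
simplified sketch of the 2.6 make_pages_present()/get_user_pages()
interaction, not the literal source):

int make_pages_present(unsigned long addr, unsigned long end)
{
        struct vm_area_struct *vma = find_vma(current->mm, addr);
        int write = (vma->vm_flags & VM_WRITE) != 0;
        int len = (end - addr) >> PAGE_SHIFT;

        /* pages == NULL and vmas == NULL: mlock only wants the ptes
         * instantiated, it has no use for the struct page pointers.
         * With the old "pages && (vma->vm_flags & VM_IO)" check the
         * VM_IO bailout is skipped, and since follow_page() can never
         * return a valid page for an invalid pfn, get_user_pages()
         * retries handle_mm_fault() forever */
        return get_user_pages(current, current->mm, addr, len,
                              write, 0, NULL, NULL) == len ? 0 : -EFAULT;
}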

This is not a bad bug: mlock is privileged, /dev/mem is privileged
too, and mlock makes no sense on non-ram mappings (it's a noop). But I
guess it's ok to add the fix, mostly for things like mlockall that
would otherwise deadlock the box, and I guess it's nicer if they work
transparently, no matter what mappings are in the task.
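
Something along these lines (a hypothetical reproducer, run as root;
the 0x40000000 offset is just an example assumed to sit above the end
of ram on the given box, adjust per machine) would lock up the task
before the fix:

#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* example offset assumed to be non-ram (e.g. a pci hole) */
        void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd,
                       0x40000000UL);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* without the fix this call never returns (it spins inside
         * get_user_pages); with it, mlockall comes back and the task
         * keeps running no matter what mappings it has */
        if (mlockall(MCL_CURRENT) < 0)
                perror("mlockall");
        return 0;
}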

btw, mlock traps the error and ignores it as it should.

This is untested, but I guess it should fix it (and it applied cleanly
to kernel CVS).

Signed-off-by: Andrea Arcangeli <andrea@novell.com>

--- sles/mm/memory.c.~1~	2004-11-12 12:30:25.000000000 +0100
+++ sles/mm/memory.c	2004-11-16 17:58:02.752131952 +0100
@@ -753,7 +753,7 @@ int get_user_pages(struct task_struct *t
 			continue;
 		}
 
-		if (!vma || (pages && (vma->vm_flags & VM_IO))
+		if (!vma || (vma->vm_flags & VM_IO)
 				|| !(flags & vma->vm_flags))
 			return i ? : -EFAULT;
 
