* atomic copy_from_user?
@ 2003-12-22 0:48 Albert Cahalan
2003-12-22 4:31 ` Linus Torvalds
2003-12-22 15:00 ` William Lee Irwin III
0 siblings, 2 replies; 17+ messages in thread
From: Albert Cahalan @ 2003-12-22 0:48 UTC (permalink / raw)
To: linux-kernel mailing list
Surely I'm not the only one wanting such a beast...?
From some naughty place in the code where might_sleep
would trigger, I'd like to read from user memory.
I'll pretty much assume that mlockall() has been
called. Suppose that "current" is correct as well.
I'd just use a pointer directly, except that:
a. it isn't OK for the 4g/4g feature, s390, or sparc64
b. it causes the "sparse" type checker to complain
c. it will oops or worse if the user screwed up
If the page is swapped out, I want a failed copy.
* Re: atomic copy_from_user?
2003-12-22 0:48 atomic copy_from_user? Albert Cahalan
@ 2003-12-22 4:31 ` Linus Torvalds
2003-12-22 9:36 ` Andrew Morton
2003-12-22 15:00 ` William Lee Irwin III
1 sibling, 1 reply; 17+ messages in thread
From: Linus Torvalds @ 2003-12-22 4:31 UTC (permalink / raw)
To: Albert Cahalan; +Cc: linux-kernel mailing list
On Sun, 21 Dec 2003, Albert Cahalan wrote:
>
> Surely I'm not the only one wanting such a beast...?
I sure as hell hope you are.
> From some naughty place in the code where might_sleep
> would trigger, I'd like to read from user memory.
> I'll pretty much assume that mlockall() has been
> called. Suppose that "current" is correct as well.
> I'd just use a pointer directly, except that:
>
> a. it isn't OK for the 4g/4g feature, s390, or sparc64
> b. it causes the "sparse" type checker to complain
> c. it will oops or worse if the user screwed up
>
> If the page is swapped out, I want a failed copy.
the sequence

	local_bh_disable();
	err = get_user(n, ptr);
	local_bh_enable();
	if (!err)
		.. 'n' .. was the value

will do this in 2.6.x, except it will complain loudly about the non-atomic
access. Other than that, it will do what you ask for.
However, I'd still suggest not doing this. It's just broken. I don't see
any real reason to do this except as a "test if the page is paged out"
kind of thing..
Linus
* Re: atomic copy_from_user?
2003-12-22 4:31 ` Linus Torvalds
@ 2003-12-22 9:36 ` Andrew Morton
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2003-12-22 9:36 UTC (permalink / raw)
To: Linus Torvalds; +Cc: albert, linux-kernel
Linus Torvalds <torvalds@osdl.org> wrote:
>
> > From some naughty place in the code where might_sleep
> > would trigger, I'd like to read from user memory.
> > I'll pretty much assume that mlockall() has been
> > called. Suppose that "current" is correct as well.
> > I'd just use a pointer directly, except that:
> >
> > a. it isn't OK for the 4g/4g feature, s390, or sparc64
> > b. it causes the "sparse" type checker to complain
> > c. it will oops or worse if the user screwed up
> >
> > If the page is swapped out, I want a failed copy.
>
> the sequence
>
> 	local_bh_disable();
> 	err = get_user(n, ptr);
> 	local_bh_enable();
> 	if (!err)
> 		.. 'n' .. was the value
>
> will do this in 2.6.x, except it will complain loudly about the non-atomic
> access. Other than that, it will do what you ask for.
An explicit inc_preempt_count() would be clearer. See how ia32's
kmap_atomic() does it. And filemap_copy_from_user().
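[Editorial note: the pattern Andrew points at — bump the preemption counter so the fault handler sees in_atomic() and refuses to sleep — can be sketched in a self-contained userland mock. preempt_count_v, mock_get_user(), and atomic_get_user() below are simplified stand-ins for illustration, not the real kernel implementations.]

```c
/* Userland mock of the atomic-usercopy pattern: mark ourselves atomic
 * around the access, so a "fault" fails fast instead of sleeping. */
#include <assert.h>

static int preempt_count_v;

#define inc_preempt_count() do { preempt_count_v++; } while (0)
#define dec_preempt_count() do { preempt_count_v--; } while (0)
#define in_atomic() (preempt_count_v != 0)

/* Stand-in for get_user() plus the fault handler's decision. */
static int mock_get_user(int *dst, const int *src, int page_present)
{
	if (!page_present) {
		if (in_atomic())
			return -14; /* -EFAULT: handler refuses to sleep */
		/* non-atomic: the real handler would sleep and page it in */
	}
	*dst = *src;
	return 0;
}

/* The copy either succeeds immediately or fails with -EFAULT;
 * it never blocks, which is what the original poster asked for. */
static int atomic_get_user(int *dst, const int *src, int page_present)
{
	int err;
	inc_preempt_count();	/* as kmap_atomic() does */
	err = mock_get_user(dst, src, page_present);
	dec_preempt_count();
	return err;
}
```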
* Re: atomic copy_from_user?
2003-12-22 0:48 atomic copy_from_user? Albert Cahalan
2003-12-22 4:31 ` Linus Torvalds
@ 2003-12-22 15:00 ` William Lee Irwin III
2003-12-22 18:26 ` Joe Korty
1 sibling, 1 reply; 17+ messages in thread
From: William Lee Irwin III @ 2003-12-22 15:00 UTC (permalink / raw)
To: Albert Cahalan; +Cc: linux-kernel mailing list
On Sun, Dec 21, 2003 at 07:48:20PM -0500, Albert Cahalan wrote:
> Surely I'm not the only one wanting such a beast...?
> From some naughty place in the code where might_sleep
> would trigger, I'd like to read from user memory.
> I'll pretty much assume that mlockall() has been
> called. Suppose that "current" is correct as well.
> I'd just use a pointer directly, except that:
> a. it isn't OK for the 4g/4g feature, s390, or sparc64
> b. it causes the "sparse" type checker to complain
> c. it will oops or worse if the user screwed up
> If the page is swapped out, I want a failed copy.
c.f. kmap_atomic() usage in mm/filemap.c
-- wli
* Re: atomic copy_from_user?
2003-12-22 15:00 ` William Lee Irwin III
@ 2003-12-22 18:26 ` Joe Korty
2003-12-22 20:55 ` Rob Love
0 siblings, 1 reply; 17+ messages in thread
From: Joe Korty @ 2003-12-22 18:26 UTC (permalink / raw)
To: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, Dec 22, 2003 at 07:00:26AM -0800, William Lee Irwin III wrote:
> c.f. kmap_atomic() usage in mm/filemap.c
Shouldn't the dec_preempt_count() in kunmap_atomic() be followed
by a preempt_check_resched()???
Joe
* Re: atomic copy_from_user?
2003-12-22 18:26 ` Joe Korty
@ 2003-12-22 20:55 ` Rob Love
2003-12-22 21:22 ` Joe Korty
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Rob Love @ 2003-12-22 20:55 UTC (permalink / raw)
To: Joe Korty
Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, 2003-12-22 at 13:26, Joe Korty wrote:
> Shouldn't the dec_preempt_count() in kunmap_atomic() be followed
> by a preempt_check_resched()???
Probably.
Actually, dec_preempt_count() ought to call preempt_check_resched()
itself. In the case of !CONFIG_PREEMPT, that call would simply optimize
away.
Attached patch is against 2.6.0.
Rob Love
linux/preempt.h | 1 +
1 files changed, 1 insertion(+)
diff -urN include/linux/preempt.h.orig include/linux/preempt.h
--- include/linux/preempt.h.orig 2003-12-22 15:53:11.329113296 -0500
+++ include/linux/preempt.h 2003-12-22 15:53:51.314034664 -0500
@@ -18,6 +18,7 @@
#define dec_preempt_count() \
do { \
preempt_count()--; \
+ preempt_check_resched(); \
} while (0)
#ifdef CONFIG_PREEMPT
* Re: atomic copy_from_user?
2003-12-22 20:55 ` Rob Love
@ 2003-12-22 21:22 ` Joe Korty
2003-12-22 21:40 ` Rob Love
2003-12-22 22:06 ` Joe Korty
2003-12-22 22:14 ` Andrew Morton
2 siblings, 1 reply; 17+ messages in thread
From: Joe Korty @ 2003-12-22 21:22 UTC (permalink / raw)
To: Rob Love; +Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, Dec 22, 2003 at 03:55:06PM -0500, Rob Love wrote:
> On Mon, 2003-12-22 at 13:26, Joe Korty wrote:
>
> > Shouldn't the dec_preempt_count() in kunmap_atomic() be followed
> > by a preempt_check_resched()???
>
> Probably.
>
> Actually, dec_preempt_count() ought to call preempt_check_resched()
> itself. In the case of !CONFIG_PREEMPT, that call would simply optimize
> away.
>
> Attached patch is against 2.6.0.
>
> Rob Love
>
>
> linux/preempt.h | 1 +
> 1 files changed, 1 insertion(+)
>
> diff -urN include/linux/preempt.h.orig include/linux/preempt.h
> --- include/linux/preempt.h.orig 2003-12-22 15:53:11.329113296 -0500
> +++ include/linux/preempt.h 2003-12-22 15:53:51.314034664 -0500
> @@ -18,6 +18,7 @@
> #define dec_preempt_count() \
> do { \
> preempt_count()--; \
> + preempt_check_resched(); \
> } while (0)
>
> #ifdef CONFIG_PREEMPT
I am guessing that nowadays, even when preemption is disabled, one can
find preempt_count still being used somewhere. Otherwise it would be
better to replace all uses of inc_preempt_count() with
preempt_disable() and dec_preempt_count() with preempt_enable().
Joe
diff -ura base/arch/i386/mm/highmem.c new/arch/i386/mm/highmem.c
--- base/arch/i386/mm/highmem.c 2003-12-17 21:58:56.000000000 -0500
+++ new/arch/i386/mm/highmem.c 2003-12-22 16:16:27.000000000 -0500
@@ -30,7 +30,7 @@
enum fixed_addresses idx;
unsigned long vaddr;
- inc_preempt_count();
+ preempt_disable();
if (page < highmem_start_page)
return page_address(page);
@@ -53,7 +53,7 @@
enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < FIXADDR_START) { // FIXME
- dec_preempt_count();
+ preempt_enable();
return;
}
@@ -68,7 +68,7 @@
__flush_tlb_one(vaddr);
#endif
- dec_preempt_count();
+ preempt_enable();
}
struct page *kmap_atomic_to_page(void *ptr)
diff -ura base/arch/mips/mm/highmem.c new/arch/mips/mm/highmem.c
--- base/arch/mips/mm/highmem.c 2003-12-17 21:58:28.000000000 -0500
+++ new/arch/mips/mm/highmem.c 2003-12-22 16:17:28.000000000 -0500
@@ -40,7 +40,7 @@
enum fixed_addresses idx;
unsigned long vaddr;
- inc_preempt_count();
+ preempt_disable();
if (page < highmem_start_page)
return page_address(page);
@@ -63,7 +63,7 @@
enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < FIXADDR_START) { // FIXME
- dec_preempt_count();
+ preempt_enable();
return;
}
@@ -78,7 +78,7 @@
local_flush_tlb_one(vaddr);
#endif
- dec_preempt_count();
+ preempt_enable();
}
struct page *kmap_atomic_to_page(void *ptr)
diff -ura base/arch/sparc/mm/highmem.c new/arch/sparc/mm/highmem.c
--- base/arch/sparc/mm/highmem.c 2003-12-17 21:58:28.000000000 -0500
+++ new/arch/sparc/mm/highmem.c 2003-12-22 16:17:02.000000000 -0500
@@ -33,7 +33,7 @@
unsigned long idx;
unsigned long vaddr;
- inc_preempt_count();
+ preempt_disable();
if (page < highmem_start_page)
return page_address(page);
@@ -68,7 +68,7 @@
unsigned long idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < fix_kmap_begin) { // FIXME
- dec_preempt_count();
+ preempt_enable();
return;
}
@@ -95,5 +95,5 @@
flush_tlb_all();
#endif
#endif
- dec_preempt_count();
+ preempt_enable();
}
diff -ura base/include/asm-ppc/highmem.h new/include/asm-ppc/highmem.h
--- base/include/asm-ppc/highmem.h 2003-12-17 21:59:45.000000000 -0500
+++ new/include/asm-ppc/highmem.h 2003-12-22 16:18:05.000000000 -0500
@@ -81,7 +81,7 @@
unsigned int idx;
unsigned long vaddr;
- inc_preempt_count();
+ preempt_disable();
if (page < highmem_start_page)
return page_address(page);
@@ -104,7 +104,7 @@
unsigned int idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < KMAP_FIX_BEGIN) { // FIXME
- dec_preempt_count();
+ preempt_enable();
return;
}
@@ -118,7 +118,7 @@
pte_clear(kmap_pte+idx);
flush_tlb_page(0, vaddr);
#endif
- dec_preempt_count();
+ preempt_enable();
}
static inline struct page *kmap_atomic_to_page(void *ptr)
* Re: atomic copy_from_user?
2003-12-22 21:22 ` Joe Korty
@ 2003-12-22 21:40 ` Rob Love
2003-12-22 21:59 ` Joe Korty
0 siblings, 1 reply; 17+ messages in thread
From: Rob Love @ 2003-12-22 21:40 UTC (permalink / raw)
To: Joe Korty
Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, 2003-12-22 at 16:22, Joe Korty wrote:
> I am guessing that nowadays, even when preemption is disabled, one can
> find preempt_count still being used somewhere. Otherwise it would be
> better to replace all uses of inc_preempt_count() with
> preempt_disable() and dec_preempt_count() with preempt_enable().
Right. So why did you make this patch? :)
inc_preempt_count() and dec_preempt_count() are for use when you
_absolutely_ must manage the preemption counter, regardless of whether
or not kernel preemption is enabled.
They are used for things like atomic kmaps.
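[Editorial note: the distinction Rob draws can be shown in a small userland sketch. With CONFIG_PREEMPT unset, preempt_disable() compiles to nothing, but inc_preempt_count() still moves the counter that in_atomic() reads — which is what kmap_atomic() relies on. The macros below are simplified stand-ins for the real <linux/preempt.h>, modelling only the !CONFIG_PREEMPT case.]

```c
/* Userland sketch: preempt_disable() vs inc_preempt_count()
 * in a !CONFIG_PREEMPT configuration. */
#include <assert.h>

static int preempt_count_v;

#define in_atomic() (preempt_count_v != 0)
#define inc_preempt_count() do { preempt_count_v++; } while (0)
#define dec_preempt_count() do { preempt_count_v--; } while (0)

/* !CONFIG_PREEMPT: these optimize away entirely. */
#define preempt_disable() do { } while (0)
#define preempt_enable()  do { } while (0)

static int atomic_during_disable(void)
{
	preempt_disable();
	int a = in_atomic();	/* the no-op version leaves no trace */
	preempt_enable();
	return a;
}

static int atomic_during_inc(void)
{
	inc_preempt_count();
	int a = in_atomic();	/* the counter moves regardless of config */
	dec_preempt_count();
	return a;
}
```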
Rob Love
* Re: atomic copy_from_user?
2003-12-22 21:40 ` Rob Love
@ 2003-12-22 21:59 ` Joe Korty
2003-12-22 22:14 ` Rob Love
2003-12-22 22:24 ` Andrew Morton
0 siblings, 2 replies; 17+ messages in thread
From: Joe Korty @ 2003-12-22 21:59 UTC (permalink / raw)
To: Rob Love; +Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, Dec 22, 2003 at 04:40:11PM -0500, Rob Love wrote:
> On Mon, 2003-12-22 at 16:22, Joe Korty wrote:
>
> > I am guessing that nowadays, even when preemption is disabled, one can
> > find preempt_count still being used somewhere. Otherwise it would be
> > better to replace all uses of inc_preempt_count() with
> > preempt_disable() and dec_preempt_count() with preempt_enable().
>
> Right. So why did you make this patch? :)
>
> inc_preempt_count() and dec_preempt_count() are for use when you
> _absolutely_ must manage the preemption counter, regardless of whether
> or not kernel preemption is enabled.
>
> They are used for things like atomic kmaps.
Hi Robert,
I do not see why a non-preempt kernel would care at all about
the value of preempt_count. (kmap_atomic is obviously setting it,
where is the place in a non-preempt kernel where the set value
is being acted upon?).
Joe
* Re: atomic copy_from_user?
2003-12-22 20:55 ` Rob Love
2003-12-22 21:22 ` Joe Korty
@ 2003-12-22 22:06 ` Joe Korty
2003-12-22 22:18 ` Rob Love
2003-12-22 22:14 ` Andrew Morton
2 siblings, 1 reply; 17+ messages in thread
From: Joe Korty @ 2003-12-22 22:06 UTC (permalink / raw)
To: Rob Love; +Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, Dec 22, 2003 at 03:55:06PM -0500, Rob Love wrote:
> On Mon, 2003-12-22 at 13:26, Joe Korty wrote:
>
> > Shouldn't the dec_preempt_count() in kunmap_atomic() be followed
> > by a preempt_check_resched()???
>
> Probably.
>
> Actually, dec_preempt_count() ought to call preempt_check_resched()
> itself. In the case of !CONFIG_PREEMPT, that call would simply optimize
> away.
>
> Attached patch is against 2.6.0.
If this is done then preempt_enable_no_resched() and preempt_enable() also
need to be adjusted, as they both call dec_preempt_count().
Joe
* Re: atomic copy_from_user?
2003-12-22 20:55 ` Rob Love
2003-12-22 21:22 ` Joe Korty
2003-12-22 22:06 ` Joe Korty
@ 2003-12-22 22:14 ` Andrew Morton
2003-12-22 22:19 ` Rob Love
2 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2003-12-22 22:14 UTC (permalink / raw)
To: Rob Love; +Cc: joe.korty, wli, albert, linux-kernel
Rob Love <rml@ximian.com> wrote:
>
> Actually, dec_preempt_count() ought to call preempt_check_resched()
> itself. In the case of !CONFIG_PREEMPT, that call would simply optimize
> away.
>
> Attached patch is against 2.6.0.
>
> Rob Love
>
>
> linux/preempt.h | 1 +
> 1 files changed, 1 insertion(+)
>
> diff -urN include/linux/preempt.h.orig include/linux/preempt.h
> --- include/linux/preempt.h.orig 2003-12-22 15:53:11.329113296 -0500
> +++ include/linux/preempt.h 2003-12-22 15:53:51.314034664 -0500
> @@ -18,6 +18,7 @@
> #define dec_preempt_count() \
> do { \
> preempt_count()--; \
> + preempt_check_resched(); \
> } while (0)
But preempt_enable_no_resched() calls dec_preempt_count().
* Re: atomic copy_from_user?
2003-12-22 21:59 ` Joe Korty
@ 2003-12-22 22:14 ` Rob Love
2003-12-22 22:24 ` Andrew Morton
1 sibling, 0 replies; 17+ messages in thread
From: Rob Love @ 2003-12-22 22:14 UTC (permalink / raw)
To: Joe Korty
Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, 2003-12-22 at 16:59, Joe Korty wrote:
> I do not see why a non-preempt kernel would care at all about
> the value of preempt_count. (kmap_atomic is obviously setting it,
> where is the place in a non-preempt kernel where the set value
> is being acted upon?).
Last I checked, the architecture-specific page fault handlers. They do
something like:
if (in_atomic())
goto do_not_service_fault;
This let us implement the atomic copy_*_user() functions.
kmap_atomic() needs to mark the system atomic, so that the in_atomic()
test trips and the fault is not serviced.
This was done around ~2.5.30 by akpm.
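[Editorial note: a rough userland sketch of the decision Rob describes in the arch fault handlers (cf. do_page_fault() in arch/i386/mm/fault.c). The function name, struct, and return values here are simplified stand-ins, not the real handler.]

```c
/* Sketch of the fault handler's gate: faults may only be serviced
 * (i.e., we may sleep to page memory in) when we are not in atomic
 * context and there is a user mm to fault against. Otherwise we take
 * the no-context/fixup path, which makes copy_*_user() return -EFAULT. */
#include <assert.h>
#include <stddef.h>

static int preempt_count_v;
#define in_atomic() (preempt_count_v != 0)

struct mm_struct { int dummy; };

/* Returns 0 if the fault may be serviced, -1 if it must not be. */
static int may_service_fault(const struct mm_struct *mm)
{
	if (in_atomic() || mm == NULL)
		return -1;	/* go to no_context / exception fixup */
	return 0;
}
```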
Rob Love
* Re: atomic copy_from_user?
2003-12-22 22:06 ` Joe Korty
@ 2003-12-22 22:18 ` Rob Love
0 siblings, 0 replies; 17+ messages in thread
From: Rob Love @ 2003-12-22 22:18 UTC (permalink / raw)
To: Joe Korty
Cc: William Lee Irwin III, Albert Cahalan, linux-kernel mailing list
On Mon, 2003-12-22 at 17:06, Joe Korty wrote:
> If this is done then preempt_enable_no_resched() and preempt_enable() also
> need to be adjusted, as they both call dec_preempt_count().
True.
In that case, and because dec_preempt_count() is our base interface, I
think we should leave it alone and go back to your original idea
(explicitly call preempt_check_resched()).
Rob Love
* Re: atomic copy_from_user?
2003-12-22 22:14 ` Andrew Morton
@ 2003-12-22 22:19 ` Rob Love
2003-12-22 22:35 ` Joe Korty
0 siblings, 1 reply; 17+ messages in thread
From: Rob Love @ 2003-12-22 22:19 UTC (permalink / raw)
To: Andrew Morton; +Cc: joe.korty, wli, albert, linux-kernel
On Mon, 2003-12-22 at 17:14, Andrew Morton wrote:
> But preempt_enable_no_resched() calls dec_preempt_count().
Yah, Joe just pointed that out.
I do not really want to change the base interfaces, anyway ;)
I do think we should add an explicit preempt_check_resched() after calls
to dec_preempt_count() where we might be delaying a reschedule, though.
Rob Love
* Re: atomic copy_from_user?
2003-12-22 21:59 ` Joe Korty
2003-12-22 22:14 ` Rob Love
@ 2003-12-22 22:24 ` Andrew Morton
1 sibling, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2003-12-22 22:24 UTC (permalink / raw)
To: Joe Korty; +Cc: rml, wli, albert, linux-kernel
Joe Korty <joe.korty@ccur.com> wrote:
>
> > inc_preempt_count() and dec_preempt_count() are for use when you
> > _absolutely_ must manage the preemption counter, regardless of whether
> > or not kernel preemption is enabled.
> >
> > They are used for things like atomic kmaps.
>
> Hi Robert,
> I do not see why a non-preempt kernel would care at all about
> the value of preempt_count. (kmap_atomic is obviously setting it,
> where is the place in a non-preempt kernel where the set value
> is being acted upon?).
do_page_fault()'s in_atomic() test.
* Re: atomic copy_from_user?
2003-12-22 22:19 ` Rob Love
@ 2003-12-22 22:35 ` Joe Korty
2003-12-22 22:59 ` Rob Love
0 siblings, 1 reply; 17+ messages in thread
From: Joe Korty @ 2003-12-22 22:35 UTC (permalink / raw)
To: Rob Love; +Cc: Andrew Morton, wli, albert, linux-kernel
On Mon, Dec 22, 2003 at 05:19:48PM -0500, Rob Love wrote:
> On Mon, 2003-12-22 at 17:14, Andrew Morton wrote:
>
> > But preempt_enable_no_resched() calls dec_preempt_count().
>
> Yah, Joe just pointed that out.
>
> I do not really want to change the base interfaces, anyway ;)
>
> I do think we should add an explicit preempt_check_resched() after calls
> to dec_preempt_count() where we might be delaying a reschedule, though.
Thanks, Robert and Andrew, for your explanations. This patch should
do the trick.
Joe
diff -ura base/arch/i386/mm/highmem.c new/arch/i386/mm/highmem.c
--- base/arch/i386/mm/highmem.c 2003-12-17 21:58:56.000000000 -0500
+++ new/arch/i386/mm/highmem.c 2003-12-22 17:32:46.000000000 -0500
@@ -30,6 +30,7 @@
enum fixed_addresses idx;
unsigned long vaddr;
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
if (page < highmem_start_page)
return page_address(page);
@@ -54,6 +55,7 @@
if (vaddr < FIXADDR_START) { // FIXME
dec_preempt_count();
+ preempt_check_resched();
return;
}
@@ -69,6 +71,7 @@
#endif
dec_preempt_count();
+ preempt_check_resched();
}
struct page *kmap_atomic_to_page(void *ptr)
diff -ura base/arch/mips/mm/highmem.c new/arch/mips/mm/highmem.c
--- base/arch/mips/mm/highmem.c 2003-12-17 21:58:28.000000000 -0500
+++ new/arch/mips/mm/highmem.c 2003-12-22 17:32:59.000000000 -0500
@@ -40,6 +40,7 @@
enum fixed_addresses idx;
unsigned long vaddr;
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
if (page < highmem_start_page)
return page_address(page);
@@ -64,6 +65,7 @@
if (vaddr < FIXADDR_START) { // FIXME
dec_preempt_count();
+ preempt_check_resched();
return;
}
@@ -79,6 +81,7 @@
#endif
dec_preempt_count();
+ preempt_check_resched();
}
struct page *kmap_atomic_to_page(void *ptr)
diff -ura base/arch/sparc/mm/highmem.c new/arch/sparc/mm/highmem.c
--- base/arch/sparc/mm/highmem.c 2003-12-17 21:58:28.000000000 -0500
+++ new/arch/sparc/mm/highmem.c 2003-12-22 17:33:05.000000000 -0500
@@ -33,6 +33,7 @@
unsigned long idx;
unsigned long vaddr;
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
if (page < highmem_start_page)
return page_address(page);
@@ -69,6 +70,7 @@
if (vaddr < fix_kmap_begin) { // FIXME
dec_preempt_count();
+ preempt_check_resched();
return;
}
@@ -96,4 +98,5 @@
#endif
#endif
dec_preempt_count();
+ preempt_check_resched();
}
diff -ura base/include/asm-ppc/highmem.h new/include/asm-ppc/highmem.h
--- base/include/asm-ppc/highmem.h 2003-12-17 21:59:45.000000000 -0500
+++ new/include/asm-ppc/highmem.h 2003-12-22 17:33:13.000000000 -0500
@@ -81,6 +81,7 @@
unsigned int idx;
unsigned long vaddr;
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
if (page < highmem_start_page)
return page_address(page);
@@ -105,6 +106,7 @@
if (vaddr < KMAP_FIX_BEGIN) { // FIXME
dec_preempt_count();
+ preempt_check_resched();
return;
}
@@ -119,6 +121,7 @@
flush_tlb_page(0, vaddr);
#endif
dec_preempt_count();
+ preempt_check_resched();
}
static inline struct page *kmap_atomic_to_page(void *ptr)
* Re: atomic copy_from_user?
2003-12-22 22:35 ` Joe Korty
@ 2003-12-22 22:59 ` Rob Love
0 siblings, 0 replies; 17+ messages in thread
From: Rob Love @ 2003-12-22 22:59 UTC (permalink / raw)
To: Joe Korty; +Cc: Andrew Morton, wli, albert, linux-kernel
On Mon, 2003-12-22 at 17:35, Joe Korty wrote:
> Thanks, Robert and Andrew, for you explanations. This patch should
> do the trick.
Looks right to me. I like the comment, too.
Rob Love
end of thread [~2003-12-22 22:59 UTC]
Thread overview: 17+ messages
2003-12-22 0:48 atomic copy_from_user? Albert Cahalan
2003-12-22 4:31 ` Linus Torvalds
2003-12-22 9:36 ` Andrew Morton
2003-12-22 15:00 ` William Lee Irwin III
2003-12-22 18:26 ` Joe Korty
2003-12-22 20:55 ` Rob Love
2003-12-22 21:22 ` Joe Korty
2003-12-22 21:40 ` Rob Love
2003-12-22 21:59 ` Joe Korty
2003-12-22 22:14 ` Rob Love
2003-12-22 22:24 ` Andrew Morton
2003-12-22 22:06 ` Joe Korty
2003-12-22 22:18 ` Rob Love
2003-12-22 22:14 ` Andrew Morton
2003-12-22 22:19 ` Rob Love
2003-12-22 22:35 ` Joe Korty
2003-12-22 22:59 ` Rob Love