* [patch -mm] fix SLOB on x64
From: Ingo Molnar @ 2005-12-11 14:12 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, Matt Mackall
this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
SLOB allocator exclusively, so it must work on all platforms)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Index: linux/mm/slob.c
===================================================================
--- linux.orig/mm/slob.c
+++ linux/mm/slob.c
@@ -198,7 +198,7 @@ void kfree(const void *block)
 	if (!block)
 		return;
 
-	if (!((unsigned int)block & (PAGE_SIZE-1))) {
+	if (!((unsigned long)block & (PAGE_SIZE-1))) {
 		/* might be on the big block list */
 		spin_lock_irqsave(&block_lock, flags);
 		for (bb = bigblocks; bb; last = &bb->next, bb = bb->next) {
@@ -227,7 +227,7 @@ unsigned int ksize(const void *block)
 	if (!block)
 		return 0;
 
-	if (!((unsigned int)block & (PAGE_SIZE-1))) {
+	if (!((unsigned long)block & (PAGE_SIZE-1))) {
 		spin_lock_irqsave(&block_lock, flags);
 		for (bb = bigblocks; bb; bb = bb->next)
 			if (bb->pages == block) {
@@ -326,7 +326,7 @@ void kmem_cache_init(void)
 	void *p = slob_alloc(PAGE_SIZE, 0, PAGE_SIZE-1);
 
 	if (p)
-		free_page((unsigned int)p);
+		free_page((unsigned long)p);
 
 	mod_timer(&slob_timer, jiffies + HZ);
 }
* Re: [patch -mm] fix SLOB on x64
From: Ed Tomlinson @ 2005-12-11 17:22 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Andrew Morton, linux-kernel, Matt Mackall
On Sunday 11 December 2005 09:12, Ingo Molnar wrote:
> this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> SLOB allocator exclusively, so it must work on all platforms)
It's a good idea to get this working everywhere. Why have you switched to
using SLOB exclusively?
Thanks
Ed Tomlinson
* Re: [patch -mm] fix SLOB on x64
From: Matt Mackall @ 2005-12-11 18:05 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Andrew Morton, linux-kernel
On Sun, Dec 11, 2005 at 03:12:17PM +0100, Ingo Molnar wrote:
>
> this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> SLOB allocator exclusively, so it must work on all platforms)
The patch looks fine, but what's this about using SLOB exclusively?
Fragmentation performance of SLOB is miserable on anything like a
modern desktop; I think SLOB only makes sense for small machines. The
locking also suggests it scales to dual core at most.
Anyway,
> Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Matt Mackall <mpm@selenic.com>
>
> Index: linux/mm/slob.c
> ===================================================================
> --- linux.orig/mm/slob.c
> +++ linux/mm/slob.c
> @@ -198,7 +198,7 @@ void kfree(const void *block)
> if (!block)
> return;
>
> - if (!((unsigned int)block & (PAGE_SIZE-1))) {
> + if (!((unsigned long)block & (PAGE_SIZE-1))) {
> /* might be on the big block list */
> spin_lock_irqsave(&block_lock, flags);
> for (bb = bigblocks; bb; last = &bb->next, bb = bb->next) {
> @@ -227,7 +227,7 @@ unsigned int ksize(const void *block)
> if (!block)
> return 0;
>
> - if (!((unsigned int)block & (PAGE_SIZE-1))) {
> + if (!((unsigned long)block & (PAGE_SIZE-1))) {
> spin_lock_irqsave(&block_lock, flags);
> for (bb = bigblocks; bb; bb = bb->next)
> if (bb->pages == block) {
> @@ -326,7 +326,7 @@ void kmem_cache_init(void)
> void *p = slob_alloc(PAGE_SIZE, 0, PAGE_SIZE-1);
>
> if (p)
> - free_page((unsigned int)p);
> + free_page((unsigned long)p);
>
> mod_timer(&slob_timer, jiffies + HZ);
> }
--
Mathematics is the supreme nostalgia of our time.
* Re: [patch -mm] fix SLOB on x64
From: Ingo Molnar @ 2005-12-11 20:02 UTC (permalink / raw)
To: Matt Mackall; +Cc: Andrew Morton, linux-kernel
* Matt Mackall <mpm@selenic.com> wrote:
> On Sun, Dec 11, 2005 at 03:12:17PM +0100, Ingo Molnar wrote:
> >
> > this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> > with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> > SLOB allocator exclusively, so it must work on all platforms)
>
> The patch looks fine, but what's this about using SLOB exclusively?
> Fragmentation performance of SLOB is miserable on anything like a
> modern desktop, I think SLOB only makes sense for small machines. The
> locking also suggests dual core at most.
well, this is only an -rt artifact: the SLOB needs zero modifications to
work on PREEMPT_RT, while SLAB needed a risky 66K monster patch. Until
someone simplifies the SLAB conversion to PREEMPT_RT, i'll use the SLOB.
i haven't noticed any significant slowdown due to the SLOB. In any case,
we'll give it a good workout, which should further speed up its upstream
integration - it's looking good so far.
Ingo
* Re: [patch -mm] fix SLOB on x64
From: Ingo Molnar @ 2005-12-11 20:05 UTC (permalink / raw)
To: Ed Tomlinson; +Cc: Andrew Morton, linux-kernel, Matt Mackall
* Ed Tomlinson <edt@aei.ca> wrote:
> On Sunday 11 December 2005 09:12, Ingo Molnar wrote:
> > this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> > with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> > SLOB allocator exclusively, so it must work on all platforms)
>
> It's a good idea to get this working everywhere. Why have you switched
> to using SLOB exclusively?
because the SLAB hacks were getting ugly, and i gave up on it during the
2.6.15-rc5 merge. (The SLAB code has lots of irqs-off / per-cpu and
non-preempt assumptions integrated, which were a pain to sort out.)
We'll eventually do a cleaner conversion of SLAB to PREEMPT_RT, but for
now the SLOB is turned on exclusively if PREEMPT_RT. (in other
preemption modes it's optionally selectable if EMBEDDED is enabled)
Ingo