* slab cache allocator problem?
@ 2008-01-29 22:23 Adrian McMenamin
2008-01-31 4:19 ` Paul Mundt
2008-01-31 23:39 ` Adrian McMenamin
0 siblings, 2 replies; 3+ messages in thread
From: Adrian McMenamin @ 2008-01-29 22:23 UTC (permalink / raw)
To: linux-sh
I suspect this is my code, but I cannot see what is wrong. I am using
SLUB.
I create the cache thus:
maple_queue_cache = kmem_cache_create("maple_queue_cache", 0x400, 0x20,
	SLAB_HWCACHE_ALIGN, NULL);
Then allocate objects like this:
mq->recvbufdcsp = kmem_cache_zalloc(maple_queue_cache, GFP_KERNEL);
mq->recvbuf = (void *) P2SEGADDR(mq->recvbufdcsp);
And free them like this:
if (mq) {
	if (mq->recvbufdcsp)
		kmem_cache_free(maple_queue_cache, mq->recvbufdcsp);
	kfree(mq);
}
But when I allocate and free them via hotplugging maple devices, I get things like this (after what appears to be a random number of hotplugs):
[ 41.512498] Maple bus device detaching at (0, 2)
[ 42.531304] Maple bus at (0, 2): Connected function 0x10
[ 42.537436] No maple driver found for this device
[ 44.527953] Maple bus device detaching at (0, 2)
[ 46.551408] Maple bus at (0, 2): Connected function 0x10
[ 46.557561] No maple driver found for this device
[ 48.547964] Maple bus device detaching at (0, 1)
[ 51.561406] Maple bus device detaching at (0, 2)
[ 53.586601] Maple bus at (0, 2): Connected function 0x10
[ 53.590137] No maple driver found for this device
[ 55.594248] Maple bus device detaching at (0, 2)
[ 57.619446] Maple bus at (0, 2): Connected function 0x10
[ 57.625583] No maple driver found for this device
[ 59.613890] Maple bus device detaching at (0, 2)
[ 61.639544] Maple bus at (0, 2): Connected function 0x10
[ 61.645684] No maple driver found for this device
[ 63.633992] Maple bus device detaching at (0, 2)
[ 66.664682] Maple bus at (0, 2): Connected function 0x10
[ 66.666407] No maple driver found for this device
[ 66.673767] Fault in unaligned fixup: 0000 [#1]
[ 66.677050] Modules linked in: nbd
[ 66.677050]
[ 66.677050] Pid : 752, Comm: udevd
[ 66.677050] PC is at kmem_cache_alloc+0x46/0xc0
[ 66.677050] PC : 8c0606a6 SP : 8c98be84 SR : 400080f0 TEA : c0007344 Not tainted
[ 66.677050] R0 : 00000000 R1 : 00000000 R2 : 8c98bed8 R3 : 00000000
[ 66.677050] R4 : 8c2343b0 R5 : 000080d0 R6 : 00007fff R7 : 8c0a09a8
[ 66.677050] R8 : 00000000 R9 : 000080d0 R10 : ffffffff R11 : 8c2343f4
[ 66.677050] R12 : 8c8860c0 R13 : 7ba3f5f8 R14 : 8c44adc0
[ 66.677050] MACH: 00000002 MACL: 00000000 GBR : 29708440 PR : 8c0a09a8
[ 66.677050]
[ 66.677050] Call trace:
[ 66.677050] [<8c0a09a8>] show_stat+0x28/0x380
[ 66.677050] [<8c081df8>] single_open+0x38/0xa0
[ 66.677050] [<8c0a0980>] show_stat+0x0/0x380
[ 66.677050] [<8c0a0d30>] stat_open+0x30/0xa0
[ 66.677050] [<8c0a0d00>] stat_open+0x0/0xa0
[ 66.677050] [<8c099b90>] proc_reg_open+0x90/0xc0
[ 66.677050] [<8c0a0d00>] stat_open+0x0/0xa0
[ 66.677050] [<8c099b98>] proc_reg_open+0x98/0xc0
[ 66.677050] [<8c062d86>] __dentry_open+0x126/0x240
[ 66.677050] [<8c099b00>] proc_reg_open+0x0/0xc0
[ 66.677050] [<8c081622>] seq_read+0x102/0x340
[ 66.677050] [<8c099794>] proc_reg_read+0x94/0xc0
[ 66.677050] [<8c081520>] seq_read+0x0/0x340
[ 66.677050] [<8c063cd8>] vfs_read+0x78/0xc0
[ 66.677050] [<8c063ef4>] sys_read+0x34/0x80
[ 66.677050] [<8c008240>] syscall_call+0xc/0x10
[ 66.677050] [<8c063ec0>] sys_read+0x0/0x80
[ 66.677050]
[ 66.677050] Process: udevd (pid: 752, stack limit = 8c98a001)
[ 66.677050] Stack: (0x8c98be84 to 0x8c98c000)
[ 66.677050] be80: 8c0a09a8 00000001 8c44ad80 00000001 8c8860c0 8c44ad80 8c44adc0
[ 66.677050] bea0: 00000000 8c081df8 8c0a0980 8c44ad80 fffffff4 8c608180 8c0a0d30 00000000
[ 66.677050] bec0: 8c44ad80 8c0a0d00 8c44ad80 8c522000 8c099b90 8c0a0d00 8c957010 8c099b98
[ 66.677050] bee0: 8c062d86 00000000 00000000 8c099b00 8c95708c 8c081622 8c44adc0 7ba3f5f8
[ 66.677050] bf00: 00007fff 00000001 8c44ad80 00000001 8c8860c0 00000000 00000000 8c44ad80
[ 66.677050] bf20: 00000000 8c98bf84 00000000 8c8860e0 8c099794 fffffffb 8c98bf84 00007fff
[ 66.677050] bf40: 7ba3f5f8 8c44ad80 8c081520 8cc039c0 8c063cd8 7ba3f5ec 00000000 0041e8a0
[ 66.677050] bf60: fffffff7 8c98bf84 7ba3f5f8 8c44ad80 8c063ef4 00007fff 7ba3f5f8 8c44ad80
[ 66.677050] bf80: 00000000 00000000 00000000 8c008240 ffffff0f 00000021 8c98bff8 8c063ec0
[ 66.677050] bfa0: 00000000 00000440 fffffff9 00000003 00000007 7ba3f5f8 00007fff 0041cb3c
[ 66.677050] bfc0: 00000007 7ba3f5f8 fffffffb 00000000 0041e8a0 00000000 7ba3f5ec 7ba3f5ec
[ 66.677050] bfe0: 2962d174 00402b22 00000001 29708440 00000002 00000000 0000004c 00000160
[ 66.703223] ---[ end trace 980183e2f79fb803 ]---
(You can see the maple code here: http://newgolddream.dyndns.info/cgi-bin/gitweb.cgi?p=.git;a=blob;f=drivers/sh/maple/maple.c;h=fdf3ecb57043e6e7b42d905c5c13a177c42fd569;hb=c7b943a7ae92039458258da34acd233123637693 )
I really cannot see what I am doing wrong - any clues? Could it be some issue with the P2SEGADDR?
^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: slab cache allocator problem?
From: Paul Mundt @ 2008-01-31 4:19 UTC (permalink / raw)
To: linux-sh
On Tue, Jan 29, 2008 at 10:23:24PM +0000, Adrian McMenamin wrote:
> Then allocate objects like this:
>
> mq->recvbufdcsp = kmem_cache_zalloc(maple_queue_cache, GFP_KERNEL);
> mq->recvbuf = (void *) P2SEGADDR(mq->recvbufdcsp);
>
[snip]
>
> But when I allocate and free them via hotplugging maple devices, I get
> things like this (after what appears to be a random number of
> hotplugs):
>
This sort of abuse of P2SEGADDR is really not what you want to be doing
in the first place. I'd really like to just get rid of the P1/P2SEGADDR
wrappers completely, as people do nothing but abuse them, and more often
than not depend on the wrappers to deal with all of their problems,
rather than thinking about what they actually want to do.
If you need to do a cacheflush after the copy, then do so explicitly. You
can probably even get away with just doing an invalidation in the read
path and a write-back in the write path, which will be faster than always
accessing it uncached. Trying to hack around dealing with the cache
management directly is nasty at best.
My guess is that ->recvbufdcsp is hitting a cached slab object that still
has cachelines associated with it which you are hitting after a few
iterations. The way to fix this is to make the cache management explicit,
both for ->recvbufdcsp and ->recvbuf. I would also wager that once this
is done, you should be able to do away with ->recvbufdcsp completely and
just use ->recvbuf.
* Re: slab cache allocator problem?
From: Adrian McMenamin @ 2008-01-31 23:39 UTC (permalink / raw)
To: linux-sh
On Thu, 2008-01-31 at 13:19 +0900, Paul Mundt wrote:
> On Tue, Jan 29, 2008 at 10:23:24PM +0000, Adrian McMenamin wrote:
> > Then allocate objects like this:
> >
> > mq->recvbufdcsp = kmem_cache_zalloc(maple_queue_cache, GFP_KERNEL);
> > mq->recvbuf = (void *) P2SEGADDR(mq->recvbufdcsp);
> >
> [snip]
>
> >
> > But when I allocate and free them via hotplugging maple devices, I get
> > things like this (after what appears to be a random number of
> > hotplugs):
> >
> This sort of abuse of P2SEGADDR is really not what you want to be doing
> in the first place. I'd really like to just get rid of the P1/P2SEGADDR
> wrappers completely, as people do nothing but abuse them, and more often
> than not depend on the wrappers to deal with all of their problems,
> rather than thinking about what they actually want to do.
>
> If you need to do a cacheflush after the copy, then do so explicitly. You
> can probably even get away with just doing an invalidation in the read
> path and a write-back in the write path, which will be faster than always
> accessing it uncached. Trying to hack around dealing with the cache
> management directly is nasty at best.
>
> My guess is that ->recvbufdcsp is hitting a cached slab object that still
> has cachelines associated with it which you are hitting after a few
> iterations. The way to fix this is to make the cache management explicit,
> both for ->recvbufdcsp and ->recvbuf. I would also wager that once this
> is done, you should be able to do away with ->recvbufdcsp completely and
> just use ->recvbuf.
I am not sure that this is it - unless I've misunderstood you.
I've put in some really crude cache flushing:
Flushed every time a packet is added to the write queue...
void maple_add_packet(struct mapleq *mq)
{
mutex_lock(&maple_list_lock);
list_add(&mq->list, &maple_waitq);
mutex_unlock(&maple_list_lock);
flush_cache_all();
}
Flushed on every read....
/* maple dma end bottom half - implemented via workqueue */
static void maple_dma_handler(struct work_struct *work)
{
struct mapleq *mq, *nmq;
struct maple_device *dev;
char *recvbuf;
enum maple_code code;
if (!maple_dma_done())
return;
flush_cache_all();
ctrl_outl(0, MAPLE_ENABLE);
if (!list_empty(&maple_sentq)) {
list_for_each_entry_safe(mq, nmq, &maple_sentq, list) {
recvbuf = mq->recvbuf;
code = recvbuf[0];
dev = mq->dev;
switch (code) {
case MAPLE_RESPONSE_NONE:
Flushed on every removal...
static void maple_release_device(struct device *dev)
{
struct maple_device *mdev;
struct mapleq *mq;
flush_cache_all();
if (!dev)
return;
mdev = to_maple_dev(dev);
mq = mdev->mq;
if (mq) {
if (mq->recvbufdcsp)
kmem_cache_free(maple_queue_cache, mq->recvbufdcsp);
kfree(mq);
mq = NULL;
}
kfree(mdev);
}
Still broken...
[ 80.720876] Maple bus device detaching at (0, 2)
[ 82.746010] Maple bus at (0, 2): Connected function 0x10
[ 82.750588] No maple driver found for this device
[ 84.740817] Maple bus device detaching at (0, 2)
[ 86.765960] Maple bus at (0, 2): Connected function 0x10
[ 86.770655] No maple driver found for this device
[ 87.755778] Maple bus device detaching at (0, 2)
[ 89.780908] Maple bus at (0, 2): Connected function 0x10
[ 89.782706] No maple driver found for this device
[ 91.771272] Maple bus device detaching at (0, 2)
[ 93.796837] Maple bus at (0, 2): Connected function 0x10
[ 93.798743] No maple driver found for this device
[ 93.805650] Fault in unaligned fixup: 0000 [#1]
[ 93.809477] Modules linked in: nbd
[ 93.809477]
[ 93.809477] Pid : 751, Comm: udevd
[ 93.809477] PC is at kmem_cache_alloc+0x46/0xc0
[ 93.809477] PC : 8c0606a6 SP : 8cd8de84 SR : 400080f0 TEA : c0007344 Not tainted
[ 93.809477] R0 : 00000000 R1 : 00000000 R2 : 8cd8ded8 R3 : 00000000
[ 93.809477] R4 : 8c2343b0 R5 : 000080d0 R6 : 00007fff R7 : 8c0a09a8
[ 93.809477] R8 : 00000000 R9 : 000080d0 R10 : ffffffff R11 : 8c2343f4
[ 93.809477] R12 : 8cf504c0 R13 : 7ba9c5f8 R14 : 8cf92840
[ 93.809477] MACH: 00000002 MACL: 00000000 GBR : 29708440 PR : 8c0a09a8
[ 93.809477]
[ 93.809477] Call trace:
[ 93.809477] [<8c0a09a8>] show_stat+0x28/0x380
[ 93.809477] [<8c081df8>] single_open+0x38/0xa0
[ 93.809477] [<8c0a0980>] show_stat+0x0/0x380
[ 93.809477] [<8c0a0d30>] stat_open+0x30/0xa0
[ 93.809477] [<8c0a0d00>] stat_open+0x0/0xa0
[ 93.809477] [<8c099b90>] proc_reg_open+0x90/0xc0
[ 93.809477] [<8c0a0d00>] stat_open+0x0/0xa0
[ 93.809477] [<8c099b98>] proc_reg_open+0x98/0xc0
[ 93.809477] [<8c062d86>] __dentry_open+0x126/0x240
[ 93.809477] [<8c099b00>] proc_reg_open+0x0/0xc0
[ 93.809477] [<8c081622>] seq_read+0x102/0x340
[ 93.809477] [<8c099794>] proc_reg_read+0x94/0xc0
[ 93.809477] [<8c081520>] seq_read+0x0/0x340
[ 93.809477] [<8c063cd8>] vfs_read+0x78/0xc0
[ 93.809477] [<8c063ef4>] sys_read+0x34/0x80
[ 93.809477] [<8c008240>] syscall_call+0xc/0x10
[ 93.809477] [<8c063ec0>] sys_read+0x0/0x80
[ 93.809477]
[ 93.809477] Process: udevd (pid: 751, stack limit = 8cd8c001)
[ 93.809477] Stack: (0x8cd8de84 to 0x8cd8e000)
[ 93.809477] de80: 8c0a09a8 00000001 8cf92800 00000001 8cf504c0 8cf92800 8cf92840
[ 93.809477] dea0: 00000000 8c081df8 8c0a0980 8cf92800 fffffff4 8cf19100 8c0a0d30 00000000
[ 93.809477] dec0: 8cf92800 8c0a0d00 8cf92800 8cecd000 8c099b90 8c0a0d00 8c427010 8c099b98
[ 93.809477] dee0: 8c062d86 00000000 00000000 8c099b00 8c42708c 8c081622 8cf92840 7ba9c5f8
[ 93.809477] df00: 00007fff 00000001 8cf92800 00000001 8cf504c0 00000000 00000000 8cf92800
[ 93.809477] df20: 00000000 8cd8df84 00000000 8cf504e0 8c099794 fffffffb 8cd8df84 00007fff
[ 93.809477] df40: 7ba9c5f8 8cf92800 8c081520 8cc039c0 8c063cd8 7ba9c5ec 00000000 0041e8a0
[ 93.809477] df60: fffffff7 8cd8df84 7ba9c5f8 8cf92800 8c063ef4 00007fff 7ba9c5f8 8cf92800
[ 93.809477] df80: 00000000 00000000 00000000 8c008240 ffffff0f 00000021 8cd8dff8 8c063ec0
[ 93.809477] dfa0: 00000000 00000440 fffffff9 00000003 00000007 7ba9c5f8 00007fff 0041cb3c
[ 93.809477] dfc0: 00000007 7ba9c5f8 fffffffb 00000000 0041e8a0 00000000 7ba9c5ec 7ba9c5ec
[ 93.809477] dfe0: 2962d174 00402b22 00000001 29708440 00000002 00000000 0000004c 00000160
[ 93.839430] ---[ end trace 77b901cc5d0edc23 ]---