* xfsdump showing system problems -- ? ideas?
@ 2013-02-03 3:49 Linda Walsh
2013-02-03 4:53 ` xfsdump -OOM triggering 'soft' kernel panic Linda Walsh
2013-02-03 23:17 ` xfsdump showing system problems -- ? ideas? Dave Chinner
0 siblings, 2 replies; 6+ messages in thread
From: Linda Walsh @ 2013-02-03 3:49 UTC (permalink / raw)
To: xfs-oss
I was looking through my backup logs and noticed that a few of the
backup *logs* were abnormally large.
Looking inside them, I saw a flood of messages (3211 occurrences last
night) like:
xfsdump: WARNING: could not get list of root attributes for nondir ino 3415547687: Cannot allocate memory (12)
xfsdump: WARNING: could not get list of secure attributes for nondir ino 3415547687: Cannot allocate memory (12)
xfsdump: WARNING: could not get list of non-root attributes for nondir ino 3415547688: Cannot allocate memory (12)
xfsdump: WARNING: could not get list of non-root attributes for nondir ino 4225270812: Cannot allocate memory (12)
xfsdump: WARNING: could not get list of root attributes for nondir ino 4225270812: Cannot allocate memory (12)
xfsdump: WARNING: could not get list of secure attributes for nondir ino 4225270812: Cannot allocate memory (12)
---
Looking at my memory usage, I see it's close to full -- with *buffer/cache* space:
> free -l
             total       used       free     shared    buffers     cached
Mem:      49422312   49186412     235900          0       1860   43430572
Low:      49422312   49186412     235900
High:            0          0          0
-/+ buffers/cache:    5753980   43668332
Swap:      8393924      35748    8358176
Wondering if anyone has seen something like this before?
/proc/meminfo has:
Cached: 45541568 kB
Inactive: 45019412 kB
Inactive(file): 44963988 kB
(whole thing is:)
MemTotal: 49422312 kB
MemFree: 1551428 kB
Buffers: 1860 kB
Cached: 45541568 kB
SwapCached: 3228 kB
Active: 881488 kB
Inactive: 45019412 kB
Active(anon): 306736 kB
Inactive(anon): 55424 kB
Active(file): 574752 kB
Inactive(file): 44963988 kB
Unevictable: 16476 kB
Mlocked: 16476 kB
SwapTotal: 8393924 kB
SwapFree: 8358224 kB
Dirty: 50404 kB
Writeback: 0 kB
AnonPages: 371124 kB
Mapped: 61184 kB
Shmem: 996 kB
Slab: 1351896 kB
SReclaimable: 1167144 kB
SUnreclaim: 184752 kB
KernelStack: 4000 kB
PageTables: 13948 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 28136632 kB
Committed_AS: 636552 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 422548 kB
VmallocChunk: 34334065788 kB
HardwareCorrupted: 0 kB
AnonHugePages: 215040 kB
HugePages_Total: 32
HugePages_Free: 32
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7652 kB
DirectMap2M: 2076672 kB
DirectMap1G: 48234496 kB
--------------------
It doesn't *seem* like I'm even close to out of memory (not "really", anyway)...
So why is xfsdump complaining?
If this is reproducible on systems besides my own, could it be a way
of preventing file-security attributes from being read? It looks like
it affects root, non-root, and secure attribs...
hmmm...
I'm still investigating, but thought I'd shoot off an email to see
if anyone has seen anything like this, and whether the impact might be
security "dehancing"... ;-/ (Not that this is usually a big
problem w/my system, but some sites are more touchy about such
things...)
* Re: xfsdump -OOM triggering 'soft' kernel panic...
2013-02-03 3:49 xfsdump showing system problems -- ? ideas? Linda Walsh
@ 2013-02-03 4:53 ` Linda Walsh
2013-02-03 15:39 ` Stan Hoeppner
2013-02-03 23:17 ` xfsdump showing system problems -- ? ideas? Dave Chinner
1 sibling, 1 reply; 6+ messages in thread
From: Linda Walsh @ 2013-02-03 4:53 UTC (permalink / raw)
To: xfs-oss
I guess this makes sense... there's a kernel dump in the log as well...
So here's some more... Is this something I should be directing to the
kernel list? I haven't seen it with any other applications...
Feb 2 06:18:28 Ishtar kernel: [990738.011852] xfsdump: page allocation failure: order:4, mode:0xc0d0
Feb 2 06:18:28 Ishtar kernel: [990738.011856] Pid: 14882, comm: xfsdump Tainted: G O 3.7.1-Isht-Van #1
Feb 2 06:18:28 Ishtar kernel: [990738.011857] Call Trace:
Feb 2 06:18:28 Ishtar kernel: [990738.011866] [<ffffffff810f1b1b>] warn_alloc_failed+0xeb/0x130
Feb 2 06:18:28 Ishtar kernel: [990738.011870] [<ffffffff810f57d6>] __alloc_pages_nodemask+0x756/0x960
Feb 2 06:18:28 Ishtar kernel: [990738.011875] [<ffffffff8115ad83>] ? iput+0x43/0x190
Feb 2 06:18:28 Ishtar kernel: [990738.011880] [<ffffffff8112d37e>] alloc_pages_current+0xae/0x110
Feb 2 06:18:28 Ishtar kernel: [990738.011882] [<ffffffff810f0ec9>] __get_free_pages+0x9/0x40
Feb 2 06:18:28 Ishtar kernel: [990738.011886] [<ffffffff811342fa>] kmalloc_order_trace+0x3a/0xd0
Feb 2 06:18:28 Ishtar kernel: [990738.011889] [<ffffffff810751e5>] ? sched_clock_cpu+0xc5/0x120
Feb 2 06:18:28 Ishtar kernel: [990738.011892] [<ffffffff8113508a>] __kmalloc+0x17a/0x190
Feb 2 06:18:28 Ishtar kernel: [990738.011896] [<ffffffff812851d8>] xfs_attrlist_by_handle+0xa8/0x130
Feb 2 06:18:28 Ishtar kernel: [990738.011899] [<ffffffff812862ba>] xfs_file_ioctl+0x7fa/0xa00
Feb 2 06:18:28 Ishtar kernel: [990738.011901] [<ffffffff810751e5>] ? sched_clock_cpu+0xc5/0x120
Feb 2 06:18:28 Ishtar kernel: [990738.011903] [<ffffffff8107528f>] ? local_clock+0x4f/0x60
Feb 2 06:18:28 Ishtar kernel: [990738.011908] [<ffffffff8108d19c>] ? lock_release_holdtime.part.21+0x1c/0x190
Feb 2 06:18:28 Ishtar kernel: [990738.011912] [<ffffffff8117f9d5>] ? fsnotify+0x85/0x2f0
Feb 2 06:18:28 Ishtar kernel: [990738.011915] [<ffffffff8117fb28>] ? fsnotify+0x1d8/0x2f0
Feb 2 06:18:28 Ishtar kernel: [990738.011917] [<ffffffff8117fb52>] ? fsnotify+0x202/0x2f0
Feb 2 06:18:28 Ishtar kernel: [990738.011919] [<ffffffff8117f9d5>] ? fsnotify+0x85/0x2f0
Feb 2 06:18:28 Ishtar kernel: [990738.011922] [<ffffffff81151ed6>] do_vfs_ioctl+0x96/0x560
Feb 2 06:18:28 Ishtar kernel: [990738.011925] [<ffffffff81152431>] sys_ioctl+0x91/0xb0
Feb 2 06:18:28 Ishtar kernel: [990738.011929] [<ffffffff8133b68e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Feb 2 06:18:28 Ishtar kernel: [990738.011935] [<ffffffff816a9e12>] system_call_fastpath+0x16/0x1b
Feb 2 06:18:28 Ishtar kernel: [990738.011937] Mem-Info:
Feb 2 06:18:28 Ishtar kernel: [990738.011938] Node 0 Normal per-cpu:
Feb 2 06:18:28 Ishtar kernel: [990738.011940] CPU 0: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011942] CPU 1: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011943] CPU 2: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011944] CPU 3: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011946] CPU 4: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011947] CPU 5: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011948] CPU 6: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011950] CPU 7: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011951] CPU 8: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011952] CPU 9: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011954] CPU 10: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011955] CPU 11: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011956] Node 1 DMA per-cpu:
Feb 2 06:18:28 Ishtar kernel: [990738.011958] CPU 0: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011959] CPU 1: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011961] CPU 2: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011962] CPU 3: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011963] CPU 4: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011965] CPU 5: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011966] CPU 6: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011967] CPU 7: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011968] CPU 8: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011970] CPU 9: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011971] CPU 10: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011972] CPU 11: hi: 0, btch: 1 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011973] Node 1 DMA32 per-cpu:
Feb 2 06:18:28 Ishtar kernel: [990738.011975] CPU 0: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011977] CPU 1: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011978] CPU 2: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011980] CPU 3: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011981] CPU 4: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011982] CPU 5: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011983] CPU 6: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011985] CPU 7: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011986] CPU 8: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011987] CPU 9: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011989] CPU 10: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011990] CPU 11: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011991] Node 1 Normal per-cpu:
Feb 2 06:18:28 Ishtar kernel: [990738.011993] CPU 0: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011994] CPU 1: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011995] CPU 2: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011997] CPU 3: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011998] CPU 4: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.011999] CPU 5: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012001] CPU 6: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012002] CPU 7: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012003] CPU 8: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012005] CPU 9: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012006] CPU 10: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012007] CPU 11: hi: 186, btch: 31 usd: 0
Feb 2 06:18:28 Ishtar kernel: [990738.012010] active_anon:2130771 inactive_anon:336859 isolated_anon:0
Feb 2 06:18:28 Ishtar kernel: [990738.012010] active_file:100057 inactive_file:8076638 isolated_file:0
Feb 2 06:18:28 Ishtar kernel: [990738.012010] unevictable:4119 dirty:459152 writeback:0 unstable:0
Feb 2 06:18:28 Ishtar kernel: [990738.012010] free:69090 slab_reclaimable:1184788 slab_unreclaimable:85544
Feb 2 06:18:28 Ishtar kernel: [990738.012010] mapped:17204 shmem:6140 pagetables:8183 bounce:0
Feb 2 06:18:28 Ishtar kernel: [990738.012010] free_cma:0
Feb 2 06:18:28 Ishtar kernel: [990738.012013] Node 0 Normal free:82968kB min:45076kB low:56344kB high:67612kB active_anon:3892252kB inactive_anon:546760kB active_file:243676kB inactive_file:16829348kB unevictable:9648kB isolated(anon):0kB isolated(file):0kB present:24772608kB mlocked:9648kB dirty:1357160kB writeback:0kB mapped:41152kB shmem:24072kB slab_reclaimable:1912936kB slab_unreclaimable:224784kB kernel_stack:3328kB pagetables:18560kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Feb 2 06:18:28 Ishtar kernel: [990738.012018] lowmem_reserve[]: 0 0 0 0
Feb 2 06:18:28 Ishtar kernel: [990738.012022] Node 1 DMA free:15948kB min:28kB low:32kB high:40kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15708kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Feb 2 06:18:28 Ishtar kernel: [990738.012026] lowmem_reserve[]: 0 3235 24151 24151
Feb 2 06:18:28 Ishtar kernel: [990738.012029] Node 1 DMA32 free:97280kB min:6028kB low:7532kB high:9040kB active_anon:545180kB inactive_anon:192576kB active_file:6740kB inactive_file:2048200kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3313380kB mlocked:0kB dirty:48076kB writeback:0kB mapped:12kB shmem:0kB slab_reclaimable:366088kB slab_unreclaimable:7092kB kernel_stack:48kB pagetables:896kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Feb 2 06:18:28 Ishtar kernel: [990738.012034] lowmem_reserve[]: 0 0 20916 20916
Feb 2 06:18:28 Ishtar kernel: [990738.012037] Node 1 Normal free:80164kB min:38972kB low:48712kB high:58456kB active_anon:4085652kB inactive_anon:608100kB active_file:149812kB inactive_file:13429004kB unevictable:6828kB isolated(anon):0kB isolated(file):0kB present:21417984kB mlocked:6828kB dirty:431372kB writeback:0kB mapped:27652kB shmem:488kB slab_reclaimable:2460128kB slab_unreclaimable:110284kB kernel_stack:1312kB pagetables:13276kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Feb 2 06:18:28 Ishtar kernel: [990738.012042] lowmem_reserve[]: 0 0 0 0
Feb 2 06:18:28 Ishtar kernel: [990738.012045] Node 0 Normal: 15190*4kB 674*8kB 222*16kB 231*32kB 77*64kB 7*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 83176kB
Feb 2 06:18:28 Ishtar kernel: [990738.012054] Node 1 DMA: 1*4kB 1*8kB 0*16kB 2*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15948kB
Feb 2 06:18:28 Ishtar kernel: [990738.012062] Node 1 DMA32: 12976*4kB 3609*8kB 289*16kB 373*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 97336kB
Feb 2 06:18:28 Ishtar kernel: [990738.012071] Node 1 Normal: 8924*4kB 3469*8kB 988*16kB 31*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 80312kB
Feb 2 06:18:28 Ishtar kernel: [990738.012079] 8184927 total pagecache pages
Feb 2 06:18:28 Ishtar kernel: [990738.012081] 800 pages in swap cache
Feb 2 06:18:28 Ishtar kernel: [990738.012082] Swap cache stats: add 10047, delete 9247, find 558706/558825
Feb 2 06:18:28 Ishtar kernel: [990738.012083] Free swap = 8358172kB
Feb 2 06:18:28 Ishtar kernel: [990738.012084] Total swap = 8393924kB
Feb 2 06:18:28 Ishtar kernel: [990738.146080] 12582911 pages RAM
Feb 2 06:18:28 Ishtar kernel: [990738.146083] 227333 pages reserved
Feb 2 06:18:28 Ishtar kernel: [990738.146084] 4260752 pages shared
Feb 2 06:18:28 Ishtar kernel: [990738.146085] 8281368 pages non-shared
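The significant detail above is the first line: "order:4, mode:0xc0d0"
means the kernel wanted 2^4 = 16 physically contiguous pages (64kB with
4kB pages) in a single block. After long uptime, memory fragments, and a
high-order request like that can fail even when gigabytes are free or
reclaimable as cache. A tiny standalone illustration (a sketch added for
clarity, not part of the original report):

    /* Illustrative only: what an "order:4" page allocation failure
     * means. The buddy allocator serves blocks of 2^order pages,
     * so order 4 requires 16 contiguous 4kB pages. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int order = 4;      /* from "order:4" in the log */
        unsigned int page_kb = 4;    /* x86-64 base page size */
        unsigned int pages = 1u << order;

        printf("order:%u => %u contiguous pages = %u kB\n",
               order, pages, pages * page_kb);
        /* prints: order:4 => 16 contiguous pages = 64 kB */
        return 0;
    }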
* Re: xfsdump -OOM triggering 'soft' kernel panic...
2013-02-03 4:53 ` xfsdump -OOM triggering 'soft' kernel panic Linda Walsh
@ 2013-02-03 15:39 ` Stan Hoeppner
0 siblings, 0 replies; 6+ messages in thread
From: Stan Hoeppner @ 2013-02-03 15:39 UTC (permalink / raw)
To: Linda Walsh; +Cc: xfs-oss
On 2/2/2013 10:53 PM, Linda Walsh wrote:
> Feb 2 06:18:28 Ishtar kernel: [990738.011852] xfsdump: page allocation failure: order:4, mode:0xc0d0
> Feb 2 06:18:28 Ishtar kernel: [990738.011856] Pid: 14882, comm: xfsdump Tainted: G O 3.7.1-Isht-Van #1
Your kernel is tainted, albeit by a GPL (or compatible) licensed module.
Any idea which module? You may need to reproduce this with a
non-tainted kernel before anyone would run with it.
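For reference, the kernel exposes the taint state as a bitmask in
/proc/sys/kernel/tainted; the 'O' in "Tainted: G O" is bit 12,
the out-of-tree-module flag. A hypothetical decoder (a sketch, not
something posted in this thread):

    /* Hypothetical helper: decode the kernel taint bitmask.
     * Bit 12 ('O') is TAINT_OOT_MODULE -- an out-of-tree module
     * was loaded, the flag visible in the trace above. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/tainted", "r");
        unsigned long mask;

        if (!f || fscanf(f, "%lu", &mask) != 1) {
            perror("tainted");
            return 1;
        }
        fclose(f);

        printf("taint mask: %lu\n", mask);
        if (mask & (1UL << 12))
            printf("  'O': out-of-tree module loaded\n");
        return 0;
    }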
--
Stan
* Re: xfsdump showing system problems -- ? ideas?
2013-02-03 3:49 xfsdump showing system problems -- ? ideas? Linda Walsh
2013-02-03 4:53 ` xfsdump -OOM triggering 'soft' kernel panic Linda Walsh
@ 2013-02-03 23:17 ` Dave Chinner
2013-02-04 1:49 ` Linda Walsh
1 sibling, 1 reply; 6+ messages in thread
From: Dave Chinner @ 2013-02-03 23:17 UTC (permalink / raw)
To: Linda Walsh; +Cc: xfs-oss
On Sat, Feb 02, 2013 at 07:49:53PM -0800, Linda Walsh wrote:
>
>
> I was looking through my backup logs and noticed that a few of the
> backup *logs* were abnormally large.
>
> Looking inside them, I saw a flood of messages (3211 occurrences last
> night) like:
>
> xfsdump: WARNING: could not get list of root attributes for nondir ino 3415547687: Cannot allocate memory (12)
> xfsdump: WARNING: could not get list of secure attributes for nondir ino 3415547687: Cannot allocate memory (12)
> xfsdump: WARNING: could not get list of non-root attributes for nondir ino 3415547688: Cannot allocate memory (12)
> xfsdump: WARNING: could not get list of non-root attributes for nondir ino 4225270812: Cannot allocate memory (12)
> xfsdump: WARNING: could not get list of root attributes for nondir ino 4225270812: Cannot allocate memory (12)
> xfsdump: WARNING: could not get list of secure attributes for nondir ino 4225270812: Cannot allocate memory (12)
Was fixed in 3.4:
$ gl -n 1 ad650f5
commit ad650f5b27bc9858360b42aaa0d9204d16115316
Author: Dave Chinner <dchinner@redhat.com>
Date: Wed Mar 7 04:50:21 2012 +0000
xfs: fallback to vmalloc for large buffers in xfs_attrmulti_attr_get
xfsdump uses a large buffer for extended attributes, which has a
kmalloc'd shadow buffer in the kernel. This can fail after the
system has been running for some time as it is a high order
allocation. Add a fallback to vmalloc so that it doesn't require
contiguous memory and so won't randomly fail while xfsdump is
running.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
$ git describe --contains ad650f5
v3.4-rc1~55^2~11
$
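The shape of the fix, as the commit message describes it, is
kmalloc-first with a vmalloc fallback. A minimal kernel-style sketch
(the helper names here are illustrative; the real change is in
xfs_attrmulti_attr_get and uses XFS's own allocation wrappers):

    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    /* Try a physically contiguous allocation first; fall back to
     * vmalloc, which only needs virtually contiguous pages and so
     * avoids the high-order requirement that was failing here. */
    static void *alloc_attr_buffer(size_t len)
    {
        void *buf;

        /* __GFP_NOWARN: don't log a failure we can recover from */
        buf = kmalloc(len, GFP_KERNEL | __GFP_NOWARN);
        if (!buf)
            buf = vmalloc(len);
        return buf;
    }

    static void free_attr_buffer(void *buf)
    {
        if (is_vmalloc_addr(buf))   /* free with the matching API */
            vfree(buf);
        else
            kfree(buf);
    }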
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfsdump showing system problems -- ? ideas?
2013-02-03 23:17 ` xfsdump showing system problems -- ? ideas? Dave Chinner
@ 2013-02-04 1:49 ` Linda Walsh
2013-02-04 9:01 ` xfsdump fix tested... level 0's on all partitions: no problems Linda Walsh
0 siblings, 1 reply; 6+ messages in thread
From: Linda Walsh @ 2013-02-04 1:49 UTC (permalink / raw)
To: Dave Chinner; +Cc: stan, xfs-oss
Dave Chinner wrote:
> Was fixed in 3.4:
>
> $ gl -n 1 ad650f5
> commit ad650f5b27bc9858360b42aaa0d9204d16115316
> Author: Dave Chinner <dchinner@redhat.com>
> Date: Wed Mar 7 04:50:21 2012 +0000
>
> xfs: fallback to vmalloc for large buffers in xfs_attrmulti_attr_get
>
> xfsdump uses a large buffer for extended attributes, which has a
> kmalloc'd shadow buffer in the kernel. This can fail after the
> system has been running for some time as it is a high order
> allocation. Add a fallback to vmalloc so that it doesn't require
> contiguous memory and so won't randomly fail while xfsdump is
> running.
----
AWESOME!!!!!
Thank-you! Thank-you! Thank-you!!!....
I would have been trying all sorts of random things to figure out
what I had messed up!...
==================
Note to Stan:
I was right there with ya!... I felt sketchy about that module...
Even though I wasn't using it and hadn't loaded it myself, it
got loaded at boot (for Oracle's 'vm' implementation), which
tainted things.
As for my misconfiguring... I was gonna say it was not impossible.
While I've been rolling my own kernels for over a decade, things just get
more and more complicated as time goes on, and sometimes I really don't
know what I'm doing, I guess. Usually it ends up with drivers
that "do nothing", though at worst I could end up with a non-booting
kernel (rare, but a non-booting system has been happening a lot more often
since SuSE moved to systemd and started putting needed boot utils in a
non-root partition (/usr is separate on my system). They tell me to either
move /usr to root, or they expect their users to boot from an initrd
(which I don't -- my boot HW support is built into my kernel).
Things I wanna try out, or think I "might" use, I build as
modules. Right now (some of these probably could or should be
built in, but the system boots w/out them), I have:
# lsmod
Module                  Size  Used by
sch_sfq                10080  3
sch_htb                15123  1
mousedev               11440  1
acpi_cpufreq            7577  1
mperf                   1348  1 acpi_cpufreq
processor              35852  1 acpi_cpufreq
---
But if I type modprobe and hit tab (with my completion facility
loaded), it asks me if I want to display all 342 possibilities.
Probably most of those are network related, depending on how I want to
config things (nat/firewall/routing/shaping... none of which I usually
run all the time, so they're all mods)...
And I know I have some mods on there that do nothing on my HW --
though a few that did nothing at first started getting used later,
after BIOS tweaks or kernel updates.
So I wanted to let you know -- I appreciated the pointers -- I didn't
reject any of them...
But in this case, THANK YOU DAVE!!!!
(That's why I post random symptoms sometimes -- in the minuscule hope
that someone might have seen them (or better, has already put in a fix
for them!)...)
* Re: xfsdump fix tested... level 0's on all partitions: no problems
2013-02-04 1:49 ` Linda Walsh
@ 2013-02-04 9:01 ` Linda Walsh
0 siblings, 0 replies; 6+ messages in thread
From: Linda Walsh @ 2013-02-04 9:01 UTC (permalink / raw)
To: xfs-oss
FYI...
I reran the level 0 dumps that had problems on Feb 01 (~1.3T total):
no errors, no kernel panics...
(same kernel, no reboot)