From: Juergen Urban <JuergenUrban@gmx.de>
To: xfs@oss.sgi.com
Subject: BUG() in end_page_writeback(), stack overflows and system speed decrease with XFS over USB
Date: Thu, 19 Nov 2009 09:57:45 +0100
Message-ID: <200911190957.45957.JuergenUrban@gmx.de>
Hello,
my machine has been running very unstably since I started using XFS on an
external USB hard disk (an 855 GByte XFS partition on a 1 TByte drive). One
problem was stack overflows caused by the large combined stack use of XFS,
USB, SCSI and VFS in Linux 2.6.23.13; NFS on top of XFS caused many more of
them. I think I worked around the stack overflows by disabling preemption,
SMP and NFS in the kernel, but I am not sure about it; at least I have not
seen a message from the stack overflow detection since then. I also tried a
Live-CD (KNOPPIX), which shows the same problems, and I exchanged some of
the hardware. XFS also decreases system performance: I use Linux VDR with
DVB-S, which seems to make the problems worse. Before using XFS I was able
to record 3 high-bandwidth streams in parallel; now the system struggles to
record even one. It became a little more usable after I changed the I/O
scheduler to deadline.
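For reference, I switched the scheduler at runtime via sysfs (sdb stands
for the USB disk on my system; the device name will differ elsewhere):

```shell
# Show the available schedulers for the device (the active one is
# printed in square brackets):
cat /sys/block/sdb/queue/scheduler

# Switch the USB disk to the deadline scheduler at runtime:
echo deadline > /sys/block/sdb/queue/scheduler

# Alternatively, set it for all devices at boot via the kernel
# command line parameter:
#   elevator=deadline
```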
It is difficult to get a good backtrace of the kernel crash, because the
kernel log is not written out to the internal hard disk (reiserfs and ext3)
before the machine dies. I was able to find out that XFS triggers a BUG()
in end_page_writeback() at mm/filemap.c:552:
void end_page_writeback(struct page *page)
{
	if (!TestClearPageReclaim(page) || rotate_reclaimable_page(page)) {
		if (!test_clear_page_writeback(page))
			BUG();
	}
	smp_mb__after_clear_bit();
	wake_up_page(page, PG_writeback);
}
The backtrace looks like this (sorry, I had to copy it from the screen by
hand, so it is incomplete):
end_page_writeback()
end_buffer_async_write()
update_stats_wait_end()
xfs_setfilesize()
xfs_???_dealloc()
xfs_destroy_ioend()
run_workqueue()
After searching in the code I found this comment:
/* TODO: cleanup count and page_dirty */
It seems that page_dirty may be handled incorrectly and could cause the
problem, but I don't know the purpose of this code. The same comment is
still in the latest source code from git.
After running the system for a while, I was able to trigger the kernel
crash by running "sync" on the command line.
My stack traces often include dvb_dmx_swfilter_packets(), do_IRQ()/tasklets
and sys_write()/vfs_write(). In most situations I cannot scroll up to see
the rest of the trace.
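Since I cannot scroll, I am considering capturing the full oops over the
network with netconsole instead; a sketch of the setup (all addresses,
ports, the interface name and the MAC address below are placeholders for
my network):

```shell
# On the crashing machine: stream kernel messages as UDP packets
# from local port 6665 on eth0 to 192.168.0.2:6666.
modprobe netconsole \
    netconsole=6665@192.168.0.10/eth0,6666@192.168.0.2/00:11:22:33:44:55

# On the receiving machine: listen on that UDP port and keep a copy
# of everything that arrives, including the final oops.
nc -u -l -p 6666 | tee oops.log
```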
Can anyone help me?
Is there an easy way to back up the data or replace the file system
without a kernel crash in between?
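What I had in mind was something like the following; the mount points and
the dump file name are just examples:

```shell
# Full (level-0) dump of the XFS file system to a file on another disk:
xfsdump -l 0 -f /mnt/backup/usbdisk.dump /mnt/usb

# Or copy file by file, which avoids long sustained writeback bursts:
rsync -a /mnt/usb/ /mnt/backup/usbdisk/
```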
Best regards
Juergen Urban
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
2009-11-19 8:57 Juergen Urban [this message]
2009-11-19 18:00 ` Eric Sandeen
2009-11-20 16:23 ` Juergen Urban
2009-11-20 16:36 ` Eric Sandeen
2009-11-20 17:08 ` Eric Sandeen
2009-11-21 1:00 ` Juergen Urban
2009-11-21 10:51 ` Michael Monnerie
2009-11-21 17:33 ` Juergen Urban
2009-11-21 22:04 ` Dave Chinner