* [STACK] >3k call path in xfs
From: Jörn Engel @ 2004-06-09 12:26 UTC
  To: nathans, owner-xfs; +Cc: linux-kernel

xfs is quite interesting.  No single function is particularly
stack-hungry, but the sheer depth of the call path adds up.  Nathan,
can you see if some bytes can be saved here and there?

3k is not really bad yet, I just like to keep 1k of headroom for
surprises like an extra int foo[256] in a structure.

stackframes for call path too long (3064):
    size  function
     144  xfs_ioctl
     328  xfs_swapext
       0  xfs_iaccess
      16  xfs_acl_iaccess
     104  xfs_attr_fetch
       0  xfs_attr_node_get
      28  xfs_da_node_lookup_int
      68  xfs_dir2_leafn_lookup_int
       0  xfs_da_read_buf
     288  xfs_bmapi
      52  xfs_rtpick_extent
      24  xfs_trans_iget
      32  xfs_iget
      32  xfs_iread
      72  xfs_itobp
      60  xfs_imap
      84  xfs_dilocate
       0  xfs_inobt_lookup_le
      16  xfs_inobt_increment
      28  xfs_btree_readahead_core
      20  xfs_btree_reada_bufl
      12  pagebuf_readahead
      16  pagebuf_get
       0  pagebuf_iostart
       0  xfs_bdstrat_cb
      68  pagebuf_iorequest
       0  pagebuf_iodone
       0  pagebuf_iodone_work
       0  pagebuf_rele
       0  preempt_schedule
      84  schedule
      16  __put_task_struct
      20  audit_free
      36  audit_log_start
      16  __kmalloc
       0  __get_free_pages
      28  __alloc_pages
     284  try_to_free_pages
       0  out_of_memory
       0  mmput
      16  exit_aio
       0  __put_ioctx
      16  do_munmap
       0  split_vma
      36  vma_adjust
       0  fput
       0  __fput
       0  locks_remove_flock
      12  panic
       0  sys_sync
       0  sync_inodes
     308  sync_inodes_sb
       0  do_writepages
     128  mpage_writepages
       4  write_boundary_block
       0  ll_rw_block
      28  submit_bh
       0  bio_alloc
      88  mempool_alloc
     256  wakeup_bdflush
      20  pdflush_operation
       0  printk
      16  release_console_sem
      16  __wake_up
       0  printk
       0  vscnprintf
      32  vsnprintf
     112  number
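
As a rough illustration of the accounting involved (this is not the
actual checker; the frame sizes and the tiny call graph below are just
the first few entries of the table above), the idea is to find the
heaviest static call path and compare it against a limit:

#!/usr/bin/env python
# Sketch only: worst-case stack use along a static call path, warning
# past a limit such as 3072 bytes (to keep ~1k headroom on a 4k stack).
# Assumes an acyclic call graph; real frame sizes would be parsed from
# something like objdump output.

LIMIT = 3072

frame = {                       # per-function stack frame sizes in bytes
    "xfs_ioctl": 144,
    "xfs_swapext": 328,
    "xfs_iaccess": 0,
    "xfs_acl_iaccess": 16,
    "xfs_attr_fetch": 104,
}

calls = {                       # static call graph: caller -> callees
    "xfs_ioctl": ["xfs_swapext"],
    "xfs_swapext": ["xfs_iaccess"],
    "xfs_iaccess": ["xfs_acl_iaccess"],
    "xfs_acl_iaccess": ["xfs_attr_fetch"],
}

def worst(func):
    """Heaviest call path starting at func: (total bytes, [functions])."""
    size = frame.get(func, 0)
    best = (size, [func])
    for callee in calls.get(func, []):
        cost, path = worst(callee)
        if size + cost > best[0]:
            best = (size + cost, [func] + path)
    return best

total, path = worst("xfs_ioctl")
if total > LIMIT:
    print("stackframes for call path too long (%d):" % total)
print("    size  function")
for f in path:
    print("    %4d  %s" % (frame.get(f, 0), f))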

Jörn

-- 
Ninety percent of everything is crap.
-- Sturgeon's Law

* Re: [STACK] >3k call path in xfs
From: Christian Borntraeger @ 2004-06-09 12:54 UTC
  To: linux-kernel; +Cc: nathans, owner-xfs, Jörn Engel

Jörn Engel wrote:
> 3k is not really bad yet, I just like to keep 1k of headroom for
> surprises like an extra int foo[256] in a structure.
> stackframes for call path too long (3064):
[...]
>       12  panic
[...]

I agree that it's good to reduce stack size.

On the other hand, I think a call trace containing panic is not one I
want to see at all.

cheers

Christian

* Re: [STACK] >3k call path in xfs
From: Jörn Engel @ 2004-06-09 13:05 UTC
  To: Christian Borntraeger; +Cc: linux-kernel, nathans, owner-xfs

On Wed, 9 June 2004 14:54:17 +0200, Christian Borntraeger wrote:
> Jörn Engel wrote:
> > 3k is not really bad yet, I just like to keep 1k of headroom for
> > surprises like an extra int foo[256] in a structure.
> > stackframes for call path too long (3064):
> [...]
> >       12  panic
> [...]
> 
> I agree that it's good to reduce stack size.
> 
> On the other hand, I think a call trace containing panic is not one I
> want to see at all.

Does panic switch to a different stack?  If that were the case, you'd
be right and I'd have to make some adjustments.

Or do you mean that at the time of panic, a stack overflow simply
doesn't matter anymore?  The only data that may get corrupted is the
dump for a developer to analyse, after all.  (I'd like to make such a
claim someday, just to hear RAS people scream bloody murder. ;))

Jörn

-- 
"Security vulnerabilities are here to stay."
-- Scott Culp, Manager of the Microsoft Security Response Center, 2001

* Re: [STACK] >3k call path in xfs
From: Jörn Engel @ 2004-06-09 13:10 UTC
  To: nathans, linux-xfs; +Cc: linux-kernel

And since I don't like bounces, how about this patch?

Jörn

-- 
Beware of bugs in the above code; I have only proved it correct, but
not tried it.
-- Donald Knuth

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>

 MAINTAINERS |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

--- linux-2.6.6cow/MAINTAINERS~xfs_list	2004-05-10 18:10:05.000000000 +0200
+++ linux-2.6.6cow/MAINTAINERS	2004-06-09 15:08:08.000000000 +0200
@@ -2354,7 +2354,7 @@
 
 XFS FILESYSTEM
 P:	Silicon Graphics Inc
-M:	owner-xfs@oss.sgi.com
+M:	linux-xfs@oss.sgi.com
 M:	nathans@sgi.com
 L:	linux-xfs@oss.sgi.com
 W:	http://oss.sgi.com/projects/xfs

* Re: [STACK] >3k call path in xfs
From: Steve Lord @ 2004-06-09 15:05 UTC
  To: Jörn Engel; +Cc: nathans, linux-xfs, linux-kernel

Jörn Engel wrote:
> xfs is quite interesting.  No single function is particularly
> stack-hungry, but the sheer depth of the call path adds up.  Nathan,
> can you see if some bytes can be saved here and there?
> 
> 3k is not really bad yet, I just like to keep 1k of headroom for
> surprises like an extra int foo[256] in a structure.

Are you doing some form of call chain analysis to arrive
at this stack?

Actually this one is impossible: the function xfs_bmapi is
confusing your analyser.  It is used for both read and write
calls to map from file offsets to disk blocks.  The path
you chased down from xfs_bmapi is doing a realtime
allocation, but the swapext call does not do allocations;
in this case it is looking up the contents of an acl
for a permission check - xfs_bmapi in this case will
not call much of anything.

The bmapi path you did follow, for an allocation in a
realtime file, is doing a read of the realtime bitmap
inode, which holds the layout of the realtime component
of the filesystem.

Once it gets to schedule, what happens, or which stack is actually
in use, is a little out of XFS's hands.  I think the
path you followed out of schedule is the cleanup of the
audit structure of a dead process - it is the one doing
the panicking here.  An xfs call into schedule to wait for
I/O will not be going down that path.

I think you have to be careful looking at these call chains.

Steve

> [...]


* Re: [STACK] >3k call path in xfs
From: Jörn Engel @ 2004-06-09 16:03 UTC
  To: Steve Lord; +Cc: nathans, linux-xfs, linux-kernel

On Wed, 9 June 2004 10:05:42 -0500, Steve Lord wrote:
> Jörn Engel wrote:
> >xfs is quite interesting.  No single function is particularly
> >stack-hungry, but the sheer depth of the call path adds up.  Nathan,
> >can you see if some bytes can be saved here and there?
> >
> >3k is not really bad yet, I just like to keep 1k of headroom for
> >surprises like an extra int foo[256] in a structure.
> 
> Are you doing some form of call chain analysis to arrive
> at this stack?

Yup.

> Actually this one is impossible: the function xfs_bmapi is
> confusing your analyser.  It is used for both read and write
> calls to map from file offsets to disk blocks.  The path
> you chased down from xfs_bmapi is doing a realtime
> allocation, but the swapext call does not do allocations;
> in this case it is looking up the contents of an acl
> for a permission check - xfs_bmapi in this case will
> not call much of anything.

Ack.  Clearly some sort of semantic checking would be nice to have,
but for the moment we still need humans to do that job.  Thanks!
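
One crude way to feed such human knowledge back into a tool like this
would be an edge blacklist; purely a sketch, the graph format and the
blacklisted edge below are assumptions, not how the analyser really
works:

# Sketch only: drop call-graph edges that are statically present but
# that a developer has ruled out for the reported path, so the same
# false positive is not flagged again.
calls = {
    "xfs_swapext": ["xfs_iaccess"],
    "xfs_iaccess": ["xfs_acl_iaccess"],
    "xfs_bmapi": ["xfs_rtpick_extent"],
}
impossible = {
    # the ACL lookup under xfs_swapext never reaches the rt allocator
    ("xfs_bmapi", "xfs_rtpick_extent"),
}
pruned = {caller: [c for c in callees if (caller, c) not in impossible]
          for caller, callees in calls.items()}
print(pruned)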

> Once it gets to schedule, what happens, or which stack is actually
> in use, is a little out of XFS's hands.  I think the
> path you followed out of schedule is the cleanup of the
> audit structure of a dead process - it is the one doing
> the panicking here.  An xfs call into schedule to wait for
> I/O will not be going down that path.

Yes, same problem.

> I think you have to be careful looking at these call chains.

For sure.  But since I scale extremely badly, I try to distribute this
work.  Thanks for taking a share.

There are a few more spots worth looking at, though.  I have no idea
how often the loops below can iterate, so they may cause problems as
well.  If they too are harmless, xfs is in a good state wrt stack
usage.  Nice.

WARNING: trivial recursion detected:
      44  xfs_attr_node_inactive
WARNING: trivial recursion detected:
      24  xfs_bmap_count_tree
WARNING: recursion detected:
      56  xfs_map_unwritten
      60  xfs_convert_page
WARNING: recursion detected:
       0  xfs_iaccess
      16  xfs_acl_iaccess
     104  xfs_attr_fetch
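
Roughly the kind of check behind these warnings (again only a sketch;
the call graph below is assembled from the warnings themselves, not
taken from the real tool):

# Sketch only: report cycles in the static call graph, since recursion
# makes the worst-case stack depth of such a path unbounded.
calls = {
    "xfs_attr_node_inactive": ["xfs_attr_node_inactive"],  # calls itself
    "xfs_map_unwritten": ["xfs_convert_page"],
    "xfs_convert_page": ["xfs_map_unwritten"],
    "xfs_iaccess": ["xfs_acl_iaccess"],
    "xfs_acl_iaccess": ["xfs_attr_fetch"],
    "xfs_attr_fetch": ["xfs_iaccess"],
}

def report_cycles(graph):
    reported = set()
    def visit(func, path):
        if func in path:                     # found a cycle
            cycle = tuple(sorted(path[path.index(func):]))
            if cycle not in reported:
                reported.add(cycle)
                kind = "trivial " if len(cycle) == 1 else ""
                print("WARNING: %srecursion detected:" % kind)
                for f in cycle:
                    print("      %s" % f)
            return
        for callee in graph.get(func, []):
            visit(callee, path + [func])
    for root in graph:
        visit(root, [])

report_cycles(calls)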

Jörn

-- 
Public Domain  - Free as in Beer
General Public - Free as in Speech
BSD License    - Free as in Enterprise
Shared Source  - Free as in "Work will make you..."

* Re: [STACK] >3k call path in xfs
From: Nathan Scott @ 2004-06-09 22:58 UTC
  To: Jörn Engel; +Cc: linux-xfs, linux-kernel

On Wed, Jun 09, 2004 at 03:10:49PM +0200, Jörn Engel wrote:
> And since I don't like bounces, how about this patch?
> 

This was fixed a while back; try a more recent kernel.

cheers.

-- 
Nathan
