From: Frederic Weisbecker <fweisbec@gmail.com>
To: Mike Galbraith <efault@gmx.de>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Steven Rostedt <rostedt@goodmis.org>,
	reiserfs-devel@vger.kernel.org,
	RT <linux-rt-users@vger.kernel.org>
Subject: Re: [patch] fs,reiserfs: unlock superblock before calling reiserfs_quota_on_mount()
Date: Thu, 16 Aug 2012 15:05:03 +0200
Message-ID: <20120816130501.GH19716@somewhere>
In-Reply-To: <1345121074.4314.69.camel@marge.simpson.net>

On Thu, Aug 16, 2012 at 02:44:34PM +0200, Mike Galbraith wrote:
> On Tue, 2012-08-14 at 17:15 +0200, Frederic Weisbecker wrote:
> 
> > Looks ok. Thanks.
> > 
> > Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
> 
> Thanks.  Slightly reworded patchlet below. 
> 
> 
> If we hold the superblock lock while calling reiserfs_quota_on_mount(), we can
> deadlock our own worker - mount blocks kworker/3:2, sleeps forever more.
> 
> crash> ps|grep UN
>     715      2   3  ffff880220734d30  UN   0.0       0      0  [kworker/3:2]
>    9369   9341   2  ffff88021ffb7560  UN   1.3  493404 123184  Xorg
>    9665   9664   3  ffff880225b92ab0  UN   0.0   47368    812  udisks-daemon
>   10635  10403   3  ffff880222f22c70  UN   0.0   14904    936  mount
> crash> bt ffff880220734d30
> PID: 715    TASK: ffff880220734d30  CPU: 3   COMMAND: "kworker/3:2"
>  #0 [ffff8802244c3c20] schedule at ffffffff8144584b
>  #1 [ffff8802244c3cc8] __rt_mutex_slowlock at ffffffff814472b3
>  #2 [ffff8802244c3d28] rt_mutex_slowlock at ffffffff814473f5
>  #3 [ffff8802244c3dc8] reiserfs_write_lock at ffffffffa05f28fd [reiserfs]
>  #4 [ffff8802244c3de8] flush_async_commits at ffffffffa05ec91d [reiserfs]
>  #5 [ffff8802244c3e08] process_one_work at ffffffff81073726
>  #6 [ffff8802244c3e68] worker_thread at ffffffff81073eba
>  #7 [ffff8802244c3ec8] kthread at ffffffff810782e0
>  #8 [ffff8802244c3f48] kernel_thread_helper at ffffffff81450064
> crash> rd ffff8802244c3cc8 10
> ffff8802244c3cc8:  ffffffff814472b3 ffff880222f23250   .rD.....P2."....
> ffff8802244c3cd8:  0000000000000000 0000000000000286   ................
> ffff8802244c3ce8:  ffff8802244c3d30 ffff880220734d80   0=L$.....Ms ....
> ffff8802244c3cf8:  ffff880222e8f628 0000000000000000   (.."............
> ffff8802244c3d08:  0000000000000000 0000000000000002   ................
> crash> struct rt_mutex ffff880222e8f628
> struct rt_mutex {
>   wait_lock = {
>     raw_lock = {
>       slock = 65537
>     }
>   }, 
>   wait_list = {
>     node_list = {
>       next = 0xffff8802244c3d48, 
>       prev = 0xffff8802244c3d48
>     }
>   }, 
>   owner = 0xffff880222f22c71, 
>   save_state = 0
> }
> crash> bt 0xffff880222f22c70
> PID: 10635  TASK: ffff880222f22c70  CPU: 3   COMMAND: "mount"
>  #0 [ffff8802216a9868] schedule at ffffffff8144584b
>  #1 [ffff8802216a9910] schedule_timeout at ffffffff81446865
>  #2 [ffff8802216a99a0] wait_for_common at ffffffff81445f74
>  #3 [ffff8802216a9a30] flush_work at ffffffff810712d3
>  #4 [ffff8802216a9ab0] schedule_on_each_cpu at ffffffff81074463
>  #5 [ffff8802216a9ae0] invalidate_bdev at ffffffff81178aba
>  #6 [ffff8802216a9af0] vfs_load_quota_inode at ffffffff811a3632
>  #7 [ffff8802216a9b50] dquot_quota_on_mount at ffffffff811a375c
>  #8 [ffff8802216a9b80] finish_unfinished at ffffffffa05dd8b0 [reiserfs]
>  #9 [ffff8802216a9cc0] reiserfs_fill_super at ffffffffa05de825 [reiserfs]
> #10 [ffff8802216a9d90] mount_bdev at ffffffff8114c93f
> #11 [ffff8802216a9e00] mount_fs at ffffffff8114d035
> #12 [ffff8802216a9e50] vfs_kern_mount at ffffffff81167d36
> #13 [ffff8802216a9e90] do_kern_mount at ffffffff811692c3
> #14 [ffff8802216a9ed0] do_mount at ffffffff8116adb5
> #15 [ffff8802216a9f30] sys_mount at ffffffff8116b25a
> #16 [ffff8802216a9f80] system_call_fastpath at ffffffff8144ef12
>     RIP: 00007f7b9303997a  RSP: 00007ffff443c7a8  RFLAGS: 00010202
>     RAX: 00000000000000a5  RBX: ffffffff8144ef12  RCX: 00007f7b932e9ee0
>     RDX: 00007f7b93d9a400  RSI: 00007f7b93d9a3e0  RDI: 00007f7b93d9a3c0
>     RBP: 00007f7b93d9a2c0   R8: 00007f7b93d9a550   R9: 0000000000000001
>     R10: ffffffffc0ed040e  R11: 0000000000000202  R12: 000000000000040e
>     R13: 0000000000000000  R14: 00000000c0ed040e  R15: 00007ffff443ca20
>     ORIG_RAX: 00000000000000a5  CS: 0033  SS: 002b
> 
> Signed-off-by: Mike Galbraith <efault@gmx.de>
> Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: stable <stable@vger.kernel.org>
> ---
>  fs/reiserfs/super.c |   10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> --- a/fs/reiserfs/super.c
> +++ b/fs/reiserfs/super.c
> @@ -184,7 +184,13 @@ static int remove_save_link_only(struct
>  static int reiserfs_quota_on_mount(struct super_block *, int);
>  #endif
>  
> -/* look for uncompleted unlinks and truncates and complete them */
> +/*
> + * look for uncompleted unlinks and truncates and complete them
> + *
> + * Called with super_block write locked.  If quotas are enabled,
> + * we have to release/retake lest we call dquot_quota_on_mount(),
> + * proceed to schedule_on_each_cpu() and deadlock our own worker.

Could you please mention in the comment that the real issue is that we wait for
the per-cpu worklets to complete flush_async_commits(), which in turn waits for
the superblock write lock? schedule_on_each_cpu() in itself is not the problem.
Otherwise the comment might sound confusing.
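
Something along these lines, perhaps (just a wording sketch based on the
traces above, adjust as you see fit):

/*
 * Look for uncompleted unlinks and truncates and complete them
 *
 * Called with the superblock write locked.  If quotas are enabled, we
 * have to drop and retake the write lock around reiserfs_quota_on_mount():
 * dquot_quota_on_mount() ends up in schedule_on_each_cpu(), which waits
 * for the per-cpu worklets to complete.  One of those worklets may be
 * running flush_async_commits(), which in turn waits for the superblock
 * write lock we are holding, hence the deadlock.
 */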

Thanks.

> + */
>  static int finish_unfinished(struct super_block *s)
>  {
>  	INITIALIZE_PATH(path);
> @@ -231,7 +237,9 @@ static int finish_unfinished(struct supe
>  				quota_enabled[i] = 0;
>  				continue;
>  			}
> +			reiserfs_write_unlock(s);
>  			ret = reiserfs_quota_on_mount(s, i);
> +			reiserfs_write_lock(s);
>  			if (ret < 0)
>  				reiserfs_warning(s, "reiserfs-2500",
>  						 "cannot turn on journaled "
> 
> 

Thread overview: 12+ messages
2012-08-14 13:06 [rfc patch] fs,reiserfs: unlock superblock before calling reiserfs_quota_on_mount() Mike Galbraith
2012-08-14 14:23 ` Steven Rostedt
2012-08-14 14:39   ` Mike Galbraith
2012-08-14 14:56     ` Mike Galbraith
2012-08-14 15:18       ` Thomas Gleixner
2012-08-14 17:26         ` Mike Galbraith
2012-08-14 17:44           ` Steven Rostedt
2012-08-14 18:09             ` Mike Galbraith
2012-08-14 15:15 ` Frederic Weisbecker
2012-08-16 12:44   ` [patch] " Mike Galbraith
2012-08-16 13:05     ` Frederic Weisbecker [this message]
2012-08-16 13:49       ` Mike Galbraith
