public inbox for linux-xfs@vger.kernel.org
From: Kamal Dasu <kdasu.kdev@gmail.com>
To: xfs@oss.sgi.com
Subject: Re: [PATCH 4/4] V2 xfs: fix deadlock in xfs_rtfree_extent with kernel v2.6.37
Date: Sat, 25 Feb 2012 07:46:41 -0800 (PST)	[thread overview]
Message-ID: <33390983.post@talk.nabble.com> (raw)
In-Reply-To: <20120225094030.GA3148@infradead.org>


Christoph,

I have not been able to create a simple test case for this yet.

Currently the only way I can reproduce it is with a time-shift recording
application that stores video streams on a realtime subvolume.  Sometimes
when such a stream is deleted I see the problem.  I have not yet figured
out how to write a test that consistently produces an allocation where
the bitmap spans multiple extents while freeing the inode.

I am still trying to come up with a simple test case.
If you have any ideas, let me know and I will be happy to try them out.

Kamal



Christoph Hellwig wrote:
> 
> On Thu, Feb 23, 2012 at 08:52:57AM -0800, Kamal Dasu wrote:
>> 
>> To fix the deadlock caused by recursively calling xfs_rtfree_extent
>> from xfs_bunmapi():
>> 
>>  - removed xfs_trans_iget() from xfs_rtfree_extent(), and instead
>>    added asserts that the inode is locked and has an inode_item
>>    attached to it.
>>  - in xfs_bunmapi(), when dealing with an inode that has the rt flag
>>    set, call xfs_ilock() and xfs_trans_ijoin() so that the reference
>>    count is bumped on the inode and it is attached to the transaction
>>    before calling into xfs_bmap_del_extent, similar to what we do in
>>    xfs_bmap_rtalloc.
>> 
>> Signed-off-by: Kamal Dasu <kdasu.kdev@gmail.com>
> 
> This looks good, thanks a lot!
> 
> Do you have an easily reproducible testcase for this which we could
> put into xfstests?
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
> 
> 



Thread overview: 11+ messages
2012-02-17 22:46 [PATCH 0/4] xfs: resurrect realtime subvolume support on kernel 2.6.37 kdasu
2012-02-17 22:51 ` [PATCH 1/4] xfs: only lock the rt bitmap inode once per allocation kdasu
2012-02-17 22:55   ` [PATCH 2/4] xfs: fix xfs_get_extsz_hint for a zero extent size hint kdasu
2012-02-17 22:58     ` [PATCH 3/4] xfs: add lockdep annotations for the rt inodes kdasu
2012-02-17 23:00       ` [PATCH 4/4] xfs: fix deadlock in xfs_rtfree_extent with kernel v2.6.37 kdasu
2012-02-19 22:41         ` Christoph Hellwig
2012-02-21 17:22           ` Kamal Dasu
2012-02-23 16:52             ` [PATCH 4/4] V2 " Kamal Dasu
2012-02-25  9:40               ` Christoph Hellwig
2012-02-25 15:46                 ` Kamal Dasu [this message]
2012-02-28  8:36                   ` Christoph Hellwig
