public inbox for linux-xfs@vger.kernel.org
From: Eric Sandeen <sandeen@redhat.com>
To: Christoph Hellwig <hch@infradead.org>,
	Eric Sandeen <sandeen@sandeen.net>
Cc: Brian Foster <bfoster@redhat.com>, xfs-oss <xfs@oss.sgi.com>
Subject: Re: [PATCH] xfs: fix buffer use after free on IO error
Date: Tue, 25 Mar 2014 10:39:31 -0700	[thread overview]
Message-ID: <5331BF53.6000300@redhat.com> (raw)
In-Reply-To: <20140325172508.GA4446@infradead.org>

On 3/25/14, 10:25 AM, Christoph Hellwig wrote:
> On Tue, Mar 25, 2014 at 09:05:04AM -0700, Eric Sandeen wrote:
>>>> Out of curiosity, is there any major reason we don't use 0 here
>>>> unconditionally? Are we worried about I/O completing before we have a
>>>> chance to decrement the reference?
>>>
>>> I think this should unconditionally avoid the schedule, and while we're
>>> at it we should kill _xfs_buf_ioend and opencode it here and at the
>>> other callsite.
>>
>> And then remove the flag from xfs_buf_ioend which is always 0 at that
>> point ...
> 
> Is it?  xfs_buf_bio_end_io should still be passing 1, the bio end_io
> handler is the place we really need the workqueue for anyway.

These are the callers of xfs_buf_ioend:

  File              Function                 Line
0 xfs_buf.c         xfs_bioerror             1085 xfs_buf_ioend(bp, 0);
1 xfs_buf.c         _xfs_buf_ioend           1177 xfs_buf_ioend(bp, schedule);
2 xfs_buf_item.c    xfs_buf_item_unpin        494 xfs_buf_ioend(bp, 0);
3 xfs_buf_item.c    xfs_buf_iodone_callbacks 1138 xfs_buf_ioend(bp, 0);
4 xfs_inode.c       xfs_iflush_cluster       3015 xfs_buf_ioend(bp, 0);
5 xfs_log.c         xlog_bdstrat             1644 xfs_buf_ioend(bp, 0);
6 xfs_log_recover.c xlog_recover_iodone       386 xfs_buf_ioend(bp, 0);

so only _xfs_buf_ioend *might* pass something other than 0, and:

  File      Function           Line
0 xfs_buf.c xfs_buf_bio_end_io 1197 _xfs_buf_ioend(bp, 1);
1 xfs_buf.c xfs_buf_iorequest  1377 _xfs_buf_ioend(bp, bp->b_error ? 0 : 1);

At least up until now, that was always called with "1".

>> Yeah I have a patch to do that as well; I wanted to separate the
>> bugfix from the more invasive cleanup, though - and I wanted to
>> get the fix out for review sooner.
> 
> Sure, feel free to leave all the cleanups to another patch.
> 
>> But yeah, I was unsure about whether or not to schedule at all here.
>> We come here from a lot of callsites and I'm honestly not sure what
>> the implications are yet.
> 
> I think the delayed completion is always wrong from the submission
> path.  The error path is just a special case of a completion happening
> before _xfs_buf_ioapply returns.  The combination of incredibly fast
> hardware and bad preemption could cause the same bug you observed.

I wondered about that.

I'm not sure; I don't think it was the buf_rele inside xfs_buf_iorequest
that freed it, I think it was specifically the error path afterwards -
in my case, in xfs_trans_read_buf_map():

                                xfs_buf_iorequest(bp);
					// xfs_buf_iorequest code below
					xfs_buf_hold(bp);
					atomic_set(&bp->b_io_remaining, 1);
					_xfs_buf_ioapply(bp); <-- gets error
					if (atomic_dec_and_test(&bp->b_io_remaining))
						xfs_buf_ioend(bp, bp->b_error ? 0 : 1);
					xfs_buf_rele(bp); <- releases our hold
                        }

                        error = xfs_buf_iowait(bp); <-- sees error; would have waited otherwise
                        if (error) {
                                xfs_buf_ioerror_alert(bp, __func__);
                                xfs_buf_relse(bp); <--- freed here ?

but my bp refcounting & lifetime knowledge is lacking :(

-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 9+ messages
2014-03-22  2:48 [PATCH] xfs: fix buffer use after free on IO error Eric Sandeen
2014-03-25 12:58 ` Brian Foster
2014-03-25 13:17   ` Christoph Hellwig
2014-03-25 16:05     ` Eric Sandeen
2014-03-25 17:25       ` Christoph Hellwig
2014-03-25 17:39         ` Eric Sandeen [this message]
2014-03-25 17:44           ` Christoph Hellwig
2014-03-25 18:08     ` Dave Chinner
2014-03-25 18:13       ` Christoph Hellwig
