From: Ian Kent <raven@themaw.net>
To: Jeff Layton <jlayton@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>,
	Dave Chinner <david@fromorbit.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"jens.axboe@oracle.com" <jens.axboe@oracle.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"hch@infradead.org" <hch@infradead.org>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list
Date: Wed, 25 Mar 2009 22:18:57 +0900
Message-ID: <49CA2F41.8030804@themaw.net>
In-Reply-To: <20090325091325.17c997fd@tleilax.poochiereds.net>

Jeff Layton wrote:
> On Wed, 25 Mar 2009 20:17:43 +0800
> Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
>>> On Wed, 25 Mar 2009 10:50:37 +0800
>>> Wu Fengguang <fengguang.wu@intel.com> wrote:
>>>
>>>>> Given the right situation though (or maybe the right filesystem), it's
>>>>> not too hard to imagine this problem occurring even in current mainline
>>>>> code with an inode that's frequently being redirtied.
>>>> My reasoning with recent kernels is: for kupdate, s_dirty enqueues
>>>> only happen in __mark_inode_dirty() and redirty_tail().  Newly dirtied
>>>> inodes will be parked in s_dirty for 30s, during which time any
>>>> actively redirtied inodes whose dirtied_when is an old stuck value
>>>> will be retried for writeback, re-inserted into a now non-empty
>>>> s_dirty queue, and have their dirtied_when refreshed.
>>>>
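
For reference, the redirty_tail() logic under discussion looks roughly like
the sketch below (paraphrased from fs/fs-writeback.c of this era, not
verbatim from any particular tree). dirtied_when is refreshed only when
s_dirty is non-empty and the inode would otherwise be older than the most
recently queued entry; when s_dirty is empty the stale timestamp is carried
over, which is the case the patch in the subject line targets:

    static void redirty_tail(struct inode *inode)
    {
            struct super_block *sb = inode->i_sb;

            if (!list_empty(&sb->s_dirty)) {
                    struct inode *tail_inode;

                    /* s_dirty.next is the most recently dirtied inode */
                    tail_inode = list_entry(sb->s_dirty.next,
                                            struct inode, i_list);
                    /* refresh only if this inode is older than that one */
                    if (time_before(inode->dirtied_when,
                                    tail_inode->dirtied_when))
                            inode->dirtied_when = jiffies;
            }
            list_move(&inode->i_list, &sb->s_dirty);
    }
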
>>> Doesn't that assume that there are new inodes that are being dirtied?
>>> If you only have the same inodes being redirtied and never any new
>>> ones, the problem still occurs, right?
>> Yes. But will a production server run for months without dirtying a
>> single new inode? (Just out of curiosity; it's not that I'm unwilling
>> to fix this possible issue. :)
>>
> 
> Yes. It's not that the box will run that long without creating a
> single new dirtied inode, but rather that it won't necessarily create
> one on all of its mounts. It's often the case that someone has a
> mountpoint for a dedicated purpose.
> 
> Consider a host with a mountpoint containing logfiles that are being
> heavily written. There's nothing that says those logs must be rotated
> over any particular period (assuming the fs has enough space, etc). If
> the same files are constantly being redirtied and no new ones are ever
> created, then I think this problem can easily happen.
> 
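
As an aside, that kind of workload is easy to simulate on a dedicated mount
with a toy program like the one below (purely illustrative; the path is
made up). It keeps redirtying the same inode and never creates a new one:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            /* hypothetical log file on an otherwise idle mount */
            int fd = open("/logs/app.log",
                          O_WRONLY | O_CREAT | O_APPEND, 0644);

            if (fd < 0)
                    return 1;
            for (;;) {
                    write(fd, "x\n", 2);     /* redirty the same inode */
                    usleep(100000);          /* ~10 times per second */
            }
    }
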
>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
>>>>>>
>>>>>> (a) Delaying an inode's writeback for 30s may be too long - its blocking
>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
>>>>>> if one big file is repeatedly redirtied and hence has its writeback
>>>>>> delayed considerably.
>>>>>>
>>>>>> However, redirty_tail() currently only tries to speed up
>>>>>> writeback-after-redirty in a _best effort_ way. At best it partially
>>>>>> hides the above issues, if there are any. In particular, if (b) is
>>>>>> possible, the bug should already show up at least in some situations.
>>>>>>
>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
>>>>>>
>>>>>>         http://lkml.org/lkml/2008/1/16/491
>>>>>>
>>>>>>
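
To make point (a) concrete: kupdate-style writeback skips any inode whose
dirtied_when is newer than older_than_this, which is normally set to jiffies
minus the dirty expire interval (30s by default). So if dirtied_when were
reset unconditionally on every requeue, the inode's next writeback attempt
could be pushed out by up to that interval even when whatever blocked it
clears almost immediately. The check in generic_sync_sb_inodes() looks
roughly like this (a sketch, not quoted verbatim):

    /* The queue is time-ordered, so once one inode is too recently
     * dirtied to write back, the ones behind it are too. */
    if (wbc->older_than_this && time_after(inode->dirtied_when,
                                           *wbc->older_than_this))
            break;
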
>>>>> Ok, those are good points that I need to think about.
>>>>>
>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
>>>>> how best to fix this.
>>>> For NFS, is it desirable to retry a redirtied inode after 30s, after
>>>> a shorter 5s, or after 0.1~5s? Or does the exact timing simply not
>>>> matter?
>>>>
>>> I don't really consider NFS to be a special case here. It just happens
>>> to be where we saw the problem originally. Some of its characteristics
>>> might make it easier to hit this, but I'm not certain of that.
>> Now there are two possible solutions:
>> - unconditionally update dirtied_when in redirty_tail();
>> - keep dirtied_when and move redirtied inodes to a new dedicated queue.
>> The first one involves less code, while the second allows more flexible
>> timing.
>>
>> NFS/XFS could be a good starting point for discussing the
>> requirements, so that we can reach a suitable solution.
>>
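
To illustrate the two options (sketches only, neither is the actual posted
patch): option 1 simply stamps the inode on every requeue, while option 2
leaves dirtied_when intact and parks the inode on a dedicated per-sb queue,
named s_more_io_wait here after the queue from Wu's earlier patches
mentioned below (the helper name is illustrative):

    /* Option 1: always refresh the timestamp when requeueing. */
    static void redirty_tail(struct inode *inode)
    {
            inode->dirtied_when = jiffies;
            list_move(&inode->i_list, &inode->i_sb->s_dirty);
    }

    /* Option 2: keep dirtied_when and defer the inode to a separate
     * queue that the writeback loop can revisit on its own, shorter
     * schedule instead of the usual 30s expiry. */
    static void requeue_io_wait(struct inode *inode)
    {
            list_move(&inode->i_list, &inode->i_sb->s_more_io_wait);
    }
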
> 
> It sounds like it, yes. I saw that you posted some patches in January
> (including your s_more_io_wait patch). I'll give those a closer look.
> Adding the new s_more_io_wait queue is interesting and might sidestep
> this problem nicely.
> 

Yes, I was looking at that bit of code, but so far I think it won't be
called for the case we are describing.

Ian


Thread overview: 31+ messages
2009-03-23 20:30 [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list Jeff Layton
2009-03-24  4:41 ` Ian Kent
2009-03-24  5:04   ` Ian Kent
2009-03-24 13:57 ` Wu Fengguang
2009-03-24 14:27   ` Ian Kent
2009-03-24 14:28   ` Jeff Layton
2009-03-24 14:46     ` Jeff Layton
2009-03-24 15:04       ` Ian Kent
2009-03-25  2:25         ` Wu Fengguang
2009-03-25  1:28       ` Wu Fengguang
2009-03-25  2:15         ` Jeff Layton
     [not found]           ` <20090324221528.2bb7c50b-RtJpwOs3+0O+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2009-03-25  2:50             ` Wu Fengguang
2009-03-25 11:51               ` Jeff Layton
     [not found]                 ` <20090325075110.028f0d1d-RtJpwOs3+0O+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2009-03-25 12:17                   ` Wu Fengguang
2009-03-25 13:13                     ` Jeff Layton
2009-03-25 13:18                       ` Ian Kent [this message]
2009-03-25 13:38                         ` Ian Kent
2009-03-25 13:44                           ` Wu Fengguang
2009-03-25 14:00                           ` Jeff Layton
2009-03-25 14:16                             ` Wu Fengguang
2009-03-25 14:28                               ` Jeff Layton
     [not found]                                 ` <20090325102833.138819d1-RtJpwOs3+0O+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2009-03-25 14:38                                   ` Wu Fengguang
2009-03-26 17:03                               ` Jeff Layton
2009-03-27  2:13                                 ` Wu Fengguang
2009-03-27 11:16                                   ` Jeff Layton
     [not found]                                     ` <20090327071633.0c1a0e3a-RtJpwOs3+0O+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2009-03-28 12:44                                       ` Wu Fengguang
2009-03-25 16:55                     ` hch
     [not found]                       ` <20090325165500.GA6047-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
2009-03-25 20:07                         ` Chris Mason
2009-03-25  2:56         ` Ian Kent
2009-03-25  3:28           ` Wu Fengguang
2009-03-25  5:03             ` Ian Kent
