public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH 2/2] XFS: Don't wake xfsbufd when idle
Date: Tue, 5 Jan 2010 10:52:15 +1100	[thread overview]
Message-ID: <20100104235215.GN13802@discord.disaster> (raw)
In-Reply-To: <20100104152048.GC24810@infradead.org>

On Mon, Jan 04, 2010 at 10:20:48AM -0500, Christoph Hellwig wrote:
> On Sat, Jan 02, 2010 at 01:43:35PM +1100, Dave Chinner wrote:
> > The xfsbufd wakes every xfsbufd_centisecs (once per second by
> > default) for each filesystem even when the filesystem is idle.
> > If the xfsbufd has nothing to do, put it into a long term sleep
> > and only wake it up when there is work pending (i.e. dirty
> > buffers to flush soon). This will make laptop power misers happy.
> > 
> > Signed-off-by: Dave Chinner <david@fromorbit.com>
> > ---
> >  fs/xfs/linux-2.6/xfs_buf.c |   28 +++++++++++++++++++---------
> >  1 files changed, 19 insertions(+), 9 deletions(-)
> > 
> 
> >  STATIC int xfsbufd(void *);
> > -STATIC int xfsbufd_wakeup(int, gfp_t);
> > +STATIC int xfsbufd_wakeup_all(int, gfp_t);
> 
> this rename seems unrelated to the rest of the patch.

For memory reclaim we want to wake up the xfsbufd threads on every
single filesystem to free up as much memory as possible.  Hence with
the addition of demand flushing we now have two cases: sometimes we
wake a single xfsbufd, and sometimes we need to wake all of them.
It seemed right to make the distinction clear by renaming the
function. I can drop it if this doesn't make sense....

> > @@ -1595,6 +1595,11 @@ xfs_buf_delwri_queue(
> >  		list_del(&bp->b_list);
> >  	}
> >  
> > +	if (list_empty(dwq)) {
> > +		/* start xfsbufd as it has something to do now */
> > +		wake_up_process(bp->b_target->bt_task);
> > +	}
> 
> Does it make sense to wake xfsbufd before actually adding the item and
> unlocking the queue lock?  Shouldn't this be defered until after the
> addition?

I did it to avoid a temp var - if the xfsbufd runs before we finish
here then it will spin on the lock until we have added the buffer to
the list and dropped the lock. wake_up_process() is safe to call
under a spinlock, so that is not an issue here.

Also, the xfsbufd checks for an empty list before it sleeps and
unconditionally tries to dequeue buffers when woken, so it will
always see the newly added buffer. Hence I think this is safe and
race free.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 7+ messages
2010-01-02  2:43 [PATCH 0/2] XFS: Run kernel threads on demand Dave Chinner
2010-01-02  2:43 ` [PATCH 1/2] XFS: Don't wake the aild once per second Dave Chinner
2010-01-04 15:16   ` Christoph Hellwig
2010-01-04 23:28     ` Dave Chinner
2010-01-02  2:43 ` [PATCH 2/2] XFS: Don't wake xfsbufd when idle Dave Chinner
2010-01-04 15:20   ` Christoph Hellwig
2010-01-04 23:52     ` Dave Chinner [this message]
