public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH 2/3] xfs: use per-filesystem I/O completion workqueues
Date: Thu, 25 Aug 2011 10:48:11 +1000	[thread overview]
Message-ID: <20110825004811.GK3162@dastard> (raw)
In-Reply-To: <20110824060150.001321834@bombadil.infradead.org>

On Wed, Aug 24, 2011 at 01:59:26AM -0400, Christoph Hellwig wrote:
> The new concurrency managed workqueues are cheap enough that we can
> create them per-filesystem instead of global.  This allows us to only
> flush items for the current filesystem during sync, and to remove the
> trylock or defer scheme on the ilock, which is not compatible with
> using the workqueue flush for integrity purposes in the sync code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

The only issue I see with this is that it brings back per-filesystem
workqueue threads. Because all the workqueues are created with
WQ_MEM_RECLAIM, there is a rescuer thread per workqueue that is used
when the cmwq cannot allocate memory to queue the work to the
appropriate per-cpu queue.
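
For reference, the kind of setup that reintroduces those threads
looks roughly like this (a sketch only - the function and field
names are illustrative, not taken from the patch):

```c
/*
 * Sketch, not the actual patch: allocating per-filesystem I/O
 * completion workqueues at mount time.  WQ_MEM_RECLAIM is what
 * creates the per-workqueue rescuer thread, so every mount gains
 * one kernel thread per queue.  Names here are illustrative.
 */
STATIC int
xfs_init_mount_workqueues(
	struct xfs_mount	*mp)
{
	mp->m_data_workqueue = alloc_workqueue("xfsdatad/%s",
			WQ_MEM_RECLAIM, 0, mp->m_fsname);
	if (!mp->m_data_workqueue)
		return -ENOMEM;

	mp->m_unwritten_workqueue = alloc_workqueue("xfsconvertd/%s",
			WQ_MEM_RECLAIM, 0, mp->m_fsname);
	if (!mp->m_unwritten_workqueue)
		goto out_destroy_data;
	return 0;

out_destroy_data:
	destroy_workqueue(mp->m_data_workqueue);
	return -ENOMEM;
}
```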

Right now we have:

$ ps -ef |grep [x]fs
root       748     2  0 Aug23 ?        00:00:00 [xfs_mru_cache]
root       749     2  0 Aug23 ?        00:00:00 [xfslogd]
root       750     2  0 Aug23 ?        00:00:00 [xfsdatad]
root       751     2  0 Aug23 ?        00:00:00 [xfsconvertd]
$

where the xfslogd, xfsdatad and xfsconvertd are the rescuer threads.

I don't think this is a big problem, but it is definitely something
worth noting (at least in the commit message) given that we've
removed just about all the per-filesystem threads recently...


> Index: xfs/fs/xfs/xfs_aops.c
> ===================================================================
> --- xfs.orig/fs/xfs/xfs_aops.c	2011-08-23 04:35:20.822345321 +0200
> +++ xfs/fs/xfs/xfs_aops.c	2011-08-23 04:37:02.425128226 +0200
> @@ -131,30 +131,22 @@ static inline bool xfs_ioend_is_append(s
>   * will be the intended file size until i_size is updated.  If this write does
>   * not extend all the way to the valid file size then restrict this update to
>   * the end of the write.
> - *
> - * This function does not block as blocking on the inode lock in IO completion
> - * can lead to IO completion order dependency deadlocks.. If it can't get the
> - * inode ilock it will return EAGAIN. Callers must handle this.
>   */
> -STATIC int
> +STATIC void
>  xfs_setfilesize(
>  	xfs_ioend_t		*ioend)
>  {
>  	xfs_inode_t		*ip = XFS_I(ioend->io_inode);
>  	xfs_fsize_t		isize;
>  
> -	if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
> -		return EAGAIN;
> -
> +	xfs_ilock(ip, XFS_ILOCK_EXCL);
>  	isize = xfs_ioend_new_eof(ioend);
>  	if (isize) {
>  		trace_xfs_setfilesize(ip, ioend->io_offset, ioend->io_size);
>  		ip->i_d.di_size = isize;
>  		xfs_mark_inode_dirty(ip);
>  	}
> -
>  	xfs_iunlock(ip, XFS_ILOCK_EXCL);
> -	return 0;
>  }

If we are going to block here, then we probably should increase the
per-cpu concurrency of the work queue so that we can continue to
process other ioends while this one is blocked.
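
Concretely, that could just mean passing a non-zero max_active when
the queue is allocated, something like (the value and names are
illustrative):

```c
/*
 * Sketch: max_active > 1 lets the cmwq execute several ioend
 * completion work items per CPU concurrently, so other completions
 * can make progress while one blocks in xfs_ilock().  The value 4
 * is purely illustrative, not a tuned number.
 */
mp->m_data_workqueue = alloc_workqueue("xfsdatad/%s",
		WQ_MEM_RECLAIM, 4, mp->m_fsname);
```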

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 8+ messages
2011-08-24  5:59 [PATCH 0/3] RFC: log all i_size updates Christoph Hellwig
2011-08-24  5:59 ` [PATCH 1/3] xfs: improve ioend error handling Christoph Hellwig
2011-09-02  0:19   ` Dave Chinner
2011-09-12 14:40   ` Alex Elder
2011-09-12 14:49     ` Christoph Hellwig
2011-08-24  5:59 ` [PATCH 2/3] xfs: use per-filesystem I/O completion workqueues Christoph Hellwig
2011-08-25  0:48   ` Dave Chinner [this message]
2011-08-25  5:20     ` Christoph Hellwig
