public inbox for linux-xfs@vger.kernel.org
From: David Chinner <dgc@sgi.com>
To: xfs-dev <xfs-dev@sgi.com>
Cc: xfs-oss <xfs@oss.sgi.com>
Subject: [patch] Prevent AIL lock contention during transaction completion
Date: Mon, 21 Jan 2008 16:23:30 +1100	[thread overview]
Message-ID: <20080121052330.GG155259@sgi.com> (raw)

When hundreds of processors attempt to commit
transactions at the same time, they can contend on the AIL
lock when updating the tail LSN held in the in-core log
structure.

At the moment, the tail LSN is only needed when actually writing
out an iclog, so it really does not need to be updated on every
single transaction completion - only those that result in switching
iclogs and flushing them to disk.

The result is that we reduce the number of times we need to grab the
AIL lock and the log grant lock by up to two orders of magnitude
on large processor count machines. The problem was previously
hidden by contention on the AIL lock while walking the AIL list,
which has recently been solved.

Signed-off-by: Dave Chinner <dgc@sgi.com>
---
 fs/xfs/xfs_log.c |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

Index: 2.6.x-xfs-new/fs/xfs/xfs_log.c
===================================================================
--- 2.6.x-xfs-new.orig/fs/xfs/xfs_log.c	2008-01-21 16:06:27.187549816 +1100
+++ 2.6.x-xfs-new/fs/xfs/xfs_log.c	2008-01-21 16:16:51.804146394 +1100
@@ -2815,15 +2815,13 @@ xlog_state_put_ticket(xlog_t	    *log,
  *
  */
 STATIC int
-xlog_state_release_iclog(xlog_t		*log,
-			 xlog_in_core_t	*iclog)
+xlog_state_release_iclog(
+	xlog_t		*log,
+	xlog_in_core_t	*iclog)
 {
 	int		sync = 0;	/* do we sync? */
 
-	xlog_assign_tail_lsn(log->l_mp);
-
 	spin_lock(&log->l_icloglock);
-
 	if (iclog->ic_state & XLOG_STATE_IOERROR) {
 		spin_unlock(&log->l_icloglock);
 		return XFS_ERROR(EIO);
@@ -2835,13 +2833,14 @@ xlog_state_release_iclog(xlog_t		*log,
 
 	if (--iclog->ic_refcnt == 0 &&
 	    iclog->ic_state == XLOG_STATE_WANT_SYNC) {
+		/* update tail before writing to iclog */
+		xlog_assign_tail_lsn(log->l_mp);
 		sync++;
 		iclog->ic_state = XLOG_STATE_SYNCING;
 		iclog->ic_header.h_tail_lsn = cpu_to_be64(log->l_tail_lsn);
 		xlog_verify_tail_lsn(log, iclog, log->l_tail_lsn);
 		/* cycle incremented when incrementing curr_block */
 	}
-
 	spin_unlock(&log->l_icloglock);
 
 	/*
@@ -2851,11 +2850,9 @@ xlog_state_release_iclog(xlog_t		*log,
 	 * this iclog has consistent data, so we ignore IOERROR
 	 * flags after this point.
 	 */
-	if (sync) {
+	if (sync)
 		return xlog_sync(log, iclog);
-	}
 	return 0;
-
 }	/* xlog_state_release_iclog */
 
 

Thread overview: 7+ messages
2008-01-21  5:23 David Chinner [this message]
2008-01-23  7:12 ` [patch] Prevent AIL lock contention during transaction completion Timothy Shimmin
2008-01-23  7:34   ` David Chinner
2008-01-25  6:51     ` Timothy Shimmin
2008-01-25  7:42       ` David Chinner
2008-02-14 23:45         ` David Chinner
2008-02-14 23:52           ` Timothy Shimmin
