* [PATCH 01/12] xfs: split tail_lsn assignments from log space wakeups
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 02/12] xfs: do exact log space wakeups in xlog_ungrant_log_space Christoph Hellwig
` (11 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-cleanup-xfs_log_move_tail --]
[-- Type: text/plain, Size: 10272 bytes --]
Currently xfs_log_move_tail has a tail_lsn argument that is horribly
overloaded: it may contain either an actual lsn to assign to the log tail,
0 as a special case to use the last sync LSN, or 1 to indicate that no tail
LSN assignment should be performed, and that we should opportunistically wake
up one task waiting for log space even if we did not move the LSN.
Remove the tail lsn assignment from xfs_log_move_tail and make the two
callers use xlog_assign_tail_lsn instead of the current variant of partially
using the code in xfs_log_move_tail and partially opencoding it. Note that
this means we grow an additional lock roundtrip on the AIL lock for each bulk
update or delete, which is still far less than what we had before introducing
the bulk operations. If this proves to be a problem we can still add a variant
of xlog_assign_tail_lsn that expects the lock to be held already.
Also rename the remainder of xfs_log_move_tail to xfs_log_space_wake as
that name describes its functionality much better.
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 74 ++++++++++++++++++++-----------------------------
fs/xfs/xfs_log.h | 5 +--
fs/xfs/xfs_log_priv.h | 1
fs/xfs/xfs_trans_ail.c | 45 +++++++----------------------
4 files changed, 45 insertions(+), 80 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2012-02-17 18:45:34.733333161 -0800
+++ xfs/fs/xfs/xfs_log.c 2012-02-19 12:58:49.466628833 -0800
@@ -760,37 +760,35 @@ xfs_log_item_init(
INIT_LIST_HEAD(&item->li_cil);
}
+/*
+ * Wake up processes waiting for log space after we have moved the log tail.
+ *
+ * If opportunistic is set wake up one waiter even if we do not have enough
+ * free space by our strict accounting.
+ */
void
-xfs_log_move_tail(xfs_mount_t *mp,
- xfs_lsn_t tail_lsn)
+xfs_log_space_wake(
+ struct xfs_mount *mp,
+ bool opportunistic)
{
- xlog_ticket_t *tic;
- xlog_t *log = mp->m_log;
- int need_bytes, free_bytes;
+ struct xlog_ticket *tic;
+ struct log *log = mp->m_log;
+ int need_bytes, free_bytes;
if (XLOG_FORCED_SHUTDOWN(log))
return;
- if (tail_lsn == 0)
- tail_lsn = atomic64_read(&log->l_last_sync_lsn);
-
- /* tail_lsn == 1 implies that we weren't passed a valid value. */
- if (tail_lsn != 1)
- atomic64_set(&log->l_tail_lsn, tail_lsn);
-
if (!list_empty_careful(&log->l_writeq)) {
-#ifdef DEBUG
- if (log->l_flags & XLOG_ACTIVE_RECOVERY)
- panic("Recovery problem");
-#endif
+ ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
+
spin_lock(&log->l_grant_write_lock);
free_bytes = xlog_space_left(log, &log->l_grant_write_head);
list_for_each_entry(tic, &log->l_writeq, t_queue) {
ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV);
- if (free_bytes < tic->t_unit_res && tail_lsn != 1)
+ if (free_bytes < tic->t_unit_res && !opportunistic)
break;
- tail_lsn = 0;
+ opportunistic = false;
free_bytes -= tic->t_unit_res;
trace_xfs_log_regrant_write_wake_up(log, tic);
wake_up(&tic->t_wait);
@@ -799,10 +797,8 @@ xfs_log_move_tail(xfs_mount_t *mp,
}
if (!list_empty_careful(&log->l_reserveq)) {
-#ifdef DEBUG
- if (log->l_flags & XLOG_ACTIVE_RECOVERY)
- panic("Recovery problem");
-#endif
+ ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
+
spin_lock(&log->l_grant_reserve_lock);
free_bytes = xlog_space_left(log, &log->l_grant_reserve_head);
list_for_each_entry(tic, &log->l_reserveq, t_queue) {
@@ -810,9 +806,9 @@ xfs_log_move_tail(xfs_mount_t *mp,
need_bytes = tic->t_unit_res*tic->t_cnt;
else
need_bytes = tic->t_unit_res;
- if (free_bytes < need_bytes && tail_lsn != 1)
+ if (free_bytes < need_bytes && !opportunistic)
break;
- tail_lsn = 0;
+ opportunistic = false;
free_bytes -= need_bytes;
trace_xfs_log_grant_wake_up(log, tic);
wake_up(&tic->t_wait);
@@ -867,21 +863,7 @@ xfs_log_need_covered(xfs_mount_t *mp)
return needed;
}
-/******************************************************************************
- *
- * local routines
- *
- ******************************************************************************
- */
-
-/* xfs_trans_tail_ail returns 0 when there is nothing in the list.
- * The log manager must keep track of the last LR which was committed
- * to disk. The lsn of this LR will become the new tail_lsn whenever
- * xfs_trans_tail_ail returns 0. If we don't do this, we run into
- * the situation where stuff could be written into the log but nothing
- * was ever in the AIL when asked. Eventually, we panic since the
- * tail hits the head.
- *
+/*
* We may be holding the log iclog lock upon entering this routine.
*/
xfs_lsn_t
@@ -891,10 +873,17 @@ xlog_assign_tail_lsn(
xfs_lsn_t tail_lsn;
struct log *log = mp->m_log;
+ /*
+ * To make sure we always have a valid LSN for the log tail we keep
+ * track of the last LSN which was committed in log->l_last_sync_lsn,
+ * and use that when the AIL was empty and xfs_ail_min_lsn returns 0.
+ *
+ * If the AIL has been emptied we also need to wake any process
+ * waiting for this condition.
+ */
tail_lsn = xfs_ail_min_lsn(mp->m_ail);
if (!tail_lsn)
tail_lsn = atomic64_read(&log->l_last_sync_lsn);
-
atomic64_set(&log->l_tail_lsn, tail_lsn);
return tail_lsn;
}
@@ -2759,9 +2748,8 @@ xlog_ungrant_log_space(xlog_t *log,
trace_xfs_log_ungrant_exit(log, ticket);
- xfs_log_move_tail(log->l_mp, 1);
-} /* xlog_ungrant_log_space */
-
+ xfs_log_space_wake(log->l_mp, true);
+}
/*
* Flush iclog to disk if this is the last reference to the given iclog and
Index: xfs/fs/xfs/xfs_trans_ail.c
===================================================================
--- xfs.orig/fs/xfs/xfs_trans_ail.c 2012-02-17 18:45:34.746666495 -0800
+++ xfs/fs/xfs/xfs_trans_ail.c 2012-02-19 12:59:53.709961992 -0800
@@ -643,15 +643,15 @@ xfs_trans_unlocked_item(
* at the tail, it doesn't matter what result we get back. This
* is slightly racy because since we were just unlocked, we could
* go to sleep between the call to xfs_ail_min and the call to
- * xfs_log_move_tail, have someone else lock us, commit to us disk,
+ * xfs_log_space_wake, have someone else lock us, commit to us disk,
* move us out of the tail of the AIL, and then we wake up. However,
- * the call to xfs_log_move_tail() doesn't do anything if there's
+ * the call to xfs_log_space_wake() doesn't do anything if there's
* not enough free space to wake people up so we're safe calling it.
*/
min_lip = xfs_ail_min(ailp);
if (min_lip == lip)
- xfs_log_move_tail(ailp->xa_mount, 1);
+ xfs_log_space_wake(ailp->xa_mount, true);
} /* xfs_trans_unlocked_item */
/*
@@ -685,7 +685,6 @@ xfs_trans_ail_update_bulk(
xfs_lsn_t lsn) __releases(ailp->xa_lock)
{
xfs_log_item_t *mlip;
- xfs_lsn_t tail_lsn;
int mlip_changed = 0;
int i;
LIST_HEAD(tmp);
@@ -712,22 +711,12 @@ xfs_trans_ail_update_bulk(
if (!list_empty(&tmp))
xfs_ail_splice(ailp, cur, &tmp, lsn);
+ spin_unlock(&ailp->xa_lock);
- if (!mlip_changed) {
- spin_unlock(&ailp->xa_lock);
- return;
+ if (mlip_changed && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
+ xlog_assign_tail_lsn(ailp->xa_mount);
+ xfs_log_space_wake(ailp->xa_mount, false);
}
-
- /*
- * It is not safe to access mlip after the AIL lock is dropped, so we
- * must get a copy of li_lsn before we do so. This is especially
- * important on 32-bit platforms where accessing and updating 64-bit
- * values like li_lsn is not atomic.
- */
- mlip = xfs_ail_min(ailp);
- tail_lsn = mlip->li_lsn;
- spin_unlock(&ailp->xa_lock);
- xfs_log_move_tail(ailp->xa_mount, tail_lsn);
}
/*
@@ -758,7 +747,6 @@ xfs_trans_ail_delete_bulk(
int nr_items) __releases(ailp->xa_lock)
{
xfs_log_item_t *mlip;
- xfs_lsn_t tail_lsn;
int mlip_changed = 0;
int i;
@@ -785,23 +773,12 @@ xfs_trans_ail_delete_bulk(
if (mlip == lip)
mlip_changed = 1;
}
+ spin_unlock(&ailp->xa_lock);
- if (!mlip_changed) {
- spin_unlock(&ailp->xa_lock);
- return;
+ if (mlip_changed && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
+ xlog_assign_tail_lsn(ailp->xa_mount);
+ xfs_log_space_wake(ailp->xa_mount, false);
}
-
- /*
- * It is not safe to access mlip after the AIL lock is dropped, so we
- * must get a copy of li_lsn before we do so. This is especially
- * important on 32-bit platforms where accessing and updating 64-bit
- * values like li_lsn is not atomic. It is possible we've emptied the
- * AIL here, so if that is the case, pass an LSN of 0 to the tail move.
- */
- mlip = xfs_ail_min(ailp);
- tail_lsn = mlip ? mlip->li_lsn : 0;
- spin_unlock(&ailp->xa_lock);
- xfs_log_move_tail(ailp->xa_mount, tail_lsn);
}
/*
Index: xfs/fs/xfs/xfs_log.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log.h 2012-02-17 18:45:34.756666494 -0800
+++ xfs/fs/xfs/xfs_log.h 2012-02-19 12:58:49.469962166 -0800
@@ -160,8 +160,9 @@ int xfs_log_mount(struct xfs_mount *mp
xfs_daddr_t start_block,
int num_bblocks);
int xfs_log_mount_finish(struct xfs_mount *mp);
-void xfs_log_move_tail(struct xfs_mount *mp,
- xfs_lsn_t tail_lsn);
+xfs_lsn_t xlog_assign_tail_lsn(struct xfs_mount *mp);
+void xfs_log_space_wake(struct xfs_mount *mp,
+ bool opportunistic);
int xfs_log_notify(struct xfs_mount *mp,
struct xlog_in_core *iclog,
xfs_log_callback_t *callback_entry);
Index: xfs/fs/xfs/xfs_log_priv.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log_priv.h 2012-02-17 18:45:34.769999827 -0800
+++ xfs/fs/xfs/xfs_log_priv.h 2012-02-19 12:58:49.469962166 -0800
@@ -545,7 +545,6 @@ typedef struct log {
#define XLOG_FORCED_SHUTDOWN(log) ((log)->l_flags & XLOG_IO_ERROR)
/* common routines */
-extern xfs_lsn_t xlog_assign_tail_lsn(struct xfs_mount *mp);
extern int xlog_recover(xlog_t *log);
extern int xlog_recover_finish(xlog_t *log);
extern void xlog_pack_data(xlog_t *log, xlog_in_core_t *iclog, int);
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* [PATCH 02/12] xfs: do exact log space wakeups in xlog_ungrant_log_space
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
2012-02-20 2:31 ` [PATCH 01/12] xfs: split tail_lsn assignments from log space wakeups Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 03/12] xfs: remove xfs_trans_unlocked_item Christoph Hellwig
` (10 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-exact-wakeups-in-xlog_ungrant_log_space --]
[-- Type: text/plain, Size: 1009 bytes --]
The only reason that xfs_log_space_wake had to do opportunistic wakeups
was that the old xfs_log_move_tail calling convention didn't allow for
exact wakeups when not updating the log tail LSN. Since this issue has
been fixed we can do exact wakeups now.
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-12-11 15:30:49.977750776 +0100
+++ xfs/fs/xfs/xfs_log.c 2011-12-11 17:49:00.692836117 +0100
@@ -2748,7 +2748,7 @@ xlog_ungrant_log_space(xlog_t *log,
trace_xfs_log_ungrant_exit(log, ticket);
- xfs_log_space_wake(log->l_mp, true);
+ xfs_log_space_wake(log->l_mp, false);
}
/*
* [PATCH 03/12] xfs: remove xfs_trans_unlocked_item
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
2012-02-20 2:31 ` [PATCH 01/12] xfs: split tail_lsn assignments from log space wakeups Christoph Hellwig
2012-02-20 2:31 ` [PATCH 02/12] xfs: do exact log space wakeups in xlog_ungrant_log_space Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 04/12] xfs: cleanup xfs_log_space_wake Christoph Hellwig
` (9 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-kill-xfs_trans_unlocked_item --]
[-- Type: text/plain, Size: 9039 bytes --]
There is no reason to wake up log space waiters when unlocking inodes or
dquots, and the commit log has no explanation for this function either.
Given that we now have exact log space wakeups everywhere we can assume
the reason for this function was to paper over log space races in earlier
XFS versions.
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_dquot.c | 11 -----------
fs/xfs/xfs_dquot.h | 3 +--
fs/xfs/xfs_iget.c | 13 +------------
fs/xfs/xfs_inode.h | 4 +---
fs/xfs/xfs_inode_item.c | 6 +-----
fs/xfs/xfs_trans_ail.c | 44 --------------------------------------------
fs/xfs/xfs_trans_buf.c | 25 +------------------------
fs/xfs/xfs_trans_priv.h | 3 ---
8 files changed, 5 insertions(+), 104 deletions(-)
Index: xfs/fs/xfs/xfs_iget.c
===================================================================
--- xfs.orig/fs/xfs/xfs_iget.c 2012-02-17 18:45:34.000000000 -0800
+++ xfs/fs/xfs/xfs_iget.c 2012-02-19 13:02:44.699961529 -0800
@@ -642,8 +642,7 @@ xfs_iunlock(
(XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL));
ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) !=
(XFS_ILOCK_SHARED | XFS_ILOCK_EXCL));
- ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_IUNLOCK_NONOTIFY |
- XFS_LOCK_DEP_MASK)) == 0);
+ ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_DEP_MASK)) == 0);
ASSERT(lock_flags != 0);
if (lock_flags & XFS_IOLOCK_EXCL)
@@ -656,16 +655,6 @@ xfs_iunlock(
else if (lock_flags & XFS_ILOCK_SHARED)
mrunlock_shared(&ip->i_lock);
- if ((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) &&
- !(lock_flags & XFS_IUNLOCK_NONOTIFY) && ip->i_itemp) {
- /*
- * Let the AIL know that this item has been unlocked in case
- * it is in the AIL and anyone is waiting on it. Don't do
- * this if the caller has asked us not to.
- */
- xfs_trans_unlocked_item(ip->i_itemp->ili_item.li_ailp,
- (xfs_log_item_t*)(ip->i_itemp));
- }
trace_xfs_iunlock(ip, lock_flags, _RET_IP_);
}
Index: xfs/fs/xfs/xfs_inode.h
===================================================================
--- xfs.orig/fs/xfs/xfs_inode.h 2012-02-17 18:45:34.000000000 -0800
+++ xfs/fs/xfs/xfs_inode.h 2012-02-19 13:02:44.699961529 -0800
@@ -422,7 +422,6 @@ static inline int xfs_isiflocked(struct
#define XFS_IOLOCK_SHARED (1<<1)
#define XFS_ILOCK_EXCL (1<<2)
#define XFS_ILOCK_SHARED (1<<3)
-#define XFS_IUNLOCK_NONOTIFY (1<<4)
#define XFS_LOCK_MASK (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED \
| XFS_ILOCK_EXCL | XFS_ILOCK_SHARED)
@@ -431,8 +430,7 @@ static inline int xfs_isiflocked(struct
{ XFS_IOLOCK_EXCL, "IOLOCK_EXCL" }, \
{ XFS_IOLOCK_SHARED, "IOLOCK_SHARED" }, \
{ XFS_ILOCK_EXCL, "ILOCK_EXCL" }, \
- { XFS_ILOCK_SHARED, "ILOCK_SHARED" }, \
- { XFS_IUNLOCK_NONOTIFY, "IUNLOCK_NONOTIFY" }
+ { XFS_ILOCK_SHARED, "ILOCK_SHARED" }
/*
Index: xfs/fs/xfs/xfs_inode_item.c
===================================================================
--- xfs.orig/fs/xfs/xfs_inode_item.c 2012-02-17 18:45:34.000000000 -0800
+++ xfs/fs/xfs/xfs_inode_item.c 2012-02-19 13:02:44.699961529 -0800
@@ -596,11 +596,7 @@ xfs_inode_item_trylock(
/* Stale items should force out the iclog */
if (ip->i_flags & XFS_ISTALE) {
xfs_ifunlock(ip);
- /*
- * we hold the AIL lock - notify the unlock routine of this
- * so it doesn't try to get the lock again.
- */
- xfs_iunlock(ip, XFS_ILOCK_SHARED|XFS_IUNLOCK_NONOTIFY);
+ xfs_iunlock(ip, XFS_ILOCK_SHARED);
return XFS_ITEM_PINNED;
}
Index: xfs/fs/xfs/xfs_dquot.c
===================================================================
--- xfs.orig/fs/xfs/xfs_dquot.c 2012-02-19 12:58:47.169962173 -0800
+++ xfs/fs/xfs/xfs_dquot.c 2012-02-19 13:02:44.699961529 -0800
@@ -983,17 +983,6 @@ xfs_qm_dqflush(
}
-void
-xfs_dqunlock(
- xfs_dquot_t *dqp)
-{
- xfs_dqunlock_nonotify(dqp);
- if (dqp->q_logitem.qli_dquot == dqp) {
- xfs_trans_unlocked_item(dqp->q_logitem.qli_item.li_ailp,
- &dqp->q_logitem.qli_item);
- }
-}
-
/*
* Lock two xfs_dquot structures.
*
Index: xfs/fs/xfs/xfs_dquot.h
===================================================================
--- xfs.orig/fs/xfs/xfs_dquot.h 2012-02-19 12:30:44.000000000 -0800
+++ xfs/fs/xfs/xfs_dquot.h 2012-02-19 13:02:44.699961529 -0800
@@ -97,7 +97,7 @@ static inline void xfs_dqlock(struct xfs
mutex_lock(&dqp->q_qlock);
}
-static inline void xfs_dqunlock_nonotify(struct xfs_dquot *dqp)
+static inline void xfs_dqunlock(struct xfs_dquot *dqp)
{
mutex_unlock(&dqp->q_qlock);
}
@@ -152,7 +152,6 @@ extern int xfs_qm_dqget(xfs_mount_t *,
extern void xfs_qm_dqput(xfs_dquot_t *);
extern void xfs_dqlock2(struct xfs_dquot *, struct xfs_dquot *);
-extern void xfs_dqunlock(struct xfs_dquot *);
extern void xfs_dqflock_pushbuf_wait(struct xfs_dquot *dqp);
static inline struct xfs_dquot *xfs_qm_dqhold(struct xfs_dquot *dqp)
Index: xfs/fs/xfs/xfs_trans_ail.c
===================================================================
--- xfs.orig/fs/xfs/xfs_trans_ail.c 2012-02-19 12:59:53.000000000 -0800
+++ xfs/fs/xfs/xfs_trans_ail.c 2012-02-19 13:03:06.199961470 -0800
@@ -611,50 +611,6 @@ xfs_ail_push_all(
}
/*
- * This is to be called when an item is unlocked that may have
- * been in the AIL. It will wake up the first member of the AIL
- * wait list if this item's unlocking might allow it to progress.
- * If the item is in the AIL, then we need to get the AIL lock
- * while doing our checking so we don't race with someone going
- * to sleep waiting for this event in xfs_trans_push_ail().
- */
-void
-xfs_trans_unlocked_item(
- struct xfs_ail *ailp,
- xfs_log_item_t *lip)
-{
- xfs_log_item_t *min_lip;
-
- /*
- * If we're forcibly shutting down, we may have
- * unlocked log items arbitrarily. The last thing
- * we want to do is to move the tail of the log
- * over some potentially valid data.
- */
- if (!(lip->li_flags & XFS_LI_IN_AIL) ||
- XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
- return;
- }
-
- /*
- * This is the one case where we can call into xfs_ail_min()
- * without holding the AIL lock because we only care about the
- * case where we are at the tail of the AIL. If the object isn't
- * at the tail, it doesn't matter what result we get back. This
- * is slightly racy because since we were just unlocked, we could
- * go to sleep between the call to xfs_ail_min and the call to
- * xfs_log_space_wake, have someone else lock us, commit to us disk,
- * move us out of the tail of the AIL, and then we wake up. However,
- * the call to xfs_log_space_wake() doesn't do anything if there's
- * not enough free space to wake people up so we're safe calling it.
- */
- min_lip = xfs_ail_min(ailp);
-
- if (min_lip == lip)
- xfs_log_space_wake(ailp->xa_mount, true);
-} /* xfs_trans_unlocked_item */
-
-/*
* xfs_trans_ail_update - bulk AIL insertion operation.
*
* @xfs_trans_ail_update takes an array of log items that all need to be
Index: xfs/fs/xfs/xfs_trans_buf.c
===================================================================
--- xfs.orig/fs/xfs/xfs_trans_buf.c 2012-02-17 18:45:34.000000000 -0800
+++ xfs/fs/xfs/xfs_trans_buf.c 2012-02-19 13:02:44.699961529 -0800
@@ -463,19 +463,7 @@ xfs_trans_brelse(xfs_trans_t *tp,
* Default to a normal brelse() call if the tp is NULL.
*/
if (tp == NULL) {
- struct xfs_log_item *lip = bp->b_fspriv;
-
ASSERT(bp->b_transp == NULL);
-
- /*
- * If there's a buf log item attached to the buffer,
- * then let the AIL know that the buffer is being
- * unlocked.
- */
- if (lip != NULL && lip->li_type == XFS_LI_BUF) {
- bip = bp->b_fspriv;
- xfs_trans_unlocked_item(bip->bli_item.li_ailp, lip);
- }
xfs_buf_relse(bp);
return;
}
@@ -550,21 +538,10 @@ xfs_trans_brelse(xfs_trans_t *tp,
ASSERT(!(bip->bli_item.li_flags & XFS_LI_IN_AIL));
ASSERT(!(bip->bli_flags & XFS_BLI_INODE_ALLOC_BUF));
xfs_buf_item_relse(bp);
- bip = NULL;
- }
- bp->b_transp = NULL;
-
- /*
- * If we've still got a buf log item on the buffer, then
- * tell the AIL that the buffer is being unlocked.
- */
- if (bip != NULL) {
- xfs_trans_unlocked_item(bip->bli_item.li_ailp,
- (xfs_log_item_t*)bip);
}
+ bp->b_transp = NULL;
xfs_buf_relse(bp);
- return;
}
/*
Index: xfs/fs/xfs/xfs_trans_priv.h
===================================================================
--- xfs.orig/fs/xfs/xfs_trans_priv.h 2012-02-17 18:45:34.593333162 -0800
+++ xfs/fs/xfs/xfs_trans_priv.h 2012-02-19 13:02:44.703294862 -0800
@@ -104,9 +104,6 @@ void xfs_ail_push(struct xfs_ail *, xf
void xfs_ail_push_all(struct xfs_ail *);
xfs_lsn_t xfs_ail_min_lsn(struct xfs_ail *ailp);
-void xfs_trans_unlocked_item(struct xfs_ail *,
- xfs_log_item_t *);
-
struct xfs_log_item * xfs_trans_ail_cursor_first(struct xfs_ail *ailp,
struct xfs_ail_cursor *cur,
xfs_lsn_t lsn);
* [PATCH 04/12] xfs: cleanup xfs_log_space_wake
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (2 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 03/12] xfs: remove xfs_trans_unlocked_item Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 05/12] xfs: remove log space waitqueues Christoph Hellwig
` (8 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-cleanup-xfs_log_space_wake --]
[-- Type: text/plain, Size: 4128 bytes --]
Remove the now unused opportunistic parameter, and use the
xlog_writeq_wake and xlog_reserveq_wake helpers now that we don't have
to care about the opportunistic wakeups.
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 35 +++++------------------------------
fs/xfs/xfs_log.h | 3 +--
fs/xfs/xfs_trans_ail.c | 4 ++--
3 files changed, 8 insertions(+), 34 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-11-29 08:42:35.235496706 +0100
+++ xfs/fs/xfs/xfs_log.c 2011-11-29 08:44:52.854751157 +0100
@@ -762,18 +762,13 @@ xfs_log_item_init(
/*
* Wake up processes waiting for log space after we have moved the log tail.
- *
- * If opportunistic is set wake up one waiter even if we do not have enough
- * free space by our strict accounting.
*/
void
xfs_log_space_wake(
- struct xfs_mount *mp,
- bool opportunistic)
+ struct xfs_mount *mp)
{
- struct xlog_ticket *tic;
struct log *log = mp->m_log;
- int need_bytes, free_bytes;
+ int free_bytes;
if (XLOG_FORCED_SHUTDOWN(log))
return;
@@ -783,16 +778,7 @@ xfs_log_space_wake(
spin_lock(&log->l_grant_write_lock);
free_bytes = xlog_space_left(log, &log->l_grant_write_head);
- list_for_each_entry(tic, &log->l_writeq, t_queue) {
- ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV);
-
- if (free_bytes < tic->t_unit_res && !opportunistic)
- break;
- opportunistic = false;
- free_bytes -= tic->t_unit_res;
- trace_xfs_log_regrant_write_wake_up(log, tic);
- wake_up(&tic->t_wait);
- }
+ xlog_writeq_wake(log, &free_bytes);
spin_unlock(&log->l_grant_write_lock);
}
@@ -801,18 +787,7 @@ xfs_log_space_wake(
spin_lock(&log->l_grant_reserve_lock);
free_bytes = xlog_space_left(log, &log->l_grant_reserve_head);
- list_for_each_entry(tic, &log->l_reserveq, t_queue) {
- if (tic->t_flags & XLOG_TIC_PERM_RESERV)
- need_bytes = tic->t_unit_res*tic->t_cnt;
- else
- need_bytes = tic->t_unit_res;
- if (free_bytes < need_bytes && !opportunistic)
- break;
- opportunistic = false;
- free_bytes -= need_bytes;
- trace_xfs_log_grant_wake_up(log, tic);
- wake_up(&tic->t_wait);
- }
+ xlog_reserveq_wake(log, &free_bytes);
spin_unlock(&log->l_grant_reserve_lock);
}
}
@@ -2748,7 +2723,7 @@ xlog_ungrant_log_space(xlog_t *log,
trace_xfs_log_ungrant_exit(log, ticket);
- xfs_log_space_wake(log->l_mp, false);
+ xfs_log_space_wake(log->l_mp);
}
/*
Index: xfs/fs/xfs/xfs_log.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log.h 2011-11-29 08:42:35.255496597 +0100
+++ xfs/fs/xfs/xfs_log.h 2011-11-29 08:42:52.682068856 +0100
@@ -161,8 +161,7 @@ int xfs_log_mount(struct xfs_mount *mp
int num_bblocks);
int xfs_log_mount_finish(struct xfs_mount *mp);
xfs_lsn_t xlog_assign_tail_lsn(struct xfs_mount *mp);
-void xfs_log_space_wake(struct xfs_mount *mp,
- bool opportunistic);
+void xfs_log_space_wake(struct xfs_mount *mp);
int xfs_log_notify(struct xfs_mount *mp,
struct xlog_in_core *iclog,
xfs_log_callback_t *callback_entry);
Index: xfs/fs/xfs/xfs_trans_ail.c
===================================================================
--- xfs.orig/fs/xfs/xfs_trans_ail.c 2011-11-29 08:42:35.275496489 +0100
+++ xfs/fs/xfs/xfs_trans_ail.c 2011-11-29 08:42:44.325447461 +0100
@@ -671,7 +671,7 @@ xfs_trans_ail_update_bulk(
if (mlip_changed && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
xlog_assign_tail_lsn(ailp->xa_mount);
- xfs_log_space_wake(ailp->xa_mount, false);
+ xfs_log_space_wake(ailp->xa_mount);
}
}
@@ -733,7 +733,7 @@ xfs_trans_ail_delete_bulk(
if (mlip_changed && !XFS_FORCED_SHUTDOWN(ailp->xa_mount)) {
xlog_assign_tail_lsn(ailp->xa_mount);
- xfs_log_space_wake(ailp->xa_mount, false);
+ xfs_log_space_wake(ailp->xa_mount);
}
}
* [PATCH 05/12] xfs: remove log space waitqueues
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (3 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 04/12] xfs: cleanup xfs_log_space_wake Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 06/12] xfs: add the xlog_grant_head structure Christoph Hellwig
` (7 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-log-wakeup-processes --]
[-- Type: text/plain, Size: 3697 bytes --]
The tic->t_wait waitqueues can never have more than a single waiter
on them, so we can easily replace them with a task_struct pointer
and wake_up_process.
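[Editorial note: the resulting sleep/wake handshake can be sketched in kernel-style pseudocode, condensed from the diff in this patch.]

```
/* waiter (cf. xlog_reserveq_wait), holding the grant lock: */
__set_current_state(TASK_UNINTERRUPTIBLE);  /* mark sleeping first ... */
spin_unlock(&log->l_grant_reserve_lock);    /* ... then drop the lock  */
schedule();                                 /* park until woken        */

/* waker, holding the same grant lock: */
wake_up_process(tic->t_task);               /* make the task runnable  */
```

Because the task state is set before the lock is dropped, a wake_up_process() that lands between the unlock and schedule() simply leaves the task runnable, so schedule() returns immediately and no wakeup is lost.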
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 24 +++++++++++++++---------
fs/xfs/xfs_log_priv.h | 2 +-
2 files changed, 16 insertions(+), 10 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-12-11 20:15:38.095176497 +0100
+++ xfs/fs/xfs/xfs_log.c 2011-12-11 21:28:53.101366703 +0100
@@ -169,7 +169,7 @@ xlog_reserveq_wake(
*free_bytes -= need_bytes;
trace_xfs_log_grant_wake_up(log, tic);
- wake_up(&tic->t_wait);
+ wake_up_process(tic->t_task);
}
return true;
@@ -193,7 +193,7 @@ xlog_writeq_wake(
*free_bytes -= need_bytes;
trace_xfs_log_regrant_write_wake_up(log, tic);
- wake_up(&tic->t_wait);
+ wake_up_process(tic->t_task);
}
return true;
@@ -212,10 +212,13 @@ xlog_reserveq_wait(
goto shutdown;
xlog_grant_push_ail(log, need_bytes);
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ spin_unlock(&log->l_grant_reserve_lock);
+
XFS_STATS_INC(xs_sleep_logspace);
- trace_xfs_log_grant_sleep(log, tic);
- xlog_wait(&tic->t_wait, &log->l_grant_reserve_lock);
+ trace_xfs_log_grant_sleep(log, tic);
+ schedule();
trace_xfs_log_grant_wake(log, tic);
spin_lock(&log->l_grant_reserve_lock);
@@ -243,10 +246,13 @@ xlog_writeq_wait(
goto shutdown;
xlog_grant_push_ail(log, need_bytes);
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ spin_unlock(&log->l_grant_write_lock);
+
XFS_STATS_INC(xs_sleep_logspace);
- trace_xfs_log_regrant_write_sleep(log, tic);
- xlog_wait(&tic->t_wait, &log->l_grant_write_lock);
+ trace_xfs_log_regrant_write_sleep(log, tic);
+ schedule();
trace_xfs_log_regrant_write_wake(log, tic);
spin_lock(&log->l_grant_write_lock);
@@ -3276,6 +3282,7 @@ xlog_ticket_alloc(
}
atomic_set(&tic->t_ref, 1);
+ tic->t_task = current;
INIT_LIST_HEAD(&tic->t_queue);
tic->t_unit_res = unit_bytes;
tic->t_curr_res = unit_bytes;
@@ -3287,7 +3294,6 @@ xlog_ticket_alloc(
tic->t_trans_type = 0;
if (xflags & XFS_LOG_PERM_RESERV)
tic->t_flags |= XLOG_TIC_PERM_RESERV;
- init_waitqueue_head(&tic->t_wait);
xlog_tic_reset_res(tic);
@@ -3615,12 +3621,12 @@ xfs_log_force_umount(
*/
spin_lock(&log->l_grant_reserve_lock);
list_for_each_entry(tic, &log->l_reserveq, t_queue)
- wake_up(&tic->t_wait);
+ wake_up_process(tic->t_task);
spin_unlock(&log->l_grant_reserve_lock);
spin_lock(&log->l_grant_write_lock);
list_for_each_entry(tic, &log->l_writeq, t_queue)
- wake_up(&tic->t_wait);
+ wake_up_process(tic->t_task);
spin_unlock(&log->l_grant_write_lock);
if (!(log->l_iclog->ic_state & XLOG_STATE_IOERROR)) {
Index: xfs/fs/xfs/xfs_log_priv.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log_priv.h 2011-12-11 20:15:35.918521622 +0100
+++ xfs/fs/xfs/xfs_log_priv.h 2011-12-11 21:28:53.114699966 +0100
@@ -239,8 +239,8 @@ typedef struct xlog_res {
} xlog_res_t;
typedef struct xlog_ticket {
- wait_queue_head_t t_wait; /* ticket wait queue */
struct list_head t_queue; /* reserve/write queue */
+ struct task_struct *t_task; /* task that owns this ticket */
xlog_tid_t t_tid; /* transaction identifier : 4 */
atomic_t t_ref; /* ticket reference count : 4 */
int t_curr_res; /* current reservation in bytes : 4 */
* [PATCH 06/12] xfs: add the xlog_grant_head structure
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (4 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 05/12] xfs: remove log space waitqueues Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 07/12] xfs: add xlog_grant_head_init Christoph Hellwig
` (6 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-add-xlog_grant_head --]
[-- Type: text/plain, Size: 14979 bytes --]
Add a new data structure to allow sharing code between the log grant and
regrant code.
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 112 +++++++++++++++++++++++------------------------
fs/xfs/xfs_log_priv.h | 23 +++++----
fs/xfs/xfs_log_recover.c | 4 -
fs/xfs/xfs_trace.h | 8 +--
4 files changed, 74 insertions(+), 73 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2012-02-19 13:04:59.586627831 -0800
+++ xfs/fs/xfs/xfs_log.c 2012-02-19 13:05:22.483294435 -0800
@@ -158,7 +158,7 @@ xlog_reserveq_wake(
struct xlog_ticket *tic;
int need_bytes;
- list_for_each_entry(tic, &log->l_reserveq, t_queue) {
+ list_for_each_entry(tic, &log->l_reserve_head.waiters, t_queue) {
if (tic->t_flags & XLOG_TIC_PERM_RESERV)
need_bytes = tic->t_unit_res * tic->t_cnt;
else
@@ -183,7 +183,7 @@ xlog_writeq_wake(
struct xlog_ticket *tic;
int need_bytes;
- list_for_each_entry(tic, &log->l_writeq, t_queue) {
+ list_for_each_entry(tic, &log->l_write_head.waiters, t_queue) {
ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV);
need_bytes = tic->t_unit_res;
@@ -205,7 +205,7 @@ xlog_reserveq_wait(
struct xlog_ticket *tic,
int need_bytes)
{
- list_add_tail(&tic->t_queue, &log->l_reserveq);
+ list_add_tail(&tic->t_queue, &log->l_reserve_head.waiters);
do {
if (XLOG_FORCED_SHUTDOWN(log))
@@ -213,7 +213,7 @@ xlog_reserveq_wait(
xlog_grant_push_ail(log, need_bytes);
__set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&log->l_grant_reserve_lock);
+ spin_unlock(&log->l_reserve_head.lock);
XFS_STATS_INC(xs_sleep_logspace);
@@ -221,10 +221,10 @@ xlog_reserveq_wait(
schedule();
trace_xfs_log_grant_wake(log, tic);
- spin_lock(&log->l_grant_reserve_lock);
+ spin_lock(&log->l_reserve_head.lock);
if (XLOG_FORCED_SHUTDOWN(log))
goto shutdown;
- } while (xlog_space_left(log, &log->l_grant_reserve_head) < need_bytes);
+ } while (xlog_space_left(log, &log->l_reserve_head.grant) < need_bytes);
list_del_init(&tic->t_queue);
return 0;
@@ -239,7 +239,7 @@ xlog_writeq_wait(
struct xlog_ticket *tic,
int need_bytes)
{
- list_add_tail(&tic->t_queue, &log->l_writeq);
+ list_add_tail(&tic->t_queue, &log->l_write_head.waiters);
do {
if (XLOG_FORCED_SHUTDOWN(log))
@@ -247,7 +247,7 @@ xlog_writeq_wait(
xlog_grant_push_ail(log, need_bytes);
__set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&log->l_grant_write_lock);
+ spin_unlock(&log->l_write_head.lock);
XFS_STATS_INC(xs_sleep_logspace);
@@ -255,10 +255,10 @@ xlog_writeq_wait(
schedule();
trace_xfs_log_regrant_write_wake(log, tic);
- spin_lock(&log->l_grant_write_lock);
+ spin_lock(&log->l_write_head.lock);
if (XLOG_FORCED_SHUTDOWN(log))
goto shutdown;
- } while (xlog_space_left(log, &log->l_grant_write_head) < need_bytes);
+ } while (xlog_space_left(log, &log->l_write_head.grant) < need_bytes);
list_del_init(&tic->t_queue);
return 0;
@@ -779,22 +779,22 @@ xfs_log_space_wake(
if (XLOG_FORCED_SHUTDOWN(log))
return;
- if (!list_empty_careful(&log->l_writeq)) {
+ if (!list_empty_careful(&log->l_write_head.waiters)) {
ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
- spin_lock(&log->l_grant_write_lock);
- free_bytes = xlog_space_left(log, &log->l_grant_write_head);
+ spin_lock(&log->l_write_head.lock);
+ free_bytes = xlog_space_left(log, &log->l_write_head.grant);
xlog_writeq_wake(log, &free_bytes);
- spin_unlock(&log->l_grant_write_lock);
+ spin_unlock(&log->l_write_head.lock);
}
- if (!list_empty_careful(&log->l_reserveq)) {
+ if (!list_empty_careful(&log->l_reserve_head.waiters)) {
ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
- spin_lock(&log->l_grant_reserve_lock);
- free_bytes = xlog_space_left(log, &log->l_grant_reserve_head);
+ spin_lock(&log->l_reserve_head.lock);
+ free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
xlog_reserveq_wake(log, &free_bytes);
- spin_unlock(&log->l_grant_reserve_lock);
+ spin_unlock(&log->l_reserve_head.lock);
}
}
@@ -1070,12 +1070,12 @@ xlog_alloc_log(xfs_mount_t *mp,
xlog_assign_atomic_lsn(&log->l_tail_lsn, 1, 0);
xlog_assign_atomic_lsn(&log->l_last_sync_lsn, 1, 0);
log->l_curr_cycle = 1; /* 0 is bad since this is initial value */
- xlog_assign_grant_head(&log->l_grant_reserve_head, 1, 0);
- xlog_assign_grant_head(&log->l_grant_write_head, 1, 0);
- INIT_LIST_HEAD(&log->l_reserveq);
- INIT_LIST_HEAD(&log->l_writeq);
- spin_lock_init(&log->l_grant_reserve_lock);
- spin_lock_init(&log->l_grant_write_lock);
+ xlog_assign_grant_head(&log->l_reserve_head.grant, 1, 0);
+ xlog_assign_grant_head(&log->l_write_head.grant, 1, 0);
+ INIT_LIST_HEAD(&log->l_reserve_head.waiters);
+ INIT_LIST_HEAD(&log->l_write_head.waiters);
+ spin_lock_init(&log->l_reserve_head.lock);
+ spin_lock_init(&log->l_write_head.lock);
error = EFSCORRUPTED;
if (xfs_sb_version_hassector(&mp->m_sb)) {
@@ -1250,7 +1250,7 @@ xlog_grant_push_ail(
ASSERT(BTOBB(need_bytes) < log->l_logBBsize);
- free_bytes = xlog_space_left(log, &log->l_grant_reserve_head);
+ free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
free_blocks = BTOBBT(free_bytes);
/*
@@ -1382,8 +1382,8 @@ xlog_sync(xlog_t *log,
roundoff < BBTOB(1)));
/* move grant heads by roundoff in sync */
- xlog_grant_add_space(log, &log->l_grant_reserve_head, roundoff);
- xlog_grant_add_space(log, &log->l_grant_write_head, roundoff);
+ xlog_grant_add_space(log, &log->l_reserve_head.grant, roundoff);
+ xlog_grant_add_space(log, &log->l_write_head.grant, roundoff);
/* put cycle number in every block */
xlog_pack_data(log, iclog, roundoff);
@@ -2547,8 +2547,8 @@ restart:
* path. Hence any lock will be globally hot if we take it unconditionally on
* every pass.
*
- * As tickets are only ever moved on and off the reserveq under the
- * l_grant_reserve_lock, we only need to take that lock if we are going to add
+ * As tickets are only ever moved on and off the l_reserve.waiters under the
+ * l_reserve.lock, we only need to take that lock if we are going to add
* the ticket to the queue and sleep. We can avoid taking the lock if the ticket
* was never added to the reserveq because the t_queue list head will be empty
* and we hold the only reference to it so it can safely be checked unlocked.
@@ -2574,23 +2574,23 @@ xlog_grant_log_space(
need_bytes = tic->t_unit_res;
if (tic->t_flags & XFS_LOG_PERM_RESERV)
need_bytes *= tic->t_ocnt;
- free_bytes = xlog_space_left(log, &log->l_grant_reserve_head);
- if (!list_empty_careful(&log->l_reserveq)) {
- spin_lock(&log->l_grant_reserve_lock);
+ free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
+ if (!list_empty_careful(&log->l_reserve_head.waiters)) {
+ spin_lock(&log->l_reserve_head.lock);
if (!xlog_reserveq_wake(log, &free_bytes) ||
free_bytes < need_bytes)
error = xlog_reserveq_wait(log, tic, need_bytes);
- spin_unlock(&log->l_grant_reserve_lock);
+ spin_unlock(&log->l_reserve_head.lock);
} else if (free_bytes < need_bytes) {
- spin_lock(&log->l_grant_reserve_lock);
+ spin_lock(&log->l_reserve_head.lock);
error = xlog_reserveq_wait(log, tic, need_bytes);
- spin_unlock(&log->l_grant_reserve_lock);
+ spin_unlock(&log->l_reserve_head.lock);
}
if (error)
return error;
- xlog_grant_add_space(log, &log->l_grant_reserve_head, need_bytes);
- xlog_grant_add_space(log, &log->l_grant_write_head, need_bytes);
+ xlog_grant_add_space(log, &log->l_reserve_head.grant, need_bytes);
+ xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
trace_xfs_log_grant_exit(log, tic);
xlog_verify_grant_tail(log);
return 0;
@@ -2627,23 +2627,23 @@ xlog_regrant_write_log_space(
* otherwise try to get some space for this transaction.
*/
need_bytes = tic->t_unit_res;
- free_bytes = xlog_space_left(log, &log->l_grant_write_head);
- if (!list_empty_careful(&log->l_writeq)) {
- spin_lock(&log->l_grant_write_lock);
+ free_bytes = xlog_space_left(log, &log->l_write_head.grant);
+ if (!list_empty_careful(&log->l_write_head.waiters)) {
+ spin_lock(&log->l_write_head.lock);
if (!xlog_writeq_wake(log, &free_bytes) ||
free_bytes < need_bytes)
error = xlog_writeq_wait(log, tic, need_bytes);
- spin_unlock(&log->l_grant_write_lock);
+ spin_unlock(&log->l_write_head.lock);
} else if (free_bytes < need_bytes) {
- spin_lock(&log->l_grant_write_lock);
+ spin_lock(&log->l_write_head.lock);
error = xlog_writeq_wait(log, tic, need_bytes);
- spin_unlock(&log->l_grant_write_lock);
+ spin_unlock(&log->l_write_head.lock);
}
if (error)
return error;
- xlog_grant_add_space(log, &log->l_grant_write_head, need_bytes);
+ xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
trace_xfs_log_regrant_write_exit(log, tic);
xlog_verify_grant_tail(log);
return 0;
@@ -2665,9 +2665,9 @@ xlog_regrant_reserve_log_space(xlog_t
if (ticket->t_cnt > 0)
ticket->t_cnt--;
- xlog_grant_sub_space(log, &log->l_grant_reserve_head,
+ xlog_grant_sub_space(log, &log->l_reserve_head.grant,
ticket->t_curr_res);
- xlog_grant_sub_space(log, &log->l_grant_write_head,
+ xlog_grant_sub_space(log, &log->l_write_head.grant,
ticket->t_curr_res);
ticket->t_curr_res = ticket->t_unit_res;
xlog_tic_reset_res(ticket);
@@ -2678,7 +2678,7 @@ xlog_regrant_reserve_log_space(xlog_t
if (ticket->t_cnt > 0)
return;
- xlog_grant_add_space(log, &log->l_grant_reserve_head,
+ xlog_grant_add_space(log, &log->l_reserve_head.grant,
ticket->t_unit_res);
trace_xfs_log_regrant_reserve_exit(log, ticket);
@@ -2724,8 +2724,8 @@ xlog_ungrant_log_space(xlog_t *log,
bytes += ticket->t_unit_res*ticket->t_cnt;
}
- xlog_grant_sub_space(log, &log->l_grant_reserve_head, bytes);
- xlog_grant_sub_space(log, &log->l_grant_write_head, bytes);
+ xlog_grant_sub_space(log, &log->l_reserve_head.grant, bytes);
+ xlog_grant_sub_space(log, &log->l_write_head.grant, bytes);
trace_xfs_log_ungrant_exit(log, ticket);
@@ -3349,7 +3349,7 @@ xlog_verify_grant_tail(
int tail_cycle, tail_blocks;
int cycle, space;
- xlog_crack_grant_head(&log->l_grant_write_head, &cycle, &space);
+ xlog_crack_grant_head(&log->l_write_head.grant, &cycle, &space);
xlog_crack_atomic_lsn(&log->l_tail_lsn, &tail_cycle, &tail_blocks);
if (tail_cycle != cycle) {
if (cycle - 1 != tail_cycle &&
@@ -3619,15 +3619,15 @@ xfs_log_force_umount(
* we don't enqueue anything once the SHUTDOWN flag is set, and this
* action is protected by the grant locks.
*/
- spin_lock(&log->l_grant_reserve_lock);
- list_for_each_entry(tic, &log->l_reserveq, t_queue)
+ spin_lock(&log->l_reserve_head.lock);
+ list_for_each_entry(tic, &log->l_reserve_head.waiters, t_queue)
wake_up_process(tic->t_task);
- spin_unlock(&log->l_grant_reserve_lock);
+ spin_unlock(&log->l_reserve_head.lock);
- spin_lock(&log->l_grant_write_lock);
- list_for_each_entry(tic, &log->l_writeq, t_queue)
+ spin_lock(&log->l_write_head.lock);
+ list_for_each_entry(tic, &log->l_write_head.waiters, t_queue)
wake_up_process(tic->t_task);
- spin_unlock(&log->l_grant_write_lock);
+ spin_unlock(&log->l_write_head.lock);
if (!(log->l_iclog->ic_state & XLOG_STATE_IOERROR)) {
ASSERT(!logerror);
Index: xfs/fs/xfs/xfs_log_priv.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log_priv.h 2012-02-19 13:04:59.586627831 -0800
+++ xfs/fs/xfs/xfs_log_priv.h 2012-02-19 13:05:22.486627769 -0800
@@ -470,6 +470,16 @@ struct xfs_cil {
#define XLOG_CIL_HARD_SPACE_LIMIT(log) (3 * (log->l_logsize >> 4))
/*
+ * ticket grant locks, queues and accounting have their own cachelines
+ * as these are quite hot and can be operated on concurrently.
+ */
+struct xlog_grant_head {
+ spinlock_t lock ____cacheline_aligned_in_smp;
+ struct list_head waiters;
+ atomic64_t grant;
+};
+
+/*
* The reservation head lsn is not made up of a cycle number and block number.
* Instead, it uses a cycle number and byte number. Logs don't expect to
* overflow 31 bits worth of byte offset, so using a byte number will mean
@@ -520,17 +530,8 @@ typedef struct log {
/* lsn of 1st LR with unflushed * buffers */
atomic64_t l_tail_lsn ____cacheline_aligned_in_smp;
- /*
- * ticket grant locks, queues and accounting have their own cachlines
- * as these are quite hot and can be operated on concurrently.
- */
- spinlock_t l_grant_reserve_lock ____cacheline_aligned_in_smp;
- struct list_head l_reserveq;
- atomic64_t l_grant_reserve_head;
-
- spinlock_t l_grant_write_lock ____cacheline_aligned_in_smp;
- struct list_head l_writeq;
- atomic64_t l_grant_write_head;
+ struct xlog_grant_head l_reserve_head;
+ struct xlog_grant_head l_write_head;
/* The following field are used for debugging; need to hold icloglock */
#ifdef DEBUG
Index: xfs/fs/xfs/xfs_log_recover.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log_recover.c 2012-02-19 12:58:47.169962173 -0800
+++ xfs/fs/xfs/xfs_log_recover.c 2012-02-19 13:05:22.486627769 -0800
@@ -965,9 +965,9 @@ xlog_find_tail(
log->l_curr_cycle++;
atomic64_set(&log->l_tail_lsn, be64_to_cpu(rhead->h_tail_lsn));
atomic64_set(&log->l_last_sync_lsn, be64_to_cpu(rhead->h_lsn));
- xlog_assign_grant_head(&log->l_grant_reserve_head, log->l_curr_cycle,
+ xlog_assign_grant_head(&log->l_reserve_head.grant, log->l_curr_cycle,
BBTOB(log->l_curr_block));
- xlog_assign_grant_head(&log->l_grant_write_head, log->l_curr_cycle,
+ xlog_assign_grant_head(&log->l_write_head.grant, log->l_curr_cycle,
BBTOB(log->l_curr_block));
/*
Index: xfs/fs/xfs/xfs_trace.h
===================================================================
--- xfs.orig/fs/xfs/xfs_trace.h 2012-02-19 12:58:48.593295502 -0800
+++ xfs/fs/xfs/xfs_trace.h 2012-02-19 13:05:22.489961102 -0800
@@ -792,12 +792,12 @@ DECLARE_EVENT_CLASS(xfs_loggrant_class,
__entry->curr_res = tic->t_curr_res;
__entry->unit_res = tic->t_unit_res;
__entry->flags = tic->t_flags;
- __entry->reserveq = list_empty(&log->l_reserveq);
- __entry->writeq = list_empty(&log->l_writeq);
- xlog_crack_grant_head(&log->l_grant_reserve_head,
+ __entry->reserveq = list_empty(&log->l_reserve_head.waiters);
+ __entry->writeq = list_empty(&log->l_write_head.waiters);
+ xlog_crack_grant_head(&log->l_reserve_head.grant,
&__entry->grant_reserve_cycle,
&__entry->grant_reserve_bytes);
- xlog_crack_grant_head(&log->l_grant_write_head,
+ xlog_crack_grant_head(&log->l_write_head.grant,
&__entry->grant_write_cycle,
&__entry->grant_write_bytes);
__entry->curr_cycle = log->l_curr_cycle;
* [PATCH 07/12] xfs: add xlog_grant_head_init
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (5 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 06/12] xfs: add the xlog_grant_head structure Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 08/12] xfs: add xlog_grant_head_wake_all Christoph Hellwig
` (5 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-add-xlog_grant_head_init --]
[-- Type: text/plain, Size: 1588 bytes --]
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-12-11 21:24:39.502740565 +0100
+++ xfs/fs/xfs/xfs_log.c 2011-12-11 21:28:15.334904636 +0100
@@ -150,6 +150,15 @@ xlog_grant_add_space(
} while (head_val != old);
}
+STATIC void
+xlog_grant_head_init(
+ struct xlog_grant_head *head)
+{
+ xlog_assign_grant_head(&head->grant, 1, 0);
+ INIT_LIST_HEAD(&head->waiters);
+ spin_lock_init(&head->lock);
+}
+
STATIC bool
xlog_reserveq_wake(
struct log *log,
@@ -1070,12 +1079,9 @@ xlog_alloc_log(xfs_mount_t *mp,
xlog_assign_atomic_lsn(&log->l_tail_lsn, 1, 0);
xlog_assign_atomic_lsn(&log->l_last_sync_lsn, 1, 0);
log->l_curr_cycle = 1; /* 0 is bad since this is initial value */
- xlog_assign_grant_head(&log->l_reserve_head.grant, 1, 0);
- xlog_assign_grant_head(&log->l_write_head.grant, 1, 0);
- INIT_LIST_HEAD(&log->l_reserve_head.waiters);
- INIT_LIST_HEAD(&log->l_write_head.waiters);
- spin_lock_init(&log->l_reserve_head.lock);
- spin_lock_init(&log->l_write_head.lock);
+
+ xlog_grant_head_init(&log->l_reserve_head);
+ xlog_grant_head_init(&log->l_write_head);
error = EFSCORRUPTED;
if (xfs_sb_version_hassector(&mp->m_sb)) {
* [PATCH 08/12] xfs: add xlog_grant_head_wake_all
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (6 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 07/12] xfs: add xlog_grant_head_init Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 09/12] xfs: share code for grant head waiting Christoph Hellwig
` (4 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-add-xlog_grant_head_wake_all --]
[-- Type: text/plain, Size: 1809 bytes --]
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-12-11 21:28:56.364682358 +0100
+++ xfs/fs/xfs/xfs_log.c 2011-12-11 21:28:57.128011556 +0100
@@ -159,6 +159,18 @@ xlog_grant_head_init(
spin_lock_init(&head->lock);
}
+STATIC void
+xlog_grant_head_wake_all(
+ struct xlog_grant_head *head)
+{
+ struct xlog_ticket *tic;
+
+ spin_lock(&head->lock);
+ list_for_each_entry(tic, &head->waiters, t_queue)
+ wake_up_process(tic->t_task);
+ spin_unlock(&head->lock);
+}
+
STATIC bool
xlog_reserveq_wake(
struct log *log,
@@ -3557,7 +3569,6 @@ xfs_log_force_umount(
struct xfs_mount *mp,
int logerror)
{
- xlog_ticket_t *tic;
xlog_t *log;
int retval;
@@ -3625,15 +3636,8 @@ xfs_log_force_umount(
* we don't enqueue anything once the SHUTDOWN flag is set, and this
* action is protected by the grant locks.
*/
- spin_lock(&log->l_reserve_head.lock);
- list_for_each_entry(tic, &log->l_reserve_head.waiters, t_queue)
- wake_up_process(tic->t_task);
- spin_unlock(&log->l_reserve_head.lock);
-
- spin_lock(&log->l_write_head.lock);
- list_for_each_entry(tic, &log->l_write_head.waiters, t_queue)
- wake_up_process(tic->t_task);
- spin_unlock(&log->l_write_head.lock);
+ xlog_grant_head_wake_all(&log->l_reserve_head);
+ xlog_grant_head_wake_all(&log->l_write_head);
if (!(log->l_iclog->ic_state & XLOG_STATE_IOERROR)) {
ASSERT(!logerror);
* [PATCH 09/12] xfs: share code for grant head waiting
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (7 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 08/12] xfs: add xlog_grant_head_wake_all Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 10/12] xfs: share code for grant head wakeups Christoph Hellwig
` (3 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-factor-xlog_grant_head_wait --]
[-- Type: text/plain, Size: 4593 bytes --]
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 63 +++++++++++++++--------------------------------------
fs/xfs/xfs_trace.h | 2 -
2 files changed, 18 insertions(+), 47 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2012-02-19 13:06:14.093294296 -0800
+++ xfs/fs/xfs/xfs_log.c 2012-02-19 13:06:35.086627572 -0800
@@ -221,12 +221,13 @@ xlog_writeq_wake(
}
STATIC int
-xlog_reserveq_wait(
+xlog_grant_head_wait(
struct log *log,
+ struct xlog_grant_head *head,
struct xlog_ticket *tic,
int need_bytes)
{
- list_add_tail(&tic->t_queue, &log->l_reserve_head.waiters);
+ list_add_tail(&tic->t_queue, &head->waiters);
do {
if (XLOG_FORCED_SHUTDOWN(log))
@@ -234,7 +235,7 @@ xlog_reserveq_wait(
xlog_grant_push_ail(log, need_bytes);
__set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&log->l_reserve_head.lock);
+ spin_unlock(&head->lock);
XFS_STATS_INC(xs_sleep_logspace);
@@ -242,44 +243,10 @@ xlog_reserveq_wait(
schedule();
trace_xfs_log_grant_wake(log, tic);
- spin_lock(&log->l_reserve_head.lock);
+ spin_lock(&head->lock);
if (XLOG_FORCED_SHUTDOWN(log))
goto shutdown;
- } while (xlog_space_left(log, &log->l_reserve_head.grant) < need_bytes);
-
- list_del_init(&tic->t_queue);
- return 0;
-shutdown:
- list_del_init(&tic->t_queue);
- return XFS_ERROR(EIO);
-}
-
-STATIC int
-xlog_writeq_wait(
- struct log *log,
- struct xlog_ticket *tic,
- int need_bytes)
-{
- list_add_tail(&tic->t_queue, &log->l_write_head.waiters);
-
- do {
- if (XLOG_FORCED_SHUTDOWN(log))
- goto shutdown;
- xlog_grant_push_ail(log, need_bytes);
-
- __set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&log->l_write_head.lock);
-
- XFS_STATS_INC(xs_sleep_logspace);
-
- trace_xfs_log_regrant_write_sleep(log, tic);
- schedule();
- trace_xfs_log_regrant_write_wake(log, tic);
-
- spin_lock(&log->l_write_head.lock);
- if (XLOG_FORCED_SHUTDOWN(log))
- goto shutdown;
- } while (xlog_space_left(log, &log->l_write_head.grant) < need_bytes);
+ } while (xlog_space_left(log, &head->grant) < need_bytes);
list_del_init(&tic->t_queue);
return 0;
@@ -2596,12 +2563,15 @@ xlog_grant_log_space(
if (!list_empty_careful(&log->l_reserve_head.waiters)) {
spin_lock(&log->l_reserve_head.lock);
if (!xlog_reserveq_wake(log, &free_bytes) ||
- free_bytes < need_bytes)
- error = xlog_reserveq_wait(log, tic, need_bytes);
+ free_bytes < need_bytes) {
+ error = xlog_grant_head_wait(log, &log->l_reserve_head,
+ tic, need_bytes);
+ }
spin_unlock(&log->l_reserve_head.lock);
} else if (free_bytes < need_bytes) {
spin_lock(&log->l_reserve_head.lock);
- error = xlog_reserveq_wait(log, tic, need_bytes);
+ error = xlog_grant_head_wait(log, &log->l_reserve_head, tic,
+ need_bytes);
spin_unlock(&log->l_reserve_head.lock);
}
if (error)
@@ -2649,12 +2619,15 @@ xlog_regrant_write_log_space(
if (!list_empty_careful(&log->l_write_head.waiters)) {
spin_lock(&log->l_write_head.lock);
if (!xlog_writeq_wake(log, &free_bytes) ||
- free_bytes < need_bytes)
- error = xlog_writeq_wait(log, tic, need_bytes);
+ free_bytes < need_bytes) {
+ error = xlog_grant_head_wait(log, &log->l_write_head,
+ tic, need_bytes);
+ }
spin_unlock(&log->l_write_head.lock);
} else if (free_bytes < need_bytes) {
spin_lock(&log->l_write_head.lock);
- error = xlog_writeq_wait(log, tic, need_bytes);
+ error = xlog_grant_head_wait(log, &log->l_write_head, tic,
+ need_bytes);
spin_unlock(&log->l_write_head.lock);
}
Index: xfs/fs/xfs/xfs_trace.h
===================================================================
--- xfs.orig/fs/xfs/xfs_trace.h 2012-02-19 13:05:22.489961102 -0800
+++ xfs/fs/xfs/xfs_trace.h 2012-02-19 13:06:35.089960905 -0800
@@ -847,8 +847,6 @@ DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_enter);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_exit);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_error);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_sleep);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake_up);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_enter);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_exit);
* [PATCH 10/12] xfs: share code for grant head wakeups
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (8 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 09/12] xfs: share code for grant head waiting Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 11/12] xfs: share code for grant head availability checks Christoph Hellwig
` (2 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-factor-xlog_grant_head_wake --]
[-- Type: text/plain, Size: 4381 bytes --]
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 50 ++++++++++++++++++++------------------------------
fs/xfs/xfs_trace.h | 1 -
2 files changed, 20 insertions(+), 31 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2012-02-19 13:06:35.086627572 -0800
+++ xfs/fs/xfs/xfs_log.c 2012-02-19 13:07:14.849960798 -0800
@@ -171,49 +171,39 @@ xlog_grant_head_wake_all(
spin_unlock(&head->lock);
}
-STATIC bool
-xlog_reserveq_wake(
+static inline int
+xlog_ticket_reservation(
struct log *log,
- int *free_bytes)
+ struct xlog_grant_head *head,
+ struct xlog_ticket *tic)
{
- struct xlog_ticket *tic;
- int need_bytes;
-
- list_for_each_entry(tic, &log->l_reserve_head.waiters, t_queue) {
+ if (head == &log->l_write_head) {
+ ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV);
+ return tic->t_unit_res;
+ } else {
if (tic->t_flags & XLOG_TIC_PERM_RESERV)
- need_bytes = tic->t_unit_res * tic->t_cnt;
+ return tic->t_unit_res * tic->t_cnt;
else
- need_bytes = tic->t_unit_res;
-
- if (*free_bytes < need_bytes)
- return false;
- *free_bytes -= need_bytes;
-
- trace_xfs_log_grant_wake_up(log, tic);
- wake_up_process(tic->t_task);
+ return tic->t_unit_res;
}
-
- return true;
}
STATIC bool
-xlog_writeq_wake(
+xlog_grant_head_wake(
struct log *log,
+ struct xlog_grant_head *head,
int *free_bytes)
{
struct xlog_ticket *tic;
int need_bytes;
- list_for_each_entry(tic, &log->l_write_head.waiters, t_queue) {
- ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV);
-
- need_bytes = tic->t_unit_res;
-
+ list_for_each_entry(tic, &head->waiters, t_queue) {
+ need_bytes = xlog_ticket_reservation(log, head, tic);
if (*free_bytes < need_bytes)
return false;
- *free_bytes -= need_bytes;
- trace_xfs_log_regrant_write_wake_up(log, tic);
+ *free_bytes -= need_bytes;
+ trace_xfs_log_grant_wake_up(log, tic);
wake_up_process(tic->t_task);
}
@@ -772,7 +762,7 @@ xfs_log_space_wake(
spin_lock(&log->l_write_head.lock);
free_bytes = xlog_space_left(log, &log->l_write_head.grant);
- xlog_writeq_wake(log, &free_bytes);
+ xlog_grant_head_wake(log, &log->l_write_head, &free_bytes);
spin_unlock(&log->l_write_head.lock);
}
@@ -781,7 +771,7 @@ xfs_log_space_wake(
spin_lock(&log->l_reserve_head.lock);
free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
- xlog_reserveq_wake(log, &free_bytes);
+ xlog_grant_head_wake(log, &log->l_reserve_head, &free_bytes);
spin_unlock(&log->l_reserve_head.lock);
}
}
@@ -2562,7 +2552,7 @@ xlog_grant_log_space(
free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
if (!list_empty_careful(&log->l_reserve_head.waiters)) {
spin_lock(&log->l_reserve_head.lock);
- if (!xlog_reserveq_wake(log, &free_bytes) ||
+ if (!xlog_grant_head_wake(log, &log->l_reserve_head, &free_bytes) ||
free_bytes < need_bytes) {
error = xlog_grant_head_wait(log, &log->l_reserve_head,
tic, need_bytes);
@@ -2618,7 +2608,7 @@ xlog_regrant_write_log_space(
free_bytes = xlog_space_left(log, &log->l_write_head.grant);
if (!list_empty_careful(&log->l_write_head.waiters)) {
spin_lock(&log->l_write_head.lock);
- if (!xlog_writeq_wake(log, &free_bytes) ||
+ if (!xlog_grant_head_wake(log, &log->l_write_head, &free_bytes) ||
free_bytes < need_bytes) {
error = xlog_grant_head_wait(log, &log->l_write_head,
tic, need_bytes);
Index: xfs/fs/xfs/xfs_trace.h
===================================================================
--- xfs.orig/fs/xfs/xfs_trace.h 2012-02-19 13:06:35.089960905 -0800
+++ xfs/fs/xfs/xfs_trace.h 2012-02-19 13:07:14.853294131 -0800
@@ -847,7 +847,6 @@ DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_enter);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_exit);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_error);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake_up);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_enter);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_exit);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_sub);
* [PATCH 11/12] xfs: share code for grant head availability checks
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (9 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 10/12] xfs: share code for grant head wakeups Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-20 2:31 ` [PATCH 12/12] xfs: split and cleanup xfs_log_reserve Christoph Hellwig
2012-02-21 4:56 ` [PATCH 00/12] log grant code cleanups V2 Dave Chinner
12 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-factor-xlog_grant_head_check --]
[-- Type: text/plain, Size: 6578 bytes --]
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 133 ++++++++++++++++++++++++-------------------------------
1 file changed, 60 insertions(+), 73 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2012-02-19 13:07:14.849960798 -0800
+++ xfs/fs/xfs/xfs_log.c 2012-02-19 13:11:21.859960129 -0800
@@ -245,6 +245,60 @@ shutdown:
return XFS_ERROR(EIO);
}
+/*
+ * Atomically get the log space required for a log ticket.
+ *
+ * Once a ticket gets put onto head->waiters, it will only return after the
+ * needed reservation is satisfied.
+ *
+ * This function is structured so that it has a lock free fast path. This is
+ * necessary because every new transaction reservation will come through this
+ * path. Hence any lock will be globally hot if we take it unconditionally on
+ * every pass.
+ *
+ * As tickets are only ever moved on and off head->waiters under head->lock, we
+ * only need to take that lock if we are going to add the ticket to the queue
+ * and sleep. We can avoid taking the lock if the ticket was never added to
+ * head->waiters because the t_queue list head will be empty and we hold the
+ * only reference to it so it can safely be checked unlocked.
+ */
+STATIC int
+xlog_grant_head_check(
+ struct log *log,
+ struct xlog_grant_head *head,
+ struct xlog_ticket *tic,
+ int *need_bytes)
+{
+ int free_bytes;
+ int error = 0;
+
+ ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
+
+ /*
+ * If there are other waiters on the queue then give them a chance at
+ * logspace before us. Wake up the first waiters, if we do not wake
+ * up all the waiters then go to sleep waiting for more free space,
+ * otherwise try to get some space for this transaction.
+ */
+ *need_bytes = xlog_ticket_reservation(log, head, tic);
+ free_bytes = xlog_space_left(log, &head->grant);
+ if (!list_empty_careful(&head->waiters)) {
+ spin_lock(&head->lock);
+ if (!xlog_grant_head_wake(log, head, &free_bytes) ||
+ free_bytes < *need_bytes) {
+ error = xlog_grant_head_wait(log, head, tic,
+ *need_bytes);
+ }
+ spin_unlock(&head->lock);
+ } else if (free_bytes < *need_bytes) {
+ spin_lock(&head->lock);
+ error = xlog_grant_head_wait(log, head, tic, *need_bytes);
+ spin_unlock(&head->lock);
+ }
+
+ return error;
+}
+
static void
xlog_tic_reset_res(xlog_ticket_t *tic)
{
@@ -2511,59 +2565,18 @@ restart:
return 0;
} /* xlog_state_get_iclog_space */
-/*
- * Atomically get the log space required for a log ticket.
- *
- * Once a ticket gets put onto the reserveq, it will only return after the
- * needed reservation is satisfied.
- *
- * This function is structured so that it has a lock free fast path. This is
- * necessary because every new transaction reservation will come through this
- * path. Hence any lock will be globally hot if we take it unconditionally on
- * every pass.
- *
- * As tickets are only ever moved on and off the l_reserve.waiters under the
- * l_reserve.lock, we only need to take that lock if we are going to add
- * the ticket to the queue and sleep. We can avoid taking the lock if the ticket
- * was never added to the reserveq because the t_queue list head will be empty
- * and we hold the only reference to it so it can safely be checked unlocked.
- */
STATIC int
xlog_grant_log_space(
struct log *log,
struct xlog_ticket *tic)
{
- int free_bytes, need_bytes;
+ int need_bytes;
int error = 0;
- ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
-
trace_xfs_log_grant_enter(log, tic);
- /*
- * If there are other waiters on the queue then give them a chance at
- * logspace before us. Wake up the first waiters, if we do not wake
- * up all the waiters then go to sleep waiting for more free space,
- * otherwise try to get some space for this transaction.
- */
- need_bytes = tic->t_unit_res;
- if (tic->t_flags & XFS_LOG_PERM_RESERV)
- need_bytes *= tic->t_ocnt;
- free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);
- if (!list_empty_careful(&log->l_reserve_head.waiters)) {
- spin_lock(&log->l_reserve_head.lock);
- if (!xlog_grant_head_wake(log, &log->l_reserve_head, &free_bytes) ||
- free_bytes < need_bytes) {
- error = xlog_grant_head_wait(log, &log->l_reserve_head,
- tic, need_bytes);
- }
- spin_unlock(&log->l_reserve_head.lock);
- } else if (free_bytes < need_bytes) {
- spin_lock(&log->l_reserve_head.lock);
- error = xlog_grant_head_wait(log, &log->l_reserve_head, tic,
- need_bytes);
- spin_unlock(&log->l_reserve_head.lock);
- }
+ error = xlog_grant_head_check(log, &log->l_reserve_head, tic,
+ &need_bytes);
if (error)
return error;
@@ -2576,16 +2589,13 @@ xlog_grant_log_space(
/*
* Replenish the byte reservation required by moving the grant write head.
- *
- * Similar to xlog_grant_log_space, the function is structured to have a lock
- * free fast path.
*/
STATIC int
xlog_regrant_write_log_space(
struct log *log,
struct xlog_ticket *tic)
{
- int free_bytes, need_bytes;
+ int need_bytes;
int error = 0;
tic->t_curr_res = tic->t_unit_res;
@@ -2594,33 +2604,10 @@ xlog_regrant_write_log_space(
if (tic->t_cnt > 0)
return 0;
- ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
-
trace_xfs_log_regrant_write_enter(log, tic);
- /*
- * If there are other waiters on the queue then give them a chance at
- * logspace before us. Wake up the first waiters, if we do not wake
- * up all the waiters then go to sleep waiting for more free space,
- * otherwise try to get some space for this transaction.
- */
- need_bytes = tic->t_unit_res;
- free_bytes = xlog_space_left(log, &log->l_write_head.grant);
- if (!list_empty_careful(&log->l_write_head.waiters)) {
- spin_lock(&log->l_write_head.lock);
- if (!xlog_grant_head_wake(log, &log->l_write_head, &free_bytes) ||
- free_bytes < need_bytes) {
- error = xlog_grant_head_wait(log, &log->l_write_head,
- tic, need_bytes);
- }
- spin_unlock(&log->l_write_head.lock);
- } else if (free_bytes < need_bytes) {
- spin_lock(&log->l_write_head.lock);
- error = xlog_grant_head_wait(log, &log->l_write_head, tic,
- need_bytes);
- spin_unlock(&log->l_write_head.lock);
- }
-
+ error = xlog_grant_head_check(log, &log->l_write_head, tic,
+ &need_bytes);
if (error)
return error;
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* [PATCH 12/12] xfs: split and cleanup xfs_log_reserve
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (10 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 11/12] xfs: share code for grant head availability checks Christoph Hellwig
@ 2012-02-20 2:31 ` Christoph Hellwig
2012-02-21 15:18 ` Mark Tinguely
2012-02-21 4:56 ` [PATCH 00/12] log grant code cleanups V2 Dave Chinner
12 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2012-02-20 2:31 UTC (permalink / raw)
To: xfs
[-- Attachment #1: xfs-split-xfs_log_reserve --]
[-- Type: text/plain, Size: 13962 bytes --]
Split the log regrant case out of xfs_log_reserve into a separate function,
and merge xlog_grant_log_space and xlog_regrant_write_log_space into their
respective callers. Also replace the XFS_LOG_PERM_RESERV flag, which was
easily misused before the previous cleanups, with a simple boolean parameter.
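As a rough illustration of the new calling convention (a simplified stand-in,
not the actual kernel code: the struct fields and helper bodies below are
reduced models), the caller now picks between a fresh reservation and a
regrant based on whether it already holds a ticket:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Simplified model of the split entry points.  Names mirror the patch,
 * but the ticket fields and logic are reduced stand-ins, not the real
 * XFS implementation.
 */
struct ticket {
	int	t_tid;		/* transaction ID */
	int	t_cnt;		/* remaining uses of a permanent reservation */
	int	t_unit_res;	/* bytes per use */
	int	t_curr_res;	/* bytes left in the current use */
	bool	permanent;	/* was XFS_LOG_PERM_RESERV */
};

/* xfs_log_regrant(): reuse an existing ticket for a rolled transaction. */
static int log_regrant(struct ticket *tic)
{
	tic->t_tid++;		/* new TID so chains are visible in the log */
	tic->t_curr_res = tic->t_unit_res;
	if (tic->t_cnt > 0)
		return 0;	/* still covered, no need to wait for space */
	/* ...otherwise this would call xlog_grant_head_check() and
	 * move the grant write head... */
	return 0;
}

/* xfs_log_reserve(): set up a fresh ticket; 'permanent' replaces the flag. */
static int log_reserve(int unit_bytes, int cnt, bool permanent,
		       struct ticket *tic)
{
	tic->t_tid = 0;
	tic->t_cnt = cnt;
	tic->t_unit_res = unit_bytes;
	tic->t_curr_res = unit_bytes;
	tic->permanent = permanent;
	return 0;
}

/* Caller-side dispatch, as xfs_trans_reserve() does after the patch:
 * an existing ticket implies a permanent reservation being rolled. */
static int trans_reserve(struct ticket **ticp, int unit_bytes, int cnt,
			 bool permanent)
{
	static struct ticket t;

	if (*ticp != NULL) {
		assert(permanent);
		return log_regrant(*ticp);
	}
	*ticp = &t;
	return log_reserve(unit_bytes, cnt, permanent, *ticp);
}
```

The point of the split is that the two paths no longer share an overloaded
entry point keyed off a flags word; the regrant path cannot be reached by
accident with a non-permanent ticket.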
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_log.c | 198 ++++++++++++++++++++++++-----------------------------
fs/xfs/xfs_log.h | 1
fs/xfs/xfs_trace.h | 1
fs/xfs/xfs_trans.c | 17 +++-
4 files changed, 103 insertions(+), 114 deletions(-)
Index: xfs/fs/xfs/xfs_log.c
===================================================================
--- xfs.orig/fs/xfs/xfs_log.c 2011-12-11 12:36:10.395664344 -0800
+++ xfs/fs/xfs/xfs_log.c 2011-12-11 16:43:30.273088234 -0800
@@ -67,15 +67,10 @@ STATIC void xlog_state_switch_iclogs(xlo
int eventual_size);
STATIC void xlog_state_want_sync(xlog_t *log, xlog_in_core_t *iclog);
-/* local functions to manipulate grant head */
-STATIC int xlog_grant_log_space(xlog_t *log,
- xlog_ticket_t *xtic);
STATIC void xlog_grant_push_ail(struct log *log,
int need_bytes);
STATIC void xlog_regrant_reserve_log_space(xlog_t *log,
xlog_ticket_t *ticket);
-STATIC int xlog_regrant_write_log_space(xlog_t *log,
- xlog_ticket_t *ticket);
STATIC void xlog_ungrant_log_space(xlog_t *log,
xlog_ticket_t *ticket);
@@ -324,6 +319,128 @@ xlog_tic_add_region(xlog_ticket_t *tic,
}
/*
+ * Replenish the byte reservation required by moving the grant write head.
+ */
+int
+xfs_log_regrant(
+ struct xfs_mount *mp,
+ struct xlog_ticket *tic)
+{
+ struct log *log = mp->m_log;
+ int need_bytes;
+ int error = 0;
+
+ if (XLOG_FORCED_SHUTDOWN(log))
+ return XFS_ERROR(EIO);
+
+ XFS_STATS_INC(xs_try_logspace);
+
+ /*
+ * This is a new transaction on the ticket, so we need to change the
+ * transaction ID so that the next transaction has a different TID in
+ * the log. Just add one to the existing tid so that we can see chains
+ * of rolling transactions in the log easily.
+ */
+ tic->t_tid++;
+
+ xlog_grant_push_ail(log, tic->t_unit_res);
+
+ tic->t_curr_res = tic->t_unit_res;
+ xlog_tic_reset_res(tic);
+
+ if (tic->t_cnt > 0)
+ return 0;
+
+ trace_xfs_log_regrant(log, tic);
+
+ error = xlog_grant_head_check(log, &log->l_write_head, tic,
+ &need_bytes);
+ if (error)
+ goto out_error;
+
+ xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
+ trace_xfs_log_regrant_exit(log, tic);
+ xlog_verify_grant_tail(log);
+ return 0;
+
+out_error:
+ /*
+ * If we are failing, make sure the ticket doesn't have any current
+ * reservations. We don't want to add this back when the ticket/
+ * transaction gets cancelled.
+ */
+ tic->t_curr_res = 0;
+ tic->t_cnt = 0; /* ungrant will give back unit_res * t_cnt. */
+ return error;
+}
+
+/*
+ * Reserve log space and return a ticket corresponding the reservation.
+ *
+ * Each reservation is going to reserve extra space for a log record header.
+ * When writes happen to the on-disk log, we don't subtract the length of the
+ * log record header from any reservation. By wasting space in each
+ * reservation, we prevent over allocation problems.
+ */
+int
+xfs_log_reserve(
+ struct xfs_mount *mp,
+ int unit_bytes,
+ int cnt,
+ struct xlog_ticket **ticp,
+ __uint8_t client,
+ bool permanent,
+ uint t_type)
+{
+ struct log *log = mp->m_log;
+ struct xlog_ticket *tic;
+ int need_bytes;
+ int error = 0;
+
+ ASSERT(client == XFS_TRANSACTION || client == XFS_LOG);
+
+ if (XLOG_FORCED_SHUTDOWN(log))
+ return XFS_ERROR(EIO);
+
+ XFS_STATS_INC(xs_try_logspace);
+
+ ASSERT(*ticp == NULL);
+ tic = xlog_ticket_alloc(log, unit_bytes, cnt, client, permanent,
+ KM_SLEEP | KM_MAYFAIL);
+ if (!tic)
+ return XFS_ERROR(ENOMEM);
+
+ tic->t_trans_type = t_type;
+ *ticp = tic;
+
+ xlog_grant_push_ail(log, tic->t_unit_res * tic->t_cnt);
+
+ trace_xfs_log_reserve(log, tic);
+
+ error = xlog_grant_head_check(log, &log->l_reserve_head, tic,
+ &need_bytes);
+ if (error)
+ goto out_error;
+
+ xlog_grant_add_space(log, &log->l_reserve_head.grant, need_bytes);
+ xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
+ trace_xfs_log_reserve_exit(log, tic);
+ xlog_verify_grant_tail(log);
+ return 0;
+
+out_error:
+ /*
+ * If we are failing, make sure the ticket doesn't have any current
+ * reservations. We don't want to add this back when the ticket/
+ * transaction gets cancelled.
+ */
+ tic->t_curr_res = 0;
+ tic->t_cnt = 0; /* ungrant will give back unit_res * t_cnt. */
+ return error;
+}
+
+
+/*
* NOTES:
*
* 1. currblock field gets updated at startup and after in-core logs
@@ -433,88 +550,6 @@ xfs_log_release_iclog(
}
/*
- * 1. Reserve an amount of on-disk log space and return a ticket corresponding
- * to the reservation.
- * 2. Potentially, push buffers at tail of log to disk.
- *
- * Each reservation is going to reserve extra space for a log record header.
- * When writes happen to the on-disk log, we don't subtract the length of the
- * log record header from any reservation. By wasting space in each
- * reservation, we prevent over allocation problems.
- */
-int
-xfs_log_reserve(
- struct xfs_mount *mp,
- int unit_bytes,
- int cnt,
- struct xlog_ticket **ticket,
- __uint8_t client,
- uint flags,
- uint t_type)
-{
- struct log *log = mp->m_log;
- struct xlog_ticket *internal_ticket;
- int retval = 0;
-
- ASSERT(client == XFS_TRANSACTION || client == XFS_LOG);
-
- if (XLOG_FORCED_SHUTDOWN(log))
- return XFS_ERROR(EIO);
-
- XFS_STATS_INC(xs_try_logspace);
-
-
- if (*ticket != NULL) {
- ASSERT(flags & XFS_LOG_PERM_RESERV);
- internal_ticket = *ticket;
-
- /*
- * this is a new transaction on the ticket, so we need to
- * change the transaction ID so that the next transaction has a
- * different TID in the log. Just add one to the existing tid
- * so that we can see chains of rolling transactions in the log
- * easily.
- */
- internal_ticket->t_tid++;
-
- trace_xfs_log_reserve(log, internal_ticket);
-
- xlog_grant_push_ail(log, internal_ticket->t_unit_res);
- retval = xlog_regrant_write_log_space(log, internal_ticket);
- } else {
- /* may sleep if need to allocate more tickets */
- internal_ticket = xlog_ticket_alloc(log, unit_bytes, cnt,
- client, flags,
- KM_SLEEP|KM_MAYFAIL);
- if (!internal_ticket)
- return XFS_ERROR(ENOMEM);
- internal_ticket->t_trans_type = t_type;
- *ticket = internal_ticket;
-
- trace_xfs_log_reserve(log, internal_ticket);
-
- xlog_grant_push_ail(log,
- (internal_ticket->t_unit_res *
- internal_ticket->t_cnt));
- retval = xlog_grant_log_space(log, internal_ticket);
- }
-
- if (unlikely(retval)) {
- /*
- * If we are failing, make sure the ticket doesn't have any
- * current reservations. We don't want to add this back
- * when the ticket/ transaction gets cancelled.
- */
- internal_ticket->t_curr_res = 0;
- /* ungrant will give back unit_res * t_cnt. */
- internal_ticket->t_cnt = 0;
- }
-
- return retval;
-}
-
-
-/*
* Mount a log filesystem
*
* mp - ubiquitous xfs mount point structure
@@ -2565,58 +2600,6 @@ restart:
return 0;
} /* xlog_state_get_iclog_space */
-STATIC int
-xlog_grant_log_space(
- struct log *log,
- struct xlog_ticket *tic)
-{
- int need_bytes;
- int error = 0;
-
- trace_xfs_log_grant_enter(log, tic);
-
- error = xlog_grant_head_check(log, &log->l_reserve_head, tic,
- &need_bytes);
- if (error)
- return error;
-
- xlog_grant_add_space(log, &log->l_reserve_head.grant, need_bytes);
- xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
- trace_xfs_log_grant_exit(log, tic);
- xlog_verify_grant_tail(log);
- return 0;
-}
-
-/*
- * Replenish the byte reservation required by moving the grant write head.
- */
-STATIC int
-xlog_regrant_write_log_space(
- struct log *log,
- struct xlog_ticket *tic)
-{
- int need_bytes;
- int error = 0;
-
- tic->t_curr_res = tic->t_unit_res;
- xlog_tic_reset_res(tic);
-
- if (tic->t_cnt > 0)
- return 0;
-
- trace_xfs_log_regrant_write_enter(log, tic);
-
- error = xlog_grant_head_check(log, &log->l_write_head, tic,
- &need_bytes);
- if (error)
- return error;
-
- xlog_grant_add_space(log, &log->l_write_head.grant, need_bytes);
- trace_xfs_log_regrant_write_exit(log, tic);
- xlog_verify_grant_tail(log);
- return 0;
-}
-
/* The first cnt-1 times through here we don't need to
* move the grant write head because the permanent
* reservation has reserved cnt times the unit amount.
@@ -3156,7 +3139,7 @@ xlog_ticket_alloc(
int unit_bytes,
int cnt,
char client,
- uint xflags,
+ bool permanent,
int alloc_flags)
{
struct xlog_ticket *tic;
@@ -3260,7 +3243,7 @@ xlog_ticket_alloc(
tic->t_clientid = client;
tic->t_flags = XLOG_TIC_INITED;
tic->t_trans_type = 0;
- if (xflags & XFS_LOG_PERM_RESERV)
+ if (permanent)
tic->t_flags |= XLOG_TIC_PERM_RESERV;
xlog_tic_reset_res(tic);
Index: xfs/fs/xfs/xfs_log.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log.h 2011-12-11 12:24:34.000000000 -0800
+++ xfs/fs/xfs/xfs_log.h 2011-12-11 16:42:12.056845301 -0800
@@ -53,15 +53,6 @@ static inline xfs_lsn_t _lsn_cmp(xfs_lsn
#define XFS_LOG_REL_PERM_RESERV 0x1
/*
- * Flags to xfs_log_reserve()
- *
- * XFS_LOG_PERM_RESERV: Permanent reservation. When writes are
- * performed against this type of reservation, the reservation
- * is not decreased. Long running transactions should use this.
- */
-#define XFS_LOG_PERM_RESERV 0x2
-
-/*
* Flags to xfs_log_force()
*
* XFS_LOG_SYNC: Synchronous force in-core log to disk
@@ -172,8 +163,9 @@ int xfs_log_reserve(struct xfs_mount *
int count,
struct xlog_ticket **ticket,
__uint8_t clientid,
- uint flags,
+ bool permanent,
uint t_type);
+int xfs_log_regrant(struct xfs_mount *mp, struct xlog_ticket *tic);
int xfs_log_unmount_write(struct xfs_mount *mp);
void xfs_log_unmount(struct xfs_mount *mp);
int xfs_log_force_umount(struct xfs_mount *mp, int logerror);
Index: xfs/fs/xfs/xfs_trace.h
===================================================================
--- xfs.orig/fs/xfs/xfs_trace.h 2011-12-11 12:32:42.000000000 -0800
+++ xfs/fs/xfs/xfs_trace.h 2011-12-11 16:14:57.932364781 -0800
@@ -827,17 +827,14 @@ DEFINE_EVENT(xfs_loggrant_class, name, \
TP_ARGS(log, tic))
DEFINE_LOGGRANT_EVENT(xfs_log_done_nonperm);
DEFINE_LOGGRANT_EVENT(xfs_log_done_perm);
-DEFINE_LOGGRANT_EVENT(xfs_log_reserve);
DEFINE_LOGGRANT_EVENT(xfs_log_umount_write);
-DEFINE_LOGGRANT_EVENT(xfs_log_grant_enter);
-DEFINE_LOGGRANT_EVENT(xfs_log_grant_exit);
-DEFINE_LOGGRANT_EVENT(xfs_log_grant_error);
DEFINE_LOGGRANT_EVENT(xfs_log_grant_sleep);
DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake);
DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake_up);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_enter);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_exit);
-DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_error);
+DEFINE_LOGGRANT_EVENT(xfs_log_reserve);
+DEFINE_LOGGRANT_EVENT(xfs_log_reserve_exit);
+DEFINE_LOGGRANT_EVENT(xfs_log_regrant);
+DEFINE_LOGGRANT_EVENT(xfs_log_regrant_exit);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_enter);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_exit);
DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_sub);
Index: xfs/fs/xfs/xfs_trans.c
===================================================================
--- xfs.orig/fs/xfs/xfs_trans.c 2011-12-11 12:24:34.000000000 -0800
+++ xfs/fs/xfs/xfs_trans.c 2011-12-11 16:43:16.753161478 -0800
@@ -681,7 +681,6 @@ xfs_trans_reserve(
uint flags,
uint logcount)
{
- int log_flags;
int error = 0;
int rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0;
@@ -707,24 +706,32 @@ xfs_trans_reserve(
* Reserve the log space needed for this transaction.
*/
if (logspace > 0) {
- ASSERT((tp->t_log_res == 0) || (tp->t_log_res == logspace));
- ASSERT((tp->t_log_count == 0) ||
- (tp->t_log_count == logcount));
+ bool permanent = false;
+
+ ASSERT(tp->t_log_res == 0 || tp->t_log_res == logspace);
+ ASSERT(tp->t_log_count == 0 || tp->t_log_count == logcount);
+
if (flags & XFS_TRANS_PERM_LOG_RES) {
- log_flags = XFS_LOG_PERM_RESERV;
tp->t_flags |= XFS_TRANS_PERM_LOG_RES;
+ permanent = true;
} else {
ASSERT(tp->t_ticket == NULL);
ASSERT(!(tp->t_flags & XFS_TRANS_PERM_LOG_RES));
- log_flags = 0;
}
- error = xfs_log_reserve(tp->t_mountp, logspace, logcount,
- &tp->t_ticket,
- XFS_TRANSACTION, log_flags, tp->t_type);
- if (error) {
- goto undo_blocks;
+ if (tp->t_ticket != NULL) {
+ ASSERT(flags & XFS_TRANS_PERM_LOG_RES);
+ error = xfs_log_regrant(tp->t_mountp, tp->t_ticket);
+ } else {
+ error = xfs_log_reserve(tp->t_mountp, logspace,
+ logcount, &tp->t_ticket,
+ XFS_TRANSACTION, permanent,
+ tp->t_type);
}
+
+ if (error)
+ goto undo_blocks;
+
tp->t_log_res = logspace;
tp->t_log_count = logcount;
}
@@ -752,6 +759,8 @@ xfs_trans_reserve(
*/
undo_log:
if (logspace > 0) {
+ int log_flags;
+
if (flags & XFS_TRANS_PERM_LOG_RES) {
log_flags = XFS_LOG_REL_PERM_RESERV;
} else {
Index: xfs/fs/xfs/xfs_log_priv.h
===================================================================
--- xfs.orig/fs/xfs/xfs_log_priv.h 2011-12-11 15:53:25.556032850 -0800
+++ xfs/fs/xfs/xfs_log_priv.h 2011-12-11 16:43:47.969659031 -0800
@@ -552,7 +552,7 @@ extern void xlog_pack_data(xlog_t *log,
extern kmem_zone_t *xfs_log_ticket_zone;
struct xlog_ticket *xlog_ticket_alloc(struct log *log, int unit_bytes,
- int count, char client, uint xflags,
+ int count, char client, bool permanent,
int alloc_flags);
* Re: [PATCH 12/12] xfs: split and cleanup xfs_log_reserve
2012-02-20 2:31 ` [PATCH 12/12] xfs: split and cleanup xfs_log_reserve Christoph Hellwig
@ 2012-02-21 15:18 ` Mark Tinguely
0 siblings, 0 replies; 15+ messages in thread
From: Mark Tinguely @ 2012-02-21 15:18 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
On 02/19/12 20:31, Christoph Hellwig wrote:
> Split the log regrant case out of xfs_log_reserve into a separate function,
> and merge xlog_grant_log_space and xlog_regrant_write_log_space into their
> respective callers. Also replace the XFS_LOG_PERM_RESERV flag, which easily
> got misused before the previous cleanups with a simple boolean parameter.
>
> Reviewed-by: Ben Myers<bpm@sgi.com>
> Signed-off-by: Christoph Hellwig<hch@lst.de>
>
> ---
> fs/xfs/xfs_log.c | 198 ++++++++++++++++++++++++-----------------------------
> fs/xfs/xfs_log.h | 1
> fs/xfs/xfs_trace.h | 1
> fs/xfs/xfs_trans.c | 17 +++-
> 4 files changed, 103 insertions(+), 114 deletions(-)
Sorry for the omission. The cleanup looks good.
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
* Re: [PATCH 00/12] log grant code cleanups V2
2012-02-20 2:31 [PATCH 00/12] log grant code cleanups V2 Christoph Hellwig
` (11 preceding siblings ...)
2012-02-20 2:31 ` [PATCH 12/12] xfs: split and cleanup xfs_log_reserve Christoph Hellwig
@ 2012-02-21 4:56 ` Dave Chinner
12 siblings, 0 replies; 15+ messages in thread
From: Dave Chinner @ 2012-02-21 4:56 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: xfs
On Sun, Feb 19, 2012 at 09:31:19PM -0500, Christoph Hellwig wrote:
> This series removes the opportunistic log space wakeups which had no
> use but hiding real races for far too long, and applies various bits
> of refactoring to the log grant code to make it smaller and more readable.
>
> Changes since V1:
> - address various minor review comments
You can add:
Reviewed-by: Dave Chinner <dchinner@redhat.com>
To the series. Solving the higher level issue of grant heads moving
between the sample and the addition can wait until this goes in.
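That window can be sketched in a few lines (a toy model; none of these
names are from the XFS source):

```c
/*
 * Toy model of the sample-vs-add window: free space is read without
 * the grant lock, so the sampled value can go stale before the task
 * acts on it.  All names here are illustrative only.
 */
static int toy_grant_head;		/* bytes already granted */
static const int toy_log_size = 100;	/* total toy log space */

static int toy_space_left(void)
{
	return toy_log_size - toy_grant_head;
}

static void toy_grant(int bytes)	/* another task consuming space */
{
	toy_grant_head += bytes;
}
```

Nothing prevents another task from calling toy_grant() between one task's
toy_space_left() sample and its own grant, which is the stale-sample issue
being deferred here.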
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com