public inbox for linux-xfs@vger.kernel.org
* [PATCH 0/4] speculative preallocation quota throttling
@ 2012-12-05 16:47 Brian Foster
  2012-12-05 16:47 ` [PATCH 1/4] xfs: reorganize xfs_iomap_prealloc_size to remove indentation Brian Foster
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Brian Foster @ 2012-12-05 16:47 UTC (permalink / raw)
  To: xfs

Hi All,

This set enables throttling of speculative preallocation as an inode
approaches EDQUOT. Currently, speculative preallocation is throttled only as
the filesystem approaches global ENOSPC. Adding quota-aware preallocation
throttling helps prevent performance problems (e.g., by reducing
prealloc/ENOSPC/inode-flush sequences) and premature errors in this scenario.

Functional Description
XFS speculative preallocation quota throttling is controlled via the hard and
soft quota limits. Preallocation is throttled against the hard limit, with a
default maximum of 5% of the free space remaining in the quota. Preallocation
is disabled entirely once the hard limit has been surpassed (which can only
happen under noenforce).

If a soft quota limit is set, it is used as a watermark to enable throttling.
The difference between the soft and hard limits also scales the throttling
percentage heuristic (e.g., a 10% difference between the hard and soft limits
adjusts the prealloc throttling percentage to 10%).

Testing
I've tested this functionality by running 32 concurrent writers (18G each, to
trigger max prealloc requests when files are >8GB) into a project quota of
576GB [1]. Without quota throttling, I'm able to write ~528GB before errors
propagate to the test program and writing stops. With quota throttling enabled
(using the default 5% limit), the test writes ~576GB. With a 10% throttle, the
test stops at ~564GB.

I'm pretty sure I've run this through xfstests in the past, but I don't have a
record of results so I'll be running this through some tests soon.

Brian

P.S., I was originally planning to include eofblocks based handling of EDQUOT
errors in this set but I have more studying up and hacking to do there. It's
easier for me to carry that as an independent set.

[1] - Using the following iozone command:
	iozone -w -c -e -i 0 -+n -r 4k -s 18g -t 32 -F /mnt/file{0..31}

Brian Foster (4):
  xfs: reorganize xfs_iomap_prealloc_size to remove indentation
  xfs: push rounddown_pow_of_two() to after prealloc throttle
  xfs: add quota-driven speculative preallocation throttling
  xfs: preallocation throttling tracepoints

 fs/xfs/xfs_iomap.c |  173 +++++++++++++++++++++++++++++++++++++++++++---------
 fs/xfs/xfs_iomap.h |    2 +
 fs/xfs/xfs_trace.h |   62 +++++++++++++++++++
 3 files changed, 209 insertions(+), 28 deletions(-)

-- 
1.7.7.6

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* [PATCH 1/4] xfs: reorganize xfs_iomap_prealloc_size to remove indentation
  2012-12-05 16:47 [PATCH 0/4] speculative preallocation quota throttling Brian Foster
@ 2012-12-05 16:47 ` Brian Foster
  2012-12-05 16:47 ` [PATCH 2/4] xfs: push rounddown_pow_of_two() to after prealloc throttle Brian Foster
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Brian Foster @ 2012-12-05 16:47 UTC (permalink / raw)
  To: xfs

The majority of xfs_iomap_prealloc_size() executes within the check for
the absence of a default I/O size. Invert the check and branch out early
to remove the extra indentation.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_iomap.c |   55 ++++++++++++++++++++++++++-------------------------
 1 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index add06b4..bd7c060 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -322,37 +322,38 @@ xfs_iomap_prealloc_size(
 	struct xfs_inode	*ip)
 {
 	xfs_fsblock_t		alloc_blocks = 0;
+	int			shift = 0;
+	int64_t			freesp;
 
-	if (!(mp->m_flags & XFS_MOUNT_DFLT_IOSIZE)) {
-		int shift = 0;
-		int64_t freesp;
+	if (mp->m_flags & XFS_MOUNT_DFLT_IOSIZE)
+		goto check_writeio;
 
-		/*
-		 * rounddown_pow_of_two() returns an undefined result
-		 * if we pass in alloc_blocks = 0. Hence the "+ 1" to
-		 * ensure we always pass in a non-zero value.
-		 */
-		alloc_blocks = XFS_B_TO_FSB(mp, XFS_ISIZE(ip)) + 1;
-		alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN,
-					rounddown_pow_of_two(alloc_blocks));
-
-		xfs_icsb_sync_counters(mp, XFS_ICSB_LAZY_COUNT);
-		freesp = mp->m_sb.sb_fdblocks;
-		if (freesp < mp->m_low_space[XFS_LOWSP_5_PCNT]) {
-			shift = 2;
-			if (freesp < mp->m_low_space[XFS_LOWSP_4_PCNT])
-				shift++;
-			if (freesp < mp->m_low_space[XFS_LOWSP_3_PCNT])
-				shift++;
-			if (freesp < mp->m_low_space[XFS_LOWSP_2_PCNT])
-				shift++;
-			if (freesp < mp->m_low_space[XFS_LOWSP_1_PCNT])
-				shift++;
-		}
-		if (shift)
-			alloc_blocks >>= shift;
+	/*
+	 * rounddown_pow_of_two() returns an undefined result
+	 * if we pass in alloc_blocks = 0. Hence the "+ 1" to
+	 * ensure we always pass in a non-zero value.
+	 */
+	alloc_blocks = XFS_B_TO_FSB(mp, XFS_ISIZE(ip)) + 1;
+	alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN,
+				rounddown_pow_of_two(alloc_blocks));
+
+	xfs_icsb_sync_counters(mp, XFS_ICSB_LAZY_COUNT);
+	freesp = mp->m_sb.sb_fdblocks;
+	if (freesp < mp->m_low_space[XFS_LOWSP_5_PCNT]) {
+		shift = 2;
+		if (freesp < mp->m_low_space[XFS_LOWSP_4_PCNT])
+			shift++;
+		if (freesp < mp->m_low_space[XFS_LOWSP_3_PCNT])
+			shift++;
+		if (freesp < mp->m_low_space[XFS_LOWSP_2_PCNT])
+			shift++;
+		if (freesp < mp->m_low_space[XFS_LOWSP_1_PCNT])
+			shift++;
 	}
+	if (shift)
+		alloc_blocks >>= shift;
 
+check_writeio:
 	if (alloc_blocks < mp->m_writeio_blocks)
 		alloc_blocks = mp->m_writeio_blocks;
 
-- 
1.7.7.6



* [PATCH 2/4] xfs: push rounddown_pow_of_two() to after prealloc throttle
  2012-12-05 16:47 [PATCH 0/4] speculative preallocation quota throttling Brian Foster
  2012-12-05 16:47 ` [PATCH 1/4] xfs: reorganize xfs_iomap_prealloc_size to remove indentation Brian Foster
@ 2012-12-05 16:47 ` Brian Foster
  2012-12-05 16:47 ` [PATCH 3/4] xfs: add quota-driven speculative preallocation throttling Brian Foster
  2012-12-05 16:47 ` [PATCH 4/4] xfs: preallocation throttling tracepoints Brian Foster
  3 siblings, 0 replies; 5+ messages in thread
From: Brian Foster @ 2012-12-05 16:47 UTC (permalink / raw)
  To: xfs

The round down occurs towards the beginning of the function. Push
it down after throttling has occurred. This is to support adding
further transformations to 'alloc_blocks' that might not preserve
power-of-two alignment (and thus could lead to rounding down
multiple times).

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_iomap.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index bd7c060..d381326 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -329,13 +329,11 @@ xfs_iomap_prealloc_size(
 		goto check_writeio;
 
 	/*
-	 * rounddown_pow_of_two() returns an undefined result
-	 * if we pass in alloc_blocks = 0. Hence the "+ 1" to
-	 * ensure we always pass in a non-zero value.
+	 * MAXEXTLEN is 21 bits, add one to protect against the rounddown
+	 * further down.
 	 */
-	alloc_blocks = XFS_B_TO_FSB(mp, XFS_ISIZE(ip)) + 1;
-	alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN,
-				rounddown_pow_of_two(alloc_blocks));
+	alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN + 1,
+				XFS_B_TO_FSB(mp, XFS_ISIZE(ip)));
 
 	xfs_icsb_sync_counters(mp, XFS_ICSB_LAZY_COUNT);
 	freesp = mp->m_sb.sb_fdblocks;
@@ -352,6 +350,8 @@ xfs_iomap_prealloc_size(
 	}
 	if (shift)
 		alloc_blocks >>= shift;
+	if (alloc_blocks)
+		alloc_blocks = rounddown_pow_of_two(alloc_blocks);
 
 check_writeio:
 	if (alloc_blocks < mp->m_writeio_blocks)
-- 
1.7.7.6



* [PATCH 3/4] xfs: add quota-driven speculative preallocation throttling
  2012-12-05 16:47 [PATCH 0/4] speculative preallocation quota throttling Brian Foster
  2012-12-05 16:47 ` [PATCH 1/4] xfs: reorganize xfs_iomap_prealloc_size to remove indentation Brian Foster
  2012-12-05 16:47 ` [PATCH 2/4] xfs: push rounddown_pow_of_two() to after prealloc throttle Brian Foster
@ 2012-12-05 16:47 ` Brian Foster
  2012-12-05 16:47 ` [PATCH 4/4] xfs: preallocation throttling tracepoints Brian Foster
  3 siblings, 0 replies; 5+ messages in thread
From: Brian Foster @ 2012-12-05 16:47 UTC (permalink / raw)
  To: xfs

Speculative preallocation currently occurs based on the size of a
file (8GB max) and is throttled only within 5% of ENOSPC. Enable
similar throttling as an inode approaches EDQUOT.

Preallocation is throttled to a quota hard limit and disabled if
the hard limit is surpassed (noenforce). If a soft limit is also
specified, it serves as a low watermark to enable throttling and is
used to adjust the percentage of free quota space a single
preallocation is allowed to consume (5% by default).

The algorithm determines the max percentage allowed for each quota
and calculates the associated raw values. The minimum raw value
across all quotas applicable to the inode represents the maximum
size allowed for a preallocation on that inode.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_iomap.c |  114 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_iomap.h |    2 +
 2 files changed, 115 insertions(+), 1 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index d381326..bbeec02 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -42,6 +42,8 @@
 #include "xfs_iomap.h"
 #include "xfs_trace.h"
 #include "xfs_icache.h"
+#include "xfs_dquot_item.h"
+#include "xfs_dquot.h"
 
 
 #define XFS_WRITEIO_ALIGN(mp,off)	(((off) >> mp->m_writeio_log) \
@@ -311,10 +313,110 @@ xfs_iomap_eof_want_preallocate(
 }
 
 /*
+ * Return the maximum size preallocation allowed for a particular dquot. 
+ *
+ * Quota throttling is enabled when a hard limit is defined. By default, a
+ * preallocation is allowed to consume no more than 5% of available space in
+ * the quota.
+ *
+ * If a soft limit is also defined, quota throttling is not enabled until the
+ * requested preallocation surpasses the soft limit. The throttling percentage
+ * is also redefined to equal the difference between the soft and hard limits
+ * over the hard limit (i.e., (hard - soft) / hard).
+ *
+ * -1 is returned if no throttling is required. 0 is returned if preallocation
+ *  should be disabled.
+ */
+STATIC int64_t
+xfs_prealloc_dquot_max(
+	struct xfs_dquot	*dq,
+	xfs_fsblock_t		alloc_blocks)
+{
+	xfs_qcnt_t		hardlimit;
+	xfs_qcnt_t		softlimit;
+	int64_t			free;
+	int64_t			pct = XFS_DEFAULT_QTHROTTLE_PCT;
+
+	if (!dq)
+		return -1;
+
+	hardlimit = be64_to_cpu(dq->q_core.d_blk_hardlimit);
+	softlimit = be64_to_cpu(dq->q_core.d_blk_softlimit);
+
+	if (!hardlimit)
+		return -1;
+
+	/* disable preallocation if we're over the hard limit */
+	free = hardlimit - dq->q_res_bcount;
+	if (free < 0) 
+		return 0;
+
+	/* disable throttling if we're under the soft limit */
+	if (softlimit && (dq->q_res_bcount + alloc_blocks) < softlimit)
+		return -1;
+
+	/*
+	 * If specified, use the difference between the soft and hard limits
+	 * over the hard limit to determine the throttling percentage. The
+	 * throttling percentage determines how much of the quota free space a
+	 * single preallocation can consume.
+	 */
+	if (softlimit) {
+		pct = (hardlimit - softlimit) * 100;
+		do_div(pct, hardlimit);
+	}
+	ASSERT(pct >= 0 && pct <= 100);
+
+	do_div(free, 100);
+	free *= pct;
+
+	return free;
+}
+
+/*
+ * Apply the quota preallocation throttling algorithm to each enabled quota and
+ * return the most restrictive value. The return value is the maximum size
+ * preallocation allowed for the inode.
+ */
+STATIC int64_t
+xfs_prealloc_quota_max(
+	struct xfs_inode	*ip,
+	xfs_fsblock_t		alloc_blocks)
+{
+	int64_t			free;
+	int64_t 		min_free = -1;
+	struct xfs_dquot	*dq;
+
+	if (XFS_IS_UQUOTA_ON(ip->i_mount)) {
+		dq = xfs_inode_dquot(ip, XFS_DQ_USER);
+		free = xfs_prealloc_dquot_max(dq, alloc_blocks);
+		if (free != -1 && (free < min_free || min_free == -1))
+			min_free = free;
+	}
+
+	if (XFS_IS_GQUOTA_ON(ip->i_mount)) {
+		dq = xfs_inode_dquot(ip, XFS_DQ_GROUP);
+		free = xfs_prealloc_dquot_max(dq, alloc_blocks);
+		if (free != -1 && (free < min_free || min_free == -1))
+			min_free = free;
+	}
+
+	if (XFS_IS_PQUOTA_ON(ip->i_mount)) {
+		dq = xfs_inode_dquot(ip, XFS_DQ_PROJ);
+		free = xfs_prealloc_dquot_max(dq, alloc_blocks);
+		if (free != -1 && (free < min_free || min_free == -1))
+			min_free = free;
+	}
+
+	return min_free;
+}
+
+/*
  * If we don't have a user specified preallocation size, dynamically increase
  * the preallocation size as the size of the file grows. Cap the maximum size
  * at a single extent or less if the filesystem is near full. The closer the
- * filesystem is to full, the smaller the maximum prealocation.
+ * filesystem is to full or a hard quota limit, the smaller the maximum
+ * preallocation.
  */
 STATIC xfs_fsblock_t
 xfs_iomap_prealloc_size(
@@ -324,6 +426,7 @@ xfs_iomap_prealloc_size(
 	xfs_fsblock_t		alloc_blocks = 0;
 	int			shift = 0;
 	int64_t			freesp;
+	int64_t                 max_quota_prealloc;
 
 	if (mp->m_flags & XFS_MOUNT_DFLT_IOSIZE)
 		goto check_writeio;
@@ -348,6 +451,15 @@ xfs_iomap_prealloc_size(
 		if (freesp < mp->m_low_space[XFS_LOWSP_1_PCNT])
 			shift++;
 	}
+
+	/*
+	 * Throttle speculative allocation against the most restrictive quota
+	 * limit.
+	 */
+	max_quota_prealloc = xfs_prealloc_quota_max(ip, alloc_blocks);
+
+	if (max_quota_prealloc >= 0 && alloc_blocks >= max_quota_prealloc)
+		alloc_blocks = max_quota_prealloc;
 	if (shift)
 		alloc_blocks >>= shift;
 	if (alloc_blocks)
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index 8061576..07d79ea 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -18,6 +18,8 @@
 #ifndef __XFS_IOMAP_H__
 #define __XFS_IOMAP_H__
 
+#define XFS_DEFAULT_QTHROTTLE_PCT 5	/* default quota throttling % */
+
 struct xfs_inode;
 struct xfs_bmbt_irec;
 
-- 
1.7.7.6



* [PATCH 4/4] xfs: preallocation throttling tracepoints
  2012-12-05 16:47 [PATCH 0/4] speculative preallocation quota throttling Brian Foster
                   ` (2 preceding siblings ...)
  2012-12-05 16:47 ` [PATCH 3/4] xfs: add quota-driven speculative preallocation throttling Brian Foster
@ 2012-12-05 16:47 ` Brian Foster
  3 siblings, 0 replies; 5+ messages in thread
From: Brian Foster @ 2012-12-05 16:47 UTC (permalink / raw)
  To: xfs

Define tracepoints for preallocation throttling. The
xfs_prealloc_dquot_max_pct() tracepoint provides data on the max
allowable prealloc for each quota. The xfs_iomap_prealloc_size()
tracepoint provides data on the overall prealloc.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_iomap.c |    6 ++++-
 fs/xfs/xfs_trace.h |   62 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+), 1 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index bbeec02..0d64055 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -369,6 +369,7 @@ xfs_prealloc_dquot_max(
 
 	do_div(free, 100);
 	free *= pct;
+	trace_xfs_prealloc_dquot_max_pct(dq, free, pct);
 
 	return free;
 }
@@ -426,7 +427,7 @@ xfs_iomap_prealloc_size(
 	xfs_fsblock_t		alloc_blocks = 0;
 	int			shift = 0;
 	int64_t			freesp;
-	int64_t                 max_quota_prealloc;
+	int64_t                 max_quota_prealloc = -1;
 
 	if (mp->m_flags & XFS_MOUNT_DFLT_IOSIZE)
 		goto check_writeio;
@@ -469,6 +470,9 @@ check_writeio:
 	if (alloc_blocks < mp->m_writeio_blocks)
 		alloc_blocks = mp->m_writeio_blocks;
 
+	trace_xfs_iomap_prealloc_size(ip, alloc_blocks, shift, max_quota_prealloc,
+				      mp->m_writeio_blocks);
+
 	return alloc_blocks;
 }
 
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index 2e137d4..2b28626 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -618,6 +618,33 @@ DECLARE_EVENT_CLASS(xfs_iref_class,
 		  (char *)__entry->caller_ip)
 )
 
+TRACE_EVENT(xfs_iomap_prealloc_size,
+	TP_PROTO(struct xfs_inode *ip, xfs_fsblock_t blocks, int shift,
+		 int64_t qfreesp, unsigned int writeio_blocks),
+	TP_ARGS(ip, blocks, shift, qfreesp, writeio_blocks),
+	TP_STRUCT__entry(
+		__field(dev_t, dev)
+		__field(xfs_ino_t, ino)
+		__field(xfs_fsblock_t, blocks)
+		__field(int, shift)
+		__field(int64_t, qfreesp)
+		__field(unsigned int, writeio_blocks)
+	),
+	TP_fast_assign(
+		__entry->dev = VFS_I(ip)->i_sb->s_dev;
+		__entry->ino = ip->i_ino;
+		__entry->blocks = blocks;
+		__entry->shift = shift;
+		__entry->qfreesp = qfreesp;
+		__entry->writeio_blocks = writeio_blocks;
+	),
+	TP_printk("dev %d:%d ino 0x%llx prealloc blocks %llu shift %d "
+		"quota max %lld, m_writeio_blocks %u",
+		MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino,
+		__entry->blocks, __entry->shift, __entry->qfreesp,
+		__entry->writeio_blocks)
+)
+
 #define DEFINE_IREF_EVENT(name) \
 DEFINE_EVENT(xfs_iref_class, name, \
 	TP_PROTO(struct xfs_inode *ip, unsigned long caller_ip), \
@@ -770,6 +797,41 @@ DEFINE_DQUOT_EVENT(xfs_dqflush);
 DEFINE_DQUOT_EVENT(xfs_dqflush_force);
 DEFINE_DQUOT_EVENT(xfs_dqflush_done);
 
+TRACE_EVENT(xfs_prealloc_dquot_max_pct,
+	TP_PROTO(struct xfs_dquot *dqp, int64_t free, int pct),
+	TP_ARGS(dqp, free, pct),
+	TP_STRUCT__entry(
+		__field(dev_t, dev)
+		__field(u32, id)
+		__field(unsigned long long, res_bcount)
+		__field(unsigned long long, blk_hardlimit)
+		__field(unsigned long long, blk_softlimit)
+		__field(unsigned long long, free)
+		__field(int, pct)
+	),
+	TP_fast_assign(
+		__entry->dev = dqp->q_mount->m_super->s_dev;
+		__entry->id = be32_to_cpu(dqp->q_core.d_id);
+		__entry->res_bcount = dqp->q_res_bcount;
+		__entry->blk_hardlimit =
+			be64_to_cpu(dqp->q_core.d_blk_hardlimit);
+		__entry->blk_softlimit =
+			be64_to_cpu(dqp->q_core.d_blk_softlimit);
+		__entry->free = free;
+		__entry->pct = pct;
+	),
+	TP_printk("dev %d:%d id 0x%x res_bc 0x%llx "
+		  "bhardlimit 0x%llx bsoftlimit 0x%llx "
+		  "free 0x%llx (%d%%)",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->id,
+		  __entry->res_bcount,
+		  __entry->blk_hardlimit,
+		  __entry->blk_softlimit,
+		  __entry->free,
+		  __entry->pct)
+)
+
 DECLARE_EVENT_CLASS(xfs_loggrant_class,
 	TP_PROTO(struct xlog *log, struct xlog_ticket *tic),
 	TP_ARGS(log, tic),
-- 
1.7.7.6
