From: Brian Foster <bfoster@redhat.com>
To: fstests@vger.kernel.org
Cc: linux-xfs@vger.kernel.org
Subject: [PATCH 3/3] tests/xfs: update indlen res. test to include larger write pattern
Date: Thu,  9 Feb 2017 14:43:44 -0500
Message-ID: <1486669424-45274-4-git-send-email-bfoster@redhat.com>
In-Reply-To: <1486669424-45274-1-git-send-email-bfoster@redhat.com>

The indirect blocks reservation test originally reproduced a problem
with smaller delalloc extents being split into separate extents with
insufficient indlen blocks. This was ultimately resolved by allowing the
resulting extents to borrow indlen blocks from the freed extent.

Since then, similar problems have been reproduced when larger delalloc
extents are repeatedly split and merged with new writes. These repeated
splits exposed a problem in the old indlen reservation split algorithm
when dealing with extents that are already under-reserved from previous
splits. This resulted in an unfair distribution of the existing reservation
and could leave fairly large delalloc extents without any indlen reservation
whatsoever.
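
As a rough illustration with made-up numbers (the real values come from
the worst-case indlen calculation and depend on filesystem geometry),
consider a split where the two halves together want more indlen blocks
than the extent still holds, and the shortfall happens to be charged
entirely to one side:

  # Illustrative arithmetic only -- not the kernel algorithm, and the
  # block counts are hypothetical. The extent holds 4 indlen blocks
  # going into the split; each half would want 3 blocks worst case.
  ores=4                          # indlen blocks held before the split
  need1=3                         # worst-case need of the first half
  need2=3                         # worst-case need of the second half
  short=$((need1 + need2 - ores)) # combined need exceeds what we hold
  res1=$need1                     # first half keeps its full ask...
  res2=$((need2 - short))         # ...shortfall comes out of the second
  echo "first half: $res1, second half: $res2"   # prints 3 vs. 1

Repeat the split/merge cycle a few times and one of the delalloc extents
can be left holding no indlen reservation at all.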

Enhance the original indlen reservation test to include a pattern that
reproduces this behavior.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 tests/xfs/289 | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tests/xfs/289 b/tests/xfs/289
index 3aa53b9..5839a24 100755
--- a/tests/xfs/289
+++ b/tests/xfs/289
@@ -85,10 +85,25 @@ done
 
 echo 0 > /sys/fs/xfs/$sdev/drop_writes
 
-echo "Silence is golden."
-
 _scratch_cycle_mount
 $XFS_IO_PROG -c 'bmap -vp' $file | _filter_bmap
 
+# Now test a buffered write workload with larger extents. Write a 100m extent,
+# split it at the 3/4 mark, then write another 100m extent that is contiguous
+# with the 1/4 portion of the split extent. Repeat several times. This pattern
+# is known to prematurely exhaust indirect reservations and cause warnings and
+# assert failures.
+rm -f $file
+for offset in $(seq 0 100 500); do
+	$XFS_IO_PROG -fc "pwrite ${offset}m 100m" $file >> $seqres.full 2>&1
+
+	punchoffset=$((offset + 75))
+	echo 1 > /sys/fs/xfs/$sdev/drop_writes
+	$XFS_IO_PROG -c "pwrite ${punchoffset}m 4k" $file >> $seqres.full 2>&1
+	echo 0 > /sys/fs/xfs/$sdev/drop_writes
+done
+
+echo "Silence is golden."
+
 status=0
 exit
-- 
2.7.4
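
For reference, the updated test can be run from an fstests checkout in
the usual way; the device and mount point names below are placeholders
for a local configuration, not anything implied by the patch:

# Minimal fstests setup sketch -- adjust the devices and mount points
# for the local system. xfs/289 is the test modified by this patch.
cat > local.config <<EOF
export TEST_DEV=/dev/vdb1
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdb2
export SCRATCH_MNT=/mnt/scratch
EOF
./check xfs/289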


Thread overview (8 messages):
2017-02-09 19:43 [PATCH 0/3] xfs: restore and enhance xfs indlen test Brian Foster
2017-02-09 19:43 ` [PATCH 1/3] xfstests: move generic indlen reservation test to xfs dir Brian Foster
2017-02-09 19:43 ` [PATCH 2/3] tests/xfs: update indlen res. test to use fail writes mechanism Brian Foster
2017-02-09 19:43 ` Brian Foster [this message]
2017-02-10  7:15 ` [PATCH 0/3] xfs: restore and enhance xfs indlen test Eryu Guan
2017-02-10 13:58   ` Brian Foster
2017-02-10 16:25     ` Darrick J. Wong
2017-02-10 16:32       ` Brian Foster
