linux-btrfs.vger.kernel.org archive mirror
From: Eric Sandeen <sandeen@redhat.com>
To: Stefan Behrens <sbehrens@giantdisaster.de>
Cc: xfs@oss.sgi.com, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 3/3] xfstests: btrfs tests for basic informational commands
Date: Wed, 13 Mar 2013 11:17:46 -0500	[thread overview]
Message-ID: <5140A6AA.2010204@redhat.com> (raw)
In-Reply-To: <51409C25.70403@giantdisaster.de>

On 3/13/13 10:32 AM, Stefan Behrens wrote:
> On Wed, 13 Mar 2013 09:57:03 -0500, Eric Sandeen wrote:
> [...]
>> +echo "== Show device stats by mountpoint"
>> +$BTRFS_UTIL_PROG device stats $SCRATCH_MNT | _filter_btrfs_device_stats
> 
> Is the number of devices in SCRATCH_DEV_POOL fixed to 3? Otherwise you
> should pipe the device-stats-by-mountpoint through "head -10" to avoid
> failures if the number of devices is != 3.

Oh, you are right.

I had meant to filter device stats through "uniq" after replacing all
devices & numbers.  I'll add that, then I think it should be ok.

thanks for catching that.
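[A sketch of what that filter could look like; `stats_output` below fakes three devices' worth of `btrfs device stats` output, and the sed expressions are illustrative stand-ins for the real `_filter_btrfs_device_stats`. Note one subtlety: uniq only drops *adjacent* duplicate lines, so the repeated 5-line per-device blocks need a sort first before they collapse.]

```shell
#!/bin/sh
# Illustrative only: fake three devices' worth of "btrfs device stats"
# output, then normalize device paths and counters so the result is
# independent of the device count.

stats_output() {
	for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
		for field in write_io_errs read_io_errs flush_io_errs \
			     corruption_errs generation_errs; do
			echo "[$dev].$field   0"
		done
	done
}

# Replace the bracketed device path and the trailing counter, then
# sort | uniq: uniq alone would not collapse the ABCDE ABCDE ABCDE
# pattern, since consecutive lines within a block differ.
stats_output | sed -e 's,\[[^]]*\],[SCRATCH_DEV],' \
		   -e 's,[0-9][0-9]*$,<NUM>,' | sort | uniq
```

With the sort in place, the output is the same five lines whether the pool holds one device or ten.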

> Possible additional checks (but I am not sure that we really need this
> additional level of detail in this check) would be:
> 1. The number of lines is 5 * number of devices.
> 2. The 5-line block that is printed for each device always looks the
> same (after applying _filter_btrfs_device_stats).

Hm, perhaps - though I wonder if that might be fragile?  If *any* output
changes, the test will break...
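[For reference, check (1) above could be sketched roughly as below; `NUM_DEVS` and the `fake_device_stats` helper are made-up stand-ins for this illustration, not xfstests variables or helpers.]

```shell
#!/bin/sh
# Sketch of the line-count check: each device contributes exactly 5 stats
# lines, so the total should be 5 * number of devices.

NUM_DEVS=3

fake_device_stats() {
	i=1
	while [ "$i" -le "$NUM_DEVS" ]; do
		for field in write_io_errs read_io_errs flush_io_errs \
			     corruption_errs generation_errs; do
			echo "[/dev/fake$i].$field   0"
		done
		i=$((i + 1))
	done
}

lines=$(fake_device_stats | wc -l)
if [ "$lines" -ne $((5 * NUM_DEVS)) ]; then
	echo "unexpected stats line count: $lines"
else
	echo "stats line count OK: $lines"
fi
```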

Thanks for the review!
-Eric

>> +echo "== Show device stats by first/scratch dev"
>> +$BTRFS_UTIL_PROG device stats $SCRATCH_DEV | _filter_btrfs_device_stats
>> +echo "== Show device stats by second dev"
>> +$BTRFS_UTIL_PROG device stats $FIRST_POOL_DEV | sed -e "s,$FIRST_POOL_DEV,FIRST_POOL_DEV,g"
>> +echo "== Show device stats by last dev"
>> +$BTRFS_UTIL_PROG device stats $LAST_POOL_DEV | sed -e "s,$LAST_POOL_DEV,LAST_POOL_DEV,g"
>> +
>> +# success, all done
>> +status=0
>> +exit
>> diff --git a/313.out b/313.out
>> new file mode 100644
>> index 0000000..1aa59a1
>> --- /dev/null
>> +++ b/313.out
>> @@ -0,0 +1,51 @@
>> +== QA output created by 313
>> +== Set filesystem label to TestLabel.313
>> +== Get filesystem label
>> +TestLabel.313
>> +== Mount.
>> +== Show filesystem by label
>> +Label: 'TestLabel.313'  uuid: <UUID>
>> +	Total devices <EXACTNUM> FS bytes used <SIZE>
>> +	devid     <DEVID> size <SIZE> used <SIZE> path SCRATCH_DEV
>> +
>> +== Show filesystem by UUID
>> +Label: 'TestLabel.313'  uuid: <EXACTUUID>
>> +	Total devices <EXACTNUM> FS bytes used <SIZE>
>> +	devid     <DEVID> size <SIZE> used <SIZE> path SCRATCH_DEV
>> +
>> +== Sync filesystem
>> +FSSync 'SCRATCH_MNT'
>> +== Show device stats by mountpoint
>> +[SCRATCH_DEV].write_io_errs   <NUM>
>> +[SCRATCH_DEV].read_io_errs    <NUM>
>> +[SCRATCH_DEV].flush_io_errs   <NUM>
>> +[SCRATCH_DEV].corruption_errs <NUM>
>> +[SCRATCH_DEV].generation_errs <NUM>
>> +[SCRATCH_DEV].write_io_errs   <NUM>
>> +[SCRATCH_DEV].read_io_errs    <NUM>
>> +[SCRATCH_DEV].flush_io_errs   <NUM>
>> +[SCRATCH_DEV].corruption_errs <NUM>
>> +[SCRATCH_DEV].generation_errs <NUM>
>> +[SCRATCH_DEV].write_io_errs   <NUM>
>> +[SCRATCH_DEV].read_io_errs    <NUM>
>> +[SCRATCH_DEV].flush_io_errs   <NUM>
>> +[SCRATCH_DEV].corruption_errs <NUM>
>> +[SCRATCH_DEV].generation_errs <NUM>
> 
> 3 devices in this case.

Yep, oops.

>> +== Show device stats by first/scratch dev
>> +[SCRATCH_DEV].write_io_errs   <NUM>
>> +[SCRATCH_DEV].read_io_errs    <NUM>
>> +[SCRATCH_DEV].flush_io_errs   <NUM>
>> +[SCRATCH_DEV].corruption_errs <NUM>
>> +[SCRATCH_DEV].generation_errs <NUM>
>> +== Show device stats by second dev
>> +[FIRST_POOL_DEV].write_io_errs   0
>> +[FIRST_POOL_DEV].read_io_errs    0
>> +[FIRST_POOL_DEV].flush_io_errs   0
>> +[FIRST_POOL_DEV].corruption_errs 0
>> +[FIRST_POOL_DEV].generation_errs 0
>> +== Show device stats by last dev
>> +[LAST_POOL_DEV].write_io_errs   0
>> +[LAST_POOL_DEV].read_io_errs    0
>> +[LAST_POOL_DEV].flush_io_errs   0
>> +[LAST_POOL_DEV].corruption_errs 0
>> +[LAST_POOL_DEV].generation_errs 0
> [...]
> 



Thread overview: 19+ messages
     [not found] <1363186623-1378-1-git-send-email-sandeen@redhat.com>
2013-03-13 14:57 ` [PATCH 2/3] xfstests: keep newlines out of SCRATCH_DEV_POOL Eric Sandeen
2013-03-13 17:43   ` Rich Johnston
2013-03-13 17:45     ` Eric Sandeen
2013-03-13 14:57 ` [PATCH 3/3] xfstests: btrfs tests for basic informational commands Eric Sandeen
2013-03-13 15:32   ` Stefan Behrens
2013-03-13 16:17     ` Eric Sandeen [this message]
2013-03-13 17:47       ` Rich Johnston
2013-03-13 16:38   ` [PATCH 3/3 V2] " Eric Sandeen
2013-03-13 18:53     ` [PATCH 3/3 V3] " Eric Sandeen
2013-03-13 19:00       ` Stefan Behrens
2013-03-13 19:01       ` [PATCH 3/3 V4] " Eric Sandeen
2013-03-14 13:01         ` Rich Johnston
2013-03-14 13:35           ` Stefan Behrens
2013-03-15 10:16         ` Dave Chinner
2013-03-15 13:46           ` Eric Sandeen
2013-03-15 14:23             ` Rich Johnston
2013-03-15 14:36               ` Eric Sandeen
2013-03-18 13:30                 ` Rich Johnston
2013-03-19 14:15                 ` Rich Johnston
