linux-fsdevel.vger.kernel.org archive mirror
From: "Alan D. Brunelle" <Alan.Brunelle@hp.com>
To: "K.S. Bhaskar" <ks.bhaskar@fnis.com>
Cc: Jeff Moyer <jmoyer@redhat.com>,
	James Bottomley <James.Bottomley@HansenPartnership.com>,
	linux-scsi <linux-scsi@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-fsdevel@vger.kernel.org
Subject: Re: Enterprise workload testing for storage and filesystems
Date: Fri, 21 Nov 2008 11:18:43 -0500	[thread overview]
Message-ID: <4926DF63.1030107@hp.com> (raw)
In-Reply-To: <4926DAFC.5070006@fnis.com>

K.S. Bhaskar wrote:
> On 11/20/2008 04:37 PM, Jeff Moyer wrote:
>> James Bottomley <James.Bottomley@HansenPartnership.com> writes:
> 
> [KSB] <...snip...>
> 
>>  > Let's see how our storage and filesystem tuning measures up to this.
>>
>> This is indeed great news!  The tool is very flexible, so I'd like to
>> know if we can get some sane configuration options to start testing.
>> I'm sure I can cook something up, but I'd like to be confident that what
>> I'm testing does indeed reflect a real-world workload.
> 
> [KSB] Here are numbers for some tests that we ran recently:
> 
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 1000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 10000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 100000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 200000 90 90 10 512
> 
> Note that these are relatively modest tests (4x32GB database files, all
> on one file system, 12 processes).  To simulate bigger loads, allow the
> journal file sizes to grow to 4GB, use a configuration file to spread
> the database and journal files on different file systems, take the
> number of processes up into the hundreds and database sizes into the
> hundreds of GB.  To keep test times reasonable, use the smallest numbers
> that give insightful results (after a point, making things bigger adds
> more time, but does not yield additional insights into system behavior,
> which is what we are trying to achieve).
> 
> Regards
> -- Bhaskar
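
As a sanity check on the quoted sizing: reading the positional arguments as 4000000 blocks per database file and 8192 bytes per block is an assumption, inferred only from the "4x32GB database files" remark above, but the arithmetic is consistent with it:

```python
# Hypothetical decoding of the quoted io_thrash arguments; the mapping of
# 4000000 -> blocks per database file and 8192 -> block size in bytes is an
# assumption, inferred from the "4x32GB database files" remark.
blocks_per_file = 4_000_000
block_size = 8192          # bytes
files = 4
gb_per_file = blocks_per_file * block_size / 1e9
print(f"{gb_per_file:.1f} GB per file, {files * gb_per_file:.1f} GB total")
# -> roughly 32.8 GB per file, 131.1 GB total, matching "4x32GB"
```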

Thanks for the additional feedback, Bhaskar. I've been playing with this
on and off over the last couple of days, trying to stress one testbed (16-way
AMD, 128GB RAM, two P800 Smart Arrays, 48 disks total combined into a single
LVM2/DM volume). I've been able to get the I/O subsystem 100% utilized,
but doing so really didn't stress the rest of the system (it stayed
roughly 80-90% idle).

To stress the whole system, it sounds like it _may_ be better to use 48
separate file systems on 48 separate platters, each with its own DB? Or
are there other knobs to turn, besides the I/O, to get more of the system
involved? And is it a good idea to separate the journals from the DBs
(separate FS/platter)?
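
The per-platter fan-out could be sketched roughly as follows. The /mnt/dbNN mount points are hypothetical, and the io_thrash arguments are carried over verbatim from the runs quoted above (the 10000-transaction variant); the script only echoes the commands, so it is safe to run anywhere as a dry run:

```shell
#!/bin/sh
# Sketch: one io_thrash instance per platter/filesystem, each with its own DB.
# The /mnt/dbNN mount points are assumptions; the io_thrash argument order is
# copied from the invocations quoted earlier in the thread.
NDISKS=48
i=0
while [ "$i" -lt "$NDISKS" ]; do
    mnt=$(printf '/mnt/db%02d' "$i")
    # Echo rather than execute, so this stays a dry run.
    echo "io_thrash -o 4 4 $mnt/testdb 4000000 100000 12 8192 512 10000 90 90 10 512 &"
    i=$((i + 1))
done
```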

Regards,
Alan


Thread overview: 6+ messages
2008-11-17 22:47 Enterprise workload testing for storage and filesystems James Bottomley
2008-11-20 21:37 ` Jeff Moyer
2008-11-21 15:59   ` K.S. Bhaskar
2008-11-21 16:18     ` Alan D. Brunelle [this message]
2008-11-21 16:42       ` K.S. Bhaskar
2008-11-21  4:42 ` Grant Grundler
