From: Mike Hardy <mhardy@h3c.com>
To: "'linux-raid@vger.kernel.org'" <linux-raid@vger.kernel.org>
Subject: Re: Stress testing system?
Date: Fri, 08 Oct 2004 15:00:19 -0700
Message-ID: <41670DF3.5040807@h3c.com>
In-Reply-To: <4167078A.3030609@robinbowes.com>
This is a little off topic, but I stress systems with four loops (a rough
shell sketch follows below).

Loop one unpacks the kernel source, moves it to a new name, unpacks the
kernel source again, diffs the two, then deletes everything and repeats
(tests memory and disk caching).
Loop two unpacks the kernel source, does a make allyesconfig and a make -j5
bzImage modules, then a make clean, and repeats. That should get the CPU
burning.
Loop three runs bonnie++ on the array.

Loop four works together with another machine: each machine wgets some very
large file (1GB-ish) from the other, with output sent to /dev/null, so that
the NIC has to service interrupts flat out.
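Roughly, as a shell sketch (untested, typed from memory -- the kernel
tarball path, the bonnie++ directory on the array, and the URL of the big
file on the other box are all placeholders, and I've written allyesconfig
where you'd pick whatever config target you like):

#!/bin/sh
# Crude burn-in sketch: run all four loops in parallel until interrupted.
TARBALL=~/linux-2.6.8.tar.bz2        # placeholder kernel source tarball

# Loop one: unpack, rename, unpack again, diff, delete, repeat
# (exercises memory and disk caching)
( mkdir -p loop1 && cd loop1
  while true; do
      tar xjf $TARBALL
      mv linux-2.6.8 linux-old
      tar xjf $TARBALL
      diff -r linux-old linux-2.6.8 > /dev/null
      rm -rf linux-old linux-2.6.8
  done ) &

# Loop two: configure and build the kernel, clean, repeat (burns CPU)
( mkdir -p loop2 && cd loop2
  tar xjf $TARBALL && cd linux-2.6.8 || exit 1
  while true; do
      make allyesconfig > /dev/null
      make -j5 bzImage modules > /dev/null
      make clean > /dev/null
  done ) &

# Loop three: bonnie++ on a directory that lives on the array
( while true; do
      bonnie++ -d /home/stress -u nobody
  done ) &

# Loop four: pull a large (~1GB) file from the other machine and throw it
# away, so the NIC has to service interrupts flat out
( while true; do
      wget -q -O /dev/null http://otherbox/bigfile.iso
  done ) &

wait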
If that doesn't cook your machine in 48 hours or so, I can't think of
anything that will.
This catches out every machine I try it on for one reason or another, but
after a couple of tweaks it's usually solid.
Slightly more on-topic: one thing I frequently have to do is boot with
noapic or acpi=off, due to interrupt-handling problems with various
motherboards.
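They're just kernel command-line parameters; with GRUB, for example, you
append them to the kernel line. The entry below is only an illustration --
adjust the paths and root= for your own box:

# /boot/grub/grub.conf -- example entry only
title Linux (noapic)
        root (hd0,0)
        kernel /vmlinuz ro root=/dev/md0 noapic
        # ...or append acpi=off instead of (or as well as) noapic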
Additionally, I think there have been reports of problems with RAID and
LVM, and there have also been problems with SATA and possibly with Maxtor
drives, so you may have some tweaking to do. Mentioning versions of things
(distribution, kernel, hardware parts and part numbers, etc.) would help.
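Something like the output of the following usually covers the basics
(smartctl comes with smartmontools; skip anything you don't have
installed):

  uname -a                # kernel version
  cat /proc/mdstat        # md arrays and their state
  cat /proc/scsi/scsi     # drives as the kernel sees them
  lspci                   # controllers and chipset
  smartctl -i /dev/sda    # drive model/firmware, one per drive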
I'm interested to hear what other people do to burn their machines in
though...
Good luck.
-Mike
Robin Bowes wrote:
> Hi,
>
> I've got six 250GB Maxtor drives connected to 2 Promise SATA controllers
> configured as follows:
>
> Each disk has two partitions: 1.5G and 248.5G.
>
> /dev/sda1 & /dev/sdd1 are mirrored and form the root filesystem.
>
> /dev/sd[abcdef]2 are configured as a RAID5 array with one hot spare.
>
> I use lvm to create a 10G /usr partition, a 5G /var partition, and the
> rest of the array (994G) in /home.
>
> The system in which I installed these drives was rock-solid before I
> added the RAID storage (it had a single 120G drive). However, since
> adding the six disks I have experienced the system simply powering down
> and requiring filesystem recovery when it restarted.
>
> I suspected this was down to an inadequate power supply (it was 400W), so
> I've upgraded to an OCZ 520W PSU.
>
> I'd like to stress test the system to see if the new PSU has sorted the
> problem, i.e. really work the disks.
>
> What's the best way to get all six drives working as hard as possible?
>
> Thanks,
>
> R.
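P.S. For anyone following along, a layout like that would be built with
roughly the commands below (device names and sizes taken from Robin's
description above; the mdadm and LVM invocations are from memory, so
double-check against the man pages before running anything):

# RAID1 mirror for / across the two small partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdd1

# RAID5 across the six large partitions: five active plus one hot spare
mdadm --create /dev/md1 --level=5 --raid-devices=5 --spare-devices=1 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2

# LVM on top of the RAID5 array
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 10G  -n usr  vg0
lvcreate -L 5G   -n var  vg0
lvcreate -L 994G -n home vg0    # roughly whatever is left, for /home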