linux-ide.vger.kernel.org archive mirror
From: linux@horizon.com
To: htejun@gmail.com, linux-ide@vger.kernel.org
Cc: linux@horizon.com
Subject: Re: sata_sil24 test support
Date: 18 Nov 2005 14:36:52 -0500	[thread overview]
Message-ID: <20051118193652.6403.qmail@science.horizon.com> (raw)
In-Reply-To: <20051118022349.12316.qmail@science.horizon.com>

> One thing I wanna verify on sil24 is data integrity with multiple disks 
> attached.  It would be very helpful if you can do some parallel data 
> stress testing with multiple disks.
> 
> * Parallel 'badblocks -w -t random' on all attached disks.  Maybe repeat 
> it for a few days and verify no corrupted IO occurs.

I only ran it for a day, but I can report success on exactly this
test on 6x Seagate 7200.8 drives (350G partition of 400G drives)
across 3x Sil3132.

That's how I found my problems, and how I verified that they were gone.
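
In case anyone wants to reproduce it, the parallel run amounts to something
like the sketch below.  The device names are examples only, 'badblocks -w'
destroys everything on the disk, and the helper name is mine, not an
existing tool:

```python
# Sketch of the parallel stress test described above.  Device names are
# examples, and 'badblocks -w' DESTROYS data, so dry_run defaults to True.
import subprocess

def parallel_badblocks(devices, dry_run=True):
    """Run 'badblocks -w -t random' on every listed device at the same time."""
    cmds = [["badblocks", "-w", "-t", "random", "-s", "/dev/" + dev]
            for dev in devices]
    if dry_run:
        for cmd in cmds:
            print("would run:", " ".join(cmd))
        return []
    procs = [subprocess.Popen(cmd) for cmd in cmds]  # start all disks at once
    return [p.wait() for p in procs]                 # exit status per device

parallel_badblocks(["sdb", "sdc", "sdd", "sde", "sdf", "sdg"])
```

Left running for a few days, any mismatch badblocks reports points at
corrupted I/O somewhere between the driver and the platter.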

(The only "interesting" finding was that one drive was noticeably slower
than the others.  Not by 10%, but by enough that it finished the better
part of an hour later.  I checked the cables and all looked well, and its
partner on the same controller was fine.  I'm going to do a bit of
swapping to experiment.)

This is with CONFIG_PCI_MSI=y.  It was run in single-user mode with all
file systems mounted read-only, since whether live file systems were safe
was exactly the question under test.

One thing I'm thinking of as a *driver* test is to write a little utility
that uses O_DIRECT to do heavy I/O against the drive's on-board cache,
re-reading the same small region over and over.  That should be able to
exceed the 60 MB/sec media transfer rate limit.
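
Such a utility could be as simple as timing repeated re-reads of one
region.  A sketch follows; the function name and the non-O_DIRECT fallback
are my own, not an existing tool, and it needs root to read a raw device:

```python
# Sketch only: repeatedly re-read one region so a real disk would serve it
# from its on-board cache.  O_DIRECT bypasses the kernel page cache, so the
# measured rate reflects the drive and interface, not system RAM.
import mmap, os, time

def cached_read_rate(path, size=1 << 20, iters=64):
    """Return bytes/second achieved re-reading the first `size` bytes of `path`."""
    try:
        fd = os.open(path, os.O_RDONLY | getattr(os, "O_DIRECT", 0))
    except OSError:
        fd = os.open(path, os.O_RDONLY)   # filesystem without O_DIRECT support
    buf = mmap.mmap(-1, size)             # anonymous mmap is page-aligned,
                                          # as O_DIRECT requires
    total = 0
    start = time.monotonic()
    for _ in range(iters):
        os.lseek(fd, 0, os.SEEK_SET)      # same region every iteration
        total += os.readv(fd, [buf])
    elapsed = time.monotonic() - start
    os.close(fd)
    buf.close()
    return total / elapsed
```

Pointed at /dev/sdX, repeated reads of the same LBA range should come back
at interface speed rather than the ~60 MB/sec the media can sustain; if the
driver mishandles anything under that load, badblocks-style verification
would catch it.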


Thread overview: 26+ messages
2005-11-07  9:59 sata_sil24 corruption details linux
2005-11-07 16:15 ` Greg Freemyer
2005-11-10  7:17 ` linux
2005-11-10  9:01   ` Tejun Heo
2005-11-10 14:15     ` Greg Freemyer
2005-11-10 14:41       ` Tejun Heo
2005-11-10 15:26         ` linux
2005-11-10 17:32         ` Tejun Heo
2005-11-10 20:34           ` Greg Freemyer
2005-11-12  0:49             ` Greg Freemyer
2005-11-12  2:59               ` Tejun Heo
2005-11-13 10:19                 ` Tejun Heo
2005-11-14 23:30                   ` Greg Freemyer
2005-11-18  2:23                     ` sata_sil24 corruption FIXED by motherboard swap linux
2005-11-18 19:36                       ` linux [this message]
2005-11-22  0:23                         ` sata_sil24 test support linux
2005-11-22  1:52                           ` Tejun Heo
2005-11-11  2:16           ` sata_sil24 corruption details linux
2005-11-13  6:11             ` linux
2005-11-10 17:39         ` Jens Axboe
2005-11-10 20:27   ` Edward Falk
     [not found] <46377.137.32.101.32.1132172329.squirrel@www.stubbornroses.com>
     [not found] ` <437C0F2D.1000406@gmail.com>
2005-11-17  5:51   ` sata_sil24 test support James O. Rose, III
2005-11-17  7:01     ` Tejun Heo
2005-11-17 19:50       ` James Rose
  -- strict thread matches above, loose matches on Subject: below --
2005-11-17 22:58 James Rose
2005-11-18  0:35 ` James Rose
