From: "Pasi Kärkkäinen" <pasik@iki.fi>
To: Drew <drew.kay@gmail.com>
Cc: Wang Jinpu <jack_wang@usish.com>,
Nick Cheng <nick.cheng@areca.com.tw>,
Andrew Morton <akpm@linux-foundation.org>,
Michael Fuckner <michael@fuckner.net>,
linux-kernel <linux-kernel@vger.kernel.org>,
linux-scsi <linux-scsi@vger.kernel.org>,
Erich Chen <erich@areca.com.tw>
Subject: Re: Re: Performance issues with Areca 1680 SAS Controllers
Date: Thu, 20 Aug 2009 11:00:02 +0300 [thread overview]
Message-ID: <20090820080002.GO19938@edu.joroinen.fi> (raw)
In-Reply-To: <c268e4660908191929u3f11c733oc68a0d50a3e5aca8@mail.gmail.com>
On Wed, Aug 19, 2009 at 07:29:07PM -0700, Drew wrote:
> > The problem is while Areca is doing the flushing _all_ IOs are really slow,
> > including the other raid-10 array for the OS, which uses totally separate physical disks.
> >
> > Opening another shell in screen takes at least 30 seconds, starting "top"
> > takes forever etc..
> >
> > While Areca is flushing the caches (and all the IOs are slow), "iostat 1"
> > doesn't show any "leftover" IOs from the benchmark. So the benchmark was
> > really using direct IO, bypassing kernel caches.
> >
> > I tried different io-schedulers (cfq, deadline, noop) but they didn't
> > have a big effect... which makes sense, since the OS/kernel is not doing
> > any big IO when the 'stalling' happens.
> >
> > Is there some way to make Areca NOT use all cpu-power for cache flushing?
>
> Is top showing 100% cpu usage?
>
No, CPU usage is almost 0%. I was talking about the CPU usage of the _Areca_
controller itself. When the disktest benchmark ends, Linux CPU usage drops to 0%
and there is no IO any more (checked with "iostat 1").
But the _Areca_ is still flushing its caches, and all IO is slow even
when there's no load in _Linux_.
-- Pasi
> It wouldn't surprise me if the Areca's internal bus / processor is
> getting swamped. With the RAID60 chewing through two P+Q parity
> calculations over 14 disks, I'd expect the performance of other arrays
> on the controller to take a hit.
>
> I've seen similar symptoms when moving massive amounts of data between
> disks on my servers. IO maxes out the bus, system responsiveness goes
> down as evidenced by slowly loading apps, and top still shows minimal
> CPU usage.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
Thread overview: 12+ messages
[not found] <492403B8.5090007@fuckner.net>
2008-11-20 1:18 ` Performance issues with Areca 1680 SAS Controllers Andrew Morton
2008-11-20 7:39 ` Nick Cheng
2009-08-19 14:08 ` Pasi Kärkkäinen
[not found] ` <200908192247543597389@usish.com>
2009-08-19 16:40 ` Pasi Kärkkäinen
2009-08-20  0:56             ` Reply: " jack wang
2009-08-20 2:29 ` Drew
2009-08-20 8:00 ` Pasi Kärkkäinen [this message]
[not found] ` <SERVER-ARECA8ldpyzy000025bc@areca.com.tw>
2009-08-20 9:56 ` Pasi Kärkkäinen
2009-08-20 12:16 ` Pasi Kärkkäinen
2009-08-20 12:18 ` Nick Cheng
2009-08-20 12:20 ` Pasi Kärkkäinen
2009-08-20 12:19 ` Pasi Kärkkäinen