From: Tejun Heo <htejun@gmail.com>
To: dusty@gmx.li
Cc: linux-ide@vger.kernel.org
Subject: Re: libata pm
Date: Mon, 28 Jan 2008 21:03:48 +0900
Message-ID: <479DC4A4.30809@gmail.com>
In-Reply-To: <51861.82.140.47.248.1201516318.squirrel@ssl.cavemail.org>
Hello, Dusty.
dusty@gmx.li wrote:
> Shuffling the drives did not change anything to the linkspeed of the 3
> ports running with 1.5 Gbps. Looks like the problem is port-related.
Hmm... Okay. Maybe the signal traces or connectors have some problem.
But 3.0 Gbps on a downstream port doesn't make any practical difference
anyway, so unless it leads to errors, it should be fine.
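For reference, the negotiated speed of each port can be checked without
shuffling drives. A minimal sketch (assumes the standard libata
"SATA link up X Gbps" boot message; the helper name is my own):

```shell
# Summarize negotiated SATA link speeds from kernel log output.
# Relies on the standard libata message "ataN: SATA link up X Gbps (...)".
link_speed_summary() {
    grep -Eo 'SATA link up [0-9.]+ Gbps' | sort | uniq -c
}

# Typical use:
#   dmesg | link_speed_summary
```

Newer kernels may also expose this via sysfs
(/sys/class/ata_link/*/sata_spd), where available.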
> The first PSU (Multilane) offers (measured) 12,20 V on both lanes. This
> falls during read/write access on all drives attached down to 12,10 V.
That explains the PHY dropouts. I've even seen drives do a hard reset
with an emergency head unload when the voltage drops under high IO
load. Of course, any data in the drive's cache is lost.
> The second PSU (Singlelane) offers (measured) 11,75 V. This doesn't change
> during read/write access on all drives attached. I tested the second PSU
> without anything attached and it still offered 11,75 V.
11.75 V is still in spec (the ATX12V tolerance on the +12 V rail is
+/-5%, i.e. 11.40-12.60 V) and it doesn't fluctuate. I like this power
much better.
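As a sanity check, the +12 V rail has to stay within +/-5% per the
ATX12V design guide, i.e. 11.40-12.60 V. A minimal sketch of that range
check (the helper name is my own):

```shell
# Check a measured +12 V rail reading against the ATX12V +/-5% window
# (11.40 V - 12.60 V). Prints "ok" or "out of spec".
rail_in_spec() {
    awk -v v="$1" 'BEGIN { print (v >= 11.40 && v <= 12.60) ? "ok" : "out of spec" }'
}
```

Both readings above pass: `rail_in_spec 11.75` and `rail_in_spec 12.10`
print "ok", so the steady 11.75 V supply is fine on paper even though it
sits low.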
> I will try to add another PSU this evening and remeasure.
> Currently the whole machine needs about 350W.
> The PSUs are Targan 500W (Multilane - 2x12V 10A)
Maybe one of the lanes is only connected to the video power connectors?
IIRC, that was the idea of multi-lane power anyway.
> and Noname 350W (Singlelane - 1x12V 10A) so this 'should' be enough...
>
> This night i copied the backup back to the first raid (the Maxtor drives)
> without any error but during the transfer i got this on the second
> (mounted but unused) raid:
> ata10.00: failed to read SCR 1 (Emask=0x40)
> ata10.01: failed to read SCR 1 (Emask=0x40)
> ata10.02: failed to read SCR 1 (Emask=0x40)
> ata10.03: failed to read SCR 1 (Emask=0x40)
> ata10.04: failed to read SCR 1 (Emask=0x40)
> ata10.05: failed to read SCR 1 (Emask=0x40)
> ata10.00: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
> ata10.01: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
> ata10.02: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
> ata10.02: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 1 cdb 0x0 data 0
> res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
A timeout during heavy IO can be caused by a number of things, and bad
power is one of them.
Please lemme know your test result.
--
tejun
Thread overview: 16+ messages
2008-01-26 18:03 libata pm dusty
2008-01-26 20:05 ` dusty
2008-01-26 23:33 ` Tejun Heo
2008-01-27 1:38 ` dusty
2008-01-27 19:34 ` dusty
2008-01-28 1:17 ` Tejun Heo
2008-01-28 10:31 ` dusty
2008-01-28 12:03 ` Tejun Heo [this message]
2008-01-30 21:34 ` dusty
2008-01-31 1:13 ` Tejun Heo
2008-02-01 13:42 ` dusty
2008-02-01 13:53 ` Tejun Heo
2008-02-01 14:06 ` Mark Lord
2008-02-01 15:05 ` Tejun Heo
2008-02-05 18:50 ` dusty
2008-02-01 15:12 ` dusty