From: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@oss.sgi.com
Subject: Re: 3.9.0: general protection fault
Date: Wed, 08 May 2013 19:48:04 +0200 [thread overview]
Message-ID: <518A8FD4.40700@itwm.fraunhofer.de> (raw)
In-Reply-To: <20130507220742.GC24635@dastard>
On 05/08/2013 12:07 AM, Dave Chinner wrote:
> On Tue, May 07, 2013 at 01:18:13PM +0200, Bernd Schubert wrote:
>> On 05/07/2013 03:12 AM, Dave Chinner wrote:
>>> On Mon, May 06, 2013 at 02:47:31PM +0200, Bernd Schubert wrote:
>>>> On 05/06/2013 02:28 PM, Dave Chinner wrote:
>>>>> On Mon, May 06, 2013 at 10:14:22AM +0200, Bernd Schubert wrote:
>>>>>> And another protection fault, this time with 3.9.0. It always happens
>>>>>> on one of the servers. It's ECC memory, so I don't suspect a faulty
>>>>>> memory bank. Going to fsck now.
>>>>>
>>>>> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>>>>
>>>> Isn't that a bit of overkill? And I can't provide /proc/meminfo and
>>>> the others, as this issue causes a kernel panic a few traces later.
>>>
>>> Provide what information you can. Without knowing a single thing
>>> about your hardware, storage config and workload, I can't help you
>>> at all. You're asking me to find a needle in a haystack blindfolded
>>> and with both hands tied behind my back....
>>
>> I see that xfs_info, meminfo, etc. are useful, but /proc/mounts?
>> Maybe you want "cat /proc/mounts | grep xfs"? Attached is the
>> output of /proc/mounts; please let me know if you are really
>> interested in all of that non-xfs output.
>
> Yes. You never know what is relevant to a problem that is reported,
> especially if there are multiple filesystems sharing the same
> device...
Hmm, I see. But then you need to extend your questions to cover
multipathing and shared storage. In both cases you can easily end up with
double mounts... I should probably try to find some time to port ext4's
MMP to XFS.
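For readers unfamiliar with MMP (multiple mount protection): ext4 keeps a
sequence number in a reserved block, bumps it periodically from a kernel
thread (kmmpd) while mounted, and a second node refuses to mount if it sees
the number advancing. The following is only a toy sketch of that idea, not
ext4's on-disk format; all names, intervals, and structures are invented for
illustration:

```python
import threading
import time

class MMPBlock:
    """In-memory stand-in for a shared on-disk MMP block (illustration only)."""
    def __init__(self):
        self.seq = 0                   # 0 means "clean"/unmounted in this sketch
        self.lock = threading.Lock()

def heartbeat(mmp, stop, interval=0.05):
    """Roughly what ext4's kmmpd thread does: keep bumping the sequence
    number for as long as the filesystem is mounted."""
    while not stop.is_set():
        with mmp.lock:
            mmp.seq += 1
        time.sleep(interval)

def try_mount(mmp, check_interval=0.2):
    """Sample the sequence twice; if it advanced in between, another node
    has the filesystem mounted, so refuse instead of double-mounting."""
    with mmp.lock:
        before = mmp.seq
    time.sleep(check_interval)         # must exceed the writer's update interval
    with mmp.lock:
        after = mmp.seq
    return after == before             # unchanged => safe to mount
```

The real implementation has extra steps (claiming the block, re-checking for
a racing claimant, handling a stale sequence from a crashed node), but the
advancing-sequence check above is the core of how a double mount is detected.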
>
>> And I just wonder what you are going to do with the information
>> about the hardware. So it is an Areca hw-raid5 device with 9 disks.
>> But does this help? It doesn't tell whether one of the disks reads/writes
>> with hiccups, or provide any performance characteristics at all.
>
> Yes, it does, because Areca cards are by far the most unreliable HW
> RAID you can buy, which is not surprising because they are also the
Ahem. Compared to other hardware RAIDs, Areca is very stable.
> cheapest. This is from experience - we see reports of filesystems
> being badly corrupted every few months because of problems with Areca
> controllers.
The problem is that naming the hardware controller doesn't tell you
anything about the disks. And most RAID solutions do not care at all about
disk corruption; that's getting better with T10 DIF/DIX, but unfortunately
I still don't see it used in most installations.
As I have been aware of that problem for several years, we started writing
ql-fstest [1] a while ago; it checks for data corruption. It is also part
of our stress test suite, and so far it hasn't reported anything. So we can
exclude disk/controller data corruption with very high probability.
You might want to add something like this to your FAQ:
Q: Are you sure there is no disk / controller / memory data corruption?
If so, please state why!
Cheers,
Bernd
[1] https://bitbucket.org/aakef/ql-fstest
Thread overview: 15+ messages
2013-05-02 14:45 3.8.7: general protection fault Bernd Schubert
2013-05-06 8:14 ` 3.9.0: " Bernd Schubert
2013-05-06 9:40 ` Bernd Schubert
2013-05-06 12:28 ` Dave Chinner
2013-05-06 12:47 ` Bernd Schubert
2013-05-07 1:12 ` Dave Chinner
2013-05-07 11:18 ` Bernd Schubert
2013-05-07 22:07 ` Dave Chinner
2013-05-08 17:48 ` Bernd Schubert [this message]
2013-05-09 0:41 ` Dave Chinner
2013-05-10 10:19 ` Bernd Schubert
2013-05-10 13:33 ` Eric Sandeen
2013-05-11 0:12 ` Dave Chinner
2013-06-03 16:39 ` Bernd Schubert
2013-05-09 7:16 ` Stan Hoeppner