* XFS or MDADM issue?
@ 2012-08-29 18:33 Blair Sonnen
2012-08-29 20:49 ` Keith Keller
2012-08-29 23:06 ` Stan Hoeppner
0 siblings, 2 replies; 3+ messages in thread
From: Blair Sonnen @ 2012-08-29 18:33 UTC (permalink / raw)
To: xfs
Greetings. I do not know where to inquire next for my issue and I would appreciate it if anyone could suggest something to potentially help regarding XFS. I have already submitted this bug report to mdadm on 5/31 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675394#5), with no updates as of yet.
Please read my bug report and let me know if I should be considering this a potential XFS issue. Thank you for any help you may lend and your time!
-Blair Sonnen
blair.sonnen@gmail.com
801.696.4353
[-- Attachment #1.2.2: Debian_reportbug_PCS.txt --]
Bottom line: since growing my RAID array and switching RAID levels with mdadm, I am unable to successfully copy a file to the array. The resulting file always has a different md5sum from the original file, and is corrupted.
In addition, it appears that all files already on the array have different md5sums after this procedure.
File System: XFS
mdadm --detail /dev/md127
------------------------------------------------------------------------------------------------------------------------------------
/dev/md127:
Version : 0.90
Creation Time : Sun Jan 25 22:44:41 2009
Raid Level : raid6
Array Size : 8790815616 (8383.58 GiB 9001.80 GB)
Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Mon May 7 00:51:55 2012
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : d861d754:1a1065fe:c230666b:5103eba0
Events : 0.386680
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 113 3 active sync /dev/sdh1
4 8 17 4 active sync /dev/sdb1
5 8 96 5 active sync /dev/sdg
6 8 80 6 active sync /dev/sdf
7 8 64 7 active sync /dev/sde
--------------------------------------------------------------------------------------------------------------------------------------------
This RAID rearrangement was done as follows:
1) Started with 5 devices in a RAID 5 array, functioning correctly.
2) <Shutdown -h>
3) Installed 3 additional hard disks, all from the same manufacturer and of the same size, but with newer firmware. I verified via the manufacturer's website that the firmware was compatible.
4) Booted up, determined device names
5) Successfully added the new drives to the array: <mdadm /dev/md127 --add /dev/sdg> etc.
6) Successfully grew the array with <mdadm --grow /dev/md127 -n 8 -l 6>; as expected, this took a very long time.
7) Successfully resized the filesystem with <xfs_growfs -d /dev/md127>.
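Condensed into commands, the procedure above looks roughly like the following sketch. The device names and the /mnt/movies mount point are taken from this report; this is not a verbatim transcript of what was run, and a reshape should only ever be attempted with current backups.

```shell
# Sketch of the reshape described above -- do not run verbatim.
# Device names and the mount point are assumptions from this report.
mdadm /dev/md127 --add /dev/sdg /dev/sdf /dev/sde   # add the three new disks as spares
mdadm --grow /dev/md127 --raid-devices=8 --level=6  # reshape RAID 5 -> RAID 6 across 8 members
cat /proc/mdstat                                    # monitor reshape progress until complete
xfs_growfs -d /mnt/movies                           # then grow the mounted XFS data section
```

Note that xfs_growfs operates on a mounted filesystem and is normally given the mount point rather than the block device.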
Then, I started noticing problems with files on the array. Specifically, the md5sums from the files on the raid array no longer matched the original md5sums from files copied to the array prior to the above commands.
I figured I would confirm the problem with the following test procedure:
1) mkdir Test_Files/
2) dd if=/dev/urandom bs=1024 count=5000000 of=5GB_Rand
dd if=/dev/zero bs=1024 count=5000000 of=5GB_Zero
3) md5sum test:
Original Test Files created on Boot Drive
blair@debian:~$ md5sum /home/blair/Test_Files/5GB_Rand
0fed0abb19ea7962830e54108631ddac  /home/blair/Test_Files/5GB_Rand
blair@debian:~$ md5sum /home/blair/Test_Files/5GB_Zero
20096e4b3b80a3896dec3d7fdf5d1bfc  /home/blair/Test_Files/5GB_Zero
4) Copied files to /dev/md127 raid array
5) md5sum test on copied files:
blair@debian:~$ md5sum /mnt/movies/Test_Files/5GB_Rand
419175a78977007f3d5e97dcaf414b61  /mnt/movies/Test_Files/5GB_Rand
blair@debian:~$ md5sum /mnt/movies/Test_Files/5GB_Zero
5846bed2b52532719d4812172a8078ce  /mnt/movies/Test_Files/5GB_Zero
6) Copied files from the /dev/md127 array back to the Boot Drive
7) md5sum test:
blair@debian:~$ md5sum /home/blair/Test_Files/Test_Files_Copy/5GB_Rand
419175a78977007f3d5e97dcaf414b61  /home/blair/Test_Files/Test_Files_Copy/5GB_Rand
blair@debian:~$ md5sum /home/blair/Test_Files/Test_Files_Copy/5GB_Zero
5846bed2b52532719d4812172a8078ce  /home/blair/Test_Files/Test_Files_Copy/5GB_Zero
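The same comparison can be scripted so it is easy to repeat against any mount point. The paths and file size below are placeholders, not the ones used above; substitute the array's mount point for the temporary directory.

```shell
# Create a random file, copy it, and verify the copy's checksum matches.
# TARGET is a placeholder -- substitute e.g. /mnt/movies/Test_Files.
set -eu
TARGET=$(mktemp -d)
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=1024 2>/dev/null   # 1 MiB test file
cp "$src" "$TARGET/copy"
orig_sum=$(md5sum "$src" | awk '{print $1}')
copy_sum=$(md5sum "$TARGET/copy" | awk '{print $1}')
if [ "$orig_sum" = "$copy_sum" ]; then
    echo "OK: checksums match ($orig_sum)"
else
    echo "MISMATCH: $orig_sum vs $copy_sum"
fi
rm -f "$src" "$TARGET/copy"
```

On a healthy filesystem this should always report matching checksums; a mismatch on a freshly written copy points at the storage stack rather than the original data.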
I am not sure what I did in this process that would have created the problem. I am filing this bug report against mdadm because I have successfully grown my RAID in the past using mdadm with the XFS filesystem. In that instance it was from 4 devices to 5 devices, both on RAID 5 (no level switch), growing the XFS filesystem accordingly. These file problems started as soon as I executed the recent grow to 8 devices and level 6 as described above.
In order to install the three extra drives, I added an internal SATA host adapter, as my board has only six integrated SATA ports. I am not sure whether this is relevant.
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: XFS or MDADM issue?
2012-08-29 18:33 XFS or MDADM issue? Blair Sonnen
@ 2012-08-29 20:49 ` Keith Keller
2012-08-29 23:06 ` Stan Hoeppner
1 sibling, 0 replies; 3+ messages in thread
From: Keith Keller @ 2012-08-29 20:49 UTC (permalink / raw)
To: linux-xfs
On 2012-08-29, Blair Sonnen <blair.sonnen@gmail.com> wrote:
>
> Greetings. I do not know where to inquire next for my issue and I would
> appreciate it if anyone could suggest something to potentially help
> regarding XFS. I have already submitted this bug report to mdadm on
> 5/31 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675394#5), with
> no updates as of yet.
There are many issues with this bug report.
- Is it repeatable? If not, it's probably just a local problem (perhaps
with the underlying hardware), not an actual software bug.
- If it's a legitimate bug, are you hoping for Debian to fix it? In
general they simply repackage and compile the code, so if the bug is
truly in md or mdadm, it should be reported upstream.
- If it's not an actual bug, you're more likely to get help on the
linux-raid mailing list than on the Debian bug tracker.
- Have you tried a similar procedure with a different filesystem? That
would help confirm or rule out XFS as the cause.
I think the best thing to do at this point is check with the Linux RAID
mailing list. The mdadm developer is very active there.
http://vger.kernel.org/vger-lists.html#linux-raid
They can help decide whether it's a bug or a local problem, and in either
case may have ideas about recovery, or about whether recovery is even
possible. (I am a newbie reader on that list.)
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: XFS or MDADM issue?
2012-08-29 18:33 XFS or MDADM issue? Blair Sonnen
2012-08-29 20:49 ` Keith Keller
@ 2012-08-29 23:06 ` Stan Hoeppner
1 sibling, 0 replies; 3+ messages in thread
From: Stan Hoeppner @ 2012-08-29 23:06 UTC (permalink / raw)
To: xfs
On 8/29/2012 1:33 PM, Blair Sonnen wrote:
>
> Greetings. I do not know where to inquire next for my issue and I would
> appreciate it if anyone could suggest something to potentially help regarding
> XFS. I have already submitted this bug report to mdadm on 5/31
> (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675394#5), with no updates as
> of yet.
>
> Please read my bug report and let me know if I should be considering this a
> potential XFS issue. Thank you for any help you may lend and your time!
Does the md5sum change when you create a file in the XFS filesystem in
question, then copy it to another directory in the filesystem?
Which SATA controller and which drives did you add?
Do you see any ATA or other storage device related errors in dmesg?
Are you running any kind of transparent block device encryption?
Did you perform a distribution or other software upgrade directly before
or after changing the array parameters and growing the filesystem?
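For reference, a sketch of how those checks might be run. Exact log messages and tool availability vary by system; smartctl comes from the smartmontools package, and /dev/sdg is just one of the drive names from this report.

```shell
# Scan the kernel log for ATA/storage-related errors (may require root)
dmesg | grep -iE 'ata[0-9]|i/o error|hard resetting|exception' | tail -n 50
# SMART health and attribute summary for one of the newly added drives
smartctl -H -A /dev/sdg
# Show the block-device stack, to rule out an encryption or other mapping layer
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT
```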
--
Stan