linux-raid.vger.kernel.org archive mirror
* Problems after reshaping of Raid5 array
@ 2010-11-29 17:24 Michele Bonera
  2010-11-29 19:12 ` Jan Ceuleers
  2010-11-29 21:45 ` Neil Brown
  0 siblings, 2 replies; 8+ messages in thread
From: Michele Bonera @ 2010-11-29 17:24 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 1037 bytes --]

Hi all.

I'm a bit in a panic... and I really need some help to solve this
(if it's even possible).

I have a storage server on my LAN where I keep everything
for safekeeping (sigh).

The system consists of a 32 GB SSD containing the OS,
plus four 1 TB WD EADS hard disks in RAID5 holding all my data.
The array members are seen by the system as sdb1, sdc1, sdd1 and sde1.

Yesterday evening I added another WD drive, this time an EARS
(an Advanced Format model with 4 KB physical sectors): I created a
partition on it, respecting the alignment, then added it to the array
and ran a grow command:

mdadm --add /dev/md6 /dev/sdb1 (after adding the new drive, it took the name sdb)
mdadm --grow /dev/md6 --raid-devices=5
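
(For reference: alignment on a 4 KB-sector drive means starting the
partition on a multiple of 8 sectors. A sketch of how such a partition
can be created, assuming parted; the exact tool used may have differed:)

# start the partition at 1 MiB (sector 2048), aligned for 4 KB sectors
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 1MiB 100%
# verify: the start sector should be divisible by 8
parted /dev/sdb unit s print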

The reshape started... and ran fine until today. Or rather, until the
system hung and I had to sync and remount read-only with the SysRq keys.
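
Reshape progress and current speed can be followed while it runs, e.g.:

cat /proc/mdstat
cat /sys/block/md6/md/sync_speed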

After rebooting, the reshape restarted, but now the whole disk sdb,
rather than the partition sdb1, showed up as the member of the RAID
array, and the filesystem became unreadable.

Any idea what happened?

Thanks a lot for any suggestion you can give me.
I'm attaching the mdadm -E and dumpe2fs outputs.
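
(The mdadm -E output was gathered per device, roughly like this; a
sketch, comparing what md sees on the whole disk versus the partition:)

# does md find a superblock on the whole disk, the partition, or both?
mdadm --examine /dev/sdb
mdadm --examine /dev/sdb1
# current composition and reshape state of the array
mdadm --detail /dev/md6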

[-- Attachment #2: mdadm-e.tgz --]
[-- Type: application/x-gzip, Size: 803 bytes --]

[-- Attachment #3: dumpe2fs --]
[-- Type: application/octet-stream, Size: 1823 bytes --]

root@mizar:~# dumpe2fs /dev/md6
dumpe2fs 1.41.11 (14-Mar-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          11e60234-97a2-47a0-8e52-8fe0c86aa667
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              161202176
Block count:              644794752
Reserved block count:     32239737
Free blocks:              42614406
Free inodes:              120363507
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      870
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              32
RAID stripe width:        96
Filesystem created:       Sun Nov  1 09:22:22 2009
Last mount time:          Mon Nov 29 07:55:37 2010
Last write time:          Mon Nov 29 16:46:00 2010
Mount count:              1
Maximum mount count:      31
Last checked:             Sun Nov 28 23:04:27 2010
Check interval:           15552000 (6 months)
Next check after:         Sat May 28 00:04:27 2011
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      4b69fffd-5dd1-425e-8ebd-bd29940c0722
Journal backup:           inode blocks
Journal superblock magic number invalid!



Thread overview: 8+ messages
2010-11-29 17:24 Problems after reshaping of Raid5 array Michele Bonera
2010-11-29 19:12 ` Jan Ceuleers
2010-11-29 20:04   ` Michele Bonera
2010-11-30 17:00     ` Jan Ceuleers
2010-11-29 21:45 ` Neil Brown
2010-11-30  7:23   ` Michele Bonera
2010-11-30 19:45     ` Jan Ceuleers
2010-12-03  7:03       ` Michele Bonera
