linux-raid.vger.kernel.org archive mirror
* IMSM Raid 5 always read only and gone after reboot
@ 2011-08-16 20:19 Iwan Zarembo
  2011-08-17  4:08 ` Daniel Frey
  2011-08-17  4:15 ` Daniel Frey
  0 siblings, 2 replies; 8+ messages in thread
From: Iwan Zarembo @ 2011-08-16 20:19 UTC (permalink / raw)
  To: linux-raid

Hello Everyone,
I am not that new to Linux, but I am quite far from being an
expert :) I have been using Linux for a few years now and everything
worked just fine, but not this time with IMSM RAID. I googled for some
weeks and asked everyone I know about the problem, but without any
luck. The last possibility to find a solution is this mailing list. I
really hope someone can help me.

I bought some new components to upgrade my old PC. So I bought:
- Intel Core i7-2600k
- Asrock Z68 Extreme4 (with 82801 SATA RAID Controller on it)
- Some RAM and so on

I also wanted to reuse my four old SAMSUNG HD103UJ 1 TB hard drives.
In the past I used mdadm software RAID level 5 and everything worked
just fine. Now with the upgrade I wanted to use the Intel RAID
controller on my mainboard. The advantage is that I would be able to
access the RAID volume from my alternative Windows system.

So what I did was:
I activated the RAID functionality in the BIOS and worked through the
wiki at https://raid.wiki.kernel.org/index.php/RAID_setup . There I got
a raid container and a volume. Unfortunately the volume had an MBR
partition table, so I converted it to a GPT partition table using
Windows 7's built-in functionality.
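
The commands from the wiki were roughly the following (from memory, so
the exact device names may differ):

    # mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 --metadata=imsm
    # mdadm --create --verbose /dev/md/raid /dev/md/imsm --raid-devices 4 --level 5
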
Now when I boot my Ubuntu (up to date 11.04 with GNOME 3) I cannot see
the raid array.
So I queried the platform information from the ROM again:
    # mdadm --detail-platform
           Platform : Intel(R) Matrix Storage Manager
            Version : 10.6.0.1091
        RAID Levels : raid0 raid1 raid10 raid5
        Chunk Sizes : 4k 8k 16k 32k 64k 128k
          Max Disks : 7
        Max Volumes : 2
     I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2
              Port0 : /dev/sda (ML0221F303XN3D)
              Port2 : /dev/sdb (S13PJ9AQ923317)
              Port3 : /dev/sdc (S13PJ9AQ923315)
              Port4 : /dev/sdd (S13PJ9AQ923313)
              Port5 : /dev/sdg (S13PJ9AQ923320)
              Port1 : - no device attached -

I use the hard drives on ports 2-5.

Scanning with mdadm for existing RAID arrays:
    # mdadm --assemble --scan
    mdadm: Container /dev/md/imsm has been assembled with 4 drives

After running the command I can see an inactive raid array /dev/md127
in gnome-disk-utility (palimpsest); md127 seems to be the default imsm
device name.
More information on the array:

    # mdadm -E /dev/md127
/dev/md127:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : af791bfa
         Family : af791bfa
     Generation : 00000019
           UUID : 438e7dfa:936d0f29:5c4b2c0d:106da7cf
       Checksum : 1bae98dd correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk02 Serial : S13PJ9AQ923317
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[raid]:
           UUID : 53e9eb47:c77c7222:20004377:481f36d6
     RAID Level : 5
        Members : 4
          Slots : [UUUU]
      This Slot : 2
     Array Size : 5860560896 (2794.53 GiB 3000.61 GB)
   Per Dev Size : 1953520640 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630940
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : initialize
      Map State : normal <-- uninitialized
     Checkpoint : 93046 (1024)
    Dirty State : clean

  Disk00 Serial : S13PJ9AQ923313
          State : active
             Id : 00040000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

  Disk01 Serial : S13PJ9AQ923315
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

  Disk03 Serial : S13PJ9AQ923320
          State : active
             Id : 00050000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

More Details on the container:
# mdadm -D /dev/md127
/dev/md127:
        Version : imsm
     Raid Level : container
  Total Devices : 4

Working Devices : 4


           UUID : 438e7dfa:936d0f29:5c4b2c0d:106da7cf
  Member Arrays :

    Number   Major   Minor   RaidDevice

       0       8       48        -        /dev/sdd
       1       8       32        -        /dev/sdc
       2       8       16        -        /dev/sdb
       3       8       64        -        /dev/sde


mdstat has the following output:
    # cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
      9028 blocks super external:imsm

unused devices: <none>

I started the raid by entering the command:
    # mdadm -I -e imsm /dev/md127
    mdadm: Started /dev/md/raid with 4 devices

Now mdstat has the following output:
    # cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md126 : active (read-only) raid5 sdd[3] sdc[2] sdb[1] sde[0]
      2930280448 blocks super external:/md127/0 level 5, 128k chunk,
algorithm 0 [4/4] [UUUU]
      	resync=PENDING

md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
      9028 blocks super external:imsm

unused devices: <none>

I learned that md126 stays read-only until it is used for the first
time. So I tried to create a filesystem following the documentation from
the wiki, but with ext4 instead of ext3.

 #  mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md/raid
The result was:
mke2fs 1.41.14 (22-Dec-2010)
fs_types for mke2fs.conf resolution: 'ext4'
/dev/md/raid: Operation not permitted while creating the superblock.
(Original message in German: Die Operation ist nicht erlaubt beim
Erstellen des Superblocks)

This is the first problem. I am not able to do anything on the raid drive.

So I thought maybe a reboot would help and stored the configuration of
the raid in mdadm.conf using the command:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

By the way, the output of the command is:
ARRAY /dev/md/imsm metadata=imsm UUID=438e7dfa:936d0f29:5c4b2c0d:106da7cf
ARRAY /dev/md/raid container=/dev/md/imsm member=0
UUID=53e9eb47:c77c7222:20004377:481f36d6

I stored the mdadm.conf configuration file in both /etc and
/etc/mdadm/, because I was not sure which one is used. The content is
the following:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/imsm metadata=imsm UUID=438e7dfa:936d0f29:5c4b2c0d:106da7cf
ARRAY /dev/md/raid container=/dev/md/imsm member=0
UUID=53e9eb47:c77c7222:20004377:481f36d6

The second problem is that the raid is gone after the reboot!

Can anyone help me? What am I doing wrong??? What is missing?

Any help is appreciated.

Thanks,

Iwan

* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-16 20:19 IMSM Raid 5 always read only and gone after reboot Iwan Zarembo
@ 2011-08-17  4:08 ` Daniel Frey
  2011-08-17  4:15 ` Daniel Frey
  1 sibling, 0 replies; 8+ messages in thread
From: Daniel Frey @ 2011-08-17  4:08 UTC (permalink / raw)
  To: Iwan Zarembo; +Cc: linux-raid

On 08/16/11 13:19, Iwan Zarembo wrote:
> Hello Everyone,
> I am not that new to Linux, but I am quite far from being an
> expert :) I have been using Linux for a few years now and everything
> worked just fine, but not this time with IMSM RAID. I googled for some
> weeks and asked everyone I know about the problem, but without any
> luck. The last possibility to find a solution is this mailing list. I
> really hope someone can help me.
> 

I've just gone through this myself.

(snip)

> 
> I started the raid by entering the command:
>     # mdadm -I -e imsm /dev/md127
>     mdadm: Started /dev/md/raid with 4 devices
> 
> Now mdstat has the following output:
>     # cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md126 : active (read-only) raid5 sdd[3] sdc[2] sdb[1] sde[0]
>       2930280448 blocks super external:/md127/0 level 5, 128k chunk,
> algorithm 0 [4/4] [UUUU]
>       	resync=PENDING
> 
> md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
>       9028 blocks super external:imsm
> 
> unused devices: <none>
> 

What you are seeing here is the imsm container (/dev/md127), which you
generally don't use unless you are trying to reconfigure arrays.

The other device (/dev/md126) is the actual raid5 array as defined in
the imsm BIOS. This is what you use in disk operations. Examples:

$ parted /dev/md126

Then create partitions on the device - you'll have to use something
compatible with gpt tables. When you do this, you'll have new devices
available to you, such as /dev/md126p1 (first partition), /dev/md126p2
(second partition), and so on.
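
Something along these lines inside parted should do it (untested here;
adjust the partition size to taste):

(parted) mklabel gpt
(parted) mkpart primary 1MiB 100%
(parted) quit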

However, if all you're doing is creating one big partition, you don't
necessarily need a partition table; you can create a filesystem right on
the array itself (/dev/md126), as I did on my server.

> I learned that md126 stays read-only until it is used for the first
> time. So I tried to create a filesystem following the documentation
> from the wiki, but with ext4 instead of ext3.
> 
>  #  mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md/raid
> The result was:
> mke2fs 1.41.14 (22-Dec-2010)
> fs_types for mke2fs.conf resolution: 'ext4'
> /dev/md/raid: Operation not permitted while creating the superblock.
> (Original message in German: Die Operation ist nicht erlaubt beim
> Erstellen des Superblocks)
> 
> This is the first problem. I am not able to do anything on the raid drive.
> 

Try using:

`mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md126`

This, of course, assumes you aren't using a partition table. Substitute
the correct partition device should you create one.

> The second problem is that the raid is gone after the reboot!

I'm not familiar with Ubuntu, but you likely need to add a service to
the startup scripts in order to start the raid array and have it usable,
assuming that this is not the root device and is just being used for
storage. It may need kernel arguments to tell mdadm to find and assemble
arrays. Hopefully someone with Ubuntu experience can answer this.

I know on my distribution (not Ubuntu!) I have to add a service to the
boot runlevel in order to assemble arrays that are not the root filesystem.
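
From what I've read (so treat this as a guess), on Debian/Ubuntu the
arrays to assemble are listed in /etc/mdadm/mdadm.conf and the initramfs
has to be regenerated after changing it, roughly:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u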

Dan

* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-16 20:19 IMSM Raid 5 always read only and gone after reboot Iwan Zarembo
  2011-08-17  4:08 ` Daniel Frey
@ 2011-08-17  4:15 ` Daniel Frey
  2011-08-19 19:46   ` Iwan Zarembo
  2011-08-24 17:09   ` Iwan Zarembo
  1 sibling, 2 replies; 8+ messages in thread
From: Daniel Frey @ 2011-08-17  4:15 UTC (permalink / raw)
  To: Iwan Zarembo; +Cc: linux-raid

On 08/16/11 13:19, Iwan Zarembo wrote:
> I also wanted to reuse my four old SAMSUNG HD103UJ 1 TB hard drives.
> In the past I used mdadm software RAID level 5 and everything worked
> just fine. Now with the upgrade I wanted to use the Intel RAID
> controller on my mainboard. The advantage is that I would be able to
> access the RAID volume from my alternative Windows system.

(snip)
> 
> I learned that md126 stays read-only until it is used for the first
> time. So I tried to create a filesystem following the documentation
> from the wiki, but with ext4 instead of ext3.
> 
>  #  mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md/raid
> The result was:
> mke2fs 1.41.14 (22-Dec-2010)
> fs_types for mke2fs.conf resolution: 'ext4'
> /dev/md/raid: Operation not permitted while creating the superblock.
> (Original message in German: Die Operation ist nicht erlaubt beim
> Erstellen des Superblocks)
> 

Are ext4 partitions even accessible in Windows? I remember trying that
myself and not finding a solution.

My comment in the previous message about creating the filesystem on the
raid directly is probably not a good idea in this case; Windows probably
won't like that. You'd have to create a partition.

Dan

* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-17  4:15 ` Daniel Frey
@ 2011-08-19 19:46   ` Iwan Zarembo
  2011-08-24 17:09   ` Iwan Zarembo
  1 sibling, 0 replies; 8+ messages in thread
From: Iwan Zarembo @ 2011-08-19 19:46 UTC (permalink / raw)
  To: Daniel Frey; +Cc: linux-raid

Hello Daniel,
Thank you for your fast reply, but it still does not work.
First, about how to access Linux (ext2/3/4) partitions from Windows: I
am using the Ext2Fsd manager for that. It works perfectly for read-only
access, but it causes a bit of trouble in write-enabled mode. Just give
it a try :)

I understand how it works with the partitions. I also created a
partition table with Windows and it is accessible from Windows. Now if I
try to work with parted I get the following output (the original is in
German; translated here):
# parted /dev/md126
GNU Parted 2.3
Using /dev/md126
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md126: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   Filesystem  Name                          Flags
 1      17.4kB  134MB  134MB              Microsoft reserved partition  msftres

(parted) rm 1
Error: The operation is not allowed while writing to /dev/md126.

Retry/Ignore/Cancel?

So I tried to mark the partition as read-write using:
# mdadm --readwrite /dev/md126p1

Then I started parted again and tried the same thing, but the deletion
never completes.
When I open palimpsest I see the status of the raid md126 as
write-pending.
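
I assume the same state can also be read directly from sysfs with
something like:

# cat /sys/block/md126/md/array_state

which I would expect to print 'write-pending' here.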

After a while I also checked syslog; it has the following entries:
md: md126 switched to read-write mode.
md: resync of RAID array md126
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 
KB/sec) for resync.
md: using 128k window, over a total of 976760320 blocks.
md: resuming resync of md126 from checkpoint.
INFO: task parted:23009 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
parted          D 0000000000000000     0 23009   2390 0x00000000
  ffff88040c431908 0000000000000086 ffff88040c431fd8 ffff88040c430000
  0000000000013d00 ffff8803d40d03b8 ffff88040c431fd8 0000000000013d00
  ffffffff81a0b020 ffff8803d40d0000 0000000000000286 ffff880432edd278
Call Trace:
  [<ffffffff8147d215>] md_write_start+0xa5/0x1c0
  [<ffffffff81087fb0>] ? autoremove_wake_function+0x0/0x40
  [<ffffffffa0111234>] make_request+0x44/0x3f0 [raid456]
  [<ffffffff8113a6dd>] ? page_add_new_anon_rmap+0x8d/0xa0
  [<ffffffff81038c79>] ? default_spin_lock_flags+0x9/0x10
  [<ffffffff812cf5c1>] ? blkiocg_update_dispatch_stats+0x91/0xb0
  [<ffffffff8147924e>] md_make_request+0xce/0x210
  [<ffffffff8107502b>] ? lock_timer_base.clone.20+0x3b/0x70
  [<ffffffff81113442>] ? prep_new_page+0x142/0x1b0
  [<ffffffff812c11c8>] generic_make_request+0x2d8/0x5c0
  [<ffffffff8110e7c5>] ? mempool_alloc_slab+0x15/0x20
  [<ffffffff8110eb09>] ? mempool_alloc+0x59/0x140
  [<ffffffff812c1539>] submit_bio+0x89/0x120
  [<ffffffff8119773b>] ? bio_alloc_bioset+0x5b/0xf0
  [<ffffffff8119192b>] submit_bh+0xeb/0x120
  [<ffffffff81193670>] __block_write_full_page+0x210/0x3a0
  [<ffffffff81192760>] ? end_buffer_async_write+0x0/0x170
  [<ffffffff81197f90>] ? blkdev_get_block+0x0/0x70
  [<ffffffff81197f90>] ? blkdev_get_block+0x0/0x70
  [<ffffffff81194513>] block_write_full_page_endio+0xe3/0x120
  [<ffffffff8110c6b0>] ? find_get_pages_tag+0x40/0x120
  [<ffffffff81194565>] block_write_full_page+0x15/0x20
  [<ffffffff81198b18>] blkdev_writepage+0x18/0x20
  [<ffffffff811158a7>] __writepage+0x17/0x40
  [<ffffffff81115f2d>] write_cache_pages+0x1ed/0x470
  [<ffffffff81115890>] ? __writepage+0x0/0x40
  [<ffffffff811161d4>] generic_writepages+0x24/0x30
  [<ffffffff81117191>] do_writepages+0x21/0x40
  [<ffffffff8110d5bb>] __filemap_fdatawrite_range+0x5b/0x60
  [<ffffffff8110d61a>] filemap_write_and_wait_range+0x5a/0x80
  [<ffffffff8118fc7a>] vfs_fsync_range+0x5a/0x90
  [<ffffffff8118fd1c>] vfs_fsync+0x1c/0x20
  [<ffffffff8118fd5a>] do_fsync+0x3a/0x60
  [<ffffffff8118ffd0>] sys_fsync+0x10/0x20
  [<ffffffff8100c002>] system_call_fastpath+0x16/0x1b
INFO: task flush-9:126:23013 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
flush-9:126     D 0000000000000005     0 23013      2 0x00000000
  ffff880427bff690 0000000000000046 ffff880427bfffd8 ffff880427bfe000
  0000000000013d00 ffff880432ea03b8 ffff880427bfffd8 0000000000013d00
  ffff88045c6d5b80 ffff880432ea0000 0000000000000bb8 ffff880432edd278

The call trace entries recur every 120 seconds.
I am not sure, but it looks like mdadm, or something mdadm uses, has a
bug. :S

I would like to focus on this error. It is not a big problem that the
array is not displayed after reboot.
@HTH: I used dpkg-reconfigure mdadm and enabled autostarting the daemon,
but I assume it does not work due to the error above. Syslog has these
entries:
kernel: [  151.885406] md: md127 stopped.
kernel: [  151.895662] md: bind<sdd>
kernel: [  151.895788] md: bind<sdc>
kernel: [  151.895892] md: bind<sdb>
kernel: [  151.895984] md: bind<sde>
kernel: [  154.085294] md: bind<sde>
kernel: [  154.085448] md: bind<sdb>
kernel: [  154.085553] md: bind<sdc>
kernel: [  154.085654] md: bind<sdd>
kernel: [  154.144676] bio: create slab <bio-1> at 1
kernel: [  154.144689] md/raid:md126: not clean -- starting background 
reconstruction
kernel: [  154.144700] md/raid:md126: device sdd operational as raid disk 0
kernel: [  154.144702] md/raid:md126: device sdc operational as raid disk 1
kernel: [  154.144705] md/raid:md126: device sdb operational as raid disk 2
kernel: [  154.144707] md/raid:md126: device sde operational as raid disk 3
kernel: [  154.145224] md/raid:md126: allocated 4282kB
kernel: [  154.145320] md/raid:md126: raid level 5 active with 4 out of 
4 devices, algorithm 0
kernel: [  154.145324] RAID conf printout:
kernel: [  154.145326]  --- level:5 rd:4 wd:4
kernel: [  154.145328]  disk 0, o:1, dev:sdd
kernel: [  154.145330]  disk 1, o:1, dev:sdc
kernel: [  154.145332]  disk 2, o:1, dev:sdb
kernel: [  154.145334]  disk 3, o:1, dev:sde
kernel: [  154.145367] md126: detected capacity change from 0 to 
3000607178752
mdadm[1188]: NewArray event detected on md device /dev/md126
kernel: [  154.174753]  md126: p1
mdadm[1188]: RebuildStarted event detected on md device /dev/md126


Kind Regards,

Iwan

* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-17  4:15 ` Daniel Frey
  2011-08-19 19:46   ` Iwan Zarembo
@ 2011-08-24 17:09   ` Iwan Zarembo
  2011-08-24 23:54     ` NeilBrown
  1 sibling, 1 reply; 8+ messages in thread
From: Iwan Zarembo @ 2011-08-24 17:09 UTC (permalink / raw)
  Cc: linux-raid

Hi Everyone,
Nothing from the last mail worked, so I tried again with a different
approach. Here is what I did:

1. I stopped and deleted the array using:
mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm --remove /dev/md127
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdd
mdadm --zero-superblock /dev/sde

2. I deleted all data (including partition table) on every HDD:
dd if=/dev/zero of=/dev/sd[b-e] bs=512 count=1

3. Checked if mdadm --assemble --scan can find any arrays, but it did
not find anything.

4. I created the array again using 
https://raid.wiki.kernel.org/index.php/RAID_setup#External_Metadata
mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 
--metadata=imsm
mdadm --create --verbose /dev/md/raid /dev/md/imsm --raid-devices 4 
--level 5

The new array did not have any partitions, since I had deleted
everything. So everything looked good.
The details are:
# mdadm -D /dev/md127
/dev/md127:
         Version : imsm
      Raid Level : container
   Total Devices : 4

Working Devices : 4


            UUID : 790217ac:df4a8367:7892aaab:b822d6eb
   Member Arrays :

     Number   Major   Minor   RaidDevice

        0       8       16        -        /dev/sdb
        1       8       32        -        /dev/sdc
        2       8       48        -        /dev/sdd
        3       8       64        -        /dev/sde

# mdadm -D /dev/md126
/dev/md126:
       Container : /dev/md/imsm, member 0
      Raid Level : raid5
      Array Size : 2930280448 (2794.53 GiB 3000.61 GB)
   Used Dev Size : 976760320 (931.51 GiB 1000.20 GB)
    Raid Devices : 4
   Total Devices : 4

           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : left-asymmetric
      Chunk Size : 128K


            UUID : 4ebb43fd:6327cb4e:2506b1d3:572e774e
     Number   Major   Minor   RaidDevice State
        0       8       48        0      active sync   /dev/sdd
        1       8       32        1      active sync   /dev/sdc
        2       8       16        2      active sync   /dev/sdb
        3       8       64        3      active sync   /dev/sde

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
[raid4] [raid10]
md126 : active (read-only) raid5 sde[3] sdb[2] sdc[1] sdd[0]
       2930280448 blocks super external:/md127/0 level 5, 128k chunk, 
algorithm 0 [4/4] [UUUU]
           resync=PENDING

md127 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
       836 blocks super external:imsm

unused devices: <none>

Then I stored the configuration of the array using the command
mdadm --examine --scan >> /etc/mdadm.conf

5. I used dpkg-reconfigure mdadm to make sure mdadm starts properly at 
boot time.

6. I rebooted and checked whether the array shows up in the BIOS of the
Intel RAID. Yes, it exists and looks good there.

7. I still cannot see the created array. But in palimpsest I see that
my four hard drives are part of a raid.

8. I also checked the logs for any strange entries, but no success :S

9. I used mdadm --assemble --scan to see the array in palimpsest

10. Started sync process using command from 
http://linuxmonk.ch/trac/wiki/LinuxMonk/Sysadmin/SoftwareRAID#CheckRAIDstate

#echo active > /sys/block/md126/md/array_state

#cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
[raid4] [raid10]
md126 : active raid5 sdd[3] sdc[2] sdb[1] sde[0]
       2930280448 blocks super external:/md127/0 level 5, 128k chunk, 
algorithm 0 [4/4] [UUUU]
       [>....................]  resync =  0.9% (9029760/976760320) 
finish=151.7min speed=106260K/sec

The problem is that the raid was gone after a restart. So I did steps 9
and 10 again.

11. Then I started to create a GPT partition table with parted.
Unfortunately, mktable gpt on the device /dev/md/raid (or its link
target /dev/md126) never returned, even after a few hours.

I really do not know what else I need to do to get the raid working. Can 
someone help me? I do not think I am the first person having trouble 
with it :S

Kind Regards,

Iwan


* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-24 17:09   ` Iwan Zarembo
@ 2011-08-24 23:54     ` NeilBrown
  2011-08-25 19:16       ` Iwan Zarembo
  0 siblings, 1 reply; 8+ messages in thread
From: NeilBrown @ 2011-08-24 23:54 UTC (permalink / raw)
  To: Iwan Zarembo; +Cc: linux-raid

On Wed, 24 Aug 2011 19:09:13 +0200 Iwan Zarembo <iwan@zarembo.de> wrote:

> Hi Everyone,
> Nothing from the last mail worked, so I tried again with a different
> approach. Here is what I did:
> 
> 1. I stopped and deleted the array using:
> mdadm --stop /dev/md126
> mdadm --stop /dev/md127
> mdadm --remove /dev/md127
> mdadm --zero-superblock /dev/sdb
> mdadm --zero-superblock /dev/sdc
> mdadm --zero-superblock /dev/sdd
> mdadm --zero-superblock /dev/sde
> 
> 2. I deleted all data (including partition table) on every HDD:
> dd if=/dev/zero of=/dev/sd[b-e] bs=512 count=1
> 
> 3. Checked if mdadm --assemble --scan can find any arrays, but it did
> not find anything.
> 
> 4. I created the array again using 
> https://raid.wiki.kernel.org/index.php/RAID_setup#External_Metadata
> mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 
> --metadata=imsm
> mdadm --create --verbose /dev/md/raid /dev/md/imsm --raid-devices 4 
> --level 5
> 
> The new array did not have any partitions, since I had deleted
> everything. So everything looked good.
> The details are:
> # mdadm -D /dev/md127
> /dev/md127:
>          Version : imsm
>       Raid Level : container
>    Total Devices : 4
> 
> Working Devices : 4
> 
> 
>             UUID : 790217ac:df4a8367:7892aaab:b822d6eb
>    Member Arrays :
> 
>      Number   Major   Minor   RaidDevice
> 
>         0       8       16        -        /dev/sdb
>         1       8       32        -        /dev/sdc
>         2       8       48        -        /dev/sdd
>         3       8       64        -        /dev/sde
> 
> # mdadm -D /dev/md126
> /dev/md126:
>        Container : /dev/md/imsm, member 0
>       Raid Level : raid5
>       Array Size : 2930280448 (2794.53 GiB 3000.61 GB)
>    Used Dev Size : 976760320 (931.51 GiB 1000.20 GB)
>     Raid Devices : 4
>    Total Devices : 4
> 
>            State : clean
>   Active Devices : 4
> Working Devices : 4
>   Failed Devices : 0
>    Spare Devices : 0
> 
>           Layout : left-asymmetric
>       Chunk Size : 128K
> 
> 
>             UUID : 4ebb43fd:6327cb4e:2506b1d3:572e774e
>      Number   Major   Minor   RaidDevice State
>         0       8       48        0      active sync   /dev/sdd
>         1       8       32        1      active sync   /dev/sdc
>         2       8       16        2      active sync   /dev/sdb
>         3       8       64        3      active sync   /dev/sde
> 
> # cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md126 : active (read-only) raid5 sde[3] sdb[2] sdc[1] sdd[0]
>        2930280448 blocks super external:/md127/0 level 5, 128k chunk, 
> algorithm 0 [4/4] [UUUU]
>            resync=PENDING
> 
> md127 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
>        836 blocks super external:imsm
> 
> unused devices: <none>
> 
> Then I stored the configuration of the array using the command
> mdadm --examine --scan >> /etc/mdadm.conf
> 
> 5. I used dpkg-reconfigure mdadm to make sure mdadm starts properly at 
> boot time.
> 
> 6. I rebooted and checked whether the array shows up in the BIOS of
> the Intel RAID. Yes, it exists and looks good there.
> 
> 7. I still cannot see the created array. But in palimpsest I see that
> my four hard drives are part of a raid.
> 
> 8. I also checked the logs for any strange entries, but no success :S
> 
> 9. I used mdadm --assemble --scan to see the array in palimpsest
> 
> 10. Started sync process using command from 
> http://linuxmonk.ch/trac/wiki/LinuxMonk/Sysadmin/SoftwareRAID#CheckRAIDstate
> 
> #echo active > /sys/block/md126/md/array_state
> 
> #cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md126 : active raid5 sdd[3] sdc[2] sdb[1] sde[0]
>        2930280448 blocks super external:/md127/0 level 5, 128k chunk, 
> algorithm 0 [4/4] [UUUU]
>        [>....................]  resync =  0.9% (9029760/976760320) 
> finish=151.7min speed=106260K/sec
> 
> The problem is that the raid was gone after a restart. So I did steps
> 9 and 10 again.
> 
> 11. Then I started to create a GPT partition table with parted.
> Unfortunately, mktable gpt on the device /dev/md/raid (or its link
> target /dev/md126) never returned, even after a few hours.
> 
> I really do not know what else I need to do to get the raid working. Can 
> someone help me? I do not think I am the first person having trouble 
> with it :S

It sounds like mdmon is not being started.
mdmon monitors the array and performs any metadata updates required.

The reason mktable is taking more than a second is that it tries to write to
the array, the kernel marks the array as 'write-pending' and waits for mdmon
to notice, update the metadata, and switch the array to 'active'.  But mdmon
never does that.

mdmon should be started by mdadm but just to check you can start it by hand:

 /sbin/mdmon md126
or
 /sbin/mdmon --all

If this makes it work, you need to work out why mdmon isn't being started.
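
You can check whether mdmon is running for the container, and what state
the array is in, with something like:

 ps -ef | grep mdmon
 cat /sys/block/md126/md/array_state

If mdmon isn't running, the second command will most likely keep showing
'write-pending'.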

NeilBrown



* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-24 23:54     ` NeilBrown
@ 2011-08-25 19:16       ` Iwan Zarembo
  2011-08-26 10:54         ` linbloke
  0 siblings, 1 reply; 8+ messages in thread
From: Iwan Zarembo @ 2011-08-25 19:16 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


> It sounds like mdmon is not being started.
> mdmon monitors the array and performs any metadata updates required.
>
> The reason mktable is taking more than a second is that it tries to write to
> the array, the kernel marks the array as 'write-pending' and waits for mdmon
> to notice, update the metadata, and switch the array to 'active'.  But mdmon
> never does that.
>
> mdmon should be started by mdadm but just to check you can start it by hand:
>
>   /sbin/mdmon md126
> or
>   /sbin/mdmon --all
>
> If this makes it work, you need to work out why mdmon isn't being started.
>
> NeilBrown
>
Hello NeilBrown,
I finally found it. I was using mdadm 3.1.4, which is what is in the
Ubuntu repository. This version does not really support IMSM, and that
was the real problem. I noticed this because I did not have mdmon; it
does not exist in this old version. So I downloaded the latest official
release, 3.2.1, and installed it via make && sudo make install. Now
everything works perfectly. The array is available after reboot and the
synchronization process works via the BIOS and not via mdadm itself.
I would never have found out that the version was causing the trouble
without your comment about mdmon. Thank you.
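
In case it helps anyone else running into this, checking the installed
version and whether mdmon is present is as simple as something like:

# mdadm --version
# which mdmon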

@Daniel, linbloke: A big thank you to you both as well. I learned a lot
about RAID from this problem.

Cheers,

Iwan



* Re: IMSM Raid 5 always read only and gone after reboot
  2011-08-25 19:16       ` Iwan Zarembo
@ 2011-08-26 10:54         ` linbloke
  0 siblings, 0 replies; 8+ messages in thread
From: linbloke @ 2011-08-26 10:54 UTC (permalink / raw)
  To: Iwan Zarembo; +Cc: linux-raid

On 26/08/11 5:16 AM, Iwan Zarembo wrote:
>
>> It sounds like mdmon is not being started.
>> mdmon monitors the array and performs any metadata updates required.
>>
>> The reason mktable is taking more than a second is that it tries to 
>> write to
>> the array, the kernel marks the array as 'write-pending' and waits 
>> for mdmon
>> to notice, update the metadata, and switch the array to 'active'.  
>> But mdmon
>> never does that.
>>
>> mdmon should be started by mdadm but just to check you can start it 
>> by hand:
>>
>>   /sbin/mdmon md126
>> or
>>   /sbin/mdmon --all
>>
>> If this makes it work, you need to work out why mdmon isn't being 
>> started.
>>
>> NeilBrown
>>
> Hello NeilBrown,
> I finally found it. I was using mdadm 3.1.4, which is what is in the
> Ubuntu repository. This version does not really support IMSM, and that
> was the real problem. I noticed this because I did not have mdmon; it
> does not exist in this old version. So I downloaded the latest official
> release, 3.2.1, and installed it via make && sudo make install. Now
> everything works perfectly. The array is available after reboot and the
> synchronization process works via the BIOS and not via mdadm itself.
> I would never have found out that the version was causing the trouble
> without your comment about mdmon. Thank you.
>
> @Daniel, linbloke: A big thank you to you both as well. I learned a
> lot about RAID from this problem.
>

my advice didn't make it to the list (reply-all duh), but it was short 
and for the record:

On Debian-based systems, to specify which arrays are required at boot
time, try:

dpkg-reconfigure mdadm

It should ask which arrays (if any) to start on boot.

HTH

> Cheers,
>
> Iwan
>

Thread overview: 8+ messages
2011-08-16 20:19 IMSM Raid 5 always read only and gone after reboot Iwan Zarembo
2011-08-17  4:08 ` Daniel Frey
2011-08-17  4:15 ` Daniel Frey
2011-08-19 19:46   ` Iwan Zarembo
2011-08-24 17:09   ` Iwan Zarembo
2011-08-24 23:54     ` NeilBrown
2011-08-25 19:16       ` Iwan Zarembo
2011-08-26 10:54         ` linbloke
