* GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-19 11:03 UTC
  To: linux-raid

Hello,

can anyone tell me why the GPT table gets broken by a Linux installation?

I have a dual-boot system; the GPT table was created with the Windows tool "diskpart", and this is a (U)EFI installation. With Linux kernel 3.5.x I always get this message in the log:

Sep 17 09:54:27 techz kernel: [    1.204681] GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204685] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.204687] GPT:Alternate GPT header not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204689] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.204691] GPT: Use GNU Parted to correct GPT errors.
Sep 17 09:54:27 techz kernel: [    1.204710] sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6
Sep 17 09:54:27 techz kernel: [    1.205613] sd 1:0:0:0: [sdb] Attached SCSI disk
Sep 17 09:54:27 techz kernel: [    1.212374] GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.212377] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.212379] GPT:Alternate GPT header not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.212381] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.212382] GPT: Use GNU Parted to correct GPT errors.
Sep 17 09:54:27 techz kernel: [    1.212404] sda: sda1 sda2 sda3 sda4 sda5 sda6

When I try to repair this with parted, the system no longer boots and the Raid1 is destroyed.

--
Kind regards / best regards,
Günther J. Niederwimmer
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-20 2:39 UTC
  To: Linux RAID

On Sep 19, 2012, at 5:03 AM, Günther J. Niederwimmer wrote:
> Hello,
>
> can anyone tell me why the GPT table gets broken by a Linux installation?

What format superblock? My guess is 1.0, which stores the superblock at the end of the disk and may be stepping on the secondary GPT header. Then, when you repair the GPT, you wipe out part of the md superblock, which breaks your RAID.

It's confusing that the GPT table was created with diskpart instead of parted.

Chris Murphy
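The numbers in the kernel log fit this picture. A quick sanity check of the sector arithmetic (an illustration, assuming 512-byte sectors; the figures are taken from the log above):

```shell
# The raw disk has 625142448 sectors, so a correctly placed backup GPT
# header would sit in the last sector, LBA 625142447.  The primary header
# on these disks instead points at LBA 625137663, i.e. 4784 sectors short
# of the end -- room that something living at the end of the disk occupies.
DISK_SECTORS=625142448
EXPECTED_ALT=$((DISK_SECTORS - 1))
FOUND_ALT=625137663
echo "expected=$EXPECTED_ALT found=$FOUND_ALT gap=$((EXPECTED_ALT - FOUND_ALT))"
```

The 4784-sector gap is the size of the region between the end of the RAID-visible area and the physical end of the disk.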
* Re: GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-20 11:05 UTC
  To: linux-raid

Hello,

On Wednesday, 19 September 2012, 20:39:55, Chris Murphy wrote:
> What format superblock? My guess is 1.0, which stores the superblock at
> the end of the disk and may be stepping on the secondary GPT header.
> Then, when you repair the GPT, you wipe out part of the md superblock,
> which breaks your RAID.
>
> It's confusing that the GPT table was created with diskpart instead of
> parted.

This is "normal": I install Windows 7 first ;). But I also get the wrong-GPT-table message on a Raid10 with "Intel Matrix Storage" and a dmraid installation.

The question is: does parted work correctly with a GPT table on RAID?

--
Kind regards / best regards,
Günther J. Niederwimmer
* Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-20 17:34 UTC
  To: Linux RAID

On Sep 20, 2012, at 5:05 AM, Günther J. Niederwimmer wrote:
> This is "normal": I install Windows 7 first ;). But I also get the
> wrong-GPT-table message on a Raid10 with "Intel Matrix Storage" and a
> dmraid installation.

OK, you didn't answer my question about which mdadm metadata version your md RAID is using. If it's 1.0 instead of 1.2, that could be the culprit.

> The question is: does parted work correctly with a GPT table on RAID?

Could be a bug in parted 2.3, which is old. Try using parted 3.0.

Another question: do you really think it's a good idea to be mixing different RAID implementations on the same physical devices, which also appear to contain boot volumes? I think one of your RAID implementations is stepping on the GPT secondary header. I've seen this before with a BIOS "fakeraid" implementation. So I would look at all of your RAID implementations and make sure they're using the latest versions.

Chris Murphy
* Re: GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-21 11:42 UTC
  To: linux-raid

Hello Chris,

On Thursday, 20 September 2012, 11:34:12, Chris Murphy wrote:
> OK, you didn't answer my question about which mdadm metadata version your
> md RAID is using. If it's 1.0 instead of 1.2, that could be the culprit.

Thank you for the answer, and excuse me ;). OK, I have

mdadm-3.2.5-3.7.1
parted-2.4-24.2.2

on the computer.

> Could be a bug in parted 2.3, which is old. Try using parted 3.0.

I will search ;)

> Another question: do you really think it's a good idea to be mixing
> different RAID implementations on the same physical devices, which also
> appear to contain boot volumes?

My installation is: Win7 on the Raid1 (GPT, EFI). The Raid1 has partitions: five for NTFS/Windows and one ext4 for the "/home" directory. My Linux is on an SSD, which is also GPT/EFI formatted.

> I think one of your RAID implementations is stepping on the GPT secondary
> header. I've seen this before with a BIOS "fakeraid" implementation.

Yes, it is an Intel Matrix Storage system; the board is a DX58SO2 with the latest BIOS & firmware for the "fakeraid" ;).

My question is: Windows can work with no problem on the "fakeraid" with a GPT table. Could it be that the kernel and/or parted have a problem with the GPT table on a Raid1 or Raid10 fakeraid? I have this problem on all three of my Intel systems, including my small 19" Intel server.

--
Kind regards / best regards,
Günther J. Niederwimmer
* Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-21 19:35 UTC
  To: Linux RAID

On Sep 21, 2012, at 5:42 AM, Günther J. Niederwimmer wrote:
> I have
> mdadm-3.2.5-3.7.1

If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure:

mdadm -E /dev/mdX

> parted-2.4-24.2.2
>
>> Could be a bug in parted 2.3, which is old. Try using parted 3.0.
>
> I will search ;)

Or track down gdisk (a.k.a. GPT fdisk), which I prefer to parted anyway.

> My question is: Windows can work with no problem on the "fakeraid" with a
> GPT table. Could it be that the kernel and/or parted have a problem with
> the GPT table on a Raid1 or Raid10 fakeraid?

Just because Windows works with fakeraid doesn't mean it's a Linux kernel problem. The kernel clearly isn't finding the secondary partition header where the primary one says it should be. So something is moving it, or stepping on it.

What Linux distribution and version are you using? The fact that you've got an older parted makes me wonder if the distribution predates the fix for dmraid and GPT, which is to use kpartx for activating partitions.

https://bugs.launchpad.net/ubuntu/+bug/777056

Another possibility is that Windows 7 is moving it, and is perfectly OK with the move, but then that breaks everything else.

http://ubuntuforums.org/showpost.php?p=10449114&postcount=12

Chris
* Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-21 22:43 UTC
  To: Linux RAID

On Sep 21, 2012, at 1:35 PM, Chris Murphy wrote:
> If you're making the RAID with that, it defaults to metadata version 1.2.
> But to be sure:
> mdadm -E /dev/mdX

Scratch that. I was confused. Try these instead:

mdadm --detail-platform
mdadm -D /dev/md/imsm
mdadm -E /dev/sdX

Anyway, I'm suspicious that you've either got RAID also enabled on your SATA controller, or you're also using dmraid and it's conflicting with the md driver. Or you've misconfigured mdadm for imsm. The result is that the secondary GPT is getting squished. I think you should read this document, as it proposes creating a container first, then the RAID within that. If you're creating the RAID entirely from within Windows, this may not be what it does.

http://download.intel.com/design/intarch/PAPERS/326024.pdf

Chris Murphy
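For reference, the container-first layout the Intel paper describes would look roughly like the sketch below. The device names are assumptions, and the commands are only printed, not executed — running them against disks that hold data would destroy that data:

```shell
# A hedged sketch (assumed device names) of container-first IMSM creation;
# printed only, so nothing here touches a disk.
SKETCH='mdadm --create /dev/md/imsm0 -e imsm -n 2 /dev/sda /dev/sdb   # IMSM container first
mdadm --create /dev/md/Volume0 -l 1 -n 2 /dev/md/imsm0               # then RAID1 inside it'
printf '%s\n' "$SKETCH"
```

The point of the container-first order is that the member disks carry only IMSM metadata; partitioning happens afterwards, on the assembled volume.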
* Re: GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-22 8:37 UTC
  To: linux-raid

Hello Chris,

On Friday, 21 September 2012, 16:43:52, Chris Murphy wrote:
> Scratch that. I was confused. Try these instead:

This is the output from an openSUSE 12.2 system (DX58SO2):

> mdadm --detail-platform

       Platform : Intel(R) Matrix Storage Manager
        Version : 11.0.0.1339
    RAID Levels : raid0 raid1 raid10 raid5
    Chunk Sizes : 4k 8k 16k 32k 64k 128k
    2TB volumes : supported
      2TB disks : supported
      Max Disks : 6
    Max Volumes : 2 per array, 4 per controller
 I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)

> mdadm -D /dev/md/imsm

On my system I have an /imsm0:

/dev/md/imsm0:
        Version : imsm
     Raid Level : container
  Total Devices : 2
Working Devices : 2
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
  Member Arrays : /dev/md/Volume0

    Number   Major   Minor   RaidDevice
       0       8        0        -        /dev/sda
       1       8       16        -        /dev/sdb

> mdadm -E /dev/sdX

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : e3958f4b
         Family : e3958f4b
     Generation : 00013417
     Attributes : All supported
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
       Checksum : 3e9e527c correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : 6QF4WDE3
          State : active
             Id : 00000000
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
           UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 625137664 (298.09 GiB 320.07 GB)
   Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
  Sector Offset : 0
    Num Stripes : 2441944
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : dirty

  Disk01 Serial : 6QF4WF5Z
          State : active
             Id : 00000001
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : e3958f4b
         Family : e3958f4b
     Generation : 0001342f
     Attributes : All supported
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
       Checksum : 3e9e5294 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : 6QF4WF5Z
          State : active
             Id : 00000001
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
           UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
     Array Size : 625137664 (298.09 GiB 320.07 GB)
   Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
  Sector Offset : 0
    Num Stripes : 2441944
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : dirty

  Disk00 Serial : 6QF4WDE3
          State : active
             Id : 00000000
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

> Anyway, I'm suspicious that you've either got RAID also enabled on your
> SATA controller, or you're also using dmraid and it's conflicting with
> the md driver. Or you've misconfigured mdadm for imsm. The result is that
> the secondary GPT is getting squished. I think you should read this
> document, as it proposes creating a container first, then the RAID within
> that. If you're creating the RAID entirely from within Windows, this may
> not be what it does.
>
> http://download.intel.com/design/intarch/PAPERS/326024.pdf

I am working on reading all of this. I installed with YaST2, and I hope it is only mdadm, not everything together ;).

But I think I read in the changelog for parted 3.1 that a Raid(1) GPT error bug was corrected? I hope I can build a working parted 3.1 package for openSUSE 12.2 (I am not a programmer).

Thanks for the hint about Fedora and the tool gdisk; I hadn't found it before.

--
Kind regards / best regards,
Günther J. Niederwimmer
* Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-22 18:30 UTC
  To: Linux RAID

On Sep 22, 2012, at 2:37 AM, Günther J. Niederwimmer wrote:
> But I think I read in the changelog for parted 3.1 that a Raid(1) GPT
> error bug was corrected?

I don't think this is related to parted, honestly. Did you partition sda and sdb before you created the RAID container? I think the problem is that the IMSM metadata has stepped on the secondary GPT at the end of sda and sdb; that's what your original post indicates.

I could be wrong, but it seems like the problem is that sda and sdb shouldn't even have a GPT in the first place. They should just be treated as raw physical devices for Intel RAID to take over, and you'd only create a GPT for the virtual device (the array) within. If that's correct, the way to get rid of the kernel GPT error messages would be to remove the unnecessary GPT structures from sda and sdb, leaving only the IMSM metadata intact.

Chris Murphy
* Hello,Re: GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-23 6:47 UTC
  To: linux-raid

Hello,

On Saturday, 22 September 2012, 12:30:28, Chris Murphy wrote:
> I could be wrong, but it seems like the problem is that sda and sdb
> shouldn't even have a GPT in the first place. They should just be treated
> as raw physical devices for Intel RAID to take over, and you'd only
> create a GPT for the virtual device (the array) within. If that's
> correct, the way to get rid of the kernel GPT error messages would be to
> remove the unnecessary GPT structures from sda and sdb, leaving only the
> IMSM metadata intact.

My steps to install a dual-boot computer (but I can also install Linux alone, with the same result):

I create the arrays with the on-board BIOS. Then I run a tool, usually "diskpart" (for dual boot), to create a GPT table. The install method is EFI. Afterwards I install Win7. The next step is to install a Linux (usually openSUSE).

Or, for a server, I install Linux on a BIOS-created array (Raid1 or Raid10) and create a GPT table; the result is the same.

OK, the system runs, but if you ever start parted or another tool and answer "yes" to anything, you have a destroyed system :(.

--
Kind regards / best regards,
Günther J. Niederwimmer
* Re: Hello,Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-23 7:17 UTC
  To: Linux RAID

On Sep 23, 2012, at 12:47 AM, Günther J. Niederwimmer wrote:
> OK, the system runs, but if you ever start parted or another tool and
> answer "yes" to anything, you have a destroyed system :(

This will not destroy anything:

parted -l

or

gdisk -l /dev/sda
gdisk -l /dev/sdb

I think those disks have latent GPTs on them, and that's the source of the (likely ignorable) problem.

Chris Murphy
* Re: Hello,Re: GPT Table broken on a Raid1
  From: Günther J. Niederwimmer @ 2012-09-24 7:28 UTC
  To: linux-raid

Hello Chris,

On Sunday, 23 September 2012, 01:17:43, Chris Murphy wrote:
> This will not destroy anything:
>
> parted -l

Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel?

Now I know: do NOT Fix ;).

Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? i
Warning: Not all of the space available to /dev/sda appears to be used, you
can fix the GPT to use all of the space (an extra 4784 blocks) or continue
with the current setting?
Fix/Ignore? i

Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End     Size    File system  Name                          Flags
 1      1049kB  106MB   105MB   fat32        EFI system partition          boot
 2      106MB   240MB   134MB                Microsoft reserved partition  msftres
 3      240MB   105GB   105GB   ntfs         Basic data partition
 4      105GB   190GB   84,9GB  ntfs         Basic data partition
 5      190GB   212GB   22,0GB  ntfs         Basic data partition
 6      212GB   268GB   55,6GB  ext4         Basic data partition

Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? I

Model: ATA ST3320620AS (scsi)
Disk /dev/sdb: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End     Size    File system  Name                          Flags
 1      1049kB  106MB   105MB   fat32        EFI system partition          boot
 2      106MB   240MB   134MB                Microsoft reserved partition  msftres
 3      240MB   105GB   105GB   ntfs         Basic data partition
 4      105GB   190GB   84,9GB  ntfs         Basic data partition
 5      190GB   212GB   22,0GB  ntfs         Basic data partition
 6      212GB   268GB   55,6GB  ext4         Basic data partition

Model: ATA ST3320613AS (scsi)
Disk /dev/sdc: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  320GB  320GB  primary  ext4         type=83

Model: ATA ST3320613AS (scsi)
Disk /dev/sdd: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  4392MB  4391MB  primary  linux-swap(v1)  type=82
 2      4392MB  320GB   316GB   primary  ext4            type=83

Model: ATA OCZ-VERTEX4 (scsi)
Disk /dev/sde: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  181MB  180MB  fat32        primary  boot
 2      181MB   116GB  116GB               primary

Model: Linux Software RAID Array (md)
Disk /dev/md126: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End     Size    File system  Name                          Flags
 1      1049kB  106MB   105MB   fat32        EFI system partition          boot
 2      106MB   240MB   134MB                Microsoft reserved partition  msftres
 3      240MB   105GB   105GB   ntfs         Basic data partition
 4      105GB   190GB   84,9GB  ntfs         Basic data partition
 5      190GB   212GB   22,0GB  ntfs         Basic data partition
 6      212GB   268GB   55,6GB  ext4         Basic data partition

> or
>
> gdisk -l /dev/sda

GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          206847   100.0 MiB   EF00  EFI system partition
   2          206848          468991   128.0 MiB   0C01  Microsoft reserved part
   3          468992       205297663   97.7 GiB    0700  Basic data partition
   4       205297664       371181567   79.1 GiB    0700  Basic data partition
   5       371181568       414189567   20.5 GiB    0700  Basic data partition
   6       414189568       522733567   51.8 GiB    0700  Basic data partition

> gdisk -l /dev/sdb

GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          206847   100.0 MiB   EF00  EFI system partition
   2          206848          468991   128.0 MiB   0C01  Microsoft reserved part
   3          468992       205297663   97.7 GiB    0700  Basic data partition
   4       205297664       371181567   79.1 GiB    0700  Basic data partition
   5       371181568       414189567   20.5 GiB    0700  Basic data partition
   6       414189568       522733567   51.8 GiB    0700  Basic data partition

> gdisk -l /dev/md126

GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/md126: 625137664 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          206847   100.0 MiB   EF00  EFI system partition
   2          206848          468991   128.0 MiB   0C01  Microsoft reserved part
   3          468992       205297663   97.7 GiB    0700  Basic data partition
   4       205297664       371181567   79.1 GiB    0700  Basic data partition
   5       371181568       414189567   20.5 GiB    0700  Basic data partition
   6       414189568       522733567   51.8 GiB    0700  Basic data partition

> I think those disks have latent GPTs on them, and that's the source of
> the (likely ignorable) problem.

;) I'm afraid this is a little over my head :(.

But I have one more question. Is this also the same problem with the GPT table? I made a test installation before, with dmraid for the Raid1: the /home directory on the Raid1, everything else on the SSD. After the installation, most of the time the system had a problem mounting "/home" or another partition on the Raid1 and fell into repair mode.

But the good thing about it: Grub2 found my Windows 7 installation on the Raid1 and made a start entry. With mdadm this is not working; Grub2 doesn't find Windows?

--
Kind regards / best regards,
Günther J. Niederwimmer
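One detail worth noting in the listings above (an observation drawn from the posted output, not a statement from the thread): /dev/sda, /dev/sdb and /dev/md126 all report the same disk GUID (AD126626-...), so the member disks are not carrying independent stale tables — with a sector offset of 0, RAID1 mirrors the array's GPT byte-for-byte onto both members. The backup header then sits at the array's last sector, which the larger raw disks report as misplaced:

```shell
# Figures from the gdisk listings above (512-byte sectors assumed).
MD_SECTORS=625137664     # size of /dev/md126, where the backup GPT really lives
RAW_SECTORS=625142448    # size of /dev/sda and /dev/sdb
echo "backup header LBA: $((MD_SECTORS - 1))"
echo "raw-disk last LBA: $((RAW_SECTORS - 1))"
echo "difference: $((RAW_SECTORS - MD_SECTORS)) sectors"
```

The difference is the same 4784 sectors that parted offered to "fix" — which would move the array's own backup table past the end of the array.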
* Re: Hello,Re: GPT Table broken on a Raid1
  From: Chris Murphy @ 2012-09-24 17:21 UTC
  To: Linux RAID

On Sep 24, 2012, at 1:28 AM, Günther J. Niederwimmer wrote:
> Model: ATA ST3320620AS (scsi)
> Disk /dev/sda: 320GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt_sync_mbr

Interesting. Parted thinks this disk has a hybrid MBR. Günther, what is the result from:

fdisk -l /dev/sda

? And one more; I'd like to see the result from:

mount

I want to make sure nothing is directly mounting sda or sdb.

> But the good thing about it: Grub2 found my Windows 7 installation on the
> Raid1 and made a start entry. With mdadm this is not working; Grub2
> doesn't find Windows?

Let's do one thing at a time. We don't even really have the basics covered yet. I am not able to reproduce your problem where the kernel and parted complain about the alternate GPT header not being at the end of the disk. I'm using kernel 3.5.3-1.fc17 and mdadm 3.2.5.

Chris Murphy
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-24 17:21 ` Chris Murphy @ 2012-09-24 19:06 ` Günther J. Niederwimmer 2012-09-24 20:18 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: Günther J. Niederwimmer @ 2012-09-24 19:06 UTC (permalink / raw) To: linux-raid Hello Chris, Am Montag, 24. September 2012, 11:21:35 schrieb Chris Murphy: > On Sep 24, 2012, at 1:28 AM, Günther J. Niederwimmer wrote: > > Model: ATA ST3320620AS (scsi) > > Disk /dev/sda: 320GB > > Sector size (logical/physical): 512B/512B > > Partition Table: gpt_sync_mbr > > Interesting. Parted thinks this disk has a hybrid MBR. Günther, what is the > result from: > > fdisk -l /dev/sda WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GN Parted. Disk /dev/sda: 320.1 GB, 320072933376 bytes 256 Köpfe, 63 Sektoren/Spur, 38761 Zylinder, zusammen 625142448 Sektoren Einheiten = Sektoren von 1 × 512 = 512 Bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Gerät boot. 
Anfang Ende Blöcke Id System /dev/sda1 1 4294967295 2147483647+ ee GPT > And one more, I'd like to see the result from: > mount devtmpfs on /dev type devtmpfs (rw,relatime,size=12354704k,nr_inodes=3088676,mode=755) tmpfs on /dev/shm type tmpfs (rw,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755) devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000) /dev/sde2 on / type btrfs (rw,relatime,ssd,space_cache) proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs (rw,relatime) tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/lib/systemd/systemd-cgroups- agent,name=systemd) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct) tmpfs on /var/lock type tmpfs (rw,nosuid,nodev,relatime,mode=755) tmpfs on /var/run type tmpfs (rw,nosuid,nodev,relatime,mode=755) tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) securityfs on /sys/kernel/security type securityfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) 
/dev/sde1 on /boot/efi type vfat (rw,relatime,fmask=0002,dmask=0002,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount- ro) fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime) /dev/sdd2 on /data1 type ext4 (rw,relatime,data=ordered) /dev/sdc1 on /data type ext4 (rw,relatime,data=ordered) /dev/md126p4 on /windows/D type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/md126p5 on /windows/E type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) /dev/md126p6 on /home type ext4 (rw,relatime,data=ordered) /dev/md126p3 on /windows/C type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) bbs:/data1/ on /datas1 type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.200,mountvers=3,mountport=1516,mountproto=udp,local_lock=none,addr=192.168.100.200) bbs:/data2/ on /datas2 type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.200,mountvers=3,mountport=1516,mountproto=udp,local_lock=none,addr=192.168.100.200) none on /var/lib/ntp/proc type proc (ro,nosuid,nodev,relatime) gvfs-fuse-daemon on /run/user/gjn/gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=1000,group_id=100 > > I want to make sure nothing is directly mounting sda or sdb. OK > > But the good thing from this, Grub2 found my Windows 7 Installation on the > > Raid1 and Make a Start Entry, with mdadm this is not working, Grub2 don't > > found the windows? > > Let's do one thing at a time. We don't even really have the basics covered > yet. 
> I am not able to reproduce your problem where the kernel and parted > complain about the alternate GPT header not being at the end of the disk. > I'm using kernel 3.5.3-1.fc17 and mdadm 3.2.5. OK Chris ;) Thanks for the help, -- mit freundlichen Grüßen / best Regards. Günther J. Niederwimmer -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-24 19:06 ` Günther J. Niederwimmer @ 2012-09-24 20:18 ` Chris Murphy 2012-09-24 21:06 ` Günther J. Niederwimmer 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-24 20:18 UTC (permalink / raw) To: Linux RAID On Sep 24, 2012, at 1:06 PM, Günther J. Niederwimmer wrote: > > Disk /dev/sda: 320.1 GB, 320072933376 bytes > 256 Köpfe, 63 Sektoren/Spur, 38761 Zylinder, zusammen 625142448 Sektoren > Einheiten = Sektoren von 1 × 512 = 512 Bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > Disk identifier: 0x00000000 > > Gerät boot. Anfang Ende Blöcke Id System > /dev/sda1 1 4294967295 2147483647+ ee GPT Well that's screwy. But at least the PMBR is protecting all sectors (and quite a bit more that don't exist) of this disk. Last time we checked, the array state was dirty. Is that still the case? Report the results from these two: mdadm -D /dev/md126 mdadm -Ds Chris Murphy -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
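Those "sectors that don't exist" are easy to account for: an MBR partition entry stores LBAs in 32-bit fields, and this protective entry is simply pinned at the 32-bit maximum rather than at the disk's real last sector. A quick check of the numbers quoted above (an illustrative sketch, not part of the original exchange):

```python
# Values taken from the fdisk output quoted above.
disk_sectors = 625142448      # total LBAs on the 320 GB disk
pmbr_end = 4294967295         # end sector of the 0xEE protective entry
pmbr_blocks = 2147483647      # fdisk's 1 KiB "Blocks" column (the "+" marks a remainder)

assert pmbr_end == 0xFFFFFFFF        # 32-bit LBA field at its maximum
assert pmbr_end > disk_sectors - 1   # hence it "protects" sectors the disk doesn't have
assert pmbr_end // 2 == pmbr_blocks  # entry spans sectors 1..end; halve for 1 KiB blocks
```

So the entry is screwy relative to the spec (which would end it at the disk's last LBA), but harmlessly so: it still covers the whole disk.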
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-24 20:18 ` Chris Murphy @ 2012-09-24 21:06 ` Günther J. Niederwimmer 2012-09-24 21:12 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: Günther J. Niederwimmer @ 2012-09-24 21:06 UTC (permalink / raw) To: linux-raid Hello Chris, On Monday, 24 September 2012, 14:18:27, Chris Murphy wrote: > On Sep 24, 2012, at 1:06 PM, Günther J. Niederwimmer wrote: > > Disk /dev/sda: 320.1 GB, 320072933376 bytes > > 256 Köpfe, 63 Sektoren/Spur, 38761 Zylinder, zusammen 625142448 Sektoren > > Einheiten = Sektoren von 1 × 512 = 512 Bytes > > Sector size (logical/physical): 512 bytes / 512 bytes > > I/O size (minimum/optimal): 512 bytes / 512 bytes > > Disk identifier: 0x00000000 > > > > Gerät boot. Anfang Ende Blöcke Id System > > > > /dev/sda1 1 4294967295 2147483647+ ee GPT > > Well that's screwy. But at least the PMBR is protecting all sectors (and > quite a bit more that don't exist) of this disk. > > Last time we checked, the array state was dirty. Is that still the case? > Report the results from these two: Yes Sir ;). > mdadm -D /dev/md126 /dev/md126: Container : /dev/md/imsm0, member 0 Raid Level : raid1 Array Size : 312568832 (298.09 GiB 320.07 GB) Used Dev Size : 312568964 (298.09 GiB 320.07 GB) Raid Devices : 2 Total Devices : 2 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc Number Major Minor RaidDevice State 1 8 0 0 active sync /dev/sda 0 8 16 1 active sync /dev/sdb > mdadm -Ds ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc Question: why does mdadm also create an md127? I only just noticed it.
/dev/md127: Version : imsm Raid Level : container Total Devices : 2 Working Devices : 2 UUID : 363f146f:e7f29dc8:f05996c3:577ead6a Member Arrays : /dev/md/Volume0 Number Major Minor RaidDevice 0 8 16 - /dev/sdb 1 8 0 - /dev/sda -- mit freundlichen Grüßen / best Regards. Günther J. Niederwimmer -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-24 21:06 ` Günther J. Niederwimmer @ 2012-09-24 21:12 ` Chris Murphy 2012-09-25 8:27 ` Günther J. Niederwimmer 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-24 21:12 UTC (permalink / raw) To: Linux RAID On Sep 24, 2012, at 3:06 PM, Günther J. Niederwimmer wrote: > >> mdadm -D /dev/md126 > > /dev/md126: > Container : /dev/md/imsm0, member 0 > Raid Level : raid1 > Array Size : 312568832 (298.09 GiB 320.07 GB) > Used Dev Size : 312568964 (298.09 GiB 320.07 GB) > Raid Devices : 2 > Total Devices : 2 > > State : active Probably clean. Try: mdadm -E /dev/sda > > ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a > ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 > UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc > > > Question: Why create mdadm also a md127, I found this now ? Kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my system, although it's using md126. Not sure if that's controllable or not. Maybe someone else can answer it. Chris Murphy-- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-24 21:12 ` Chris Murphy @ 2012-09-25 8:27 ` Günther J. Niederwimmer 2012-09-25 9:28 ` John Robinson 2012-09-25 16:55 ` Chris Murphy 0 siblings, 2 replies; 38+ messages in thread From: Günther J. Niederwimmer @ 2012-09-25 8:27 UTC (permalink / raw) To: linux-raid Hello Chris, On Monday, 24 September 2012, 15:12:22, Chris Murphy wrote: > On Sep 24, 2012, at 3:06 PM, Günther J. Niederwimmer wrote: > >> mdadm -D /dev/md126 > > > > /dev/md126: > > Container : /dev/md/imsm0, member 0 > > > > Raid Level : raid1 > > Array Size : 312568832 (298.09 GiB 320.07 GB) > > > > Used Dev Size : 312568964 (298.09 GiB 320.07 GB) > > > > Raid Devices : 2 > > > > Total Devices : 2 > > > > State : active > > Probably clean. Try: > > mdadm -E /dev/sda /dev/sda: Magic : Intel Raid ISM Cfg Sig. Version : 1.1.00 Orig Family : e3958f4b Family : e3958f4b Generation : 00017ffe Attributes : All supported UUID : 363f146f:e7f29dc8:f05996c3:577ead6a Checksum : 3e9e9e6b correct MPB Sectors : 1 Disks : 2 RAID Devices : 1 Disk00 Serial : 6QF4WDE3 State : active Id : 00000000 Usable Size : 625137928 (298.09 GiB 320.07 GB) [Volume0]: UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc RAID Level : 1 Members : 2 Slots : [UU] Failed disk : none This Slot : 0 Array Size : 625137664 (298.09 GiB 320.07 GB) Per Dev Size : 625137928 (298.09 GiB 320.07 GB) Sector Offset : 0 Num Stripes : 2441944 Chunk Size : 64 KiB Reserved : 0 Migrate State : idle Map State : normal Dirty State : dirty Disk01 Serial : 6QF4WF5Z State : active Id : 00000001 Usable Size : 625137928 (298.09 GiB 320.07 GB) > > ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a > > ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 > > UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc > > > > > > Question: Why create mdadm also a md127, I found this now ? > > Kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my > system, although it's using md126.
> Not sure if that's controllable or not. > Maybe someone else can answer it. No problem, but I'll say it anyway: to me it is a mystery. ;) -- mit freundlichen Grüßen / best Regards. Günther J. Niederwimmer -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-25 8:27 ` Günther J. Niederwimmer @ 2012-09-25 9:28 ` John Robinson 2012-09-25 16:55 ` Chris Murphy 1 sibling, 0 replies; 38+ messages in thread From: John Robinson @ 2012-09-25 9:28 UTC (permalink / raw) To: "Günther J. Niederwimmer"; +Cc: linux-raid On 25/09/2012 09:27, Günther J. Niederwimmer wrote: > Am Montag, 24. September 2012, 15:12:22 schrieb Chris Murphy: >> On Sep 24, 2012, at 3:06 PM, Günther J. Niederwimmer wrote: [...] >>> Question: Why create mdadm also a md127, I found this now ? >> >> Kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my >> system, although it's using md126. Not sure if that's controllable or not. >> Maybe someone else can answer it. > > No Problem, but i tell it to you, for me it is mystery. ;) It's the way IMSM works. You have md127 which is a "container", essentially spanning all the discs concerned. That container holds one or more RAID sets. You have md126 aka Volume0 as a RAID-1 set that fills the container, but you could have more than one RAID set, e.g. you could have a RAID-1 as md126 and a RAID-10 as md125. Cheers, John. -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-25 8:27 ` Günther J. Niederwimmer 2012-09-25 9:28 ` John Robinson @ 2012-09-25 16:55 ` Chris Murphy 2012-09-26 12:17 ` Günther J. Niederwimmer 1 sibling, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-25 16:55 UTC (permalink / raw) To: Linux RAID On Sep 25, 2012, at 2:27 AM, Günther J. Niederwimmer wrote: > > Dirty State : dirty Basically this is my only remaining concern, and while I don't know why it's dirty, I think it needs to be resolved if you care about having RAID 1 in the first place. My best guess is that neither md on Linux nor the IMSM driver on Windows have an unambiguous way to determine which disk is correct, which is why it hasn't just sync'd them. So you kinda have to pick one (?) and force a sync - again I'll have to defer to someone else to answer that question but it's probably not ideal to just leave it in a dirty state. Chris Murphy-- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-25 16:55 ` Chris Murphy @ 2012-09-26 12:17 ` Günther J. Niederwimmer 2012-09-26 19:33 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: Günther J. Niederwimmer @ 2012-09-26 12:17 UTC (permalink / raw) To: linux-raid Hello Chris, On Tuesday, 25 September 2012, 10:55:39, Chris Murphy wrote: > On Sep 25, 2012, at 2:27 AM, Günther J. Niederwimmer wrote: > > Dirty State : dirty > > Basically this is my only remaining concern, and while I don't know why it's > dirty, I think it needs to be resolved if you care about having RAID 1 in > the first place. My best guess is that neither md on Linux nor the IMSM > driver on Windows have an unambiguous way to determine which disk is > correct, which is why it hasn't just sync'd them. So you kinda have to pick > one (?) and force a sync - again I'll have to defer to someone else to > answer that question but it's probably not ideal to just leave it in a > dirty state. OK, I ran the Intel tool in Windows twice, using the latest tool I could find. The tool didn't find any problem (?) and didn't repair anything, but mdadm.... /dev/sda: Magic : Intel Raid ISM Cfg Sig. Version : 1.1.00 Orig Family : e3958f4b Family : e3958f4b Generation : 00019ef2 Attributes : All supported UUID : 363f146f:e7f29dc8:f05996c3:577ead6a Checksum : 3e8b1bfe correct MPB Sectors : 1 Disks : 2 RAID Devices : 1 Disk00 Serial : 6QF4WDE3 State : active Id : 00000000 Usable Size : 625137928 (298.09 GiB 320.07 GB) [Volume0]: UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc RAID Level : 1 Members : 2 Slots : [UU] Failed disk : none This Slot : 0 Array Size : 625137664 (298.09 GiB 320.07 GB) Per Dev Size : 625137928 (298.09 GiB 320.07 GB) Sector Offset : 0 Num Stripes : 2441944 Chunk Size : 64 KiB Reserved : 0 Migrate State : idle Map State : normal Dirty State : clean Disk01 Serial : 6QF4WF5Z State : active Id : 00000001 Usable Size : 625137928 (298.09 GiB 320.07 GB) /dev/sdb: Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00 Orig Family : e3958f4b Family : e3958f4b Generation : 00019f47 Attributes : All supported UUID : 363f146f:e7f29dc8:f05996c3:577ead6a Checksum : 3e8c1c53 correct MPB Sectors : 1 Disks : 2 RAID Devices : 1 Disk01 Serial : 6QF4WF5Z State : active Id : 00000001 Usable Size : 625137928 (298.09 GiB 320.07 GB) [Volume0]: UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc RAID Level : 1 Members : 2 Slots : [UU] Failed disk : none This Slot : 1 Array Size : 625137664 (298.09 GiB 320.07 GB) Per Dev Size : 625137928 (298.09 GiB 320.07 GB) Sector Offset : 0 Num Stripes : 2441944 Chunk Size : 64 KiB Reserved : 0 Migrate State : idle Map State : normal Dirty State : dirty °°°°°°° Disk00 Serial : 6QF4WDE3 State : active Id : 00000000 Usable Size : 625137928 (298.09 GiB 320.07 GB) -- mit freundlichen Grüßen / best Regards. Günther J. Niederwimmer -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-26 12:17 ` Günther J. Niederwimmer @ 2012-09-26 19:33 ` Chris Murphy 2012-09-27 2:43 ` NeilBrown 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-26 19:33 UTC (permalink / raw) To: Linux RAID On Sep 26, 2012, at 6:17 AM, Günther J. Niederwimmer wrote: >> >> dirty state. > > OK, I run the Intel Tool in windows two times with the last Tool I found. > > The Tool don't found any Problem (?) and don't repair, but mdadm…. I'm going to trim this down: > > /dev/sda: > [Volume0]: > UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc > Map State : normal > Dirty State : clean > > > /dev/sdb: > [Volume0]: > UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc > Map State : normal > Dirty State : dirty > °°°°°°° I don't understand this UI. Are there two Volume0's? I can see how the dirty state would apply independently among physical devices /dev/sda and /dev/sdb. But the virtual device, the array volume, "Volume0" seems like it would have only one instance. So I don't understand how it can be clean in one case and dirty in another. Chris Murphy -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: Hello,Re: GPT Table broken on a Raid1 2012-09-26 19:33 ` Chris Murphy @ 2012-09-27 2:43 ` NeilBrown 0 siblings, 0 replies; 38+ messages in thread From: NeilBrown @ 2012-09-27 2:43 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID [-- Attachment #1: Type: text/plain, Size: 1697 bytes --] On Wed, 26 Sep 2012 13:33:49 -0600 Chris Murphy <lists@colorremedies.com> wrote: > > On Sep 26, 2012, at 6:17 AM, Günther J. Niederwimmer wrote: > >> > >> dirty state. > > > > OK, I run the Intel Tool in windows two times with the last Tool I found. > > > > The Tool don't found any Problem (?) and don't repair, but mdadm…. > > I'm going to trim this down: > > > > > /dev/sda: > > [Volume0]: > > UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc > > Map State : normal > > Dirty State : clean > > > > > > /dev/sdb: > > [Volume0]: > > UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc > > Map State : normal > > Dirty State : dirty > > °°°°°°° > > I don't understand this UI. Are there two Volume0's? > > I can see how the dirty state would apply independently among physical devices /dev/sda and /dev/sdb. But the virtual device, the array volume, "Volume0" seems like it would have only one instance. So I don't understand how it can be clean in one case and dirty in another. > It just means that when looking at the metadata on /dev/sda, we see it marked 'clean', and when looking at the metadata on /dev/sdb, we see that it is marked 'dirty'. Possibly something wrote to Volume0 between these two events, so the volume got marked 'dirty' so the write could happen. Wait a few seconds and it should get marked 'clean' again. Or possibly there is a bug somewhere. I would open two windows. In one run watch -d mdadm -E /dev/sda and in the other run watch -d mdadm -E /dev/sdb then access the array, or maybe leave it alone, and see how the metadata changes with time. 
NeilBrown [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 828 bytes --] ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-21 22:43 ` Chris Murphy 2012-09-22 8:37 ` Günther J. Niederwimmer @ 2012-09-22 15:31 ` John Robinson 2012-09-22 18:45 ` Chris Murphy 1 sibling, 1 reply; 38+ messages in thread From: John Robinson @ 2012-09-22 15:31 UTC (permalink / raw) To: "Günther J. Niederwimmer"; +Cc: Linux RAID, Chris Murphy On 21/09/2012 23:43, Chris Murphy wrote: > > On Sep 21, 2012, at 1:35 PM, Chris Murphy wrote: > >> If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure >> mdadm -E /dev/mdX > > Scratch that. I was confused. Try these instead: > > mdadm --detail-platform > mdadm -D /dev/md/imsm > mdadm -E /dev/sdX I don't think there's anything wrong here. The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct, at the end of the raw discs is the IMSM metadata. Once you've assembled the IMSM array with mdadm, the partition table inside /dev/md/Volume0 is correct. You'd see the same thing with a native md device with metadata 0.90 or 1.0 made from whole discs and with a GPT partition table inside. Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata. Cheers, John. -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-22 15:31 ` John Robinson @ 2012-09-22 18:45 ` Chris Murphy 2012-09-22 21:58 ` NeilBrown 2012-09-22 22:07 ` John Robinson 0 siblings, 2 replies; 38+ messages in thread From: Chris Murphy @ 2012-09-22 18:45 UTC (permalink / raw) To: Linux RAID On Sep 22, 2012, at 9:31 AM, John Robinson wrote: > > I don't think there's anything wrong here. > > The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct, at the end of the raw discs is the IMSM metadata. OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me. > Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata. Sounds like the IMSM metadata is either not well designed for GPT, or it was intended to be placed on an unpartitioned disk in the first place. Chris Murphy ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-22 18:45 ` Chris Murphy @ 2012-09-22 21:58 ` NeilBrown 2012-09-22 22:07 ` John Robinson 1 sibling, 0 replies; 38+ messages in thread From: NeilBrown @ 2012-09-22 21:58 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID [-- Attachment #1: Type: text/plain, Size: 1349 bytes --] On Sat, 22 Sep 2012 12:45:39 -0600 Chris Murphy <lists@colorremedies.com> wrote: > > On Sep 22, 2012, at 9:31 AM, John Robinson wrote: > > > > > I don't think there's anything wrong here. > > > > The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong becase the second copy isn't at the end of the discs. That's correct, at the end of the raw discs is the IMSM metadata. > > OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me. > > > > Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata. > > Sounds like either imsm metadata is either not well designed for GPT, or it was intended to be placed on an unpartitioned disk in the first place. IMSM metadata is definitely intended to be placed on an un-partitioned disk. The only real point of IMSM is to provide interoperability with other implementations, whether the one for Windows (allowing dual-boot) or the one in the bios (allowing boot-from-RAID5 etc). Those other implementations use IMSM on the whole device, so it would be pointless using mdadm/IMSM on partitions. Note that I haven't really been following this thread, so I might have missed the point. I'm really just responding to that last sentence. NeilBrown [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 828 bytes --] ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-22 18:45 ` Chris Murphy 2012-09-22 21:58 ` NeilBrown @ 2012-09-22 22:07 ` John Robinson 2012-09-22 22:30 ` Chris Murphy 1 sibling, 1 reply; 38+ messages in thread From: John Robinson @ 2012-09-22 22:07 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID On 22/09/2012 19:45, Chris Murphy wrote: > On Sep 22, 2012, at 9:31 AM, John Robinson wrote: >> I don't think there's anything wrong here. >> >> The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong becase the second copy isn't at the end of the discs. That's correct, at the end of the raw discs is the IMSM metadata. > > OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me. That's not what I'm saying. I'm saying that sda and sdb weren't partitioned in the first place. I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there. Then later on, md starts, reads the IMSM metadata, and presents the RAID set, including the GPT partition table that the raw-disc probe already whinged about, but which in the RAID set is correct. The error messages that Günther saw are a false positive and harmless - or at least, harmless until someone starts telling him to go messing around rewriting the contents of the raw drives underneath md by deleting the misidentified partition tables, the effect of which is likely to be to damage his partitions inside the IMSM array and/or destroy the IMSM array metadata. >> Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata. 
> Sounds like the IMSM metadata is either not well designed for GPT I think I'd describe it as being designed so that individual discs from RAID-1 mirrors can be read independently. IMSM predates GPT anyway. > or it was intended to be placed on an unpartitioned disk in the first place. It can't be anything else. Cheers, John. -- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-22 22:07 ` John Robinson @ 2012-09-22 22:30 ` Chris Murphy 2012-09-23 12:00 ` John Robinson 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-22 22:30 UTC (permalink / raw) To: Linux RAID On Sep 22, 2012, at 4:07 PM, John Robinson wrote: > On 22/09/2012 19:45, Chris Murphy wrote: >> On Sep 22, 2012, at 9:31 AM, John Robinson wrote: >>> I don't think there's anything wrong here. >>> >>> The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong becase the second copy isn't at the end of the discs. That's correct, at the end of the raw discs is the IMSM metadata. >> >> OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me. > > That's not what I'm saying. I'm saying that sda and sdb weren't partitioned in the first place. I disagree, it appears they were partitioned, but we'll see when the OP posts back the results from gdisk. The kernel messages in the very first post seems to imply it was partitioned (at least, at one time) or there wouldn't be an sdb1, sdb2, sdb3, etc. From the first post: > Sep 17 09:54:27 techz kernel: [ 1.204710] sdb: sdb1 sdb2 sdb3 sdb4 sdb5 > sdb6 > I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there. The IMSM metadata will go at the end of the PHYSICAL disk. If you partition the virtual disk, i.e. the array, /dev/md/imsm0, then the GPT alternate header goes at the end of the virtual disk, NOT the end of the physical disk. Besides, if what you say is true, as soon as he GPT partitioned the array, if the secondary GPT header stepped on IMSM metadata, then the array would instantly break. That's not what happened. 
> The error messages that Günther saw are a false positive and harmless I think so too, but they are the result of the raw disks being previously partitioned. They may even be stale GPTs that should have been nuked before getting started with IMSM RAID. > - or at least, harmless until someone starts telling him to go messing around rewriting the contents of the raw drives underneath md by deleting the misidentified partition tables, the effect of which is likely to be to damage his partitions inside the IMSM array and/or destroy the IMSM array metadata. There's every reason to believe the primary GPT header on /dev/sda and /dev/sdb is a) intact, b) in LBA 1, and c) cannot also include IMSM metadata. So removing that header would obliterate the GPT on the physical disks and he'd stop getting the error message, if he's really bothered by what is in effect a harmless error message; that (stale) GPT doesn't matter anyway. Chris Murphy-- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-22 22:30 ` Chris Murphy @ 2012-09-23 12:00 ` John Robinson 2012-09-23 17:44 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: John Robinson @ 2012-09-23 12:00 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID On 22/09/2012 23:30, Chris Murphy wrote: > On Sep 22, 2012, at 4:07 PM, John Robinson wrote: [...] >> I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there. > > The IMSM metadata will go at the end of the PHYSICAL disk. If you partition the virtual disk, i.e. the array, /dev/md/imsm0, then the GPT alternate header goes at the end of the virtual disk, NOT the end of the physical disk. That's right. But the main GPT partition table will go at LBA=1 of the virtual disc, which maps to LBA=1 of the physical disc. So when the physical disc gets probed for partitions, the main GPT partition table is visible at LBA=1, but there isn't a secondary GPT partition table at the end of the disc, hence the error. Cheers, John. ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: GPT Table broken on a Raid1 2012-09-23 12:00 ` John Robinson @ 2012-09-23 17:44 ` Chris Murphy 2012-09-23 18:53 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-23 17:44 UTC (permalink / raw) To: Linux RAID On Sep 23, 2012, at 6:00 AM, John Robinson wrote: > > That's right. But the main GPT partition table will go at LBA=1 of the virtual disc, which maps to LBA=1 of the physical disc. So when the physical disc gets probed for partitions, the main GPT partition table is visible at LBA=1, but there isn't a secondary GPT partition table at the end of the disc, hence the error. The primary header contains the location of the alternate header. So the kernel wouldn't be looking at the end of the physical disk if the GPT were created for the virtual disk, and not the physical disk. If IMSM only applies an offset to the end of the disk, such that it merely changes the last usable LBA and therefore there is a 1:1 correlation between array LBA's and physical disk LBA's, the kernel would find the alternate GPT header. Yet it isn't, so still something isn't right. Chris Murphy ^ permalink raw reply [flat|nested] 38+ messages in thread
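The lookup Chris describes, in which the kernel follows a pointer stored in the primary header rather than scanning for a header at the end of the disc, can be sketched in a few lines (an illustrative sketch only; the field offsets follow the GPT header layout, and the sample LBAs are the ones from the kernel log at the top of the thread):

```python
import struct

# Build a minimal 92-byte GPT header: signature at offset 0,
# MyLBA at offset 24, AlternateLBA at offset 32 (little-endian u64s).
def make_header(my_lba, alternate_lba):
    hdr = bytearray(92)
    hdr[0:8] = b"EFI PART"                          # GPT signature
    struct.pack_into("<Q", hdr, 24, my_lba)         # MyLBA
    struct.pack_into("<Q", hdr, 32, alternate_lba)  # AlternateLBA
    return bytes(hdr)

# What the kernel does: read the AlternateLBA field out of the primary header.
def read_alternate_lba(hdr):
    assert hdr[0:8] == b"EFI PART", "not a GPT header"
    return struct.unpack_from("<Q", hdr, 32)[0]

primary = make_header(1, 625137663)   # alt header LBA from Günther's log
disk_last_lba = 625142448 - 1         # physical end of his 320 GB disk

alt = read_alternate_lba(primary)
# The pointer itself is valid; it just doesn't equal the physical last LBA,
# which is exactly the kernel's complaint: 625137663 != 625142447.
assert alt == 625137663 and alt != disk_last_lba
```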
* Re: GPT Table broken on a Raid1 2012-09-23 17:44 ` Chris Murphy @ 2012-09-23 18:53 ` Chris Murphy 2012-09-24 9:37 ` John Robinson 0 siblings, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-23 18:53 UTC (permalink / raw) To: Linux RAID On Sep 23, 2012, at 11:44 AM, Chris Murphy wrote: > > If IMSM only applies an offset to the end of the disk, such that it merely changes the last usable LBA and therefore there is a 1:1 correlation between array LBA's and physical disk LBA's, the kernel would find the alternate GPT header. Yet it isn't, so still something isn't right. 1. IMSM metadata starts at LBA -32 from the end of the physical disk after issuing this command: mdadm -C /dev/md/imsm /dev/sd[bc] -n 2 -e imsm At this point nothing else is on the disk, per hexdump. 2. A small amount of additional data is added in those reserved 32 sectors at the end of the physical disk after issuing this command: mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 1 3. gdisk sees /dev/sdb as having 16777216 sectors. gdisk sees /dev/md/vol0 as having 16769024 sectors. 4. Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0. The primary GPT header says the alternate header is at 0xffdfff. The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be. Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything. Conclusions: A. There is a 1:1 correlation between physical disk LBA and array LBA (at least for RAID 1), there is merely an offset at the end of the disk for IMSM metadata. B. Günther's disks had GPTs made on the physical disk devices themselves, prior to the creation of IMSM metadata. When IMSM metadata was created, the alternate GPT header and table data were squished because it was in the wrong location. C.
The fact that Günther's "mdadm -E" command on the physical disks reveals both are in a dirty state indicates to me that the array is not assembled and is not presently mirroring. So I think he's actually not booted from or using the array at all, at least not from within Linux. Chris Murphy-- To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 38+ messages in thread
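The sizes in that experiment line up exactly; a quick check of the arithmetic, using the values reported above (an illustrative sketch, not part of the original exchange):

```python
# Sector counts reported by gdisk in the experiment above.
sdb_sectors = 16777216     # raw /dev/sdb
vol0_sectors = 16769024    # /dev/md/vol0, the IMSM volume

imsm_reserve = sdb_sectors - vol0_sectors
assert imsm_reserve == 8192                # 8192 sectors = 4 MiB held back by IMSM
assert vol0_sectors - 1 == 0xFFDFFF        # last volume LBA = alternate-header LBA
print(hex(vol0_sectors - 1))               # prints 0xffdfff, matching the gdisk report
```

This is the 1:1 mapping of conclusion A made concrete: array LBA equals physical LBA, and only the tail of the disc is withheld.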
* Re: GPT Table broken on a Raid1 2012-09-23 18:53 ` Chris Murphy @ 2012-09-24 9:37 ` John Robinson 2012-09-24 17:35 ` Chris Murphy 0 siblings, 1 reply; 38+ messages in thread From: John Robinson @ 2012-09-24 9:37 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID On 23/09/2012 19:53, Chris Murphy wrote: [...] > Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0. > > The primary GPT header says the alternate header is at 0xffdfff. > The alternate GPT header is at 0xffdfff., or sector 16769023, right where it should be. > > Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything. Yes, it does. It finds the alternate header just fine, according to the offset in the primary header, but warns that it's not at the end of the disc, which is where it expected to find it: Sep 17 09:54:27 techz kernel: [ 1.204681] GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 17 09:54:27 techz kernel: [ 1.204685] GPT:625137663 != 625142447 Sep 17 09:54:27 techz kernel: [ 1.204687] GPT:Alternate GPT header not at the end of the disk. Sep 17 09:54:27 techz kernel: [ 1.204689] GPT:625137663 != 625142447 I know that's not what I said earlier, so my apologies for that, but it remains true that this is not a problem and there are no more GPT headers anywhere else on the disc than those written inside the IMSM volume. Cheers, John. ^ permalink raw reply [flat|nested] 38+ messages in thread
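Plugging in the figures from the kernel log John quotes, together with the "Array Size" from the earlier mdadm -E output, shows the same pattern on Günther's discs (a sketch with values from the thread, not from the original exchange):

```python
disk_last_lba = 625142447   # last LBA of the 320 GB physical disk (kernel log)
alt_hdr_lba   = 625137663   # where the primary header says the alt header is
array_sectors = 625137664   # "Array Size" in sectors, from mdadm -E [Volume0]

# The alternate header sits at the end of the IMSM *volume*, not the disk...
assert alt_hdr_lba == array_sectors - 1
# ...and the gap is just the space IMSM reserves past the volume.
assert disk_last_lba - alt_hdr_lba == 4784
```

So both halves of the kernel's complaint (625137663 != 625142447) are the same harmless offset John describes.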
* Re: GPT Table broken on a Raid1 2012-09-24 9:37 ` John Robinson @ 2012-09-24 17:35 ` Chris Murphy 2012-09-24 18:17 ` Roberto Spadim ` (2 more replies) 0 siblings, 3 replies; 38+ messages in thread From: Chris Murphy @ 2012-09-24 17:35 UTC (permalink / raw) To: Linux RAID

On Sep 24, 2012, at 3:37 AM, John Robinson wrote:
> On 23/09/2012 19:53, Chris Murphy wrote:
> [...]
>> Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.
>>
>> The primary GPT header says the alternate header is at 0xffdfff.
>> The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.
>>
>> Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.
>
> Yes, it does. It finds the alternate header just fine, according to the offset in the primary header, but warns that it's not at the end of the disc, which is where it expected to find it:

Not for me, is what I meant. Linux 3.5.3-1.fc17 does not complain about the alternate GPT header not being at the end of the disk. However, parted 3.0 does. And gdisk 0.8.5 does not.

Further, it seems increasingly clear as I'm reading the UEFI spec on GPT that IMSM is incompatible with GPT. The GPT alternate header is by spec expected at the end of the disk, but IMSM also demands to be in basically the exact same location. And yet Intel is shipping UEFI hardware, which requires GPT disks, with IMSM on board, which also requires metadata in the same location? How is this not a WTF moment?

I'm still wondering why parted reports Günther's disks have hybrid MBRs. That's weird, even if unrelated. And I wonder why his kernel complains about the GPTs not being at the end of the disk, but my kernel doesn't. Both are 3.5 kernels.
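For what it's worth, the gap between the two LBAs in the log works out to a couple of MiB at the end of the raw disk. The arithmetic below is mine, not from the thread, and reading the gap as the area reserved by the firmware RAID metadata is my assumption:

```python
SECTOR_BYTES = 512

primary_alt_lba = 625137663   # where the primary GPT header puts the alternate
disk_last_lba   = 625142447   # the raw disk's final sector, per the kernel log

# Size of the tail region between the alternate header and the end of the
# raw disk -- presumably the space the RAID metadata claims for itself.
gap_sectors = disk_last_lba - primary_alt_lba
gap_mib = gap_sectors * SECTOR_BYTES / 1024**2

print(gap_sectors, round(gap_mib, 2))  # 4784 sectors, about 2.34 MiB
```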
Chris Murphy
* Re: GPT Table broken on a Raid1 2012-09-24 17:35 ` Chris Murphy @ 2012-09-24 18:17 ` Roberto Spadim [not found] ` <CABYL=ToFtzXv95At54=jSCaD0QVSB+bdxbssda1AMw5gNBqvhg@mail.gmail.com> 2012-09-25 7:33 ` Miquel van Smoorenburg 2 siblings, 0 replies; 38+ messages in thread From: Roberto Spadim @ 2012-09-24 18:17 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID

I used some 1.5TB disks one or two years ago; when I booted with one of them for the first time, the BIOS recognized it as a 1TB disk. I changed some BIOS parameters (took them off automatic) and could put it to work at 1.5TB. Could you check whether it's a BIOS problem? In my case the disk appeared as 1TB when it booted wrong, and after changing the BIOS options it worked fine at 1.5TB.

2012/9/24 Chris Murphy <lists@colorremedies.com>
>
> On Sep 24, 2012, at 3:37 AM, John Robinson wrote:
>
>> On 23/09/2012 19:53, Chris Murphy wrote:
>> [...]
>>> Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.
>>>
>>> The primary GPT header says the alternate header is at 0xffdfff.
>>> The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.
>>>
>>> Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.
>>
>> Yes, it does. It finds the alternate header just fine, according to the offset in the primary header, but warns that it's not at the end of the disc, which is where it expected to find it:
>
> Not for me, is what I meant. Linux 3.5.3-1.fc17 does not complain about the alternate GPT header not being at the end of the disk. However, parted 3.0 does. And gdisk 0.8.5 does not.
>
> Further, it seems increasingly clear as I'm reading the UEFI spec on GPT that IMSM is incompatible with GPT. The GPT alternate header by spec is expected at the end of the disk, but IMSM also demands to be in basically the exact same location. And yet Intel is shipping UEFI hardware, which requires GPT disks, with IMSM on board, which also requires metadata in the same location? How is this not a WTF moment?
>
> I'm still wondering why parted reports Günther's disks have hybrid MBRs. That's weird, even if unrelated. And I wonder why his kernel complains about the GPTs not being at the end of the disk, but my kernel doesn't. Both are 3.5 kernels.
>
> Chris Murphy

--
Roberto Spadim
Spadim Technology / SPAEmpresarial
[parent not found: <CABYL=ToFtzXv95At54=jSCaD0QVSB+bdxbssda1AMw5gNBqvhg@mail.gmail.com>]
* Re: GPT Table broken on a Raid1 [not found] ` <CABYL=ToFtzXv95At54=jSCaD0QVSB+bdxbssda1AMw5gNBqvhg@mail.gmail.com> @ 2012-09-24 18:55 ` Chris Murphy 0 siblings, 0 replies; 38+ messages in thread From: Chris Murphy @ 2012-09-24 18:55 UTC (permalink / raw) To: Linux RAID

On Sep 24, 2012, at 12:17 PM, Roberto Spadim wrote:
> I used some 1.5TB disks one or two years ago; when I booted with one of them for the first time, the BIOS recognized it as a 1TB disk. I changed some BIOS parameters (took them off automatic) and could put it to work at 1.5TB. Could you check whether it's a BIOS problem? In my case the disk appeared as 1TB when it booted wrong, and after changing the BIOS options it worked fine at 1.5TB.

I don't think that's related. The disk sizes aren't being misreported. And it's also a UEFI [1] computer, not a BIOS [2] one, so again there shouldn't be any concern about firmware-induced disk size misinterpretation. But then, I'd also not expect a UEFI computer to offer a GPT-incompatible RAID implementation either; I'm still hoping I've got this wrong somehow.

Chris Murphy

[1] UEFI has no bugs. Not a single one.
[2] I can't be the only one who finds it irritating that even the manufacturers persist in conflating UEFI and BIOS: all of Intel's firmware downloads for Günther's/OP's motherboard are referred to as BIOS. I really wish they wouldn't do that.
* Re: GPT Table broken on a Raid1 2012-09-24 17:35 ` Chris Murphy 2012-09-24 18:17 ` Roberto Spadim [not found] ` <CABYL=ToFtzXv95At54=jSCaD0QVSB+bdxbssda1AMw5gNBqvhg@mail.gmail.com> @ 2012-09-25 7:33 ` Miquel van Smoorenburg 2 siblings, 0 replies; 38+ messages in thread From: Miquel van Smoorenburg @ 2012-09-25 7:33 UTC (permalink / raw) To: Chris Murphy; +Cc: Linux RAID

On 24-09-12 7:35 PM, Chris Murphy wrote:
> Further, it seems increasingly clear as I'm reading the UEFI spec on
> GPT, that IMSM is incompatible with GPT. The GPT alternate header by
> spec is expected at the end of the disk, but then IMSM also demands
> to be in basically the exact same location. And yet Intel is shipping
> UEFI hardware, which require GPT disks, with IMSM on board that also
> requires metadata in the same location? How is this not a WTF
> moment?

It isn't. It's just a RAID setup with the superblock at the end of the disk: as soon as the RAID1 is activated, the RAID volume you see is just a bit smaller than the raw disk size, and the GPT alternate header is in the right place, at the end of the RAID volume. The thing is that the Linux kernel detects the GPT partition table on the raw disks before the RAID1 volume is assembled. More of a cosmetic bug.

People have argued before that the kernel should do no partition-table discovery at all and just leave it to userspace; that's what kpartx is for, for example. In that case, with a correctly ordered and configured stack, the disks would get detected, any whole-disk RAID volumes would get assembled, and only then would partition detection be done. But I think that never got popular because of all the "I want to boot a kernel without an initramfs" people.

Mike.
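Miquel's point can be sketched numerically. The reserved-tail figure below is derived from the LBAs in Günther's log, and treating that tail as the firmware RAID metadata area is an assumption:

```python
# Sketch of why the warning is cosmetic (assumption: the RAID metadata
# reserves the tail of each raw disk, so the assembled volume is smaller).
raw_disk_sectors = 625142448        # raw disk size; its last LBA is 625142447
reserved_tail_sectors = 4784        # tail area implied by the log's two LBAs
volume_sectors = raw_disk_sectors - reserved_tail_sectors

# A GPT written to the assembled volume puts its alternate header on the
# volume's last sector...
alternate_header_lba = volume_sectors - 1

# ...which is exactly right when scanning the volume, but looks "short"
# when the kernel scans the raw disk before the array is assembled:
print(alternate_header_lba == volume_sectors - 1)     # True: volume view, fine
print(alternate_header_lba == raw_disk_sectors - 1)   # False: raw view, warning
print(alternate_header_lba)                           # 625137663, as logged
```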
* Re: GPT Table broken on a Raid1 2012-09-21 11:42 ` Günther J. Niederwimmer 2012-09-21 19:35 ` Chris Murphy @ 2012-09-21 21:30 ` Chris Murphy 2012-09-21 21:49 ` Chris Murphy 1 sibling, 1 reply; 38+ messages in thread From: Chris Murphy @ 2012-09-21 21:30 UTC (permalink / raw) To: Linux RAID

And more, since you mentioned mdadm and dmraid: http://forums.gentoo.org/viewtopic-t-888520-start-0.html

It sounds like you need to pick one, and the one to pick is mdraid, with dmraid expressly disabled. I would personally suggest that you go into the BIOS and blow away this RAID. Recreate it from scratch, and partition with gdisk from a LiveCD (e.g. gdisk is on the Fedora 17 LiveCD). Then install Windows. Then go back to the LiveCD and confirm that the GPT is still OK. Then install Linux.

Chris Murphy
* Re: GPT Table broken on a Raid1 2012-09-21 21:30 ` Chris Murphy @ 2012-09-21 21:49 ` Chris Murphy 0 siblings, 0 replies; 38+ messages in thread From: Chris Murphy @ 2012-09-21 21:49 UTC (permalink / raw) To: Linux RAID

Another thing to check is whether your SATA controller has its own RAID, with RAID vs AHCI modes. If so, make sure it's in AHCI mode. If you have two RAIDs configured and don't know about it, that'll also cause problems.

Chris Murphy
end of thread, other threads:[~2012-09-27 2:43 UTC | newest] Thread overview: 38+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2012-09-19 11:03 GPT Table broken on a Raid1 Günther J. Niederwimmer 2012-09-20 2:39 ` Chris Murphy 2012-09-20 11:05 ` Günther J. Niederwimmer 2012-09-20 17:34 ` Chris Murphy 2012-09-21 11:42 ` Günther J. Niederwimmer 2012-09-21 19:35 ` Chris Murphy 2012-09-21 22:43 ` Chris Murphy 2012-09-22 8:37 ` Günther J. Niederwimmer 2012-09-22 18:30 ` Chris Murphy 2012-09-23 6:47 ` Hello,Re: " Günther J. Niederwimmer 2012-09-23 7:17 ` Chris Murphy 2012-09-24 7:28 ` Günther J. Niederwimmer 2012-09-24 17:21 ` Chris Murphy 2012-09-24 19:06 ` Günther J. Niederwimmer 2012-09-24 20:18 ` Chris Murphy 2012-09-24 21:06 ` Günther J. Niederwimmer 2012-09-24 21:12 ` Chris Murphy 2012-09-25 8:27 ` Günther J. Niederwimmer 2012-09-25 9:28 ` John Robinson 2012-09-25 16:55 ` Chris Murphy 2012-09-26 12:17 ` Günther J. Niederwimmer 2012-09-26 19:33 ` Chris Murphy 2012-09-27 2:43 ` NeilBrown 2012-09-22 15:31 ` John Robinson 2012-09-22 18:45 ` Chris Murphy 2012-09-22 21:58 ` NeilBrown 2012-09-22 22:07 ` John Robinson 2012-09-22 22:30 ` Chris Murphy 2012-09-23 12:00 ` John Robinson 2012-09-23 17:44 ` Chris Murphy 2012-09-23 18:53 ` Chris Murphy 2012-09-24 9:37 ` John Robinson 2012-09-24 17:35 ` Chris Murphy 2012-09-24 18:17 ` Roberto Spadim [not found] ` <CABYL=ToFtzXv95At54=jSCaD0QVSB+bdxbssda1AMw5gNBqvhg@mail.gmail.com> 2012-09-24 18:55 ` Chris Murphy 2012-09-25 7:33 ` Miquel van Smoorenburg 2012-09-21 21:30 ` Chris Murphy 2012-09-21 21:49 ` Chris Murphy
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).