linux-raid.vger.kernel.org archive mirror
* Is this likely to cause me problems?
@ 2010-09-21 20:33 Jon Hardcastle
  2010-09-21 21:15 ` John Robinson
  0 siblings, 1 reply; 9+ messages in thread
From: Jon Hardcastle @ 2010-09-21 20:33 UTC (permalink / raw)
  To: linux-raid

Hi,

I am finally replacing an old and now failed drive with a new one.

I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...

Below is an fdisk dump of all the drives in my RAID6 array

sdc---
/dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
---
Seems to be different to sda, say, which is also '1TB'

sda---
/dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
---

Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm if this is the case?

I am reluctant to add it to my array until I know for sure...

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xabb7ea39

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   976768064   488384001   fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc7314361

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect

Disk /dev/sdd: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1465144064   732572001   fd  Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1              63  1953520064   976760001   fd  Linux raid autodetect

Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4d291cc0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1              63  1953520064   976760001   fd  Linux raid autodetect

Disk /dev/sdg: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1              63  1465144064   732572001   fd  Linux raid autodetect
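
(For reference, a quick way to pull just the RAID member partitions and their sizes out of the above - a minimal sketch, nothing assumed beyond the sd[a-g] drive letters shown in the dump:)

# List only the raid-autodetect partitions from the fdisk output above (run as root)
fdisk -l -u 2>/dev/null | grep "Linux raid autodetect"

# Partition sizes in 512-byte sectors, straight from the kernel
grep . /sys/block/sd[a-g]/sd[a-g]1/size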


-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'There comes a time when you look into the mirror, and you realise that what you see is all that you will ever be. Then you accept it, or you kill yourself. Or you stop looking into mirrors... :)'

***********
Please note, I am phasing out jd_hardcastle AT yahoo.com and replacing it with jon AT eHardcastle.com
***********

-----------------------


      


* Re: Is this likely to cause me problems?
  2010-09-21 20:33 Is this likely to cause me problems? Jon Hardcastle
@ 2010-09-21 21:15 ` John Robinson
  2010-09-21 21:18   ` Jon Hardcastle
  2010-09-22  6:42   ` Jon Hardcastle
  0 siblings, 2 replies; 9+ messages in thread
From: John Robinson @ 2010-09-21 21:15 UTC (permalink / raw)
  To: Jon; +Cc: linux-raid

On 21/09/2010 21:33, Jon Hardcastle wrote:
> I am finally replacing an old and now failed drive with a new one.
>
> I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
>
> Below is an fdisk dump of all the drives in my RAID6 array
>
> sdc---
> /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
> ---
> Seems to be different to sda, say, which is also '1TB'
>
> sda---
> /dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
> ---
>
> Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm if this is the case?
>
> I am reluctant to add it to my array until I know for sure...

Looks like you've used a different partition tool on the new disc from 
the one you used on the old ones - the old ones started the first 
partition at the beginning of cylinder 1, while newer tools like to 
start partitions at 1MB so they're aligned on 4K sector boundaries, 
SSDs' erase group boundaries and so on. You could duplicate the 
original partition table like this:

sfdisk -d /dev/older-disc | sfdisk /dev/new-disc

But it wouldn't cause you any problems, because the new partition is 
bigger than the old one, despite starting a couple of thousand sectors 
later. This in itself is odd - how did you come to not use the last 
chunk of your original discs?
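
(If it helps, here is the same idea split into steps so the table can be eyeballed before anything is written - /dev/sdX and /dev/sdY are placeholders for an existing member and the new disc:)

sfdisk -d /dev/sdX > sdX.table    # dump an existing member's partition table
cat sdX.table                     # inspect it: start sector, size, type fd
sfdisk /dev/sdY < sdX.table       # write the same table to the new disc
fdisk -l -u /dev/sdY              # confirm the result, in sectors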

Cheers,

John.



* Re: Is this likely to cause me problems?
  2010-09-21 21:15 ` John Robinson
@ 2010-09-21 21:18   ` Jon Hardcastle
  2010-09-21 22:34     ` John Robinson
  2010-09-22  6:42   ` Jon Hardcastle
  1 sibling, 1 reply; 9+ messages in thread
From: Jon Hardcastle @ 2010-09-21 21:18 UTC (permalink / raw)
  To: Jon, John Robinson; +Cc: linux-raid

--- On Tue, 21/9/10, John Robinson <john.robinson@anonymous.org.uk> wrote:

> From: John Robinson <john.robinson@anonymous.org.uk>
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Tuesday, 21 September, 2010, 22:15
> On 21/09/2010 21:33, Jon Hardcastle wrote:
> > I am finally replacing an old and now failed drive with a new one.
> >
> > I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
> >
> > Below is an fdisk dump of all the drives in my RAID6 array
> >
> > sdc---
> > /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
> > ---
> > Seems to be different to sda, say, which is also '1TB'
> >
> > sda---
> > /dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
> > ---
> >
> > Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm if this is the case?
> >
> > I am reluctant to add it to my array until I know for sure...
> 
> Looks like you've used a different partition tool on the
> new disc than you used on the old ones - old ones started
> the first partition at the beginning of cylinder 1, new ones
> like to start partitions at 1MB so they're aligned on 4K
> sector boundaries and SSDs' erase group boundaries etc. You
> could duplicate the original partition table like this:
> 
> sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
> 
> But it wouldn't cause you any problems, because the new
> partition is bigger than the old one, despite starting a
> couple of thousand sectors later. This in itself is odd -
> how did you come to not use the last chunk of your original
> discs?
> 
> Cheers,
> 
> John.
> 
> --

I used fdisk in all cases... on the same machine... so unless fdisk has changed?

Primary... 1 partition... default start and end.

And what do you mean about not using the last chunk of the old disc?

Thank you!


      


* Re: Is this likely to cause me problems?
  2010-09-21 21:18   ` Jon Hardcastle
@ 2010-09-21 22:34     ` John Robinson
  0 siblings, 0 replies; 9+ messages in thread
From: John Robinson @ 2010-09-21 22:34 UTC (permalink / raw)
  To: Jon; +Cc: linux-raid

On 21/09/2010 22:18, Jon Hardcastle wrote:
> --- On Tue, 21/9/10, John Robinson<john.robinson@anonymous.org.uk>  wrote:
>
>> From: John Robinson<john.robinson@anonymous.org.uk>
>> Subject: Re: Is this likely to cause me problems?
>> To: Jon@eHardcastle.com
>> Cc: linux-raid@vger.kernel.org
>> Date: Tuesday, 21 September, 2010, 22:15
>> On 21/09/2010 21:33, Jon Hardcastle wrote:
>>> I am finally replacing an old and now failed drive with a new one.
>>>
>>> I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
>>>
>>> Below is an fdisk dump of all the drives in my RAID6 array
>>>
>>> sdc---
>>> /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
>>> ---
>>> Seems to be different to sda, say, which is also '1TB'
>>>
>>> sda---
>>> /dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
>>> ---
>>>
>>> Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm if this is the case?
>>>
>>> I am reluctant to add it to my array until I know for sure...
>>
>> Looks like you've used a different partition tool on the
>> new disc than you used on the old ones - old ones started
>> the first partition at the beginning of cylinder 1, new ones
>> like to start partitions at 1MB so they're aligned on 4K
>> sector boundaries and SSDs' erase group boundaries etc. You
>> could duplicate the original partition table like this:
>>
>> sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
>>
>> But it wouldn't cause you any problems, because the new
>> partition is bigger than the old one, despite starting a
>> couple of thousand sectors later. This in itself is odd -
>> how did you come to not use the last chunk of your original
>> discs?
>>
>> Cheers,
>>
>> John.
>>
>> --
>
> I used fdisk in all cases... on the same machine... so unless fdisk has changed?

May have done. Certainly my util-linux from CentOS 5 is newer than the 
last version of util-linux on freshmeat.net and kernel.org. Peeking at 
the source code, it looks like Red Hat have been patching util-linux 
themselves for almost 5 years.

> Primary... 1 partition... default start and end.
>
> And what do you mean about not using the last chunk of the old disc?

Your sda has 1953525168 sectors but your partition ends at sector 
1953520064, 5104 sectors short of the end of the disc. This may be 
related to the possible bug somebody complains about on freshmeat.net 
whereby fdisk gets the last cylinder wrong. I just checked, on my 1TB 
discs I have the same end sector as you so I guess the fdisk I had when 
I built my array was the same as yours when you built yours.
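
(As a cross-check, the shortfall lines up with the CHS geometry fdisk reported - shell arithmetic only, nothing here touches the discs:)

echo $(( 255 * 63 ))                         # 16065 sectors per pretend cylinder
echo $(( 121601 * 255 * 63 ))                # 1953520065 sectors fit in whole cylinders
echo $(( 1953525168 - 121601 * 255 * 63 ))   # 5103 sectors left over, not a whole cylinder
# The old partitions end on that last whole-cylinder boundary (sector 1953520064),
# which is where the unused tail at the end of the disc comes from.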

Cheers,

John.


* Re: Is this likely to cause me problems?
  2010-09-21 21:15 ` John Robinson
  2010-09-21 21:18   ` Jon Hardcastle
@ 2010-09-22  6:42   ` Jon Hardcastle
  2010-09-22 11:25     ` John Robinson
  1 sibling, 1 reply; 9+ messages in thread
From: Jon Hardcastle @ 2010-09-22  6:42 UTC (permalink / raw)
  To: Jon, John Robinson; +Cc: linux-raid

--- On Tue, 21/9/10, John Robinson <john.robinson@anonymous.org.uk> wrote:

> From: John Robinson <john.robinson@anonymous.org.uk>
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Tuesday, 21 September, 2010, 22:15
> On 21/09/2010 21:33, Jon Hardcastle wrote:
> > I am finally replacing an old and now failed drive with a new one.
> >
> > I normally create a partition the size of the entire disk and add that, but whilst checking that the sizes marry up I noticed an oddity...
> >
> > Below is an fdisk dump of all the drives in my RAID6 array
> >
> > sdc---
> > /dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
> > ---
> > Seems to be different to sda, say, which is also '1TB'
> >
> > sda---
> > /dev/sda1              63  1953520064   976760001   fd  Linux raid autodetect
> > ---
> >
> > Now, I read somewhere that the sizes fluctuate but some core value remains the same; can anyone confirm if this is the case?
> >
> > I am reluctant to add it to my array until I know for sure...
> 
> Looks like you've used a different partition tool on the
> new disc than you used on the old ones - old ones started
> the first partition at the beginning of cylinder 1, new ones
> like to start partitions at 1MB so they're aligned on 4K
> sector boundaries and SSDs' erase group boundaries etc. You
> could duplicate the original partition table like this:
> 
> sfdisk -d /dev/older-disc | sfdisk /dev/new-disc
> 
> But it wouldn't cause you any problems, because the new
> partition is bigger than the old one, despite starting a
> couple of thousand sectors later. This in itself is odd -
> how did you come to not use the last chunk of your original
> discs?
> 
> Cheers,
> 
> John.
> 

OK, thank you.

So do you have any recommendations? I would like to 'trust' the new version of fdisk but I cannot risk torpedoing myself. I have 2 more drives I need to 'phase out' at some point, but they will likely be replaced with 1.5TB drives.

My gut tells me that, whilst I have other drives of the same size, I should use the same parameters... then when I have a bigger drive that is definitely not going to cause any size issues, let fdisk do its magic.

So, following that premise, is there any downside to copying the partition table off another drive?


      


* Re: Is this likely to cause me problems?
       [not found] <94202.62107.qm@web51304.mail.re2.yahoo.com>
@ 2010-09-22  9:09 ` Tim Small
  0 siblings, 0 replies; 9+ messages in thread
From: Tim Small @ 2010-09-22  9:09 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org; +Cc: Jon

On 21/09/10 22:07, Jon Hardcastle wrote:
> Are you sure this is the issue?

Pretty sure.

>   the number of blocks is different in both measurements - see below...
>    

Yes - differing CHS-compatible geometry will do this, because 
CHS-compatible partitions will start/end on the fake "cylinder" 
boundaries.  So you have different amounts of unnecessary wastage at 
both the start and the end when using different numbers of pretend 
cylinders, heads, and sectors per track...

If there's nothing on the disk yet, then surely you haven't got anything 
to lose by telling fdisk to use a different CHS layout anyway (using the 
command-line switches), or by ignoring CHS entirely and using the whole 
disk. Like I said, it's highly unlikely that anything on your system ever 
does anything with CHS block addressing anyway - Linux uses LBA 
addressing exclusively, and so do its bootloaders.
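
(Purely illustrative - the geometry below is just one 4K-friendly example, and /dev/sdX and /dev/md0 are placeholders, not a recommendation for this particular array:)

# Override the pretend geometry so each "cylinder" is a multiple of 4KiB
# (224 * 56 = 12544 sectors per cylinder); -u makes fdisk talk in sectors.
fdisk -u -H 224 -S 56 /dev/sdX

# Or skip the partition table entirely and give md the whole disc,
# since only LBA addressing is ever used:
mdadm --manage /dev/md0 --add /dev/sdX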

Tim.

-- 
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309



* Re: Is this likely to cause me problems?
  2010-09-22  6:42   ` Jon Hardcastle
@ 2010-09-22 11:25     ` John Robinson
  2010-09-22 14:38       ` Jon Hardcastle
  0 siblings, 1 reply; 9+ messages in thread
From: John Robinson @ 2010-09-22 11:25 UTC (permalink / raw)
  To: Jon; +Cc: linux-raid

On 22/09/2010 07:42, Jon Hardcastle wrote:
[...]
> So do you have any recommendations? I would like to 'trust' the new version of fdisk but I cannot risk torpedoing myself. I have 2 more drives I need to 'phase out' at some point, but they will likely be replaced with 1.5TB drives.

The new layout you've got is meant for SSDs and drives with 4K sectors; 
the old layout is fine for you.

> My gut tells me that, whilst I have other drives of the same size, I should use the same parameters... then when I have a bigger drive that is definitely not going to cause any size issues, let fdisk do its magic.
>
> So, following that premise, is there any downside to copying the partition table off another drive?

I can't think of one. For backup, if I remember correctly Doug Ledford's 
hot-swap auto-rebuilding onto virgin drives work was going to create 
partition tables by copying them from existing drives (or by having the 
user copy the required partition table from a drive in advance).
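
(For the job in hand, the copy-then-add sequence would look something like the sketch below - /dev/md0, /dev/sda as the healthy template and /dev/sdc as the replacement are assumptions taken from the earlier dump, so check them first:)

sfdisk -d /dev/sda | sfdisk /dev/sdc      # clone the table from a healthy member
mdadm --manage /dev/md0 --add /dev/sdc1   # add the new partition; RAID6 recovery starts
cat /proc/mdstat                          # watch the rebuild progress
mdadm --detail /dev/md0                   # confirm the new member's state afterwards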

Cheers,

John.


* Re: Is this likely to cause me problems?
  2010-09-22 11:25     ` John Robinson
@ 2010-09-22 14:38       ` Jon Hardcastle
  2010-09-23 13:14         ` John Robinson
  0 siblings, 1 reply; 9+ messages in thread
From: Jon Hardcastle @ 2010-09-22 14:38 UTC (permalink / raw)
  To: John Robinson; +Cc: linux-raid

--- On Wed, 22/9/10, John Robinson <john.robinson@anonymous.org.uk> wrote:

> From: John Robinson <john.robinson@anonymous.org.uk>
> Subject: Re: Is this likely to cause me problems?
> To: Jon@eHardcastle.com
> Cc: linux-raid@vger.kernel.org
> Date: Wednesday, 22 September, 2010, 12:25
> On 22/09/2010 07:42, Jon Hardcastle wrote:
> [...]
> > So do you have any recommendations? I would like to 'trust' the new version of fdisk but I cannot risk torpedoing myself. I have 2 more drives I need to 'phase out' at some point, but they will likely be replaced with 1.5TB drives.
>
> The new layout you've got is meant for SSDs and drives with 4K sectors; the old layout is fine for you.
>
> > My gut tells me that, whilst I have other drives of the same size, I should use the same parameters... then when I have a bigger drive that is definitely not going to cause any size issues, let fdisk do its magic.
> >
> > So, following that premise, is there any downside to copying the partition table off another drive?
>
> I can't think of one. For backup, if I remember correctly Doug Ledford's hot-swap auto-rebuilding onto virgin drives work was going to create partition tables by copying them from existing drives (or by having the user copy the required partition table from a drive in advance).
> 
> Cheers,
> 
> John.
> 

Hi,

Thanks for your help. I have been doing some background reading and am convincing myself to leave the boundaries as they are, as it appears there are performance gains to be had? Assuming this is true, as long as the partition size is LARGER than the other 1TB partitions I should be OK, right?

Device Boot         Start  End          Blocks
/dev/sda1              63  1953520064   976760001
/dev/sdc1            2048  1953525167   976761560

If I subtract Start from End (adding 1, since the End sector is inclusive):
sda = 1953520064 - 63 + 1   = 1953520002
sdc = 1953525167 - 2048 + 1 = 1953523120 (3118 larger than sda)

As long as sdc is larger, which it is by 3118 sectors, I should be OK, right?
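
(As a sanity check, the kernel's view of the two partition sizes should agree with that arithmetic - the expected values below are simply the Blocks column doubled, i.e. 512-byte sectors:)

blockdev --getsz /dev/sda1    # expect 1953520002 sectors (976760001 blocks * 2)
blockdev --getsz /dev/sdc1    # expect 1953523120 sectors (976761560 blocks * 2)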

I am even thinking about individually removing my drives from the array and letting fdisk use its new calculations for the existing drives. I could do with better performance!


      


* Re: Is this likely to cause me problems?
  2010-09-22 14:38       ` Jon Hardcastle
@ 2010-09-23 13:14         ` John Robinson
  0 siblings, 0 replies; 9+ messages in thread
From: John Robinson @ 2010-09-23 13:14 UTC (permalink / raw)
  To: Jon; +Cc: linux-raid

On 22/09/2010 15:38, Jon Hardcastle wrote:
[...]
> Thanks for your help. I have been doing some background reading and am convincing myself to leave the boundaries as they are, as it appears there are performance gains to be had? Assuming this is true, as long as the partition size is LARGER than the other 1TB partitions I should be OK, right?
>
> Device Boot         Start  End          Blocks
> /dev/sda1              63  1953520064   976760001
> /dev/sdc1            2048  1953525167   976761560
>
> If I subtract Start from End (adding 1, since the End sector is inclusive):
> sda = 1953520064 - 63 + 1   = 1953520002
> sdc = 1953525167 - 2048 + 1 = 1953523120 (3118 larger than sda)
>
> As long as sdc is larger, which it is by 3118 sectors, I should be OK, right?
>
> I am even thinking about individually removing my drives from the array and letting fdisk use its new calculations for the existing drives. I could do with better performance!

Don't do that. There is no performance benefit from aligning your 
partitions. There would be a performance benefit to making LVM align 
itself correctly over md RAID stripes, and the filesystem over LVM or md 
RAID, but there is no performance benefit from aligning md RAID over 
partitions, *unless* you have 4K sector drives or SSD.
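
(To make that concrete, a minimal sketch of where alignment does pay off - the 64K chunk and the 7-drive/5-data-disc layout are assumptions, so read the real values off mdadm --detail, and check the exact extended-option spellings against mke2fs(8) and pvcreate(8) on your system:)

mdadm --detail /dev/md0 | grep -i chunk      # say it reports Chunk Size : 64K
# RAID6 over 7 discs leaves 5 data discs per stripe, so with 4K filesystem blocks:
#   stride       = 64K / 4K         = 16 blocks
#   stripe width = stride * 5 discs = 80 blocks
mkfs.ext4 -E stride=16,stripe_width=80 /dev/md0    # filesystem directly on the array
# Or, if LVM sits on top, align the PV data area to one full stripe instead:
pvcreate --dataalignment 320k /dev/md0             # 64K chunk * 5 data discs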

Honestly you are better off duplicating your original partition table 
onto your new drive so all your partitions are the same, mostly so there 
can't be any more confusion later on.

Cheers,

John.

