linux-raid.vger.kernel.org archive mirror
* Stupid Question?
@ 2004-05-23 21:52 AndyLiebman
  2004-05-23 23:23 ` Jose Luis Domingo Lopez
  2004-05-24  2:01 ` jim
  0 siblings, 2 replies; 9+ messages in thread
From: AndyLiebman @ 2004-05-23 21:52 UTC (permalink / raw)
  To: linux-raid

Hi, 

I feel like this is a stupid question. But I actually don't know the answer 
to it. If I'm going to make a Software RAID array with a bunch of identical 
disks, do the disks have to have at least one partition on them? Or can I use 
disks with NO partitions? 

Similarly, if I have made a hardware RAID array (say, with a 3ware 8506 
card), do I have to create at least a single partition on it before I put a file 
system on it? 

If partitions aren't necessary, is there any advantage or disadvantage to 
having a single partition on a disk versus having none? Is having no partitions 
faster? 

Andy Liebman

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Stupid Question?
  2004-05-23 21:52 Stupid Question? AndyLiebman
@ 2004-05-23 23:23 ` Jose Luis Domingo Lopez
  2004-05-24  2:01 ` jim
  1 sibling, 0 replies; 9+ messages in thread
From: Jose Luis Domingo Lopez @ 2004-05-23 23:23 UTC (permalink / raw)
  To: linux-raid

On Sunday, 23 May 2004, at 17:52:48 -0400,
AndyLiebman@aol.com wrote:

> I feel like this is a stupid question. But I actually don't know the answer 
> to it. If I'm going to make a Software RAID array with a bunch of identical 
> disks, do the disks have to have at least one partition on them? Or can I use 
> disks with NO partitions? 
> 
You can use any block device as part of a Linux software RAID device, so
full disks with no partitions inside should be OK. I seem to remember
that this is not the case with LVM, where you have to (or should?) create
a partition even if you want to use the full disk, but it's late and
maybe I am mixing things up.
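
Something like this should work with mdadm (just a sketch -- the device
names are made up, and creating the array destroys any data on those
disks):

```shell
# Build a RAID-5 array directly from whole, unpartitioned disks.
# WARNING: this overwrites the named devices.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/hdb /dev/hdc /dev/hdd

# Check that the array assembled.
cat /proc/mdstat
```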

> Similarly, if I have made a hardware RAID array (say, with a 3ware 8506 
> card), do I have to create at least a single partition on it before I put a file 
> system on it? 
> 
I don't think so. You can "format" any block device: a disk partition,
a logical volume, or even a file on disk through the loop device, so the
requirement for a partition table doesn't seem to exist. I think the
only thing needed is a (major, minor) device node through which to
access the underlying block device.
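
For instance, a filesystem can go straight into a regular file, with no
partition table anywhere (a sketch; the path /tmp/disk.img is made up):

```shell
# Build an ext2 filesystem inside a plain 8 MB file.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=8
mkfs.ext2 -F -q /tmp/disk.img       # -F: force mkfs onto a non-device
# mount -o loop /tmp/disk.img /mnt  # (mounting needs root)
```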

> If partitions aren't necessary, is there any advantage or disadvantage to 
> having a single partition on a disk versus having none? Is having no partitions 
> faster? 
> 
A disk with no partition table is a contiguous block device from sector
zero to the end of the device. Maybe you should follow LVM's advice
about using full disks with no partition table. From pvcreate(8):

DESCRIPTION
       pvcreate initializes PhysicalVolume for later use by the Logical
       Volume Manager (LVM).  Each PhysicalVolume can be a disk partition,
       whole disk, meta device, or loopback file.  For DOS disk partitions,
       the partition id should be set to 0x8e using fdisk(8), cfdisk(8),
       or an equivalent.  For whole disk devices only the partition table
       must be erased, which will effectively destroy all data on that
       disk.  This can be done by zeroing the first sector with:

       dd if=/dev/zero of=PhysicalVolume bs=512 count=1


Greetings.

-- 
Jose Luis Domingo Lopez
Linux Registered User #189436     Debian Linux Sid (Linux 2.6.6)


* Re: Stupid Question?
  2004-05-23 21:52 Stupid Question? AndyLiebman
  2004-05-23 23:23 ` Jose Luis Domingo Lopez
@ 2004-05-24  2:01 ` jim
  2004-05-24  2:33   ` Neil Brown
  1 sibling, 1 reply; 9+ messages in thread
From: jim @ 2004-05-24  2:01 UTC (permalink / raw)
  To: AndyLiebman; +Cc: linux-raid

> 
> Hi, 
> 
> I feel like this is a stupid question. But I actually don't know the answer 
> to it. If I'm going to make a Software RAID array with a bunch of identical 
> disks, do the disks have to have at least one partition on them? Or can I use 
> disks with NO partitions? 

Yes, the disks have to have partitions.  When you specify "make a raid
out of X, Y, Z", the X, Y, and Z are partition names, like /dev/hda1,
/dev/hdb1.  You can't make a raid out of /dev/hda and /dev/hdb - whole
devices.

> 
> Similarly, if I have made a hardware RAID array (say, with a 3ware 8506 
> card), do I have to create at least a single partition on it before I put a file 
> system on it? 

Hardware raid is usually device oriented, so you do say "make a raid
pair out of drive 0 and drive 1", equivalent to /dev/hda and /dev/hdb
for example.  This drive pair is then presented to Linux as a single
hardware device, like /dev/hdi.  You always need to make partitions
with fdisk before initializing each partition with a filesystem using
mkfs.
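
That sequence would look roughly like this (a sketch; the device name
/dev/sda for the exported array is an assumption, and this destroys any
data on it):

```shell
# Create one partition spanning the exported array, then put a
# filesystem on it.  sfdisk is scriptable; fdisk does the same
# thing interactively.
echo ',,L' | sfdisk /dev/sda   # one Linux partition, whole device
mkfs -t ext3 /dev/sda1         # filesystem goes on the partition
```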

> If partitions aren't necessary, is there any advantage or disadvantage to 
> having a single partition on a disk versus having none? Is having no partitions 
> faster? 
> 
> Andy Liebman



* Re: Stupid Question?
  2004-05-24  2:01 ` jim
@ 2004-05-24  2:33   ` Neil Brown
  2004-05-24  9:09     ` AW: " root
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2004-05-24  2:33 UTC (permalink / raw)
  To: jim; +Cc: AndyLiebman, linux-raid

On Sunday May 23, jim@rubylane.com wrote:
> > 
> > Hi, 
> > 
> > I feel like this is a stupid question. But I actually don't know the answer 
> > to it. If I'm going to make a Software RAID array with a bunch of identical 
> > disks, do the disks have to have at least one partition on them? Or can I use 
> > disks with NO partitions? 
> 
> Yes, the disks have to have partitions.  When you specify "make a raid
> out of X, Y, Z", the X, Y, and Z are partition names, like /dev/hda1,
> /dev/hdb1.  You can't make a raid out of /dev/hda and /dev/hdb - whole
> devices.

Why not?  I do it all the time.

You cannot get the kernel to auto-detect the raid if it is made from
whole devices, but I don't care much about that.  It works for me.

NeilBrown


* AW: Stupid Question?
  2004-05-24  2:33   ` Neil Brown
@ 2004-05-24  9:09     ` root
  2004-05-24 22:58       ` Neil Brown
  0 siblings, 1 reply; 9+ messages in thread
From: root @ 2004-05-24  9:09 UTC (permalink / raw)
  To: 'Neil Brown', jim; +Cc: AndyLiebman, linux-raid

Hi,

> > > I feel like this is a stupid question. But I actually 
> don't know the 
> > > answer
> > > to it. If I'm going to make a Software RAID array with a 
> bunch of identical 
> > > disks, do the disks have to have at least one partition 
> on them? Or can I use 
> > > disks with NO partitions? 
> > 
> > Yes, the disks have to have partitions.  When you specify 
> "make a raid 
> > out of X, Y, Z", the X, Y, and Z are partition names, like 
> /dev/hda1, 
> > /dev/hdb1.  You can't make a raid out of /dev/hda and 
> /dev/hdb - whole 
> > devices.
> 
> Why not?  I do it all the time.
> 
> You cannot get the kernel to auto-detect the raid if it is 
> made from whole devices, but I don't care much about that.  
> It works for me.

What's the benefit of making an array out of whole devices? Are there
any problems, for example that you maybe cannot boot from that array?

Thanks,

Greetings,
Stephan



* Re: Stupid Question?
@ 2004-05-24  9:33 AndyLiebman
  0 siblings, 0 replies; 9+ messages in thread
From: AndyLiebman @ 2004-05-24  9:33 UTC (permalink / raw)
  To: jim; +Cc: linux-raid


>You always need to make partitions
>with fdisk before initializing the partition with a filesystem using
>mkfs.

After creating a Hardware RAID-5 with a 3ware 8506 card -- which comes up as 
/dev/sda on my machine (my system drive is a single IDE drive, /dev/hda) -- I 
have found that I AM able to put my filesystem on the RAID without making a 
partition. 

I ask this question because I have seen some discussions about issues
with "blockdev" and "block sizes" and how maybe you can get better
throughput on devices if they are "higher" up the chain, i.e., /dev/sda
is higher than /dev/sda1 and /dev/sda2.

I didn't see any problems "formatting" or putting a file system on a
whole block device like /dev/sda. I ran Bonnie++ on it without issue (I
got the same results as with /dev/sda1 -- but I'm now looking into what
I've been reading about blockdev and doing some tuning). But maybe I'm
missing something. I'm NOT booting from the RAID; it's just for data
storage.

By the way, for those with 3ware 8506 cards: at the suggestion of 3ware
I tuned the readahead value on my RAID with 'blockdev --setra 16384
/dev/sdX' (actually they had recommended this for their forthcoming 9500
series), and my read results with Bonnie++ jumped from 100 to 200 MB/sec
on an 8-drive SATA RAID-5 array. I don't yet know how the tuning will
affect real-world results.
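
For anyone experimenting, the old value can be saved and restored
around a benchmark run (a sketch; /dev/sda is an assumption, and
blockdev needs root):

```shell
old=$(blockdev --getra /dev/sda)   # current readahead, in 512-byte sectors
blockdev --setra 16384 /dev/sda    # 16384 sectors = 8 MB of readahead
# ... run Bonnie++ or other benchmarks here ...
blockdev --setra "$old" /dev/sda   # put the original value back
```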

Might similar tuning make any difference for Software RAID-5? (I
already get 175 MB/sec using the same 3ware card in its JBOD mode.)

Andy Liebman


* Re: AW: Stupid Question?
  2004-05-24  9:09     ` AW: " root
@ 2004-05-24 22:58       ` Neil Brown
  2004-05-24 23:11         ` robin-lists
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2004-05-24 22:58 UTC (permalink / raw)
  To: root; +Cc: jim, AndyLiebman, linux-raid

On Monday May 24, root@warpy.yomeganet.biz wrote:
> > >             You can't make a raid out of /dev/hda and 
> > /dev/hdb - whole 
> > > devices.
> > 
> > Why not?  I do it all the time.
> > 
> > You cannot get the kernel to auto-detect the raid if it is 
> > made from whole devices, but I don't care much about that.  
> > It works for me.
> 
> Whats the benefit of doing an Array over the whole devices? Are there
> any problems, for example that you maybe cannot boot from that array?
> 

No particular benefit.  But if I want to RAID together a number of
whole drives, there seems little point in partitioning them first.  It
would certainly work either way.  I choose the way that makes the most
sense in my situation.

I often RAID1 a pair of whole devices, partition them into
root/swap/rest and boot off them (this is using md partitioning code
which isn't in 2.4, but is in 2.6).  There is a problem here that lilo
thinks it knows all about md arrays.  However I figured out how to
trick it into not realising they are md arrays and it works fine (I
create a symlink from /dev/MD* -> /dev/md* and only tell lilo about
/dev/MD*).
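
The trick might look like this (a sketch reconstructed from the
description above; the array name /dev/md0 and the lilo.conf lines are
assumptions):

```shell
# Give the md array a second name that lilo doesn't recognise as a
# RAID device, and refer only to the alias in lilo.conf.
ln -s /dev/md0 /dev/MD0

# /etc/lilo.conf then contains something like:
#   boot=/dev/MD0
#   root=/dev/MD0
```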

NeilBrown



* RE: AW: Stupid Question?
  2004-05-24 22:58       ` Neil Brown
@ 2004-05-24 23:11         ` robin-lists
  2004-05-24 23:40           ` Neil Brown
  0 siblings, 1 reply; 9+ messages in thread
From: robin-lists @ 2004-05-24 23:11 UTC (permalink / raw)
  To: linux-raid

Neil Brown wrote:
> 
> I often RAID1 a pair of whole devices, partition them into
> root/swap/rest and boot off them (this is using md partitioning code
> which isn't in 2.4, but is in 2.6).  There is a problem here that lilo
> thinks it knows all about md arrays.  However I figured out how to
> trick it into not realising they are md arrays and it works fine (I
> create a symlink from /dev/MD* -> /dev/md* and only tell lilo about
> /dev/MD*).

Neil,

Would you care to elaborate?

I'm about to build a box with 6 x 250GB SATA disks and am currently
designing the partitioning scheme.

Could I potentially put in a small IDE disk onto which to install just
enough system to boot and mount a RAID5 array built from the 6 whole disks?

Presumably you then partition the RAID array, create the filesystems and
mount the partitions as you would a single disk?

Thanks,

R.



* RE: AW: Stupid Question?
  2004-05-24 23:11         ` robin-lists
@ 2004-05-24 23:40           ` Neil Brown
  0 siblings, 0 replies; 9+ messages in thread
From: Neil Brown @ 2004-05-24 23:40 UTC (permalink / raw)
  To: robin-lists; +Cc: linux-raid

On Tuesday May 25, robin-lists@robinbowes.com wrote:
> Neil Brown wrote:
> > 
> > I often RAID1 a pair of whole devices, partition them into
> > root/swap/rest and boot off them (this is using md partitioning code
> > which isn't in 2.4, but is in 2.6).  There is a problem here that lilo
> > thinks it knows all about md arrays.  However I figured out how to
> > trick it into not realising they are md arrays and it works fine (I
> > create a symlink from /dev/MD* -> /dev/md* and only tell lilo about
> > /dev/MD*).
> 
> Neil,
> 
> Would you care to elaborate?
> 
> I'm about to build a box with 6 x 250GB SATA disks and am currently
> designing the partitioning scheme.
> 
> Could I potentially put in a small IDE disk onto which to install just
> enough system to boot and mount a RAID5 array built from the 6 whole disks?
> 
> Presumably you then partition the RAID array, create the filesystems and
> mount the partitions as you would a single disk?

In this configuration I wouldn't raid whole drives.
I would partition each drive into
 6 gig root
 2 gig swap
 N-8 gig extra

I would raid1 2 or 3 root partitions together and add the rest as hot
spares.
I would raid1 2 or 3 swap partitions together and add the rest as hot
spares. 
I would raid5 the extra partitions together and put the bulk storage
there.

If you want a single large root and no extra partitions, I would then
make a 1 gig boot and an N-1 gig root partition on each device,
raid1 the boot partitions together and put kernels there,
raid5 the root partitions together and put everything there, and
swap to a file on the root filesystem.
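
That second layout might be assembled like this (a sketch using mdadm;
the partition names /dev/sd[a-f]1 and /dev/sd[a-f]2 are assumptions,
and this destroys any data on them):

```shell
# /dev/sd[a-f]1: small boot partitions; /dev/sd[a-f]2: large root partitions.
# RAID1 the boot partitions (kernels live here)...
mdadm --create /dev/md0 --level=1 --raid-devices=6 /dev/sd[a-f]1
# ...and RAID5 the root partitions (everything else lives here).
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[a-f]2
# No swap partitions: swap goes to a file on the root filesystem.
```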

The two situations where I raid together whole devices are:

1/ computers with only two drives - then I mirror and partition.
2/ fileservers where the main storage drives are physically separate
in some way, either in a different cage, or in the same cage but of a
different size (2 smaller drives for boot/root/etc, several larger
drives for storage).  Then I raid5 all the storage devices together.

In between these two (just two drives, and lots of drives) is the
several-drives-in-one-case situation, where partitioning and then
RAIDing works well.

NeilBrown


