public inbox for linux-btrfs@vger.kernel.org
* Raid1 with 3 drives
@ 2010-03-05 19:28 Grady Neely
  2010-03-05 19:40 ` Josef Bacik
  0 siblings, 1 reply; 9+ messages in thread
From: Grady Neely @ 2010-03-05 19:28 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I have three 1TB drives that I wanted to make a RAID1 system on.  I issued the command "mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd", and it seems to have created the fs with no issues.  When I do a df -h, I see that the available space is 3TB.  It seems like with RAID1 I would only see 1TB available, and the other two drives would mirror the first.  Am I misunderstanding how RAID1 works under btrfs?  Can you have more than two drives in RAID1 in btrfs, so you can survive multiple drive failures?  Is there a better option with only 3 drives?  I am not wedded to RAID1 if there is a better way.



Here is my uname -a:

Linux gemini 2.6.31-19-generic #56-Ubuntu SMP Thu Jan 28 01:26:53 UTC 2010 i686 GNU/Linux


And I am using btrfs 0.19.

Thank you,




* Re: Raid1 with 3 drives
  2010-03-05 19:28 Raid1 with 3 drives Grady Neely
@ 2010-03-05 19:40 ` Josef Bacik
  2010-03-05 19:58   ` Chris Ball
  0 siblings, 1 reply; 9+ messages in thread
From: Josef Bacik @ 2010-03-05 19:40 UTC (permalink / raw)
  To: Grady Neely; +Cc: linux-btrfs

On Fri, Mar 05, 2010 at 01:28:00PM -0600, Grady Neely wrote:
> Hello,
> 
> I have three 1TB drives that I wanted to make a RAID1 system on.  I issued the command "mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd", and it seems to have created the fs with no issues.  When I do a df -h, I see that the available space is 3TB.  It seems like with RAID1 I would only see 1TB available, and the other two drives would mirror the first.  Am I misunderstanding how RAID1 works under btrfs?  Can you have more than two drives in RAID1 in btrfs, so you can survive multiple drive failures?  Is there a better option with only 3 drives?  I am not wedded to RAID1 if there is a better way.
> 
> 

DF with btrfs is a loaded question.  In the RAID1 case you are going to show 3TB
of free space, but every time you use some space you are going to show 3 times
the amount used (I think that's right).  There are some patches forthcoming to
make the reporting for RAID stuff make more sense, but for the time being just
ignore df.  Thanks,

Josef
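[Editor's note: the raw-space accounting Josef describes can be sketched numerically. This is an illustration, not btrfs source; the device sizes are examples, and since raid1 keeps two copies of every block, the sketch uses a duplication factor of 2 for used space.]

```python
TB = 10**12
GB = 10**9

def df_view(disk_sizes, data_written, copies=2):
    """Sketch of old-style df on a btrfs raid1 volume.

    'size' is the raw total across all devices, while every byte of
    file data consumes `copies` bytes of raw space (2 for raid1), so
    'used' grows faster than the amount of data actually stored.
    """
    size = sum(disk_sizes)
    used = data_written * copies
    return {"size": size, "used": used, "avail": size - used}

# Three 1TB drives with 100GB of files written:
view = df_view([1 * TB] * 3, data_written=100 * GB)
print(view)  # size: 3 TB raw, used: 200 GB, avail: 2.8 TB
```

This is why "just ignore df" was the advice at the time: both the size and the used columns are raw-device numbers, not file-level numbers.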


* Re: Raid1 with 3 drives
  2010-03-05 19:40 ` Josef Bacik
@ 2010-03-05 19:58   ` Chris Ball
  2010-03-05 20:29     ` Grady Neely
  0 siblings, 1 reply; 9+ messages in thread
From: Chris Ball @ 2010-03-05 19:58 UTC (permalink / raw)
  To: Josef Bacik; +Cc: Grady Neely, linux-btrfs

Hi,

   > DF with btrfs is a loaded question.  In the RAID1 case you are
   > going to show 3TB of free space, but every time you use some space
   > you are going to show 3 times the amount used (I think that's
   > right).  There are some patches forthcoming to make the reporting
   > for RAID stuff make more sense, but for the time being just
   > ignore df.

Added to:

http://btrfs.wiki.kernel.org/index.php/FAQ#Why_does_df_show_incorrect_free_space_for_my_RAID_volume.3F

since we're often seeing this question on the list and IRC.

- Chris.
-- 
Chris Ball   <cjb@laptop.org>
One Laptop Per Child


* Re: Raid1 with 3 drives
  2010-03-05 19:58   ` Chris Ball
@ 2010-03-05 20:29     ` Grady Neely
  2010-03-05 20:31       ` Josef Bacik
  0 siblings, 1 reply; 9+ messages in thread
From: Grady Neely @ 2010-03-05 20:29 UTC (permalink / raw)
  To: Chris Ball; +Cc: Josef Bacik, linux-btrfs

Thank you!

One more question:

Since I have three devices in a RAID1 pool, can it survive 2 drive failures?


On Mar 5, 2010, at 1:58 PM, Chris Ball wrote:

> Hi,
> 
>> DF with btrfs is a loaded question.  In the RAID1 case you are
>> going to show 3TB of free space, but every time you use some space
>> you are going to show 3 times the amount used (I think that's
>> right).  There are some patches forthcoming to make the reporting
>> for RAID stuff make more sense, but for the time being just
>> ignore df.
> 
> Added to:
> 
> http://btrfs.wiki.kernel.org/index.php/FAQ#Why_does_df_show_incorrect_free_space_for_my_RAID_volume.3F
> 
> since we're often seeing this question on the list and IRC.
> 
> - Chris.
> -- 
> Chris Ball   <cjb@laptop.org>
> One Laptop Per Child



* Re: Raid1 with 3 drives
  2010-03-05 20:29     ` Grady Neely
@ 2010-03-05 20:31       ` Josef Bacik
  2010-03-05 21:49         ` Bart Noordervliet
  0 siblings, 1 reply; 9+ messages in thread
From: Josef Bacik @ 2010-03-05 20:31 UTC (permalink / raw)
  To: Grady Neely; +Cc: Chris Ball, Josef Bacik, linux-btrfs

On Fri, Mar 05, 2010 at 02:29:56PM -0600, Grady Neely wrote:
> Thank you!
> 
> One more question:
> 
> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
> 

Yes, though you won't be able to remove more than 1 at a time (since it wants you
to keep at least two disks around).  Thanks,

Josef


* Re: Raid1 with 3 drives
  2010-03-05 20:31       ` Josef Bacik
@ 2010-03-05 21:49         ` Bart Noordervliet
  2010-03-05 22:13           ` Mike Fedyk
  2010-03-06  1:02           ` Ravi Pinjala
  0 siblings, 2 replies; 9+ messages in thread
From: Bart Noordervliet @ 2010-03-05 21:49 UTC (permalink / raw)
  To: linux-btrfs

On Fri, Mar 5, 2010 at 21:31, Josef Bacik <josef@redhat.com> wrote:
>> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>
> Yes, though you won't be able to remove more than 1 at a time (since it wants you
> to keep at least two disks around).  Thanks,
>
> Josef

Hmm, I would expect the raid1 data mode to keep 2 copies of each file
and thus yield 50% effective storage capacity, even with 3 disks. I
see no real reason to stick with the full-disk mirroring mentality of
previous raid systems since raid implemented in a filesystem works
differently. Or would it be difficult to implement btrfs raid1 like
this?
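[Editor's note: Bart's 50% expectation can be written down directly. A sketch of the capacity arithmetic under the assumption that every chunk must be mirrored on two different disks, ignoring chunk granularity and metadata overhead:]

```python
def raid1_usable(disk_sizes):
    """Usable bytes when each chunk must exist on two different disks:
    half the raw total, but never more than what the rest of the pool
    can mirror against the largest disk."""
    total = sum(disk_sizes)
    return min(total // 2, total - max(disk_sizes))

TB = 10**12
print(raid1_usable([1 * TB, 1 * TB, 1 * TB]))  # 1.5 TB usable from 3 TB raw: 50%
print(raid1_usable([2 * TB, 1 * TB]))          # 1 TB: capped by what can mirror the 2 TB disk
```

So with three equal disks the 50% figure holds; with unequal disks the second term of the min() is what bites.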

Maybe it's worth considering leaving the burdened raid* terminology
behind and naming the btrfs redundancy modes more clearly by what they
do.  For instance "-d double|triple" or "-d 2n|3n".  And for raid5/6 "-d
single-parity|double-parity" or "-d n+1|n+2".
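[Editor's note: the proposed names map cleanly onto (copies, parity) pairs. A sketch of that correspondence; the mode names are Bart's proposal, not actual mkfs.btrfs options, and the mapping is one reading of them:]

```python
# (data copies, parity devices per stripe) for each proposed mode name
MODES = {
    "double":        (2, 0),  # a.k.a. "2n"; today's raid1
    "triple":        (3, 0),  # a.k.a. "3n"
    "single-parity": (1, 1),  # a.k.a. "n+1"; raid5-style
    "double-parity": (1, 2),  # a.k.a. "n+2"; raid6-style
}

def tolerated_failures(mode):
    """Device failures survivable without data loss under each mode."""
    copies, parity = MODES[mode]
    return (copies - 1) + parity

print(tolerated_failures("double"))         # 1
print(tolerated_failures("double-parity"))  # 2
```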

Regards,

Bart
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Raid1 with 3 drives
  2010-03-05 21:49         ` Bart Noordervliet
@ 2010-03-05 22:13           ` Mike Fedyk
  2010-03-05 22:27             ` Hubert Kario
  2010-03-06  1:02           ` Ravi Pinjala
  1 sibling, 1 reply; 9+ messages in thread
From: Mike Fedyk @ 2010-03-05 22:13 UTC (permalink / raw)
  To: Bart Noordervliet; +Cc: linux-btrfs

On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet <bart@noordervliet.net> wrote:
> On Fri, Mar 5, 2010 at 21:31, Josef Bacik <josef@redhat.com> wrote:
>>> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>>
>> Yes, though you won't be able to remove more than 1 at a time (since it wants you
>> to keep at least two disks around).  Thanks,
>>
>> Josef
>
> Hmm, I would expect the raid1 data mode to keep 2 copies of each file
> and thus yield 50% effective storage capacity, even with 3 disks. I
> see no real reason to stick with the full-disk mirroring mentality of
> previous raid systems since raid implemented in a filesystem works
> differently. Or would it be difficult to implement btrfs raid1 like
> this?
>
> Maybe it's worth considering leaving the burdened raid* terminology
> behind and naming the btrfs redundancy modes more clearly by what they
> do. For instance "-d double|triple" or "-d 2n|3n". And for raid5/6 "-d
> single-parity|double-parity" or "-d n+1|n+2".
>

+1


* Re: Raid1 with 3 drives
  2010-03-05 22:13           ` Mike Fedyk
@ 2010-03-05 22:27             ` Hubert Kario
  0 siblings, 0 replies; 9+ messages in thread
From: Hubert Kario @ 2010-03-05 22:27 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: Bart Noordervliet, linux-btrfs

On Friday 05 March 2010 23:13:54 Mike Fedyk wrote:
> On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet <bart@noordervliet.net> wrote:
> > Maybe it's worth considering leaving the burdened raid* terminology
> > behind and naming the btrfs redundancy modes more clearly by what they
> > do. For instance "-d double|triple" or "-d 2n|3n". And for raid5/6 "-d
> > single-parity|double-parity" or "-d n+1|n+2".
>
> +1

Good idea IMHO.

When we are able to specify the redundancy modes on a file-by-file basis,
it will make it much less confusing for users to talk about double or
triple replication, or [single|double]-parity.

It's a bit silly to talk about "Arrays of Disks" when we mean groups of
blocks.

--
Hubert Kario
QBS - Quality Business Software
ul. Ksawerów 30/85
02-656 Warszawa
POLAND
tel. +48 (22) 646-61-51, 646-74-24
fax +48 (22) 646-61-50


* Re: Raid1 with 3 drives
  2010-03-05 21:49         ` Bart Noordervliet
  2010-03-05 22:13           ` Mike Fedyk
@ 2010-03-06  1:02           ` Ravi Pinjala
  1 sibling, 0 replies; 9+ messages in thread
From: Ravi Pinjala @ 2010-03-06  1:02 UTC (permalink / raw)
  To: Bart Noordervliet; +Cc: linux-btrfs

On 03/05/10 15:49, Bart Noordervliet wrote:
> On Fri, Mar 5, 2010 at 21:31, Josef Bacik<josef@redhat.com>  wrote:
>>> Since I have three devices in a RAID1 pool, can it survive 2 drive failures?
>>
>> Yes, tho you won't be able to remove more than 1 at a time (since it wants you
>> to keep at least two disks around).  Thanks,
>>
>> Josef
>
> Hmm, I would expect the raid1 data mode to keep 2 copies of each file
> and thus yield 50% effective storage capacity, even with 3 disks. I
> see no real reason to stick with the full-disk mirroring mentality of
> previous raid systems since raid implemented in a filesystem works
> differently. Or would it be difficult to implement btrfs raid1 like
> this?
>
> Maybe it's worth to consider leaving the burdened raid* terminology
> behind and name the btrfs redundancy modes more clearly by what they
> do. For instance "-d double|triple" or "-d 2n|3n". And for raid5/6 "-d
> single-parity|double-parity" or "-d n+1|n+2".
>
> Regards,
>
> Bart

This would be pretty excellent - there's a real need for a storage 
system where you can just give it a bunch of disks and a policy, and let 
the system worry about the details. Current RAID implementations are 
pretty inflexible, for example when dealing with disks of varying size.

--Ravi

