* ANNOUNCE: mdadm 1.7.0 - A tool for managing Soft RAID under Linux
@ 2004-08-11 2:29 Neil Brown
2004-08-11 13:55 ` Jure Pečar
0 siblings, 1 reply; 7+ messages in thread
From: Neil Brown @ 2004-08-11 2:29 UTC (permalink / raw)
To: linux-raid
I am pleased to announce the availability of
mdadm version 1.7.0
It is available at
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
as a source tar-ball, and (at the first site) as an SRPM and as an RPM for i386.
mdadm is a tool for creating, managing and monitoring
device arrays using the "md" driver in Linux, also
known as Software RAID arrays.
Release 1.7.0 adds:
- Support "--grow --add" to add a device to a linear array, if the
kernel supports it. Not documented yet.
- Restore support for uclibc which was broken recently.
- Several improvements to the output of --detail, including
reporting "resyncing" or "recovering" in the state.
- Close file descriptor at end of --detail (exit would have closed it
anyway, so this isn't a big deal).
- Report "Sync checkpoint" in --examine output if appropriate.
- Add --update=resync for --assemble mode to force a resync when the
array is assembled.
- Add support for "raid10", which is under development in 2.6.
Not documented yet.
- --monitor now reads spare-group and spares info from config file
even when names of arrays to scan are given on the command line
It is expected that the next full release of mdadm will be 2.0.0
and it will have substantially re-written handling for superblocks and
array creation. In particular, it will be able to work with the new
superblock format (version 1) supported by 2.6.
Prior to that, some point releases (1.7.1, 1.7.2 ...) may be released
so that the changes can be tested by interested parties.
Development of mdadm is sponsored by CSE@UNSW:
The School of Computer Science and Engineering
at
The University of New South Wales
NeilBrown 11 August 2004
* Re: ANNOUNCE: mdadm 1.7.0 - A tool for managing Soft RAID under Linux
2004-08-11 2:29 ANNOUNCE: mdadm 1.7.0 - A tool for managing Soft RAID under Linux Neil Brown
@ 2004-08-11 13:55 ` Jure Pečar
2004-08-11 23:04 ` Neil Brown
0 siblings, 1 reply; 7+ messages in thread
From: Jure Pečar @ 2004-08-11 13:55 UTC (permalink / raw)
To: linux-raid
On Wed, 11 Aug 2004 12:29:13 +1000
Neil Brown <neilb@cse.unsw.edu.au> wrote:
> - Add support for "raid10", which is under development in 2.6.
> Not documented yet.
Where can one read more about this? I assume it's going to be a separate
raid personality, not just raid0 over raid1? Or is it going to be a dm
mapping scheme?
--
Jure Pečar
* Re: ANNOUNCE: mdadm 1.7.0 - A tool for managing Soft RAID under Linux
2004-08-11 13:55 ` Jure Pečar
@ 2004-08-11 23:04 ` Neil Brown
2004-08-14 2:18 ` raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0) Jure Peèar
0 siblings, 1 reply; 7+ messages in thread
From: Neil Brown @ 2004-08-11 23:04 UTC (permalink / raw)
To: Jure Pečar; +Cc: linux-raid
On Wednesday August 11, pegasus@nerv.eu.org wrote:
> On Wed, 11 Aug 2004 12:29:13 +1000
> Neil Brown <neilb@cse.unsw.edu.au> wrote:
>
> > - Add support for "raid10", which is under development in 2.6.
> > Not documented yet.
>
> Where can one read more about this? I assume it's going to be a separate
> raid personality, not just raid0 over raid1? Or is it going to be a dm
> mapping scheme?
>
It is a separate personality that has combined raid1 and raid0
features.
Data is laid out in a raid0 style, but multiple copies of each chunk
are possible. There can be "near" copies, where copies of the one
block are at the same or similar offsets in different drives, and
"far" copies, where copies of the one block are at a substantial
offset from one drive to the next.
e.g. 2 near copies on a 3 drive array would look like:
drive1 drive2 drive3
A A B
B C C
D D E
E F F
: : :
2 far copies on a 3 drive array would look like:
A B C
D E F
G H I
: : :
C A B
F D E
I G H
: : :
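To make the two layouts concrete, here is a small C sketch of the placement arithmetic the diagrams suggest. It is illustrative only, not the md driver's code; the far-layout rule (each extra copy in its own region further down the disks, with the drives rotated once per copy) is inferred from the diagram above, and 'stride' stands for the number of rows in one copy region.

/* Sketch of the raid10 chunk placement described above.  Illustrative
 * only; this mirrors the diagrams, not the kernel implementation. */
#include <stdio.h>

/* "near" layout: copy i of chunk c occupies linear slot c*ncopies + i,
 * and slots wrap across the drives row by row. */
static void near_pos(int chunk, int copy, int ncopies, int ndrives,
                     int *drive, int *row)
{
	int slot = chunk * ncopies + copy;

	*drive = slot % ndrives;
	*row   = slot / ndrives;
}

/* "far" layout: copy j lives in its own region 'stride' rows further
 * down the disks, with the drives rotated once per copy. */
static void far_pos(int chunk, int copy, int ndrives, int stride,
                    int *drive, int *row)
{
	*drive = (chunk + copy) % ndrives;
	*row   = copy * stride + chunk / ndrives;
}

int main(void)
{
	int drive, row, c, i;

	/* 2 near copies on 3 drives: reproduces the A A B / B C C table. */
	for (c = 0; c < 6; c++)
		for (i = 0; i < 2; i++) {
			near_pos(c, i, 2, 3, &drive, &row);
			printf("chunk %c copy %d -> drive %d, row %d\n",
			       'A' + c, i, drive, row);
		}
	return 0;
}

Running it reproduces the near-copy table; calling far_pos() for the same chunks instead reproduces the two regions of the far-copy diagram.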
My current code substantially works, but there are a few bugs left in
it. You can find it by hunting around in:
http://neilb.web.cse.unsw.edu.au/patches/linux-devel/2.6/2004-08-11-22/
I included raid10 support in mdadm 1.7.0 so that as soon as the kernel
patch was ready, people would be able to try it out.
NeilBrown
* raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0)
2004-08-11 23:04 ` Neil Brown
2004-08-14 2:18 ` Jure Pečar
2004-08-14 4:28 ` Guy
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Jure Pečar @ 2004-08-14 2:18 UTC (permalink / raw)
To: linux-raid; +Cc: dm-devel
On Thu, 12 Aug 2004 09:04:21 +1000
Neil Brown <neilb@cse.unsw.edu.au> wrote:
> Data is laid out in a raid0 style, but multiple copies of each chunk
> are possible. There can be "near" copies, where copies of the one
> block are at the same or similar offsets in different drives, and
> "far" copies, where copies of the one block are at a substantial
> offset from one drive to the next.
... this really fuels my imagination.
Imagine having a pool of drives, where chunks of data are distributed evenly
across all drives in a redundant manner. If one drive dies, the chunks that
are not redundant anymore get their copies on the remaining drives, provided
that there's enough space left; if one or more drives are added to the
array, new chunks are written there until the balance is reached again.
Disk space could be the first key for balancing across the drives, with
transfer rate or seek time maybe added later. Maybe the pool could even
adapt dynamically to the i/o patterns ...
Am I dreaming (it's well over 4am here :)? Or is something like this
possible? Maybe not with a md personality, but by some daemon that would be
taking care of a dm map?
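Nothing like this exists in md today; purely as a hypothetical sketch of the failure-handling half of such a policy (every name and structure below is invented for illustration), a rebalancing daemon might restore lost redundancy like this:

/* Hypothetical sketch only: no such daemon exists, and all names here
 * are invented.  On a drive failure, give each chunk that lost a copy
 * a new one on the emptiest surviving drive not already holding it. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_DRIVES 16

struct chunk {
	int copies[2];			/* drive of each copy, -1 if lost */
};

struct pool {
	size_t free_chunks[MAX_DRIVES];	/* free space per drive, in chunks */
	bool alive[MAX_DRIVES];
	int ndrives;
};

/* Emptiest live drive with space, excluding the one holding the
 * surviving copy.  Returns -1 if no drive qualifies. */
static int pick_target(const struct pool *p, int avoid)
{
	int d, best = -1;

	for (d = 0; d < p->ndrives; d++) {
		if (!p->alive[d] || d == avoid || p->free_chunks[d] == 0)
			continue;
		if (best < 0 || p->free_chunks[d] > p->free_chunks[best])
			best = d;
	}
	return best;
}

static void restore_redundancy(struct pool *p, struct chunk *c)
{
	int i;

	for (i = 0; i < 2; i++) {
		int other, target;

		if (c->copies[i] != -1)
			continue;		/* this copy is intact */
		other = c->copies[1 - i];
		if (other < 0)
			return;			/* both copies gone: data lost */
		target = pick_target(p, other);
		if (target < 0)
			return;			/* no space: stays degraded */
		/* ... copy the chunk's data from 'other' to 'target' ... */
		c->copies[i] = target;
		p->free_chunks[target]--;
	}
}

The same bookkeeping run in reverse (moving copies onto a newly added drive until free space evens out) would cover the grow case.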
--
Jure Pečar
* RE: raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0)
2004-08-14 2:18 ` raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0) Jure Pečar
@ 2004-08-14 4:28 ` Guy
2004-08-14 9:51 ` [dm-devel] " christophe varoqui
2004-08-14 14:18 ` Mark Hahn
2 siblings, 0 replies; 7+ messages in thread
From: Guy @ 2004-08-14 4:28 UTC (permalink / raw)
To: 'Jure Pečar', linux-raid; +Cc: dm-devel
Many years ago HP had a product called an AutoRAID (12 and 12H). It was similar to what you describe. If you added a disk it used it as needed. Even the "spare" was used for live data.
It used a combination of RAID1 and RAID5. Writes went to the RAID1 area; during low usage times it would move the RAID1 data to the RAID5 area. It would also balance the disks during low usage times. When you added a disk, it would be out of balance; later it would start to balance the disks. Any unallocated space (not used for LUNs) was used as extra RAID1 space to improve performance, as was the spare.
It was a real good idea! The performance sucked....
Guy
* Re: [dm-devel] raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0)
2004-08-14 2:18 ` raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0) Jure Pečar
2004-08-14 4:28 ` Guy
@ 2004-08-14 9:51 ` christophe varoqui
2004-08-14 14:18 ` Mark Hahn
2 siblings, 0 replies; 7+ messages in thread
From: christophe varoqui @ 2004-08-14 9:51 UTC (permalink / raw)
To: device-mapper development; +Cc: linux-raid
That is what you get from an LVM2 setup with one VG containing all your
disks and mirrored LVs.
I haven't checked lately, but the PE allocators certainly need to be more
intelligent about not placing mirror members on the same spindle.
regards,
cvaroqui
> Imagine having a pool of drives, where chunks of data are distributed evenly
> across all drives in a redundant manner. If one drive dies, the chunks that
> are not redundant anymore get their copies on the remaining drives, provided
> that there's enough space left; if one or more drives are added to the
> array, new chunks are written there until the balance is reached again.
>
> Disk space could be the first key for balancing across the drives, with
> transfer rate or seek time maybe added later. Maybe the pool could even
> adapt dynamically to the i/o patterns ...
>
> Am I dreaming (it's well over 4am here :)? Or is something like this
> possible? Maybe not with a md personality, but by some daemon that would be
> taking care of a dm map?
>
* Re: raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0)
2004-08-14 2:18 ` raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0) Jure Pečar
2004-08-14 4:28 ` Guy
2004-08-14 9:51 ` [dm-devel] " christophe varoqui
@ 2004-08-14 14:18 ` Mark Hahn
2 siblings, 0 replies; 7+ messages in thread
From: Mark Hahn @ 2004-08-14 14:18 UTC (permalink / raw)
To: Jure Pečar; +Cc: linux-raid, dm-devel
> Am I dreaming (it's well over 4am here :)? Or is something like this
> possible? Maybe not with a md personality, but by some daemon that would be
> taking care of a dm map?
such things are available commercially; I can't say how well they work.
personally, I doubt that this kind of thing can be done well at a purely
block level - I think that some file-level information would be beneficial,
perhaps essential. but this is a layering-violating idea, and seems to
rub a lot of people the wrong way. on the other hand, there is a separate
movement towards "object storage" systems, which is certainly a related idea.
regards, mark hahn.
Thread overview: 7+ messages
2004-08-11 2:29 ANNOUNCE: mdadm 1.7.0 - A tool for managing Soft RAID under Linux Neil Brown
2004-08-11 13:55 ` Jure Pečar
2004-08-11 23:04 ` Neil Brown
2004-08-14 2:18 ` raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0) Jure Pečar
2004-08-14 4:28 ` Guy
2004-08-14 9:51 ` [dm-devel] " christophe varoqui
2004-08-14 14:18 ` Mark Hahn