linux-raid.vger.kernel.org archive mirror
* RAID5 / 6 Growth
@ 2009-12-16  5:40 Leslie Rhorer
  2009-12-16  6:37 ` Majed B.
  0 siblings, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-16  5:40 UTC (permalink / raw)
  To: linux-raid


	I just purchased 2 new drives to add to a RAID5 and RAID6 array,
respectively.  I have already added the 8th drive to the RAID5 array and
begun the growth.  The critical phase has been completed.  For safety, I
umounted the array before I started the growth process.  At the rate the
data is being read and written to the drives, the resync is going to take a
very long time - about 4 days.  I'd rather not have the array down that
long, unless it's really necessary.  Is there any greater amount of jeopardy
to a RAID5 array during the growth (once the critical phase is complete)
than under ordinary circumstances?  That is to say, will losing a drive
during the resync wreak havoc on the data?

	Once I am done upgrading the RAID5 array, I'm going to add a drive
to the RAID6 array.  'Same question, there.  Will the loss of 2 drives
during the resync cause a loss of data?


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-16  5:40 RAID5 / 6 Growth Leslie Rhorer
@ 2009-12-16  6:37 ` Majed B.
  2009-12-16  8:06   ` Leslie Rhorer
  0 siblings, 1 reply; 27+ messages in thread
From: Majed B. @ 2009-12-16  6:37 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Have you made sure that the value of
/proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200
MB/s) along with /proc/sys/dev/raid/speed_limit_max?

I have interrupted an array resync a couple of times without issues.
Only once did I interrupt an array during a grow/reshape, and I had an
old version of mdadm (2.6.3) which didn't support resuming it. I think
Neil told me that 2.6.9 is the minimum version required to resume.
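
For reference, a rough sketch of checking and raising those limits for
the duration of the reshape (values are in KB/s, so 50000 means 50 MB/s;
the numbers here are only examples, not a recommendation):

  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  # raise the floor so the reshape is not throttled (value is KB/s)
  echo 50000 > /proc/sys/dev/raid/speed_limit_min
  # watch reshape progress and the current speed
  cat /proc/mdstat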

On Wed, Dec 16, 2009 at 8:40 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>
>        I just purchased 2 new drives to add to a RAID5 and RAID6 array,
> respectively.  I have already added the 8th drive to the RAID5 array and
> begun the growth.  The critical phase has been completed.  For safety, I
> umounted the array before I started the growth process.  At the rate the
> data is being read and written to the drives, the resync is going to take a
> very long time - about 4 days.  I'd rather not have the array down that
> long, unless it's really necessary.  Is there any greater amount of jeopardy
> to a RAID5 array during the growth (once the critical phase is complete)
> than under ordinary circumstances?  That is to say, will losing a drive
> during the resync wreak havoc on the data?
>
>        Once I am done upgrading the RAID5 array, I'm going to add a drive
> to the RAID6 array.  'Same question, there.  Will the loss of 2 drives
> during the resync cause a loss of data?
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-16  6:37 ` Majed B.
@ 2009-12-16  8:06   ` Leslie Rhorer
  2009-12-16  8:12     ` Michael Evans
                       ` (3 more replies)
  0 siblings, 4 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-16  8:06 UTC (permalink / raw)
  To: 'Majed B.'; +Cc: linux-raid

> Have you made sure that the value of
> /proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200

High enough?  Wouldn't a higher speed limit mean more stress on the systems?
Its value is 1000.

> MB/s) along with /proc/sys/dev/raid/speed_limit_max?

It's 200,000

> I interrupted an array resyncing a couple of times without issues.
> Only one time I interrupted an array during growth process and I had
> an old version of mdadm (2.6.3) which didn't support resuming that. I
> think Neil told me that 2.6.9 is the minimum requirement to resume.

It's 2.6.7.2.  Debian does not admit new software into its stable distro
until it is rock solid, unless it is a bug-fix release.  I guess I'll have
to wait a few more days.



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-16  8:06   ` Leslie Rhorer
@ 2009-12-16  8:12     ` Michael Evans
  2009-12-16  8:38       ` Leslie Rhorer
  2009-12-16 11:21     ` Majed B.
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 27+ messages in thread
From: Michael Evans @ 2009-12-16  8:12 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: Majed B., linux-raid

On Wed, Dec 16, 2009 at 12:06 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> Have you made sure that the value of
>> /proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200
>
> High enough?  Wouldn't a higher speed limit mean more stress on the systems?
> Its value is 1000.
>
>> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
>
> It's 200,000
>
>> I interrupted an array resyncing a couple of times without issues.
>> Only one time I interrupted an array during growth process and I had
>> an old version of mdadm (2.6.3) which didn't support resuming that. I
>> think Neil told me that 2.6.9 is the minimum requirement to resume.
>
> It's 2.6.7.2.  Debian does not admit new software into its distro until they
> are rock hard stable, unless it is a bug fix release.  I guess I'll have to
> wait a few more days.
>
>

You could seek out a testing version or compile it yourself.  mdadm
doesn't depend on that much...

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-16  8:12     ` Michael Evans
@ 2009-12-16  8:38       ` Leslie Rhorer
  0 siblings, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-16  8:38 UTC (permalink / raw)
  To: linux-raid

> >> I interrupted an array resyncing a couple of times without issues.
> >> Only one time I interrupted an array during growth process and I had
> >> an old version of mdadm (2.6.3) which didn't support resuming that. I
> >> think Neil told me that 2.6.9 is the minimum requirement to resume.
> >
> > It's 2.6.7.2.  Debian does not admit new software into its distro until
> they
> > are rock hard stable, unless it is a bug fix release.  I guess I'll have
> to
> > wait a few more days.
> >
> 
> You could seek out a testing version or compile it yourself.  mdadm
> doesn't depend on that much...

	Yes, I could, but these are servers, and one of the main reasons I
run Debian on them is stability.  'Not that I have a huge issue with
compiling from source, generally speaking, but I much prefer to keep
everything on these systems wholly in the stable release whenever possible.
With Debian, that's usually 9 months or so after a piece of software is
released as stable in the other distros.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-16  8:06   ` Leslie Rhorer
  2009-12-16  8:12     ` Michael Evans
@ 2009-12-16 11:21     ` Majed B.
  2009-12-17  1:36       ` Leslie Rhorer
  2009-12-19  1:13       ` Leslie Rhorer
  2009-12-16 13:25     ` Goswin von Brederlow
  2009-12-17 17:53     ` Bill Davidsen
  3 siblings, 2 replies; 27+ messages in thread
From: Majed B. @ 2009-12-16 11:21 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Oh, I thought this was a backup system. I always ramp up the min & max
speeds so that resyncs & reshapes finish fast, at the cost of array
performance while they run.

You mentioned that you've unmounted the filesystem, so why does it
make a difference if you max out the speed? Or do you have other
services running?

I think at the time I needed a new mdadm, I downloaded 3.0, which is
the stable one for now (3.1 was pulled -- I don't know about 3.1.1).

On Wed, Dec 16, 2009 at 11:06 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> Have you made sure that the value of
>> /proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200
>
> High enough?  Wouldn't a higher speed limit mean more stress on the systems?
> Its value is 1000.
>
>> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
>
> It's 200,000
>
>> I interrupted an array resyncing a couple of times without issues.
>> Only one time I interrupted an array during growth process and I had
>> an old version of mdadm (2.6.3) which didn't support resuming that. I
>> think Neil told me that 2.6.9 is the minimum requirement to resume.
>
> It's 2.6.7.2.  Debian does not admit new software into its distro until they
> are rock hard stable, unless it is a bug fix release.  I guess I'll have to
> wait a few more days.
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-16  8:06   ` Leslie Rhorer
  2009-12-16  8:12     ` Michael Evans
  2009-12-16 11:21     ` Majed B.
@ 2009-12-16 13:25     ` Goswin von Brederlow
  2009-12-17  1:51       ` Leslie Rhorer
  2009-12-17 17:53     ` Bill Davidsen
  3 siblings, 1 reply; 27+ messages in thread
From: Goswin von Brederlow @ 2009-12-16 13:25 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: 'Majed B.', linux-raid

"Leslie Rhorer" <lrhorer@satx.rr.com> writes:

>> Have you made sure that the value of
>> /proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200
>
> High enough?  Wouldn't a higher speed limit mean more stress on the systems?
> Its value is 1000.

A higher min value will block more normal IO (if there is any).
Raising min is useful to ensure the job gets done in a certain time,
i.e. to not let normal IO slow down a rebuild too much.

>> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
>
> It's 200,000

I only ever had to tune this down once, when too much IO would deadlock
an external enclosure. Otherwise keep it really high so the resync uses
all the idle IO there is.



As to your initial question: Being able to keep the filesystem mounted
and in use is the whole point of having online growing of the raid
system. If that weren't safe then there would be no point to it, as you
could just as well stop the raid once you've umounted it and grow it
offline.

MfG,
        Goswin

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-16 11:21     ` Majed B.
@ 2009-12-17  1:36       ` Leslie Rhorer
  2009-12-19  1:13       ` Leslie Rhorer
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-17  1:36 UTC (permalink / raw)
  To: 'Majed B.'; +Cc: linux-raid

> Oh, I thought this was a backup system. I always ramp up the speed on
> min & max to have the resync & reshapes finish fast, but at the loss
> of performance during that.

	One of them, the one resyncing right now, is a backup system.  When
it is done, I will grow the main server.  I don't want to have both
performing potentially sensitive operations simultaneously.

> You mentioned that you've unmounted the filesystem, so why does it
> make a different if you max the speed? Or do you have other services
> running?

	It probably doesn't matter on the backup, but the point is I would
rather not have it unmounted for the better part of 5 days.  I would rather
not go nearly 5 days without a valid backup, if I can avoid it.  Ordinarily
rsync runs every morning.

> I think at the time I needed a new mdadm, I downloaded 3.0 which is
> the stable one for now (3.1 was pulled -- donno about 3.1.1).

	Unless there is some compelling reason to download 3.0, I think I
will wait for it to be included in the stable distro of Debian.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-16 13:25     ` Goswin von Brederlow
@ 2009-12-17  1:51       ` Leslie Rhorer
  2009-12-17  9:27         ` John Robinson
  2009-12-18 12:27         ` Goswin von Brederlow
  0 siblings, 2 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-17  1:51 UTC (permalink / raw)
  To: goswin-v-b; +Cc: linux-raid

> > High enough?  Wouldn't a higher speed limit mean more stress on the
> systems?
> > Its value is 1000.
> 
> A higher min value will block more normal IO (if there is
> any).

	No, not really.  All the system that is currently being grown does
is incrementally rsync the data from the main server every morning.  The
main server, which will be grown after this server is done, is another
matter.

> Raising min is useful to ensure the job gets done in a certain
> time, to not let normal IO slow down a rebuild too much.

	OK, but like I said, for the most part there isn't any other I/O.
 
> >> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
> >
> > It's 200,000
> 
> I only ever had to tune this once when too much IO would deadlock an
> external enclosure. Otherwise keep this really high so it uses all
> idle IO there is.

	200MBps is far more than the system can handle.  These are consumer
class drives on a relatively inexpensive 4 port controller feeding a Port
Multiplier chassis.

> As to your initial question: Being able to keep the filesystem mounted
> and used is the whole point of having online growing of the raid
> system. If that weren't safe then there would be no point to it as you
> could just as well stop the raid if you already umounted it and grow
> it offline.

	I know that is the point of the utility.  My question boils down to,
"How safe is it to avail one's self of the capability if it is not essential
to have the array mounted for the duration?"  I don't particularly like
having the array unavailable (especially not for nearly 5 days), but I
prefer that to risking data loss, or especially risking irretrievably losing
the entire array.  The question is particularly pertinent given the fact the
growth is going to take nearly 5 days (a lot can happen in 5 days), and the
fact the system was having the rather squirrelly issue a few days back which
seems - emphasis on SEEMS - to have been resolved by disabling NCQ.  What
happens if the system kicks a couple of drives, especially if one drive gets
kicked, a bunch of data gets written and then a few minutes later another
drive gets kicked?  In particular, what if neither of the two drives that
get kicked are the new drive?


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-17  1:51       ` Leslie Rhorer
@ 2009-12-17  9:27         ` John Robinson
  2009-12-18  1:33           ` Leslie Rhorer
  2009-12-19  1:11           ` Leslie Rhorer
  2009-12-18 12:27         ` Goswin von Brederlow
  1 sibling, 2 replies; 27+ messages in thread
From: John Robinson @ 2009-12-17  9:27 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: Linux RAID

On 17/12/2009 01:51, Leslie Rhorer wrote:
[...]
>> As to your initial question: Being able to keep the filesystem mounted
>> and used is the whole point of having online growing of the raid
>> system. If that weren't safe then there would be no point to it as you
>> could just as well stop the raid if you already umounted it and grow
>> it offline.
> 
> 	I know that is the point of the utility.  My question boils down to,
> "How safe is it to avail one's self of the capability if it is not essential
> to have the array mounted for the duration?"  I don't particularly like
> having the array unavailable (especially not for nearly 5 days), but I
> prefer that to risking data loss, or especially risking irretrievably losing
> the entire array.  The question is particularly pertinent given the fact the
> growth is going to take nearly 5 days (a lot can happen in 5 days), and the
> fact the system was having the rather squirrelly issue a few days back which
> seems - emphasis on SEEMS - to have been resolved by disabling NCQ.  What
> happens if the system kicks a couple of drives, especially if one drive gets
> kicked, a bunch of data gets written and then a few minutes later another
> drive gets kicked?  In particular, what if neither of the two drives that
> get kicked are the new drive?

Well, what happens if two drives get kicked in normal use over the 
course of 5 days? I think you're being overly cautious, and I'll try to 
explain why.

The reshape only reduces redundancy during the "critical section". After 
that, you're as redundant as usual and can tolerate a drive failure. On 
RAID-6, 2 drive failures. A reshape should be considerably safer than 
doing a resync to a replacement drive, because in the reshape case if 
you get bad sectors md can regenerate the data from the parity info.

Do you regularly run a check on your array? Or have you done one 
recently? And does the SMART info on all your drives look OK? These 
should be the case before attempting any reshape anyway, so I'd say just 
keep the partition mounted.
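
For example (md0 and sda here are placeholders for your array and member
drives, not a statement about your setup), a check can be started and
watched along these lines:

  echo check > /sys/block/md0/md/sync_action
  cat /proc/mdstat                       # progress and current speed
  cat /sys/block/md0/md/mismatch_cnt     # inspect after the check finishes
  smartctl -a /dev/sda | grep -i -E 'reallocat|pending|uncorrect'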

Cheers,

John.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-16  8:06   ` Leslie Rhorer
                       ` (2 preceding siblings ...)
  2009-12-16 13:25     ` Goswin von Brederlow
@ 2009-12-17 17:53     ` Bill Davidsen
  2009-12-18  1:46       ` Leslie Rhorer
  2009-12-19  1:12       ` Leslie Rhorer
  3 siblings, 2 replies; 27+ messages in thread
From: Bill Davidsen @ 2009-12-17 17:53 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: 'Majed B.', linux-raid

Leslie Rhorer wrote:
>> Have you made sure that the value of
>> /proc/sys/dev/raid/speed_limit_min is high enough? (200000 means 200
>>     
>
> High enough?  Wouldn't a higher speed limit mean more stress on the systems?
> Its value is 1000.
>
>   
>> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
>>     
>
> It's 200,000
>
>   
What are you actually seeing for rebuild speed? Pushing the min up
shouldn't matter with the max set high, but if you're not getting
something like 200 MB/s rebuild there's an issue of some kind. What's
the stripe cache size?
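
If it's still at the default of 256, raising it usually helps RAID5/6
rebuild/reshape speed quite a bit.  Roughly (md0 assumed; the memory cost
is about stripe_cache_size x 4 KiB x number of member drives):

  cat /sys/block/md0/md/stripe_cache_size    # default is 256
  echo 8192 > /sys/block/md0/md/stripe_cache_size
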
>> I interrupted an array resyncing a couple of times without issues.
>> Only one time I interrupted an array during growth process and I had
>> an old version of mdadm (2.6.3) which didn't support resuming that. I
>> think Neil told me that 2.6.9 is the minimum requirement to resume.
>>     
>
> It's 2.6.7.2.  Debian does not admit new software into its distro until they
> are rock hard stable, unless it is a bug fix release.  I guess I'll have to
> wait a few more days.
>   


-- 
Bill Davidsen <davidsen@tmr.com>
  "We can't solve today's problems by using the same thinking we
   used in creating them." - Einstein


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-17  9:27         ` John Robinson
@ 2009-12-18  1:33           ` Leslie Rhorer
  2009-12-19  1:11           ` Leslie Rhorer
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-18  1:33 UTC (permalink / raw)
  To: 'John Robinson'; +Cc: 'Linux RAID'

> > the entire array.  The question is particularly pertinent given the fact
> the
> > growth is going to take nearly 5 days (a lot can happen in 5 days), and
> the
> > fact the system was having the rather squirrelly issue a few days back
> which
> > seems - emphasis on SEEMS - to have been resolved by disabling NCQ.
> What
> > happens if the system kicks a couple of drives, especially if one drive
> gets
> > kicked, a bunch of data gets written and then a few minutes later
> another
> > drive gets kicked?  In particular, what if neither of the two drives
> that
> > get kicked are the new drive?
> 
> Well, what happens if two drives get kicked in normal use over the
> course of 5 days?

	Nothing of any consequence, unless it happens in quick succession.
When drive A is kicked, if it is spurious, then the drive is simply added
back and a resync performed.  If the drive actually failed, then it is
replaced, and once again a resync is done.  Either way, it takes vastly less
time than a growth.  Assuming at least one of the kicks is not an
out-and-out drive failure, then recovering the bulk of the data is fairly
easy.  That may not be the case with two drives kicked during a growth,
since a big chunk of the data on the last drive will be completely missing.
What's more, one is left with an array which properly has neither N nor N +
1 drives, but is in the process of changing from one to the other.  Again,
recovering from a failed resync or a sudden non-drive failure (like a power
failure or a drive cable being accidentally yanked) is fairly easy.  I don't
know what will happen if one of the drive cables feeding three of the drives
is accidentally yanked.  That's why I am asking.

> I think you're being overly cautious, and I'll try to
> explain why.
 
> The reshape only reduces redundancy during the "critical section". After
> that, you're as redundant as usual and can tolerate a drive failure. On
> RAID-6, 2 drive failures.

	Yes, I know.  I've experienced a number of issues where two or more
drives have been taken offline by md, though.  As I say, recovering from
this when the array was in a stable configuration is not too difficult,
perhaps even without data loss.  What happens when the array is taken
offline and it properly has neither 7 nor 8 drives is a real question,
though.  Obviously, if the array can resume its expansion where it left off
after a failure event, then it is not an issue, but according to one of the
other correspondents, this feature is not available in my version of mdadm.

> A reshape should be considerably safer than
> doing a resync to a replacement drive, because in the reshape case if
> you get bad sectors md can regenerate the data from the parity info.

	Except that it takes many times longer, significantly increasing the
likelihood of such a failure during the event.

> Do you regularly run a check on your array? Or have you done one
> recently? And does the SMART info on all your drives look OK? These
> should be the case before attempting any reshape anyway,

	Yes, but that did not stop md from halting the array multiple times
during resyncs when NCQ was enabled.  Disabling NCQ seems to have alleviated
the issue, but I have no guarantees it won't happen again during the growth.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-17 17:53     ` Bill Davidsen
@ 2009-12-18  1:46       ` Leslie Rhorer
  2009-12-19  1:12       ` Leslie Rhorer
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-18  1:46 UTC (permalink / raw)
  To: 'Bill Davidsen'; +Cc: linux-raid

> > High enough?  Wouldn't a higher speed limit mean more stress on the
> systems?
> > Its value is 1000.
> >
> >
> >> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
> >>
> >
> > It's 200,000
> >
> >
> What are you actually seeing for rebuild speed? Pushing the min up

	About 5 MBps read per drive and 3.8 MBps write per drive.  With NCQ
enabled, I was getting about 35 MBps reads during the resync.  With NCQ
disabled, I was getting about 25 MBps reads.

> shouldn't matter with the max set high, but if you're not getting
> something like 200MB rebuild there's an issue of some kind. What's the
> stripe cache size?

Linux reports 256 on both systems.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-17  1:51       ` Leslie Rhorer
  2009-12-17  9:27         ` John Robinson
@ 2009-12-18 12:27         ` Goswin von Brederlow
  1 sibling, 0 replies; 27+ messages in thread
From: Goswin von Brederlow @ 2009-12-18 12:27 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: goswin-v-b, linux-raid

"Leslie Rhorer" <lrhorer@satx.rr.com> writes:

>> > High enough?  Wouldn't a higher speed limit mean more stress on the
>> systems?
>> > Its value is 1000.
>> 
>> A higher min value will block more normal IO (if there is
>> any).
>
> 	No, not really.  All the system that is currently being grown does
> is incrementally rsync the data from the main server every morning.  The
> main server, which will be grown after this server is done, is another
> matter.
>
>> Raising min is useful to ensure the job gets done in a certain
>> time, to not let normal IO slow down a rebuild too much.
>
> 	OK, but like I said, for the most part there isn't any other I/O.
>  
>> >> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
>> >
>> > It's 200,000
>> 
>> I only ever had to tune this once when too much IO would deadlock an
>> external enclosure. Otherwise keep this really high so it uses all
>> idle IO there is.
>
> 	200MBps is far more than the system can handle.  These are consumer
> class drives on a relatively inexpensive 4 port controller feeding a Port
> Multiplier chassis.
>
>> As to your initial question: Being able to keep the filesystem mounted
>> and used is the whole point of having online growing of the raid
>> system. If that weren't safe then there would be no point to it as you
>> could just as well stop the raid if you already umounted it and grow
>> it offline.
>
> 	I know that is the point of the utility.  My question boils down to,
> "How safe is it to avail one's self of the capability if it is not essential
> to have the array mounted for the duration?"  I don't particularly like
> having the array unavailable (especially not for nearly 5 days), but I
> prefer that to risking data loss, or especially risking irretrievably losing
> the entire array.  The question is particularly pertinent given the fact the
> growth is going to take nearly 5 days (a lot can happen in 5 days), and the
> fact the system was having the rather squirrelly issue a few days back which
> seems - emphasis on SEEMS - to have been resolved by disabling NCQ.  What
> happens if the system kicks a couple of drives, especially if one drive gets
> kicked, a bunch of data gets written and then a few minutes later another
> drive gets kicked?  In particular, what if neither of the two drives that
> get kicked are the new drive?

It should just keep going. I don't see how having the filesystem in
use will have any influence there other than adding a little more load
on the disks. Reshaping is heavy on the drives: lots of reads, writes
and seeks. A daily rsync probably doesn't even register as extra load.

MfG
        Goswin

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-17  9:27         ` John Robinson
  2009-12-18  1:33           ` Leslie Rhorer
@ 2009-12-19  1:11           ` Leslie Rhorer
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19  1:11 UTC (permalink / raw)
  To: 'John Robinson'; +Cc: 'Linux RAID'

> > the entire array.  The question is particularly pertinent given the fact
> the
> > growth is going to take nearly 5 days (a lot can happen in 5 days), and
> the
> > fact the system was having the rather squirrelly issue a few days back
> which
> > seems - emphasis on SEEMS - to have been resolved by disabling NCQ.
> What
> > happens if the system kicks a couple of drives, especially if one drive
> gets
> > kicked, a bunch of data gets written and then a few minutes later
> another
> > drive gets kicked?  In particular, what if neither of the two drives
> that
> > get kicked are the new drive?
> 
> Well, what happens if two drives get kicked in normal use over the
> course of 5 days?

	Nothing of any consequence, unless it happens in quick succession.
When drive A is kicked, if it is spurious, then the drive is simply added
back and a resync performed.  If the drive actually failed, then it is
replaced, and once again a resync is done.  Either way, it takes vastly less
time than a growth.  Assuming at least one of the kicks is not an
out-and-out drive failure, then recovering the bulk of the data is fairly
easy.  That may not be the case with two drives kicked during a growth,
since a big chunk of the data on the last drive will be completely missing.
What's more, one is left with an array which properly has neither N nor N +
1 drives, but is in the process of changing from one to the other.  Again,
recovering from a failed resync or a sudden non-drive failure (like a power
failure or a drive cable being accidentally yanked) is fairly easy.  I don't
know what will happen if one of the drive cables feeding three of the drives
is accidentally yanked.  That's why I am asking.

> I think you're being overly cautious, and I'll try to
> explain why.
 
> The reshape only reduces redundancy during the "critical section". After
> that, you're as redundant as usual and can tolerate a drive failure. On
> RAID-6, 2 drive failures.

	Yes, I know.  I've experienced a number of issues where two or more
drives have been taken offline by md, though.  As I say, recovering from
this when the array was in a stable configuration is not too difficult,
perhaps even without data loss.  What happens when the array is taken
offline and it properly has neither 7 nor 8 drives is a real question,
though.  Obviously, if the array can resume its expansion where it left off
after a failure event, then it is not an issue, but according to one of the
other correspondents, this feature is not available in my version of mdadm.

> A reshape should be considerably safer than
> doing a resync to a replacement drive, because in the reshape case if
> you get bad sectors md can regenerate the data from the parity info.

	Except that it takes many times longer, significantly increasing the
likelihood of such a failure during the event.

> Do you regularly run a check on your array? Or have you done one
> recently? And does the SMART info on all your drives look OK? These
> should be the case before attempting any reshape anyway,

	Yes, but that did not stop md from halting the array multiple times
during resyncs when NCQ was enabled.  Disabling NCQ seems to have alleviated
the issue, but I have no guarantees it won't happen again during the growth.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-17 17:53     ` Bill Davidsen
  2009-12-18  1:46       ` Leslie Rhorer
@ 2009-12-19  1:12       ` Leslie Rhorer
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19  1:12 UTC (permalink / raw)
  To: 'Bill Davidsen'; +Cc: linux-raid

> > High enough?  Wouldn't a higher speed limit mean more stress on the
> systems?
> > Its value is 1000.
> >
> >
> >> MB/s) along with /proc/sys/dev/raid/speed_limit_max?
> >>
> >
> > It's 200,000
> >
> >
> What are you actually seeing for rebuild speed? Pushing the min up

	About 5 MBps read per drive and 3.8 MBps write per drive.  With NCQ
enabled, I was getting about 35 MBps reads during the resync.  With NCQ
disabled, I was getting about 25 MBps reads.

> shouldn't matter with the max set high, but if you're not getting
> something like 200MB rebuild there's an issue of some kind. What's the
> stripe cache size?

Linux reports 256 on both systems.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-16 11:21     ` Majed B.
  2009-12-17  1:36       ` Leslie Rhorer
@ 2009-12-19  1:13       ` Leslie Rhorer
  2009-12-19 18:21         ` Leslie Rhorer
  1 sibling, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19  1:13 UTC (permalink / raw)
  To: 'Majed B.'; +Cc: linux-raid

> Oh, I thought this was a backup system. I always ramp up the speed on
> min & max to have the resync & reshapes finish fast, but at the loss
> of performance during that.

	One of them, the one resyncing right now, is a backup system.  When
it is done, I will grow the main server.  I don't want to have both
performing potentially sensitive operations simultaneously.

> You mentioned that you've unmounted the filesystem, so why does it
> make a different if you max the speed? Or do you have other services
> running?

	It probably doesn't matter on the backup, but the point is I would
rather not have it unmounted for the better part of 5 days.  I would rather
not go nearly 5 days without a valid backup, if I can avoid it.  Ordinarily
rsync runs every morning.

> I think at the time I needed a new mdadm, I downloaded 3.0 which is
> the stable one for now (3.1 was pulled -- donno about 3.1.1).

	Unless there is some compelling reason to download 3.0, I think I
will wait for it to be included in the stable distro of Debian.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-19  1:13       ` Leslie Rhorer
@ 2009-12-19 18:21         ` Leslie Rhorer
  2009-12-19 18:36           ` Majed B.
  0 siblings, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19 18:21 UTC (permalink / raw)
  To: linux-raid

Well, the RAID growth completed on the backup system, but now when I attempt
to mount the XFS file system, mount returns the error "mount: Structure
needs cleaning".  When I issue the xfs_repair command, I get this:

Backup:/boot/grub# xfs_repair -v /dev/md0
Phase 1 - find and verify superblock...
        - block cache size set to 28616 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 38584 tail block 38504
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

	The file system was shut down cleanly with umount prior to starting
the growth, but XFS still seems to think the file system is dirty and the
journal needs to be replayed.  Running xfs_repair with the -n option shows
the file system has a few errors.  Before I purge the log and potentially
lose a few files, is there something else I should try under mdadm to
possibly fix the array structure?  (I doubt it, but I wanted to check with
others more knowledgeable than I before I proceed.)
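
	For reference, the sequence I am contemplating, as the xfs_repair
message above suggests (the mount point is just a placeholder):

  mount /dev/md0 /mnt/backup     # attempt to replay the log first
  umount /mnt/backup
  xfs_repair -n /dev/md0         # dry run: report problems, change nothing
  xfs_repair -L /dev/md0         # last resort only: zeroes the log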


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-19 18:21         ` Leslie Rhorer
@ 2009-12-19 18:36           ` Majed B.
  2009-12-19 19:02             ` Leslie Rhorer
  0 siblings, 1 reply; 27+ messages in thread
From: Majed B. @ 2009-12-19 18:36 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Do any of the disks report sector, ATA, or read errors?
smartctl -a /dev/sdx should give the report, assuming you run smartd
and have it probe the disks periodically.

I've seen newer versions of the xfsprogs and libs packages. Try using
those. I know you're against new packages that haven't been included
in "stable" -- but if this is a dev/backup system, it's worth a shot
in my opinion.

Good luck.

On Sat, Dec 19, 2009 at 9:21 PM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
> Well, the RAID growth completed on the backup system, but now when I attempt
> to mount the XFS file system, mount returns the error "mount: Structure
> needs cleaning".  When I issue the xfs_repair command, I get this:
>
> Backup:/boot/grub# xfs_repair -v /dev/md0
> Phase 1 - find and verify superblock...
>        - block cache size set to 28616 entries
> Phase 2 - using internal log
>        - zero log...
> zero_log: head block 38584 tail block 38504
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_repair.  If you are unable to mount the filesystem, then use
> the -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
>
>        The file system was shut down cleanly with umount prior to starting
> the growth, but xfs still seems to think the file system is dirty and the
> journal needs to be replayed.  Running xfs_repair with the -n command shows
> the file system has a few errors.  Before I purge the log and potentially
> lose a few files, is there something else I should try under mdadm to
> possibly fix the array structure?  (I doubt it, but I wanted to check with
> others more knowledgeable than I before I proceed.)
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-19 18:36           ` Majed B.
@ 2009-12-19 19:02             ` Leslie Rhorer
  2009-12-19 19:55               ` Majed B.
  0 siblings, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19 19:02 UTC (permalink / raw)
  To: 'Majed B.'; +Cc: linux-raid

> Do any of the disks report sector errors or ATA or read errors?
> smartctl -a /dev/sdx should give the report, assuming you run smartd
> and have it probe disks periodically.

	No, they are clean.

> I've seen newer versions of the xfsprogs and libs packages. Try using
> those. I know you're against new packages that haven't been included
> in "stable" -- but if this is a dev/backup system, it's worth the shot
> in my opinion.

	Well, either way, that's not going to help in this situation.
Either there is something I can attempt to fix in the underlying array
structure, or else I am going to have to erase the log and continue.

	I'm not sure what you mean by dev/backup.  The file system is not
created as a device on any other system by udev, if that's what you mean.
It's just a Linux system dedicated solely to running rsync backups every
morning at 04:00.  The array does get mounted on other systems using Samba
or NFS, as the case may be, so I can easily copy files back to the main
systems when they are lost or corrupted there through whatever means.
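
	Roughly, the nightly job looks like this crontab entry (the paths
and host name here are placeholders, not my actual configuration):

  0 4 * * * rsync -a --delete mainserver:/data/ /mnt/array/data/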


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-19 19:02             ` Leslie Rhorer
@ 2009-12-19 19:55               ` Majed B.
  2009-12-19 20:19                 ` Leslie Rhorer
  0 siblings, 1 reply; 27+ messages in thread
From: Majed B. @ 2009-12-19 19:55 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

By dev/backup I mean that if the Linux machine you're running is a
backup system or a development machine that you can at least test this
on, it'd be convenient.

The reason you can't mount even though xfs_repair suggests you do
might be a bug in xfsprogs/libs rather than actual corruption in the
filesystem itself.

Perhaps the newer versions of the progs & libs might even be able to
handle the kind of corruption your filesystem has now, without the
need to clear the log.

In case you resort to clearing the log, wouldn't running xfs_repair
result in eventually finding the lost inodes and putting them in
lost+found?

On Sat, Dec 19, 2009 at 10:02 PM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> Do any of the disks report sector errors or ATA or read errors?
>> smartctl -a /dev/sdx should give the report, assuming you run smartd
>> and have it probe disks periodically.
>
>        No, they are clean.
>
>> I've seen newer versions of the xfsprogs and libs packages. Try using
>> those. I know you're against new packages that haven't been included
>> in "stable" -- but if this is a dev/backup system, it's worth the shot
>> in my opinion.
>
>        Well, either way, that's not going to help in this situation.
> Either there is something I can attempt to fix in the underlying array
> structure, or else I am going to have to erase the log and continue.
>
>        I'm not sure what you mean by dev/backup.  The file system is not
> created as a device on any other system by udev, if that's what you mean.
> It's just a Linux system dedicated solely to running rsync backups every
> morning at 04:00.  The array does get mounted on other systems using SAMBA
> or NFS, as the case may be, so I can easily copy over files to the main
> systems lost or corrupted through whatever means.
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-19 19:55               ` Majed B.
@ 2009-12-19 20:19                 ` Leslie Rhorer
  2009-12-19 23:39                   ` John Robinson
  0 siblings, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19 20:19 UTC (permalink / raw)
  To: 'Majed B.'; +Cc: linux-raid

> With dev/backup I mean if the Linux machine you're running is a a
> backup system or a development machine that you can at least test this
> on, it'd be convenient.

	Oh, I see.  I'll think about it.  I have the machine offline
entirely right now, making some hardware changes.

> The reason why you can't mount even though repair suggests you do
> might be because of a bug in xfsprogs/libs not a complete corruption
> in the filesystem itself.

	It's possible, I guess, but the diagnostics seem pretty coherent.
Running xfs_repair in test mode finds a small but reasonable number of
issues.

> Perhaps even the newer version of the progs & libs might be able to
> handle the kind of corruption in the filesystem you're having now,
> without the need to clear the log.

	Again, it's possible, of course.  I'll take it under advisement.
The question at hand, however, is, "Is there possibly something at a lower
level (md) that could potentially be addressed which would clear the issues
that XFS thinks it has?"

 
> In case you resort to clearing the log, wouldn't running xfs_repair
> result in eventually finding the lost inodes and putting them in
> lost+found?

	The lost inodes, yes.  XFS is reporting a few other errors as well.
They don't look to be too heinous, so it might be easier all the way around
to just clear the log and proceed.  An rsync should easily recover any lost
files.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
       [not found] <70ed7c3e0912191232k7deb3a3p40ddd6bc1bdfd3ae@mail.gmail.com>
@ 2009-12-19 21:05 ` Leslie Rhorer
  2009-12-21 12:33   ` Goswin von Brederlow
  0 siblings, 1 reply; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19 21:05 UTC (permalink / raw)
  To: linux-raid

> I understand what you're aiming at finding, unfortunately I can't
> think of anything that would cause a corruption apart from bad cables
> or bad sectors.

	Well, neither can I, but the question at hand isn't so much, "What
caused it?" as it is, "Is there something (above and beyond the resync which
was already done) I can do to verify the health of the underlying array
before proceeding?"
> 
> Perhaps the file system was dirty before growing the array and the
> growth process emphasized the problems. You said you unmounted the
> filesystem, but did you try to mount it just to check, or run

	Yes, I did.

> xfs_repair before growing to make sure the filesystem is clean?

	No, it mounted just fine, and I could read the array, so I didn't go
any further.  I then shut down all the network accesses, did a `sync`, and
then umounted the array.  I didn't bother to actually check for dirty pages.

> I'm out of ideas, for now. I haven't encountered such problems with
> XFS & software RAID before, and I hope someone has an answer to your
> problem, as I, and maybe others, might face it later.

	I've had similar problems when growing an array before, but this is
the first time I have grown an array under mdadm and also the first time
under XFS.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-19 20:19                 ` Leslie Rhorer
@ 2009-12-19 23:39                   ` John Robinson
  2009-12-19 23:49                     ` Leslie Rhorer
  2009-12-19 23:59                     ` Majed B.
  0 siblings, 2 replies; 27+ messages in thread
From: John Robinson @ 2009-12-19 23:39 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

On 19/12/2009 20:19, Leslie Rhorer wrote:
[...]
>> The reason why you can't mount even though repair suggests you do
>> might be because of a bug in xfsprogs/libs not a complete corruption
>> in the filesystem itself.
> 
> 	It's possible, I guess, but the diagnostics seem pretty coherent.
> Running xfs_repair in test mode finds a small but reasonable number of
> issues.

Please excuse me if this is a stupid question, but is it possible any of 
the apparent issues could be caused by the block device underneath the 
filesystem having changed size, so the filesystem's in some sense the 
wrong size now?

Cheers,

John.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: RAID5 / 6 Growth
  2009-12-19 23:39                   ` John Robinson
@ 2009-12-19 23:49                     ` Leslie Rhorer
  2009-12-19 23:59                     ` Majed B.
  1 sibling, 0 replies; 27+ messages in thread
From: Leslie Rhorer @ 2009-12-19 23:49 UTC (permalink / raw)
  To: 'John Robinson'; +Cc: linux-raid

> On 19/12/2009 20:19, Leslie Rhorer wrote:
> [...]
> >> The reason why you can't mount even though repair suggests you do
> >> might be because of a bug in xfsprogs/libs not a complete corruption
> >> in the filesystem itself.
> >
> > 	It's possible, I guess, but the diagnostics seem pretty coherent.
> > Running xfs_repair in test mode finds a small but reasonable number of
> > issues.
> 
> Please excuse me if this is a stupid question, but is it possible any of
> the apparent issues could be caused by the block device underneath the
> filesystem having changed size, so the filesystem's in some sense the
> wrong size now?

	I don't know.  It certainly shouldn't.  XFS certainly supports
growing the file system, and presumably this would often be accomplished by
first growing the underlying device.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-19 23:39                   ` John Robinson
  2009-12-19 23:49                     ` Leslie Rhorer
@ 2009-12-19 23:59                     ` Majed B.
  1 sibling, 0 replies; 27+ messages in thread
From: Majed B. @ 2009-12-19 23:59 UTC (permalink / raw)
  To: linux-raid

John,

I have grown software RAID arrays with XFS on top before, with the
filesystem both mounted and unmounted, and never had a problem. XFS
can't be grown unless it's mounted, so the size of the block device is
not what is causing the issue here; the block device (array) being
larger than what the filesystem occupies is perfectly normal.

After mounting the filesystem, the admin can increase the size of the
FS to fit the newly expanded array size.
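
Roughly, the usual sequence looks like this (device name, new disk and
mount point are placeholders, not Leslie's actual setup):

  mdadm --add /dev/md0 /dev/sdh            # add the new disk as a spare
  mdadm --grow /dev/md0 --raid-devices=8   # reshape onto it
  mount /dev/md0 /mnt/array
  xfs_growfs /mnt/array                    # XFS grows online, while mounted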

On Sun, Dec 20, 2009 at 2:39 AM, John Robinson
<john.robinson@anonymous.org.uk> wrote:
> On 19/12/2009 20:19, Leslie Rhorer wrote:
> [...]
>>>
>>> The reason why you can't mount even though repair suggests you do
>>> might be because of a bug in xfsprogs/libs not a complete corruption
>>> in the filesystem itself.
>>
>>        It's possible, I guess, but the diagnostics seem pretty coherent.
>> Running xfs_repair in test mode finds a small but reasonable number of
>> issues.
>
> Please excuse me if this is a stupid question, but is it possible any of the
> apparent issues could be caused by the block device underneath the
> filesystem having changed size, so the filesystem's in some sense the wrong
> size now?
>
> Cheers,
>
> John.
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: RAID5 / 6 Growth
  2009-12-19 21:05 ` Leslie Rhorer
@ 2009-12-21 12:33   ` Goswin von Brederlow
  0 siblings, 0 replies; 27+ messages in thread
From: Goswin von Brederlow @ 2009-12-21 12:33 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

"Leslie Rhorer" <lrhorer@satx.rr.com> writes:

>> I understand what you're aiming at finding, unfortunately I can't
>> think of anything that would cause a corruption apart from bad cables
>> or bad sectors.
>
> 	Well, me, either, but the question at hand isn't so much, "What
> caused it?" as it is, "Is there something (above and beyond the resync which
> was already done) I can do to verify the health of the underlying array
> before proceeding?"

Run a check.

MfG
        Goswin

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2009-12-21 12:33 UTC | newest]

Thread overview: 27+ messages
2009-12-16  5:40 RAID5 / 6 Growth Leslie Rhorer
2009-12-16  6:37 ` Majed B.
2009-12-16  8:06   ` Leslie Rhorer
2009-12-16  8:12     ` Michael Evans
2009-12-16  8:38       ` Leslie Rhorer
2009-12-16 11:21     ` Majed B.
2009-12-17  1:36       ` Leslie Rhorer
2009-12-19  1:13       ` Leslie Rhorer
2009-12-19 18:21         ` Leslie Rhorer
2009-12-19 18:36           ` Majed B.
2009-12-19 19:02             ` Leslie Rhorer
2009-12-19 19:55               ` Majed B.
2009-12-19 20:19                 ` Leslie Rhorer
2009-12-19 23:39                   ` John Robinson
2009-12-19 23:49                     ` Leslie Rhorer
2009-12-19 23:59                     ` Majed B.
2009-12-16 13:25     ` Goswin von Brederlow
2009-12-17  1:51       ` Leslie Rhorer
2009-12-17  9:27         ` John Robinson
2009-12-18  1:33           ` Leslie Rhorer
2009-12-19  1:11           ` Leslie Rhorer
2009-12-18 12:27         ` Goswin von Brederlow
2009-12-17 17:53     ` Bill Davidsen
2009-12-18  1:46       ` Leslie Rhorer
2009-12-19  1:12       ` Leslie Rhorer
     [not found] <70ed7c3e0912191232k7deb3a3p40ddd6bc1bdfd3ae@mail.gmail.com>
2009-12-19 21:05 ` Leslie Rhorer
2009-12-21 12:33   ` Goswin von Brederlow
