dm-devel.redhat.com archive mirror
* RFC: multipath IO multiplex
@ 2010-11-05 18:39 Lars Marowsky-Bree
  2010-11-06  9:32 ` Neil Brown
  0 siblings, 1 reply; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-05 18:39 UTC (permalink / raw)
  To: dm-devel

Hi all,

this is a topic that came up during our HA miniconference at LPC. I
inherited the action item to code this, but before coding it, I thought
I'd get some validation on the design.

In a cluster environment, we occasionally have time-critical IO - both
reads and writes - for a mix of via-disk heartbeating and the exchange of
poison pills.

MPIO plays hell with this, since an IO could potentially experience very
high latency during a path switch. Extending the timeouts to allow for
this is fairly impractical.

However, our IO has certain properties that make it special - the access
patterns are rather careful, they don't overlap, each IO is effectively a
single page/single atomic write unit, and each node writes only to its
own area.

So the idea would be, instead of relying on the active/passive access
pattern, to send the IO down all paths in parallel - and report
either the first success or the last failure.

(Clearly, this only works for active/active arrays; active/passive
setups may still have problems.)

Doing this in user-space is somewhat icky; short of scanning the devices
ourselves, or asking multipathd for the current path list on every IO, we
have no good way to do that. But the kernel obviously has the correct
list at all times.

So, I think a special IO flag for block IO (an ioctl, an open() flag on the
device, whatever) that would cause dm-multipath to send the IO down all
paths (and, as mentioned, report either the first success or the last
failure) seems to be the easiest way.
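
To illustrate the completion semantics I have in mind, here is a minimal
user-space sketch (the names, the struct and the locking are made up purely
for illustration - the real thing would of course live inside dm-multipath):

    #include <pthread.h>
    #include <stdbool.h>

    /* One in-flight multiplexed IO: cloned to every path; the caller is
     * woken on the first success, or on the last failure once nothing is
     * left that could still succeed. */
    struct mux_io {
        pthread_mutex_t lock;
        pthread_cond_t  done;
        int             outstanding; /* clones still in flight        */
        bool            reported;    /* caller has already been woken */
        int             result;      /* 0 on success, last errno else */
    };

    /* Completion callback, invoked once per path as its clone finishes. */
    static void mux_io_endio(struct mux_io *io, int error)
    {
        pthread_mutex_lock(&io->lock);
        io->outstanding--;

        if (!io->reported) {
            if (error == 0) {
                /* first success: report it right away */
                io->result = 0;
                io->reported = true;
                pthread_cond_signal(&io->done);
            } else if (io->outstanding == 0) {
                /* last failure: nothing left that could still succeed */
                io->result = error;
                io->reported = true;
                pthread_cond_signal(&io->done);
            } else {
                /* remember the failure, keep waiting for the others */
                io->result = error;
            }
        }
        pthread_mutex_unlock(&io->lock);
    }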

How would you prefer such a flag to be implemented and passed in, and
what do you think of the general use case?


Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: RFC: multipath IO multiplex
  2010-11-05 18:39 RFC: multipath IO multiplex Lars Marowsky-Bree
@ 2010-11-06  9:32 ` Neil Brown
  2010-11-06 11:51   ` Alasdair G Kergon
  2010-11-06 17:03   ` Lars Marowsky-Bree
  0 siblings, 2 replies; 12+ messages in thread
From: Neil Brown @ 2010-11-06  9:32 UTC (permalink / raw)
  To: device-mapper development; +Cc: lmb

On Fri, 5 Nov 2010 19:39:46 +0100
Lars Marowsky-Bree <lmb@novell.com> wrote:

> Hi all,
> 
> this is a topic that came up during our HA miniconference at LPC. I
> inherited the action item to code this, but before coding it, I thought
> I'd get some validation on the design.
> 
> In a cluster environment, we occasionally have time-critical IO - both
> reads and writes - for a mix of via-disk heartbeating and the exchange of
> poison pills.
> 
> MPIO plays hell with this, since an IO could potentially experience very
> high latency during a path switch. Extending the timeouts to allow for
> this is reasonably impractical.
> 
> However, our IO has certain properties that make it special - we have
> rather careful patterns, they don't overlap, they are effectively single
> page/single atomic write unit, and each node effectively writes to its
> own area.
> 
> So the idea would be, instead of relying on the active/passive access
> pattern, to send the IO down all paths in parallel - and report
> either the first success or the last failure.

Hi Lars,
 the only issue that occurs to me is that if you want to report the first
 success, then you need to copy the data to a private buffer before
 submitting the write.  Then wait for all writes to complete before freeing
 the buffer.  If you just return on the first write, the page would be unlocked
 and so could be changed while another path was still writing it out.
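
 In other words, something along these lines (a user-space sketch of the
 pattern only - the names are made up and locking/error handling is left
 out for brevity):

    #include <stdlib.h>
    #include <string.h>

    /* The caller may be acked on the first successful write, but the
     * private copy of its data is freed only once the last clone is done. */
    struct mux_write {
        void *private_buf;  /* stable copy of the caller's page       */
        int   outstanding;  /* clones still using private_buf         */
        int   acked;        /* caller already told "write succeeded"  */
    };

    static struct mux_write *mux_write_start(const void *data, size_t len,
                                             int npaths)
    {
        struct mux_write *w = calloc(1, sizeof(*w));

        w->private_buf = malloc(len);
        memcpy(w->private_buf, data, len); /* caller's page may change later */
        w->outstanding = npaths;
        return w;
    }

    /* Called once per path when its clone of the write completes. */
    static void mux_write_clone_done(struct mux_write *w, int error)
    {
        if (error == 0 && !w->acked) {
            w->acked = 1;
            /* wake the caller here - its own page may be reused now */
        }
        if (--w->outstanding == 0) {
            free(w->private_buf);      /* only now is the copy unused */
            free(w);
        }
    }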

 Finding a way to signal 'write all paths' sounds tricky.  This flag needs to
 be state of the file descriptor, not the whole device, so it would need to be
 an fcntl rather than an ioctl.  And defining new fcntls is a lot harder
 because they need to be more generic - you cannot really make them device
 specific...
 Might it make sense to configure a range of the device where writes always
 went down all paths?  That would seem to fit with your problem description
 and might be easiest??

NeilBrown


> 
> (Clearly, this only works for active/active arrays; active/passive
> setups still may have problems.)
> 
> Doing this in user-space is somewhat icky; short of scanning the devices
> ourselves, or asking multipathd for each IO for the current list, we
> have no good way to do that. But the kernel obviously has the correct
> list at all times.
> 
> So, I think a special IO flag for block IO (ioctl, open() flag on the
> device, whatever) that would cause dm-multipath to send the IO down all
> paths (and, as mentioned, report either the last failure or first
> success), seems to be the easiest way.
> 
> How would you prefer such a flag to be implemented and passed in, and
> what do you think of the general use case?
> 
> 
> Regards,
>     Lars
> 


* Re: RFC: multipath IO multiplex
  2010-11-06  9:32 ` Neil Brown
@ 2010-11-06 11:51   ` Alasdair G Kergon
  2010-11-06 16:57     ` Lars Marowsky-Bree
  2010-11-06 17:03   ` Lars Marowsky-Bree
  1 sibling, 1 reply; 12+ messages in thread
From: Alasdair G Kergon @ 2010-11-06 11:51 UTC (permalink / raw)
  To: Neil Brown, device-mapper development, lmb

On Sat, Nov 06, 2010 at 05:32:03AM -0400, Neil Brown wrote:
>  Might it make sense to configure a range of the device where writes always
>  went down all paths?  That would seem to fit with your problem description
>  and might be easiest??

Indeed - a persistent property of the device (even another interface with a
different minor number), not the I/O.

And what is the nature of the data being written, given that I/O to one path
might get delayed and arrive long after it was sent, overwriting data
sent later?  Successful stale writes will always be recognised as such
by readers - how?
 
Alasdair


* Re: RFC: multipath IO multiplex
  2010-11-06 11:51   ` Alasdair G Kergon
@ 2010-11-06 16:57     ` Lars Marowsky-Bree
  2010-11-07 10:30       ` Christophe Varoqui
  0 siblings, 1 reply; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-06 16:57 UTC (permalink / raw)
  To: Neil Brown, device-mapper development

On 2010-11-06T11:51:02, Alasdair G Kergon <agk@redhat.com> wrote:

Hi Neil, Alasdair,

thanks for the feedback. Answering your points in reverse order -

> >  Might it make sense to configure a range of the device where writes always
> >  went down all paths?  That would seem to fit with your problem description
> >  and might be easiest??
> Indeed - a persistent property of the device (even another interface with a
> different minor number) not the I/O.

I'm not so sure that would be required though. The equivalent of our
"mkfs" tool wouldn't need this. Also, typically, this would be a
partition (kpartx) on top of a regular MPIO mapping (that we want to be
managed by multipathd).

Handling this completely differently would complicate setup, no?

> And what is the nature of the data being written, given that I/O to one path
> might get delayed and arrive long after it was sent, overwriting data
> sent later.  Successful stale writes will always be recognised as such
> by readers - how?

The very particular use case I am thinking of is the "poison pill" for
node-level fencing. Nodes constantly monitor their slot (using direct
IO, bypassing all caching, etc.), and either successfully read it or
commit suicide (assisted by a hardware watchdog to protect against
stalls).
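
For concreteness, the watcher side looks roughly like this (a heavily
simplified sketch - slot layout, "clear" marker and the polling interval
are all made up, and in reality a hardware watchdog backs up the loop):

    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define SLOT_SIZE 512          /* one sector per node slot (assumed) */
    #define POLL_USEC 1000000      /* 1s polling interval (assumed)      */

    /* Watch our own slot; any read failure or a poison pill means suicide. */
    static void watch_own_slot(const char *dev, off_t slot_offset)
    {
        void *buf;
        int fd = open(dev, O_RDONLY | O_DIRECT);   /* bypass all caching */

        if (fd < 0 || posix_memalign(&buf, 512, SLOT_SIZE))
            abort();                    /* cannot watch: self-fence */

        for (;;) {
            if (pread(fd, buf, SLOT_SIZE, slot_offset) != SLOT_SIZE)
                abort();                /* read error: self-fence   */
            if (memcmp(buf, "clear", 5) != 0)
                abort();                /* poison pill: suicide     */

            /* re-arm the hardware watchdog here, then poll again */
            usleep(POLL_USEC);
        }
    }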

The writer knows that, once the message has been successfully written,
the target node will either have read it (and committed suicide), or
been self-fenced because of a timeout/read error.

Allowing for the additional timeouts incurred by MPIO here really slows
this mechanism down to the point of being unusable.

Now, even if a write was delayed - which is not very likely; it's more
likely that some of the IO will just fail if one of the paths indeed
happens to go down, and this would not resubmit it to other paths - the
worst that could happen would be a double fence. (That would only happen
if the delayed write lands after the node has cycled once and cleared its
message slot, which would imply a significant delay already, since servers
take a while to boot.)

For the 'heartbeat' mechanism and others (if/when we get around to
adding them), we could ignore the exact contents that have been written
and just watch for changes; at worst, node death detection will take a
bit longer.

Basically, the thing we need to get around is the possible IO latency in
MPIO, for things like poison pill fencing ("storage-based death") or
qdisk-style plugins. I'm open to other suggestions as well.



Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: RFC: multipath IO multiplex
  2010-11-06  9:32 ` Neil Brown
  2010-11-06 11:51   ` Alasdair G Kergon
@ 2010-11-06 17:03   ` Lars Marowsky-Bree
  1 sibling, 0 replies; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-06 17:03 UTC (permalink / raw)
  To: Neil Brown, device-mapper development

On 2010-11-06T05:32:03, Neil Brown <neilb@suse.de> wrote:

> Hi Lars,
>  the only issue that occurs to me is that if you want to report the first
>  success, then you need to copy the data to a private buffer before
>  submitting the write.  Then wait for all writes to complete before freeing
>  the buffer.  If you just return on the first write, the page would be unlocked
>  and so could be changed while another path was still writing it out.

Right. This is, in a way, a mix of MPIO / RAID1 handling. We'd indeed
need to hold on to the written block several times over - thankfully, we
write really rarely and only one sector at a time, so the memory
consumption is trivial.

(However, we _really_ want to get those writes to disk. Right away.)

>  Finding a way to signal 'write all paths' sounds tricky.  This flag needs to
>  be state of the file descriptor, not the whole device, so it would need to be
>  an fcntl rather than an ioctl.  And defining new fcntls is a lot harder
>  because they need to be more generic - you cannot really make them device
>  specific...
>  Might it make sense to configure a range of the device where writes always
>  went down all paths?  That would seem to fit with your problem description
>  and might be easiest??

Technically, it'd be possible, because that section is contiguous on
the disk, yes.

(Note that we don't open a real file in a file system, but use a raw
block device; however, that could be a partition on top of MPIO.)

But I'm a bit unclear on how we'd define that; clearly, we don't want to
bypass multipathd's management of the MPIO mapping - that being the whole
point of not just handling this in user-space ;-)

Hrm. I already have a dm-linear mapping (thanks to kpartx; otherwise
it's trivially introduced). I could modify that to include a special
flag that would mangle the bios that pass through - so I could set a bio
flag that multipath could then act on ...?

(There's precedent; the failfast bio flag.)
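
Very roughly, something like this (sketched against the ~2.6.36 dm target
map hook; the flag itself and its value are entirely made up, and the
actual linear remapping is elided):

    /* hypothetical flag - nothing like this exists today */
    #define BIO_RW_MULTIPLEX  (1 << 30)

    static int mux_linear_map(struct dm_target *ti, struct bio *bio,
                              union map_info *map_context)
    {
        /* tag the bio; a patched dm-multipath below would see this and
         * clone the request down every path instead of just one */
        bio->bi_rw |= BIO_RW_MULTIPLEX;

        /* ... then remap sector/bdev exactly as dm-linear does ... */
        return DM_MAPIO_REMAPPED;
    }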


Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: RFC: multipath IO multiplex
  2010-11-06 16:57     ` Lars Marowsky-Bree
@ 2010-11-07 10:30       ` Christophe Varoqui
  2010-11-08 11:50         ` Lars Marowsky-Bree
  0 siblings, 1 reply; 12+ messages in thread
From: Christophe Varoqui @ 2010-11-07 10:30 UTC (permalink / raw)
  To: device-mapper development, Lars Marowsky-Bree, Neil Brown


Wouldn't it be practical to bypass mpio completely and submit your io to the paths directly instead?

Cheers,
cvaroqui

----- Original message -----
> On 2010-11-06T11:51:02, Alasdair G Kergon <agk@redhat.com> wrote:
> 
> Hi Neil, Alasdair,
> 
> thanks for the feedback. Answering your points in reverse order -
> 
> > > Might it make sense to configure a range of the device where writes
> > > always went down all paths?   That would seem to fit with your
> > > problem description and might be easiest??
> > Indeed - a persistent property of the device (even another interface
> > with a different minor number) not the I/O.
> 
> I'm not so sure that would be required though. The equivalent of our
> "mkfs" tool wouldn't need this. Also, typically, this would be a
> partition (kpartx) on top of a regular MPIO mapping (that we want to be
> managed by multipathd).
> 
> Handling this completely differently would complicate setup, no?
> 
> > And what is the nature of the data being written, given that I/O to
> > one path might get delayed and arrive long after it was sent,
> > overwriting data sent later.   Successful stale writes will always be
> > recognised as such by readers - how?
> 
> The very particular use case I am thinking of is the "poison pill" for
> node-level fencing. Nodes constantly monitor their slot (using direct
> IO, bypassing all caching, etc), and either can successfully read it or
> commit suicide (assisted by a hardware watchdog to protect against
> stalls).
> 
> The writer knows that, once the message has been successfully written,
> the target node will either have read it (and committed suicide), or
> been self-fenced because of a timeout/read error.
> 
> Allowing for the additional timeouts incurred by MPIO here really slows
> this mechanism down to the point of being unusable.
> 
> Now, even if a write was delayed - which is not very likely, it's more
> likely that some of the IO will just fail if indeed one of the paths
> happens to go down, and this would not resubmit it to other paths -, the
> worst that could happen would be a double fence. (If it gets written
> after the node has cycled once and cleared its message slot; that would
> imply a significant delay already, since servers take a bit to boot.)
> 
> For the 'heartbeat' mechanism and others (if/when we get around to
> adding them), we could ignore the exact contents that have been written
> and just watch for changes; worst, the node death detection will take a
> bit longer.
> 
> Basically, the thing we need to get around is the possible IO latency in
> MPIO, for things like poison pill fencing ("storage-based death") or
> qdisk-style plugins. I'm open for other suggestions as well.
> 
> 
> 
> Regards,
>         Lars
> 
> -- 
> Architect Storage/HA, OPS Engineering, Novell, Inc.
> SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
> 
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel



* Re: RFC: multipath IO multiplex
  2010-11-07 10:30       ` Christophe Varoqui
@ 2010-11-08 11:50         ` Lars Marowsky-Bree
  2010-11-08 12:12           ` Alasdair G Kergon
  0 siblings, 1 reply; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-08 11:50 UTC (permalink / raw)
  To: Christophe Varoqui, device-mapper development, Neil Brown

On 2010-11-07T11:30:49, Christophe Varoqui <christophe.varoqui@gmail.com> wrote:

> Wouldn't it be practical to bypass mpio completely and submit your io to the paths directly instead?

Yes - and no.

Yes: I could do that, and send my IO down all paths via async IO. It was
actually the first direction I looked into; however, I abandoned it after
a while (see below). And yes, it's the first thing everyone recommends
;-)

No: It would mean I'd have to query multipathd for every IO, to learn
which devices are currently active and linked to the right storage. (The
idea of hooking into udev myself or scanning partitions seems a bit of a
non-starter.) Alternatively, I could probably try to monitor the
mapping for changes, but then I would have to parse that syntax. Not to
mention I'd have to handle my own partitioning, LVM mapping, etc. too.

So, it seems somewhat inefficient and inelegant.

I think handling this at the dm-multipath level is cleaner; similar to
how we handle network bonding (which incidentally has a broadcast mode
too), instead of requiring every application to go out and open N
independent channels.


Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: RFC: multipath IO multiplex
  2010-11-08 11:50         ` Lars Marowsky-Bree
@ 2010-11-08 12:12           ` Alasdair G Kergon
  2010-11-08 12:19             ` Lars Marowsky-Bree
  0 siblings, 1 reply; 12+ messages in thread
From: Alasdair G Kergon @ 2010-11-08 12:12 UTC (permalink / raw)
  To: Lars Marowsky-Bree; +Cc: device-mapper development, Christophe Varoqui

On Mon, Nov 08, 2010 at 12:50:28PM +0100, Lars Marowsky-Bree wrote:
> I think handling this at the dm-multipath level is cleaner; similar to
> how we handle network bonding (which incidentally has a broadcast mode
> too), instead of requiring every application to go out and open N
> independent channels.
 
Or could it hook into the userspace multipath monitoring code which
already knows the state of the paths?

Well, I'm struggling to see anything clean, simple or generic about
a kernel-side solution here so far.  Seems like a lot of extra kernel
code for just one highly-specialised case: so far I'm unconvinced.

Alasdair


* Re: RFC: multipath IO multiplex
  2010-11-08 12:12           ` Alasdair G Kergon
@ 2010-11-08 12:19             ` Lars Marowsky-Bree
  2010-11-08 12:42               ` Hannes Reinecke
  2010-11-08 12:56               ` Alasdair G Kergon
  0 siblings, 2 replies; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-08 12:19 UTC (permalink / raw)
  To: Christophe Varoqui, device-mapper development, Neil Brown

On 2010-11-08T12:12:08, Alasdair G Kergon <agk@redhat.com> wrote:

> > I think handling this at the dm-multipath level is cleaner; similar to
> > how we handle network bonding (which incidentally has a broadcast mode
> > too), instead of requiring every application to go out and open N
> > independent channels.
> Or could it hook into the userspace multipath monitoring code which
> already knows the state of the paths?

... but not the translation (e.g., partitioning and logical volumes).
But yes, getting notified by multipathd might also work. Though the code
complexity in user-space arguably seems higher than handling it
in-kernel.

> Well, I'm struggling to see anything clean, simple or generic about
> a kernel-side solution here so far.  Seems like a lot of extra kernel
> code for just one highly-specialised case: so far I'm unconvinced.

I wonder how other latency-sensitive IO handles multipath? Maybe they
just haven't noticed yet they'd like a facility like this? ;-)

Also, Joel wanted to implement the heartbeating/poison pill mechanism
itself in kernel space; clearly, that'd require such an in-kernel
facility too, and could be shared between his code and mine.


Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


* Re: RFC: multipath IO multiplex
  2010-11-08 12:19             ` Lars Marowsky-Bree
@ 2010-11-08 12:42               ` Hannes Reinecke
  2010-11-08 12:56               ` Alasdair G Kergon
  1 sibling, 0 replies; 12+ messages in thread
From: Hannes Reinecke @ 2010-11-08 12:42 UTC (permalink / raw)
  To: device-mapper development; +Cc: Lars Marowsky-Bree, Christophe Varoqui

On 11/08/2010 01:19 PM, Lars Marowsky-Bree wrote:
> On 2010-11-08T12:12:08, Alasdair G Kergon <agk@redhat.com> wrote:
> 
>>> I think handling this at the dm-multipath level is cleaner; similar to
>>> how we handle network bonding (which incidentally has a broadcast mode
>>> too), instead of requiring every application to go out and open N
>>> independent channels.
>> Or could it hook into the userspace multipath monitoring code which
>> already knows the state of the paths?
> 
> ... but not the translation (e.g., partitioning and logical volumes).
> But yes, getting notified by multipathd might also work. Though the code
> complexity in user-space arguably seems higher than handling it
> in-kernel.
> 
>> Well, I'm struggling to see anything clean, simple or generic about
>> a kernel-side solution here so far.  Seems like a lot of extra kernel
>> code for just one highly-specialised case: so far I'm unconvinced.
> 
> I wonder how other latency-sensitive IO handles multipath? Maybe they
> just haven't noticed yet they'd like a facility like this? ;-)
> 
> Also, Joel wanted to implement the heartbeating/poison pill mechanism
> itself in kernel space; clearly, that'd require such an in-kernel
> facility too, and could be shared between his code and mine.
> 
A really daft idea:
Can't you implement a separate path checker for this?
Basically a wrapper around the existing ones (or even a hook into
the existing infrastructure) which would write the poison pill if
requested? The path checker will be called for every path, so you
only have to worry about the checker interval. But I daresay you can
shorten it on request.
Then you can implement a CLI callout that sets the magic poison pill
flag, which would trigger an immediate path re-check and have the path
checker write the poison pill itself.

Hmm?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)


* Re: RFC: multipath IO multiplex
  2010-11-08 12:19             ` Lars Marowsky-Bree
  2010-11-08 12:42               ` Hannes Reinecke
@ 2010-11-08 12:56               ` Alasdair G Kergon
  2010-11-08 14:18                 ` Lars Marowsky-Bree
  1 sibling, 1 reply; 12+ messages in thread
From: Alasdair G Kergon @ 2010-11-08 12:56 UTC (permalink / raw)
  To: Lars Marowsky-Bree; +Cc: device-mapper development, Christophe Varoqui

On Mon, Nov 08, 2010 at 01:19:41PM +0100, Lars Marowsky-Bree wrote:
> I wonder how other latency-sensitive IO handles multipath? Maybe they
> just haven't noticed yet they'd like a facility like this? ;-)
 
As usual, it's the lack of a cancellation interface for incomplete I/O
that seems to make the kernel-side option troublesome: we still have
to wait for timeouts.  Do we prevent more than one I/O from being sent
in this mode, given that any other alternative would have races?

Alasdair


* Re: RFC: multipath IO multiplex
  2010-11-08 12:56               ` Alasdair G Kergon
@ 2010-11-08 14:18                 ` Lars Marowsky-Bree
  0 siblings, 0 replies; 12+ messages in thread
From: Lars Marowsky-Bree @ 2010-11-08 14:18 UTC (permalink / raw)
  To: Christophe Varoqui, device-mapper development, Neil Brown

On 2010-11-08T12:56:38, Alasdair G Kergon <agk@redhat.com> wrote:

> > I wonder how other latency-sensitive IO handles multipath? Maybe they
> > just haven't noticed yet they'd like a facility like this? ;-)
> As usual, it's the lack of a cancellation interface for incomplete I/O
> that seems to make the kernel-side option troublesome: we still have
> to wait for timeouts.

That would apply just as well to user-space though, no?

Like I said, the goal would be to report the first successful IO
completion, or the last failure - any other failures or results simply
get discarded at the kernel level. (Or wherever.)

> Do we prevent more than one I/O being sent in this mode, given that
> any other alternative would have races?

I don't follow. What races?

Of course the application needs to know this, and can't just treat it
like a regular block device.


Regards,
    Lars

-- 
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


end of thread (newest: 2010-11-08 14:18 UTC)

Thread overview: 12+ messages
-- links below jump to the message on this page --
2010-11-05 18:39 RFC: multipath IO multiplex Lars Marowsky-Bree
2010-11-06  9:32 ` Neil Brown
2010-11-06 11:51   ` Alasdair G Kergon
2010-11-06 16:57     ` Lars Marowsky-Bree
2010-11-07 10:30       ` Christophe Varoqui
2010-11-08 11:50         ` Lars Marowsky-Bree
2010-11-08 12:12           ` Alasdair G Kergon
2010-11-08 12:19             ` Lars Marowsky-Bree
2010-11-08 12:42               ` Hannes Reinecke
2010-11-08 12:56               ` Alasdair G Kergon
2010-11-08 14:18                 ` Lars Marowsky-Bree
2010-11-06 17:03   ` Lars Marowsky-Bree
