linux-btrfs.vger.kernel.org archive mirror
* Btrfs raid allocator
@ 2014-05-06 10:41 Hendrik Siedelmann
  2014-05-06 10:59 ` Hugo Mills
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Hendrik Siedelmann @ 2014-05-06 10:41 UTC (permalink / raw)
  To: linux-btrfs

Hello all!

I would like to use btrfs (or anything else actually) to maximize raid0 
performance. Basically I have a relatively constant stream of data that 
simply has to be written out to disk. So my question is: how does the 
block allocator decide which device to write to? Can this decision be 
dynamic, and could it incorporate timing/throughput decisions? I'm 
willing to write code, I just have no clue how this works right now. I 
read somewhere that the decision is based on free space; is this still true?

Cheers
Hendrik

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Btrfs raid allocator
  2014-05-06 10:41 Btrfs raid allocator Hendrik Siedelmann
@ 2014-05-06 10:59 ` Hugo Mills
  2014-05-06 11:14   ` Hendrik Siedelmann
  2014-05-06 20:59 ` Duncan
  2014-05-06 21:49 ` Chris Murphy
  2 siblings, 1 reply; 10+ messages in thread
From: Hugo Mills @ 2014-05-06 10:59 UTC (permalink / raw)
  To: Hendrik Siedelmann; +Cc: linux-btrfs


On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
> Hello all!
> 
> I would like to use btrfs (or anything else actually) to maximize raid0
> performance. Basically I have a relatively constant stream of data that
> simply has to be written out to disk. So my question is, how is the block
> allocator deciding on which device to write, can this decision be dynamic
> and could it incorporate timing/throughput decisions? I'm willing to write
> code, I just have no clue as to how this works right now. I read somewhere
> that the decision is based on free space, is this still true?

   For (current) RAID-0 allocation, the block group allocator will use
as many chunks as there are devices with free space (down to a minimum
of 2). Data is then striped across those chunks in 64 KiB stripes.
Thus, the first block group will be N GiB of usable space, striped
across N devices.
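
   The rule above can be sketched as a toy model (illustrative Python
only; the device names are made up, and the one-1-GiB-chunk-per-device
assumption is a simplification, not btrfs internals):

```python
# Toy model of RAID-0 block-group allocation as described above:
# one data chunk per device that still has room (minimum two devices),
# with data striped across those chunks in fixed 64 KiB stripes.

STRIPE = 64 * 1024   # 64 KiB stripe size (currently fixed in btrfs)
CHUNK = 1 << 30      # assumed 1 GiB data chunk per participating device

def allocate_block_group(free_per_device):
    """Pick the devices that contribute a chunk to the next block group."""
    members = [dev for dev, free in free_per_device.items() if free >= CHUNK]
    if len(members) < 2:
        raise RuntimeError("RAID-0 needs at least two devices with free space")
    return members

def device_for_offset(members, offset):
    """Device that a logical offset within the block group lands on."""
    return members[(offset // STRIPE) % len(members)]

free = {"sda": 3 * CHUNK, "sdb": 3 * CHUNK, "sdc": CHUNK // 2}
members = allocate_block_group(free)   # sdc is too full to contribute a chunk
print(members)                             # ['sda', 'sdb']
print(device_for_offset(members, 0))       # sda
print(device_for_offset(members, STRIPE))  # sdb
```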

   There's a second level of allocation (which I haven't looked at at
all), which is how the FS decides where to put data within the
allocated block groups. I think it will almost certainly be beneficial
in your case to use prealloc extents, which will turn your continuous
write into large contiguous sections of striping.

   I would recommend thoroughly benchmarking your application with the
FS first though, just to see how it's going to behave for you.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
                   --- Ceci n'est pas une pipe:  | ---                   



* Re: Btrfs raid allocator
  2014-05-06 10:59 ` Hugo Mills
@ 2014-05-06 11:14   ` Hendrik Siedelmann
  2014-05-06 11:19     ` Hugo Mills
  0 siblings, 1 reply; 10+ messages in thread
From: Hendrik Siedelmann @ 2014-05-06 11:14 UTC (permalink / raw)
  To: Hugo Mills, linux-btrfs

On 06.05.2014 12:59, Hugo Mills wrote:
> On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
>> Hello all!
>>
>> I would like to use btrfs (or anything else actually) to maximize raid0
>> performance. Basically I have a relatively constant stream of data that
>> simply has to be written out to disk. So my question is, how is the block
>> allocator deciding on which device to write, can this decision be dynamic
>> and could it incorporate timing/throughput decisions? I'm willing to write
>> code, I just have no clue as to how this works right now. I read somewhere
>> that the decision is based on free space, is this still true?
>
>     For (current) RAID-0 allocation, the block group allocator will use
> as many chunks as there are devices with free space (down to a minimum
> of 2). Data is then striped across those chunks in 64 KiB stripes.
> Thus, the first block group will be N GiB of usable space, striped
> across N devices.

So do I understand this correctly that (assuming we have enough space) 
data will be spread equally across the disks, independent of write 
speeds? So one slow device would slow down the whole raid?

>     There's a second level of allocation (which I haven't looked at at
> all), which is how the FS decides where to put data within the
> allocated block groups. I think it will almost certainly be beneficial
> in your case to use prealloc extents, which will turn your continuous
> write into large contiguous sections of striping.

Why does prealloc change anything? For me latency does not matter, only 
continuous throughput!

>     I would recommend thoroughly benchmarking your application with the
> FS first though, just to see how it's going to behave for you.
>
>     Hugo.
>

Of course - it's just that I do not yet have the hardware, but I plan 
to test with a small model. I'm just trying to find out how it actually 
works first, so I know what to look out for.

Hendrik



* Re: Btrfs raid allocator
  2014-05-06 11:14   ` Hendrik Siedelmann
@ 2014-05-06 11:19     ` Hugo Mills
  2014-05-06 11:26       ` Hendrik Siedelmann
  0 siblings, 1 reply; 10+ messages in thread
From: Hugo Mills @ 2014-05-06 11:19 UTC (permalink / raw)
  To: Hendrik Siedelmann; +Cc: linux-btrfs


On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
> On 06.05.2014 12:59, Hugo Mills wrote:
> >On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
> >>Hello all!
> >>
> >>I would like to use btrfs (or anything else actually) to maximize raid0
> >>performance. Basically I have a relatively constant stream of data that
> >>simply has to be written out to disk. So my question is, how is the block
> >>allocator deciding on which device to write, can this decision be dynamic
> >>and could it incorporate timing/throughput decisions? I'm willing to write
> >>code, I just have no clue as to how this works right now. I read somewhere
> >>that the decision is based on free space, is this still true?
> >
> >    For (current) RAID-0 allocation, the block group allocator will use
> >as many chunks as there are devices with free space (down to a minimum
> >of 2). Data is then striped across those chunks in 64 KiB stripes.
> >Thus, the first block group will be N GiB of usable space, striped
> >across N devices.
> 
> So do I understand this correctly that (assuming we have enough space) data
> will be spread equally between the disks independent of write speeds? So one
> slow device would slow down the whole raid?

   Yes. Exactly the same as it would be with DM RAID-0 on the same
configuration. There's not a lot we can do about that at this point.
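
   To put numbers on that (made-up per-disk speeds; a back-of-envelope
model, not a measurement):

```python
# Consequence of fixed striping: every member must absorb an equal
# share of the data, so sustained array throughput is roughly
# N * (speed of the slowest member). Speeds below are invented MB/s.

def raid0_throughput(member_speeds):
    return len(member_speeds) * min(member_speeds)

print(raid0_throughput([150, 150, 150]))  # 450: three matched disks
print(raid0_throughput([150, 150, 60]))   # 180: one slow disk drags all
```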

> >    There's a second level of allocation (which I haven't looked at at
> >all), which is how the FS decides where to put data within the
> >allocated block groups. I think it will almost certainly be beneficial
> >in your case to use prealloc extents, which will turn your continuous
> >write into large contiguous sections of striping.
> 
> Why does prealloc change anything? For me latency does not matter, only
> continuous throughput!

   It makes the extent allocation algorithm much simpler, because it
can then allocate in larger chunks and do more linear writes.
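
   From userspace, preallocation can be as simple as one fallocate-style
call before streaming begins. A minimal sketch, assuming a Unix system
where Python exposes posix_fallocate(3) as os.posix_fallocate:

```python
# Preallocate the output file before streaming data into it, so the
# filesystem can hand out large contiguous extents up front.
import os
import tempfile

def preallocate(path, size):
    """Create `path` and reserve `size` bytes of on-disk space for it."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)  # reserve bytes [0, size)
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "stream.dat")
    preallocate(target, 16 * 1024 * 1024)   # reserve 16 MiB up front
    print(os.path.getsize(target))          # 16777216
```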

> >    I would recommend thoroughly benchmarking your application with the
> >FS first though, just to see how it's going to behave for you.
> >
> >    Hugo.
> >
> 
> Of course - it's just that I do not yet have the hardware, but I plan to
> test with a small model - I just try to find out how it actually works
> first, so I know what to look out for.

   Good luck. :)

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
     --- "I am the author. You are the audience. I outrank you!" ---     



* Re: Btrfs raid allocator
  2014-05-06 11:19     ` Hugo Mills
@ 2014-05-06 11:26       ` Hendrik Siedelmann
  2014-05-06 11:46         ` Hugo Mills
  0 siblings, 1 reply; 10+ messages in thread
From: Hendrik Siedelmann @ 2014-05-06 11:26 UTC (permalink / raw)
  To: Hugo Mills, linux-btrfs

On 06.05.2014 13:19, Hugo Mills wrote:
> On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
>> On 06.05.2014 12:59, Hugo Mills wrote:
>>> On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
>>>> Hello all!
>>>>
>>>> I would like to use btrfs (or anything else actually) to maximize raid0
>>>> performance. Basically I have a relatively constant stream of data that
>>>> simply has to be written out to disk. So my question is, how is the block
>>>> allocator deciding on which device to write, can this decision be dynamic
>>>> and could it incorporate timing/throughput decisions? I'm willing to write
>>>> code, I just have no clue as to how this works right now. I read somewhere
>>>> that the decision is based on free space, is this still true?
>>>
>>>     For (current) RAID-0 allocation, the block group allocator will use
>>> as many chunks as there are devices with free space (down to a minimum
>>> of 2). Data is then striped across those chunks in 64 KiB stripes.
>>> Thus, the first block group will be N GiB of usable space, striped
>>> across N devices.
>>
>> So do I understand this correctly that (assuming we have enough space) data
>> will be spread equally between the disks independent of write speeds? So one
>> slow device would slow down the whole raid?
>
>     Yes. Exactly the same as it would be with DM RAID-0 on the same
> configuration. There's not a lot we can do about that at this point.

So striping is fixed, but which disks contribute a chunk is dynamic? 
So for large workloads slower disks could 'skip a chunk', since chunk 
allocation is dynamic, correct?

>>>     There's a second level of allocation (which I haven't looked at at
>>> all), which is how the FS decides where to put data within the
>>> allocated block groups. I think it will almost certainly be beneficial
>>> in your case to use prealloc extents, which will turn your continuous
>>> write into large contiguous sections of striping.
>>
>> Why does prealloc change anything? For me latency does not matter, only
>> continuous throughput!
>
>     It makes the extent allocation algorithm much simpler, because it
> can then allocate in larger chunks and do more linear writes

Is this still true if I do very large writes? Or do those get broken 
down by the kernel somewhere?

>>>     I would recommend thoroughly benchmarking your application with the
>>> FS first though, just to see how it's going to behave for you.
>>>
>>>     Hugo.
>>>
>>
>> Of course - it's just that I do not yet have the hardware, but I plan to
>> test with a small model - I just try to find out how it actually works
>> first, so I know what to look out for.
>
>     Good luck. :)
>
>     Hugo.
>

Thanks!
Hendrik



* Re: Btrfs raid allocator
  2014-05-06 11:26       ` Hendrik Siedelmann
@ 2014-05-06 11:46         ` Hugo Mills
  2014-05-06 12:16           ` Hendrik Siedelmann
  0 siblings, 1 reply; 10+ messages in thread
From: Hugo Mills @ 2014-05-06 11:46 UTC (permalink / raw)
  To: Hendrik Siedelmann; +Cc: linux-btrfs


On Tue, May 06, 2014 at 01:26:44PM +0200, Hendrik Siedelmann wrote:
> On 06.05.2014 13:19, Hugo Mills wrote:
> >On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
> >>On 06.05.2014 12:59, Hugo Mills wrote:
> >>>On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
> >>>>Hello all!
> >>>>
> >>>>I would like to use btrfs (or anything else actually) to maximize raid0
> >>>>performance. Basically I have a relatively constant stream of data that
> >>>>simply has to be written out to disk. So my question is, how is the block
> >>>>allocator deciding on which device to write, can this decision be dynamic
> >>>>and could it incorporate timing/throughput decisions? I'm willing to write
> >>>>code, I just have no clue as to how this works right now. I read somewhere
> >>>>that the decision is based on free space, is this still true?
> >>>
> >>>    For (current) RAID-0 allocation, the block group allocator will use
> >>>as many chunks as there are devices with free space (down to a minimum
> >>>of 2). Data is then striped across those chunks in 64 KiB stripes.
> >>>Thus, the first block group will be N GiB of usable space, striped
> >>>across N devices.
> >>
> >>So do I understand this correctly that (assuming we have enough space) data
> >>will be spread equally between the disks independent of write speeds? So one
> >>slow device would slow down the whole raid?
> >
> >    Yes. Exactly the same as it would be with DM RAID-0 on the same
> >configuration. There's not a lot we can do about that at this point.
> 
> So striping is fixed but which disk takes part with a chunk is dynamic? But
> for large workloads slower disks could 'skip a chunk' as chunk allocation is
> dynamic, correct?

   You'd have to rewrite the chunk allocator to do this, _and_ provide
different RAID levels for different subvolumes. The chunk/block group
allocator right now uses only one rule for allocating data, and one
for allocating metadata. Now, both of these are planned, and _might_
between them possibly cover the use-case you're talking about, but I'm
not certain it's necessarily a sensible thing to do in this case.

   My question is, if you actually care about the performance of this
system, why are you buying some slow devices to drag the performance
of your fast devices down? It seems like a recipe for disaster...

> >>>    There's a second level of allocation (which I haven't looked at at
> >>>all), which is how the FS decides where to put data within the
> >>>allocated block groups. I think it will almost certainly be beneficial
> >>>in your case to use prealloc extents, which will turn your continuous
> >>>write into large contiguous sections of striping.
> >>
> >>Why does prealloc change anything? For me latency does not matter, only
> >>continuous throughput!
> >
> >    It makes the extent allocation algorithm much simpler, because it
> >can then allocate in larger chunks and do more linear writes
> 
> Is this still true if I do very large writes? Or do those get broken down by
> the kernel somewhere?

   I guess it'll depend on the approach you use to do these "very
large" writes, and on the exact definition of "very large". This is
not an area I know a huge amount about.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
     --- "I am the author. You are the audience. I outrank you!" ---     



* Re: Btrfs raid allocator
  2014-05-06 11:46         ` Hugo Mills
@ 2014-05-06 12:16           ` Hendrik Siedelmann
  0 siblings, 0 replies; 10+ messages in thread
From: Hendrik Siedelmann @ 2014-05-06 12:16 UTC (permalink / raw)
  To: Hugo Mills, linux-btrfs

On 06.05.2014 13:46, Hugo Mills wrote:
> On Tue, May 06, 2014 at 01:26:44PM +0200, Hendrik Siedelmann wrote:
>> On 06.05.2014 13:19, Hugo Mills wrote:
>>> On Tue, May 06, 2014 at 01:14:26PM +0200, Hendrik Siedelmann wrote:
>>>> On 06.05.2014 12:59, Hugo Mills wrote:
>>>>> On Tue, May 06, 2014 at 12:41:38PM +0200, Hendrik Siedelmann wrote:
>>>>>> Hello all!
>>>>>>
>>>>>> I would like to use btrfs (or anything else actually) to maximize raid0
>>>>>> performance. Basically I have a relatively constant stream of data that
>>>>>> simply has to be written out to disk. So my question is, how is the block
>>>>>> allocator deciding on which device to write, can this decision be dynamic
>>>>>> and could it incorporate timing/throughput decisions? I'm willing to write
>>>>>> code, I just have no clue as to how this works right now. I read somewhere
>>>>>> that the decision is based on free space, is this still true?
>>>>>
>>>>>     For (current) RAID-0 allocation, the block group allocator will use
>>>>> as many chunks as there are devices with free space (down to a minimum
>>>>> of 2). Data is then striped across those chunks in 64 KiB stripes.
>>>>> Thus, the first block group will be N GiB of usable space, striped
>>>>> across N devices.
>>>>
>>>> So do I understand this correctly that (assuming we have enough space) data
>>>> will be spread equally between the disks independent of write speeds? So one
>>>> slow device would slow down the whole raid?
>>>
>>>     Yes. Exactly the same as it would be with DM RAID-0 on the same
>>> configuration. There's not a lot we can do about that at this point.
>>
>> So striping is fixed but which disk takes part with a chunk is dynamic? But
>> for large workloads slower disks could 'skip a chunk' as chunk allocation is
>> dynamic, correct?
>
>     You'd have to rewrite the chunk allocator to do this, _and_ provide
> different RAID levels for different subvolumes. The chunk/block group
> allocator right now uses only one rule for allocating data, and one
> for allocating metadata. Now, both of these are planned, and _might_
> between them possibly cover the use-case you're talking about, but I'm
> not certain it's necessarily a sensible thing to do in this case.

But what does the allocator currently do when one disk runs out of 
space? I thought full disks simply stop being used while writes continue 
on the remaining ones. So the mechanism is already there; it just needs 
to be invoked when a drive is too busy instead of too full.
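
The idea above could be sketched like this (purely hypothetical; the 
queue-depth threshold and the fallback rule are inventions for 
illustration, not anything in the btrfs code):

```python
# Extend the "skip devices with no space" rule with a
# "skip devices that are too busy" rule.

BUSY_THRESHOLD = 8   # arbitrary: skip devices with deeper I/O queues

def pick_members(devices, min_members=2):
    """devices: name -> {'free': bytes, 'queue_depth': pending I/Os}"""
    picked = [name for name, d in sorted(devices.items())
              if d["free"] > 0 and d["queue_depth"] < BUSY_THRESHOLD]
    if len(picked) < min_members:
        # everyone is busy: fall back to the space-only rule, don't fail
        picked = [name for name, d in sorted(devices.items()) if d["free"] > 0]
    return picked

devs = {
    "sda": {"free": 1 << 30, "queue_depth": 2},
    "sdb": {"free": 1 << 30, "queue_depth": 30},  # busy, skipped this round
    "sdc": {"free": 1 << 30, "queue_depth": 1},
}
print(pick_members(devs))  # ['sda', 'sdc']
```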

>     My question is, if you actually care about the performance of this
> system, why are you buying some slow devices to drag the performance
> of your fast devices down? It seems like a recipe for disaster...

Even the speed of a single HDD varies depending on where on the disk I 
write the data, so actually there is not much choice :-D.
I'm aware that this could be a case of overengineering. Actually my 
first thought was to write a simple FUSE module which only handles the 
data and puts metadata on a regular filesystem. But then I thought it 
would be nice to have this in btrfs - and not just for raid0.

>>>>>     There's a second level of allocation (which I haven't looked at at
>>>>> all), which is how the FS decides where to put data within the
>>>>> allocated block groups. I think it will almost certainly be beneficial
>>>>> in your case to use prealloc extents, which will turn your continuous
>>>>> write into large contiguous sections of striping.
>>>>
>>>> Why does prealloc change anything? For me latency does not matter, only
>>>> continuous throughput!
>>>
>>>     It makes the extent allocation algorithm much simpler, because it
>>> can then allocate in larger chunks and do more linear writes
>>
>> Is this still true if I do very large writes? Or do those get broken down by
>> the kernel somewhere?
>
>     I guess it'll depend on the approach you use to do these "very
> large" writes, and on the exact definition of "very large". This is
> not an area I know a huge amount about.
>
>     Hugo.
>
Never mind, I'll just try it out!

Hendrik



* Re: Btrfs raid allocator
  2014-05-06 10:41 Btrfs raid allocator Hendrik Siedelmann
  2014-05-06 10:59 ` Hugo Mills
@ 2014-05-06 20:59 ` Duncan
  2014-05-06 21:49 ` Chris Murphy
  2 siblings, 0 replies; 10+ messages in thread
From: Duncan @ 2014-05-06 20:59 UTC (permalink / raw)
  To: linux-btrfs

Hendrik Siedelmann posted on Tue, 06 May 2014 12:41:38 +0200 as excerpted:

> I would like to use btrfs (or anyting else actually) to maximize raid0
> performance. Basically I have a relatively constant stream of data that
> simply has to be written out to disk.

If flexible parallelization is all you're worried about, not data 
integrity or the other things btrfs does, I'd suggest looking at md- or 
dm-raid. They're more mature and less complex than btrfs, and if you're 
not using its other features anyway, they should simply work better for 
your use-case.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Btrfs raid allocator
  2014-05-06 10:41 Btrfs raid allocator Hendrik Siedelmann
  2014-05-06 10:59 ` Hugo Mills
  2014-05-06 20:59 ` Duncan
@ 2014-05-06 21:49 ` Chris Murphy
  2014-05-06 22:45   ` Hendrik Siedelmann
  2 siblings, 1 reply; 10+ messages in thread
From: Chris Murphy @ 2014-05-06 21:49 UTC (permalink / raw)
  To: Hendrik Siedelmann; +Cc: linux-btrfs


On May 6, 2014, at 4:41 AM, Hendrik Siedelmann <hendrik.siedelmann@googlemail.com> wrote:

> Hello all!
> 
> I would like to use btrfs (or anything else actually) to maximize raid0 performance. Basically I have a relatively constant stream of data that simply has to be written out to disk. 

I think the only way to know what works best for your workload is to test configurations with the actual workload. For optimization of multiple-device file systems, it's hard to beat XFS on raid0 or even linear/concat due to its parallelization, if you have more than one stream (or a stream that produces a lot of files that XFS can allocate into separate allocation groups). Also, mdadm supports user-specified strip/chunk sizes, whereas currently on Btrfs this is fixed at 64KiB. Depending on the file size for your workload, it's possible a much larger strip will yield better performance.
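
A rough way to see the strip-size effect (a counting model only, not a 
benchmark; the file and strip sizes are examples):

```python
# With a strip much smaller than the typical file, every file is spread
# over every spindle (more seeking per file); a strip larger than the
# file keeps each file on a single device.
import math

def devices_touched(file_size, strip_size, n_devices):
    return min(n_devices, math.ceil(file_size / strip_size))

print(devices_touched(256 * 1024, 64 * 1024, 4))    # 4: 64 KiB strip
print(devices_touched(256 * 1024, 1024 * 1024, 4))  # 1: 1 MiB strip
```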

Another optimization is hardware RAID with a battery-backed write cache (the drives' own write caches disabled) and the nobarrier mount option. If your workload supports linear/concat then it's fine to use md linear for this. What I'm not sure of is whether it's OK practice to disable barriers if the system is on a UPS (rather than on a battery-backed hardware RAID cache). You should post the workload and hardware details on the XFS list to get suggestions about such things. They'll also likely recommend the deadline scheduler over cfq.

Unless you have a workload really familiar to the responder, they'll tell you any benchmarking you do needs to approximate the actual workflow. A mismatched benchmark to the workload will lead you to the wrong conclusions. Typically when you optimize for a particular workload, other workloads suffer.

Chris Murphy


* Re: Btrfs raid allocator
  2014-05-06 21:49 ` Chris Murphy
@ 2014-05-06 22:45   ` Hendrik Siedelmann
  0 siblings, 0 replies; 10+ messages in thread
From: Hendrik Siedelmann @ 2014-05-06 22:45 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

On 06.05.2014 23:49, Chris Murphy wrote:
>
> On May 6, 2014, at 4:41 AM, Hendrik Siedelmann
> <hendrik.siedelmann@googlemail.com> wrote:
>
>> Hello all!
>>
>> I would like to use btrfs (or anything else actually) to maximize
>> raid0 performance. Basically I have a relatively constant stream of
>> data that simply has to be written out to disk.
>
> I think the only way to know what works best for your workload is to
> test configurations with the actual workload. For optimization of
> multiple device file systems, it's hard to beat XFS on raid0 or even
> linear/concat due to its parallelization, if you have more than one
> stream (or a stream that produces a lot of files that XFS can
> allocate into separate allocation groups). Also mdadm supports user-
> specified strip/chunk sizes, whereas currently on Btrfs this is fixed
> to 64KiB. Depending on the file size for your workload, it's possible
> a much larger strip will yield better performance.

Thanks, that's quite a few knobs I can try out. I just have a lot of 
data - at a rate of up to 450MB/s - that I want to write out in time, 
preferably without having to rely on overly expensive hardware.
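
As a quick sizing sketch for that 450MB/s target (the per-drive speeds 
are assumed figures, using the N-times-slowest-member rule for fixed 
striping):

```python
# How many drives a fixed-stripe RAID-0 needs if the array runs at
# N * (slowest member's sustained speed).
import math

def drives_needed(target_mb_s, slowest_member_mb_s):
    return math.ceil(target_mb_s / slowest_member_mb_s)

print(drives_needed(450, 120))  # 4 drives at ~120 MB/s sustained
print(drives_needed(450, 60))   # 8 if inner-track throughput drops to 60
```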

> Another optimization is hardware RAID with a battery backed write
> cache (the drives' write cache are disabled) and using nobarrier
> mount option. If your workload supports linear/concat then it's fine
> to use md linear for this. What I'm not sure of is if it's an OK
> practice to disable barriers if the system is on a UPS (rather than a
> battery backed hardware RAID cache). You should post the workload and
> hardware details on the XFS list to get suggestions about such
> things. They'll also likely recommend the deadline scheduler over
> cfq.

Actually, data integrity does not matter for this workload. If 
everything is successful the result will be backed up; before that, 
full filesystem corruption is an acceptable failure mode.

> Unless you have a workload really familiar to the responder, they'll
> tell you any benchmarking you do needs to approximate the actual
> workflow. A mismatched benchmark to the workload will lead you to the
> wrong conclusions. Typically when you optimize for a particular
> workload, other workloads suffer.
>
> Chris Murphy
>

Thanks again for all the info! I'll get back if everything works fine - 
or if it doesn't ;-)

Cheers
Hendrik


end of thread, other threads:[~2014-05-06 22:45 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-06 10:41 Btrfs raid allocator Hendrik Siedelmann
2014-05-06 10:59 ` Hugo Mills
2014-05-06 11:14   ` Hendrik Siedelmann
2014-05-06 11:19     ` Hugo Mills
2014-05-06 11:26       ` Hendrik Siedelmann
2014-05-06 11:46         ` Hugo Mills
2014-05-06 12:16           ` Hendrik Siedelmann
2014-05-06 20:59 ` Duncan
2014-05-06 21:49 ` Chris Murphy
2014-05-06 22:45   ` Hendrik Siedelmann
