linux-raid.vger.kernel.org archive mirror
* [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
@ 2017-01-09 16:38 Coly Li
  2017-01-12 15:09 ` [Lsf-pc] " Sagi Grimberg
  2017-01-16  3:33 ` Guoqing Jiang
  0 siblings, 2 replies; 5+ messages in thread
From: Coly Li @ 2017-01-09 16:38 UTC (permalink / raw)
  To: lsf-pc
  Cc: open list:SOFTWARE RAID (Multiple Disks) SUPPORT, linux-block,
	linux-kernel, linux-nvme, Shaohua Li, NeilBrown, songliubraving,
	Guoqing Jiang, pawel.baldysiak, mariusz.dabrowski,
	artur.paszkiewicz, Jes.Sorensen, Hannes Reinecke

Hi Folks,

I'd like to propose a general md raid discussion; it would be quite
valuable for the active md raid developers to sit together and discuss
the current challenges of Linux software raid and its development
trends.

In the last few years there has been a lot of development activity in
md raid, e.g. the raid5 cache, raid1 clustering, the partial parity
log, fast-fail upstreaming, and some effort on raid1 & raid0
performance improvements.

I see some functionality overlap between r5cache (the raid5 cache) and
PPL (the partial parity log), and currently it is unclear where these
two development activities are heading.
I also receive reports from users that better raid1 performance is
desired when the array is built on NVMe SSDs and used as a cache device
(e.g. under bcache or dm-cache). I am working on some raid1 performance
improvements (e.g. a new raid1 I/O barrier and lockless raid1 I/O
submission), and have some more ideas to discuss.
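
To make the I/O barrier idea a bit more concrete, below is a minimal
userspace sketch of the direction I am experimenting with. It is an
illustration only, not the actual drivers/md/raid1.c code, and all the
identifiers (BUCKETS, bucket_of(), try_submit(), ...) are made up for
the example. The point is that regular I/O takes a single atomic
operation on a per-bucket counter instead of contending on one global
barrier, and only waits when resync has raised the barrier over its
own bucket:

/*
 * Illustration only, not drivers/md/raid1.c: per-bucket I/O counters
 * instead of one global I/O barrier shared by all in-flight I/O.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define BUCKETS 1024                    /* power of two */

struct barrier_bucket {
        atomic_int nr_pending;          /* regular I/O in flight */
        atomic_int barrier;             /* raised while resync runs here */
};

static struct barrier_bucket buckets[BUCKETS];

static unsigned bucket_of(uint64_t sector)
{
        /* hash the sector so hot LBA ranges spread across buckets */
        return (unsigned)((sector >> 7) & (BUCKETS - 1));
}

/* fast path: one atomic add, no lock shared by unrelated I/O */
static int try_submit(uint64_t sector)
{
        struct barrier_bucket *b = &buckets[bucket_of(sector)];

        atomic_fetch_add(&b->nr_pending, 1);
        if (atomic_load(&b->barrier)) {
                /* resync owns this bucket: back off, caller must wait */
                atomic_fetch_sub(&b->nr_pending, 1);
                return 0;
        }
        return 1;
}

static void end_io(uint64_t sector)
{
        atomic_fetch_sub(&buckets[bucket_of(sector)].nr_pending, 1);
}

int main(void)
{
        uint64_t sector = 123456;

        if (try_submit(sector)) {
                /* ... the read would be issued to a mirror leg here ... */
                end_io(sector);
                printf("sector %llu admitted via bucket %u\n",
                       (unsigned long long)sector, bucket_of(sector));
        }
        return 0;
}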

Therefore, if the md raid developers have a chance to sit together and
discuss how to collaborate efficiently in the coming year, it will be
much more productive than communicating on the mailing list.

Finally, let me introduce myself for people who don't know me. My name
is Coly Li; I used to work on OCFS2 and Ext4 for SUSE Linux, and now I
work with Neil Brown and Hannes Reinecke to maintain the block layer
code for SUSE Linux, mostly focused on drivers/md/*.

Thanks.

Coly Li


* Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
  2017-01-09 16:38 [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion Coly Li
@ 2017-01-12 15:09 ` Sagi Grimberg
  2017-01-13  4:00   ` Coly Li
  2017-01-16  3:33 ` Guoqing Jiang
  1 sibling, 1 reply; 5+ messages in thread
From: Sagi Grimberg @ 2017-01-12 15:09 UTC (permalink / raw)
  To: Coly Li, lsf-pc
  Cc: linux-block, songliubraving, pawel.baldysiak, linux-kernel,
	linux-nvme, NeilBrown,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT,
	artur.paszkiewicz, Hannes Reinecke, Guoqing Jiang, Jes.Sorensen,
	mariusz.dabrowski, Shaohua Li

Hey Coly,

> I also receive reports from users that better raid1 performance is
> desired when the array is built on NVMe SSDs and used as a cache
> device (e.g. under bcache or dm-cache). I am working on some raid1
> performance improvements (e.g. a new raid1 I/O barrier and lockless
> raid1 I/O submission), and have some more ideas to discuss.

Do you have some performance measurements to share?

Mike used null devices to simulate very fast devices, which led to
nice performance enhancements in the dm-multipath code.


* Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
  2017-01-12 15:09 ` [Lsf-pc] " Sagi Grimberg
@ 2017-01-13  4:00   ` Coly Li
  0 siblings, 0 replies; 5+ messages in thread
From: Coly Li @ 2017-01-13  4:00 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: lsf-pc, linux-block, songliubraving, pawel.baldysiak,
	linux-kernel, linux-nvme, NeilBrown,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT,
	artur.paszkiewicz, Hannes Reinecke, Guoqing Jiang, Jes.Sorensen,
	mariusz.dabrowski, Shaohua Li

On 2017/1/12 11:09 PM, Sagi Grimberg wrote:
> Hey Coly,
> 
>> I also receive reports from users that better raid1 performance is
>> desired when the array is built on NVMe SSDs and used as a cache
>> device (e.g. under bcache or dm-cache). I am working on some raid1
>> performance improvements (e.g. a new raid1 I/O barrier and lockless
>> raid1 I/O submission), and have some more ideas to discuss.
> 
> Do you have some performance measurements to share?
> 
> Mike used null devices to simulate very fast devices, which led to
> nice performance enhancements in the dm-multipath code.

I have some performance data for raid1 and raid0; this is still work
in progress.

- md raid1
  Current md raid1 read performance is not ideal. On a raid1 built from
2 NVMe SSDs, I observe only 2.6GB/s read throughput with multiple I/O
threads and high queue depth; most of the time is spent on I/O barrier
locking. I am now working on a lockless I/O submit patch (the original
idea is from Hannes Reinecke), which improves read throughput to
4.7~5GB/s. When md raid1 is used as a cache device, the read
performance improvement is critical.
  On my hardware, the ideal read throughput of the 2 NVMe SSDs is
6GB/s; the current number is 4.7~5GB/s, so there is still some room to
improve.
- md raid0
  People report on the linux-raid mailing list that DISCARD/TRIM
performance on raid0 is slow. In my reproduction, on a raid0 built from
4x3TB NVMe SSDs, formatting an XFS volume on top of it takes 306
seconds. Most of that time is spent inside the md raid0 code, which
issues DISCARD/TRIM requests in chunk-size pieces. I composed a POC
patch to re-combine a large DISCARD/TRIM command into one request per
member device, which reduces the formatting time to 15 seconds (see the
sketch below). Now I am simplifying the patch following suggestions
from the upstream maintainers.
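
To show the idea in a simplified form, here is a small userspace
sketch. It is an illustration only, not the actual raid0 patch, and
the layout is a simplifying assumption (4 members, 512-sector chunks,
a discard starting at sector 0 and covering whole stripes):

/*
 * Illustration only, not drivers/md/raid0.c: contrast one DISCARD
 * request per chunk with one combined request per member device.
 */
#include <stdint.h>
#include <stdio.h>

#define NDEV  4
#define CHUNK 512ULL                    /* sectors per chunk */

/* stand-in for sending one DISCARD bio to a member device */
static void issue_discard(int dev, uint64_t start, uint64_t len)
{
        (void)dev; (void)start; (void)len;
}

/* old behaviour: split the range into chunk-sized requests */
static unsigned long discard_per_chunk(uint64_t len)
{
        unsigned long nreq = 0;

        for (uint64_t s = 0; s < len; s += CHUNK, nreq++) {
                uint64_t chunk = s / CHUNK;

                /* raid0 mapping: chunk k -> device k % NDEV */
                issue_discard(chunk % NDEV, (chunk / NDEV) * CHUNK, CHUNK);
        }
        return nreq;
}

/*
 * Combined: when the range starts at sector 0 and covers whole
 * stripes, each member holds exactly len / NDEV contiguous sectors,
 * so one request per device is enough.
 */
static unsigned long discard_per_device(uint64_t len)
{
        for (int d = 0; d < NDEV; d++)
                issue_discard(d, 0, len / NDEV);
        return NDEV;
}

int main(void)
{
        uint64_t len = 2097152;         /* 1GiB in 512-byte sectors */

        printf("per-chunk:  %lu requests\n", discard_per_chunk(len));
        printf("per-device: %lu requests\n", discard_per_device(len));
        return 0;
}

With this 1GiB example the difference is 4096 requests versus 4; I
believe this per-chunk splitting is what dominates the 306 second vs
15 second formatting times on the real array.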

For raid1, currently most of the feedback concerns read performance.

Coly


* Re: [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
  2017-01-09 16:38 [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion Coly Li
  2017-01-12 15:09 ` [Lsf-pc] " Sagi Grimberg
@ 2017-01-16  3:33 ` Guoqing Jiang
  2017-01-16  5:40   ` [Lsf-pc] " James Bottomley
  1 sibling, 1 reply; 5+ messages in thread
From: Guoqing Jiang @ 2017-01-16  3:33 UTC (permalink / raw)
  To: Coly Li, lsf-pc
  Cc: open list:SOFTWARE RAID (Multiple Disks) SUPPORT, linux-block,
	linux-kernel, linux-nvme, Shaohua Li, NeilBrown, songliubraving,
	pawel.baldysiak, mariusz.dabrowski, artur.paszkiewicz,
	Jes.Sorensen, Hannes Reinecke



On 01/10/2017 12:38 AM, Coly Li wrote:
> Hi Folks,
>
> I'd like to propose a general md raid discussion; it would be quite
> valuable for the active md raid developers to sit together and discuss
> the current challenges of Linux software raid and its development
> trends.
>
> In the last few years there has been a lot of development activity in
> md raid, e.g. the raid5 cache, raid1 clustering, the partial parity
> log, fast-fail upstreaming, and some effort on raid1 & raid0
> performance improvements.
>
> I see some functionality overlap between r5cache (the raid5 cache) and
> PPL (the partial parity log), and currently it is unclear where these
> two development activities are heading.
> I also receive reports from users that better raid1 performance is
> desired when the array is built on NVMe SSDs and used as a cache
> device (e.g. under bcache or dm-cache). I am working on some raid1
> performance improvements (e.g. a new raid1 I/O barrier and lockless
> raid1 I/O submission), and have some more ideas to discuss.
>
> Therefore, if the md raid developers have a chance to sit together and
> discuss how to collaborate efficiently in the coming year, it will be
> much more productive than communicating on the mailing list.

I would like to attend the raid discussion. Besides the above topics, I
think we can talk about improving the mdadm test suite to make it more
robust (I can share the related test suite which is used for clustered
raid).

I could also share the status of clustered raid: what we have done and
what we can do in the future. Finally, I'd like to know/discuss the
roadmap of md RAID.

Thanks a lot!
Guoqing


* Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
  2017-01-16  3:33 ` Guoqing Jiang
@ 2017-01-16  5:40   ` James Bottomley
  0 siblings, 0 replies; 5+ messages in thread
From: James Bottomley @ 2017-01-16  5:40 UTC (permalink / raw)
  To: Guoqing Jiang, Coly Li, lsf-pc
  Cc: linux-block, songliubraving, pawel.baldysiak, linux-kernel,
	linux-nvme, NeilBrown,
	open list:SOFTWARE RAID (Multiple Disks) SUPPORT,
	artur.paszkiewicz, Hannes Reinecke, Jes.Sorensen,
	mariusz.dabrowski, Shaohua Li

On Mon, 2017-01-16 at 11:33 +0800, Guoqing Jiang wrote:
> 
> On 01/10/2017 12:38 AM, Coly Li wrote:
> > Hi Folks,
> > 
> > I'd like to propose a general md raid discussion; it would be quite
> > valuable for the active md raid developers to sit together and
> > discuss the current challenges of Linux software raid and its
> > development trends.
> > 
> > In the last few years there has been a lot of development activity
> > in md raid, e.g. the raid5 cache, raid1 clustering, the partial
> > parity log, fast-fail upstreaming, and some effort on raid1 & raid0
> > performance improvements.
> > 
> > I see some functionality overlap between r5cache (the raid5 cache)
> > and PPL (the partial parity log), and currently it is unclear where
> > these two development activities are heading.
> > I also receive reports from users that better raid1 performance is
> > desired when the array is built on NVMe SSDs and used as a cache
> > device (e.g. under bcache or dm-cache). I am working on some raid1
> > performance improvements (e.g. a new raid1 I/O barrier and lockless
> > raid1 I/O submission), and have some more ideas to discuss.
> > 
> > Therefore, if the md raid developers have a chance to sit together
> > and discuss how to collaborate efficiently in the coming year, it
> > will be much more productive than communicating on the mailing
> > list.
> 
> I would like to attend the raid discussion. Besides the above topics,
> I think we can talk about improving the mdadm test suite to make it
> more robust (I can share the related test suite which is used for
> clustered raid).

Just so you know ... and just in case others are watching.  You're not
going to be getting an invite to LSF/MM unless you send in an attend or
topic request as the CFP asks:

http://marc.info/?l=linux-fsdevel&m=148285919408577

The rationale is simple: it's too difficult to track all the "me too"
reply emails, and even if we could, it's not actually clear what the
intention of the sender is.  So taking the time to compose an official
email as the CFP requests allows the programme committee to distinguish
them.

James


