From: Guoqing Jiang <gqjiang@suse.com>
To: Coly Li <colyli@suse.de>, lsf-pc@lists.linux-foundation.org
Cc: "open list:SOFTWARE RAID (Multiple Disks) SUPPORT" 
	<linux-raid@vger.kernel.org>,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvme@lists.infradead.org, Shaohua Li <shli@kernel.org>,
	NeilBrown <neilb@suse.com>,
	songliubraving@fb.com, pawel.baldysiak@intel.com,
	mariusz.dabrowski@intel.com, artur.paszkiewicz@intel.com,
	Jes.Sorensen@redhat.com, Hannes Reinecke <hare@suse.de>
Subject: Re: [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion
Date: Mon, 16 Jan 2017 11:33:05 +0800
Message-ID: <587C3EF1.3020401@suse.com>
In-Reply-To: <79796ea4-2631-c762-b8a1-50bcdcbc602e@suse.de>



On 01/10/2017 12:38 AM, Coly Li wrote:
> Hi Folks,
>
> I'd like to propose a general md raid discussion; it is quite necessary
> for most of the active md raid developers to sit together and discuss the
> current challenges of Linux software raid and its development trends.
>
> Over the last few years we have had many development activities in md
> raid, e.g. raid5 cache, raid1 clustering, partial parity log, fast fail
> upstreaming, and some effort on raid1 & raid0 performance improvement.
>
> I see some functionality overlap between r5cache (raid5 cache) and PPL
> (partial parity log); currently I have no idea where these two
> development activities will go.
> Also, I have received reports from users that better raid1 performance
> is desired when the array is built on NVMe SSDs and used as a cache
> device (with bcache or dm-cache, for example). I am working on some
> raid1 performance improvements (e.g. a new raid1 I/O barrier and
> lockless raid1 I/O submission), and have some more ideas to discuss.
>
> Therefore, if the md raid developers have a chance to sit together and
> discuss how to collaborate efficiently in the next year, it will be much
> more productive than communicating on the mailing list.
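
For reference, the cache setup mentioned above (md raid1 built on NVMe
SSDs and used as a bcache cache device) can be assembled roughly as
follows. This is only a sketch: the device names are placeholders,
bcache-tools is assumed to be installed, and a dm-cache stack would be
set up differently.

  # Mirror two NVMe SSDs with md raid1 (device names are examples).
  mdadm --create /dev/md0 --run --level=1 --raid-devices=2 \
        /dev/nvme0n1 /dev/nvme1n1

  # Use the raid1 array as the bcache cache device and a slow disk as
  # the backing device; the cached volume then appears as /dev/bcache0.
  make-bcache -C /dev/md0 -B /dev/sdb
  echo /dev/md0 > /sys/fs/bcache/register
  echo /dev/sdb > /sys/fs/bcache/register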

I would like to attend the raid discussion. Besides the above topics, I
think we can talk about improving the test suite of mdadm to make it more
robust (I can share the related test suite which is used for clustered
raid); a sketch of the kind of case I have in mind is below.
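
For example, a robustness case built on loop devices could look roughly
like this. It is only a sketch: the file paths, sizes and array name are
placeholders, and a real test-suite case would add result checks and
error handling around each step.

  # Back a throw-away raid1 array with two small loop devices.
  dd if=/dev/zero of=/tmp/d0.img bs=1M count=64
  dd if=/dev/zero of=/tmp/d1.img bs=1M count=64
  dev0=$(losetup -f --show /tmp/d0.img)
  dev1=$(losetup -f --show /tmp/d1.img)

  # Create the array, fail and remove one member, add it back, then
  # wait for recovery to finish before tearing everything down.
  mdadm --create /dev/md/rbtest --run --level=1 --raid-devices=2 \
        "$dev0" "$dev1"
  mdadm /dev/md/rbtest --fail "$dev0" --remove "$dev0"
  mdadm /dev/md/rbtest --add "$dev0"
  mdadm --wait /dev/md/rbtest
  mdadm --stop /dev/md/rbtest
  losetup -d "$dev0" "$dev1"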

I could also share the status of clustered raid: what we have done and
what we can do in the future. Finally, I'd like to discuss the roadmap
of RAID.

Thanks a lot!
Guoqing

