From: Jan Kara <jack@suse.cz>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: lsf-pc@lists.linux-foundation.org,
	Linux RDMA Mailing List <linux-rdma@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] Remote access to pmem on storage targets
Date: Tue, 26 Jan 2016 09:25:33 +0100
Message-ID: <20160126082533.GR24938@quack.suse.cz>
In-Reply-To: <06414D5A-0632-4C74-B76C-038093E8AED3@oracle.com>

Hello,

On Mon 25-01-16 16:19:24, Chuck Lever wrote:
> I'd like to propose a discussion of how to take advantage of
> persistent memory in network-attached storage scenarios.
> 
> RDMA runs on high speed network fabrics and offloads data
> transfer from host CPUs. Thus it is a good match to the
> performance characteristics of persistent memory.
> 
> Today Linux supports iSER, SRP, and NFS/RDMA on RDMA
> fabrics. What kind of changes are needed in the Linux I/O
> stack (in particular, storage targets) and in these storage
> protocols to get the most benefit from ultra-low latency
> storage?
> 
> There have been recent proposals about how storage protocols
> and implementations might need to change (e.g., Tom Talpey's
> SNIA proposals for changing to a push data transfer model,
> Sagi's proposal to utilize DAX under the NFS/RDMA server,
> and my proposal for a new pNFS layout to drive RDMA data
> transfer directly).
> 
> The outcome of the discussion would be to understand what
> people are working on now and what is the desired
> architectural approach in order to determine where storage
> developers should be focused.
> 
> This could be either a BoF or a session during the main
> tracks. There is sure to be a narrow segment of each
> track's attendees that would have interest in this topic.
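
(As a rough illustration of the data path the quoted proposal has in mind, and
not something taken from any of the proposals named above: a storage target
could mmap() a file on a DAX filesystem and register that mapping with the
RDMA stack, so the NIC places incoming RDMA Writes directly into persistent
memory with no page-cache copy on the target CPU. The sketch below uses
libibverbs; the file path, buffer size, and access flags are illustrative
assumptions.)

    /*
     * Minimal sketch (illustrative, not from this thread): expose a
     * DAX-mapped persistent-memory file as an RDMA target buffer so a
     * remote peer can RDMA Write straight into pmem.  The path
     * "/mnt/pmem/buf" and the 2 MiB size are assumptions.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <infiniband/verbs.h>

    #define BUF_SIZE (2UL << 20)    /* 2 MiB, arbitrary for the example */

    int main(void)
    {
        int fd = open("/mnt/pmem/buf", O_RDWR);   /* file on a DAX mount */
        if (fd < 0) { perror("open"); return 1; }

        void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }

        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

        /*
         * Register the DAX mapping for remote access.  The address/rkey
         * pair returned here is what the storage protocol would hand to
         * the peer so it can target this buffer with RDMA Writes.
         */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE |
                                       IBV_ACCESS_REMOTE_READ);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("pmem buffer registered: addr=%p rkey=0x%x\n", buf, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        munmap(buf, BUF_SIZE);
        close(fd);
        return 0;
    }

The open protocol question, and where the push-model, server-side DAX, and
pNFS-layout proposals above differ, is how that registered address and rkey
get advertised to the peer and driven by the storage protocol.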

So hashing out the details of a pNFS layout isn't interesting to many people.
But if you want a broader architectural discussion about what the issues with
using persistent memory for NAS actually are, and how to overcome them, then
that may be interesting. So which of the two do you actually want?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

Thread overview: 16+ messages
2016-01-25 21:19 [LSF/MM TOPIC] Remote access to pmem on storage targets Chuck Lever
2016-01-26  8:25 ` Jan Kara [this message]
2016-01-26 15:58   ` [Lsf-pc] " Chuck Lever
2016-01-27  0:04     ` Dave Chinner
2016-01-27 15:55       ` Chuck Lever
2016-01-28 21:10         ` Dave Chinner
2016-01-27 10:52     ` Sagi Grimberg
2016-01-26 15:25 ` Atchley, Scott
2016-01-26 15:29   ` Chuck Lever
2016-01-26 17:00     ` Christoph Hellwig
2016-01-27 16:54 ` [LSF/MM TOPIC/ATTEND] RDMA passive target Boaz Harrosh
2016-01-27 17:02   ` [Lsf-pc] " James Bottomley
2016-01-27 17:27   ` Sagi Grimberg
2016-01-31 14:20     ` Boaz Harrosh
2016-01-31 16:55       ` Yigal Korman
2016-02-01 10:36         ` Sagi Grimberg
