linux-fsdevel.vger.kernel.org archive mirror
From: "Matias Bjørling" <m@bjorling.me>
To: Slava Dubeyko <Vyacheslav.Dubeyko@wdc.com>,
	Damien Le Moal <Damien.LeMoal@wdc.com>,
	Viacheslav Dubeyko <slava@dubeyko.com>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>,
	Theodore Ts'o <tytso@mit.edu>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [LSF/MM TOPIC][LSF/MM ATTEND] OCSSDs - SMR, Hierarchical Interface, and Vector I/Os
Date: Fri, 6 Jan 2017 14:05:34 +0100
Message-ID: <6ef654fa-292e-ed9d-b8b6-fe4282fb1ef1@bjorling.me>
In-Reply-To: <SN2PR04MB2191BE43398C84C4D262960488600@SN2PR04MB2191.namprd04.prod.outlook.com>

On 01/05/2017 11:58 PM, Slava Dubeyko wrote:
> The next point is read disturbance. If the BER of a physical page/block reaches some threshold,
> then we need to move data from that page/block into another one. Which subsystem will be
> responsible for this activity? The drive-managed case expects that the device's GC will handle
> read disturbance. But what about the host-aware or host-managed case? If the host side
> has no information about BER, then the host's software is unable to manage this issue. In the end,
> it sounds like we will end up with a GC subsystem both on the file system side and on the device side.
> As a result, we get potentially unpredictable performance degradation and a shorter device lifetime.
> Suppose the host-aware case can stay unaware of read disturbance management.
> But how can the host-managed case manage this issue?

The OCSSD interface uses a couple of methods:

1) Piggyback soft ECC errors onto the completion entry. This tells the host
that a block probably should be refreshed when appropriate.
2) Use an asynchronous interface, e.g., NVMe get log page. Blocks that have
been read-disturbed are reported through this interface. This may be coupled
with the various processes running on the SSD.
3) (As Ted suggested.) Expose a "reset" bit in the Report Zones
command to let the host know which blocks should be reset. If the
plumbing for 2) is not available, or the information has been lost on
the host side, this method can be used to "resync".
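
A rough host-side sketch of how all three of these hints could feed the same
"refresh this block" decision (the structure and function names below are
made up for illustration; they are not the actual lightnvm/NVMe definitions):

#include <stdint.h>

/* Hypothetical hint sources, one per mechanism above. */
#define HINT_SOFT_ECC    (1u << 0)  /* 1) soft ECC flag on the completion entry */
#define HINT_LOG_PAGE    (1u << 1)  /* 2) asynchronous get-log-page style report */
#define HINT_NEEDS_RESET (1u << 2)  /* 3) "reset" bit from a Report Zones style query */

struct refresh_hint {
	uint64_t blk_addr;   /* physical block the hint refers to */
	uint32_t source;     /* which mechanism raised the hint */
};

/*
 * Whatever the source, the host policy is the same: rewrite the data
 * elsewhere before the soft errors become uncorrectable, then erase/reset
 * the old block.
 */
static void handle_refresh_hint(const struct refresh_hint *hint,
				void (*refresh_block)(uint64_t blk_addr))
{
	if (hint->source & (HINT_SOFT_ECC | HINT_LOG_PAGE | HINT_NEEDS_RESET))
		refresh_block(hint->blk_addr);
}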

> 
> Bad block management... So, the drive-managed and host-aware cases should be completely unaware
> of bad blocks. But what about the host-managed case? If a device hides bad blocks from
> the host, that implies a mapping table, access only to logical pages/blocks, and so on. If the host
> has no access to bad block management, then it is not a host-managed model, and that sounds like
> a completely unmanageable situation for the host-managed model. If the host does have access
> to bad block management (but how?), then we have a really simple model. Otherwise, the host
> only has access to logical pages/blocks and the device must have an internal GC. As a result,
> we again get potentially unpredictable performance degradation and a shorter device lifetime,
> because the GC on the device side and the GC on the host side compete.

Agree. Depending on the use-case, one may expose a "perfect" interface
to the host, or one may expose an interface where media errors are
reported to the host. The former is great for consumer units, where I/O
predictability isn't critical. Conversely, where I/O predictability is
critical, the media errors can be reported and the host can deal with
them appropriately.
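
For the latter case, a minimal sketch of what "deal with them appropriately"
could look like on the host side when a read completion carries a media error
(the names are hypothetical, not an existing kernel API):

#include <stdint.h>

enum media_status { MEDIA_OK, MEDIA_ERROR };

struct read_completion {
	enum media_status status;
	uint64_t blk_addr;   /* block the failed read targeted */
};

/*
 * The host decides when to pay the repair cost: rebuild the data from its
 * own redundancy (RAID stripe, file system copy, ...) and retire the failing
 * block, instead of the device doing this silently in the background.
 */
static int host_handle_read(const struct read_completion *c,
			    int (*rebuild_from_redundancy)(uint64_t blk_addr),
			    void (*retire_block)(uint64_t blk_addr))
{
	if (c->status == MEDIA_OK)
		return 0;

	retire_block(c->blk_addr);
	return rebuild_from_redundancy(c->blk_addr);
}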


Thread overview: 27+ messages
2017-01-02 21:06 [LSF/MM TOPIC][LSF/MM ATTEND] OCSSDs - SMR, Hierarchical Interface, and Vector I/Os Matias Bjørling
2017-01-02 23:12 ` Viacheslav Dubeyko
2017-01-03  8:56   ` Matias Bjørling
2017-01-03 17:35     ` Viacheslav Dubeyko
2017-01-03 19:10       ` Matias Bjørling
2017-01-04  2:59         ` Slava Dubeyko
2017-01-04  7:24           ` Damien Le Moal
2017-01-04 12:39             ` Matias Bjørling
2017-01-04 16:57             ` Theodore Ts'o
2017-01-10  1:42               ` Damien Le Moal
2017-01-10  4:24                 ` Theodore Ts'o
2017-01-10 13:06                   ` Matias Bjorling
2017-01-11  4:07                     ` Damien Le Moal
2017-01-11  6:06                       ` Matias Bjorling
2017-01-11  7:49                       ` Hannes Reinecke
2017-01-05 22:58             ` Slava Dubeyko
2017-01-06  1:11               ` Theodore Ts'o
2017-01-06 12:51                 ` Matias Bjørling
2017-01-09  6:49                 ` Slava Dubeyko
2017-01-09 14:55                   ` Theodore Ts'o
2017-01-06 13:05               ` Matias Bjørling [this message]
2017-01-06  1:09             ` Jaegeuk Kim
2017-01-06 12:55               ` Matias Bjørling
2017-01-12  1:33 ` [LSF/MM " Damien Le Moal
2017-01-12  2:18   ` [Lsf-pc] " James Bottomley
2017-01-12  2:35     ` Damien Le Moal
2017-01-12  2:38       ` James Bottomley
