linux-kernel.vger.kernel.org archive mirror
From: "Matias Bjørling" <mb@lightnvm.io>
To: "Javier González" <jg@lightnvm.io>
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	"Javier González" <javier@cnexlabs.com>
Subject: Re: [RFC 1/4] lightnvm: precalculate controller write boundaries
Date: Fri, 5 Feb 2016 15:53:31 +0100	[thread overview]
Message-ID: <56B4B76B.6010007@lightnvm.io> (raw)
In-Reply-To: <1454591299-30305-2-git-send-email-javier@javigon.com>

On 02/04/2016 02:08 PM, Javier González wrote:
> Flash controllers typically define flash pages as a collection of flash
> sectors, typically 4K each. Moreover, flash controllers might program
> flash pages across several planes. This defines the write granularity at
> which flash can be programmed. It is different for each flash controller.
> 
> In order to simplify calculations, and avoid repeating them on a per-I/O
> basis, this patch pre-calculates the write granularity values as part of
> the device characteristics during bring-up.
> 
> Signed-off-by: Javier González <javier@cnexlabs.com>
> ---
>  drivers/lightnvm/rrpc.c | 4 ++++
>  drivers/lightnvm/rrpc.h | 3 +++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
> index 775bf6c2..8187bf3 100644
> --- a/drivers/lightnvm/rrpc.c
> +++ b/drivers/lightnvm/rrpc.c
> @@ -1149,6 +1149,10 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
>  	if (!rrpc->luns)
>  		return -ENOMEM;
>  
> +	rrpc->min_write_pgs = dev->sec_per_pl * (dev->sec_size / PAGE_SIZE);
> +	/* assume max_phys_sect % dev->min_write_pgs == 0 */
> +	rrpc->max_write_pgs = dev->ops->max_phys_sect;
> +
>  	/* 1:1 mapping */
>  	for (i = 0; i < rrpc->nr_luns; i++) {
>  		struct nvm_lun *lun = dev->mt->get_lun(dev, lun_begin + i);
> diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
> index 3989d65..868e91a 100644
> --- a/drivers/lightnvm/rrpc.h
> +++ b/drivers/lightnvm/rrpc.h
> @@ -107,6 +107,9 @@ struct rrpc {
>  	unsigned long long nr_sects;
>  	unsigned long total_blocks;
>  
> +	int min_write_pgs; /* minimum amount of pages required by controller */
> +	int max_write_pgs; /* maximum amount of pages supported by controller */
> +
>  	/* Write strategy variables. Move these into each for structure for each
>  	 * strategy
>  	 */
> 

This belongs to the write buffer patch.


Thread overview: 9+ messages
2016-02-04 13:08 [RFC 0/4] lightnvm: add write buffering to rrpc Javier González
2016-02-04 13:08 ` [RFC 1/4] lightnvm: precalculate controller write boundaries Javier González
2016-02-05 14:53   ` Matias Bjørling [this message]
2016-02-04 13:08 ` [RFC 2/4] lightnvm: add write buffering for rrpc Javier González
2016-02-05 14:52   ` Matias Bjørling
2016-02-08  7:31     ` Javier González
2016-02-04 13:08 ` [RFC 3/4] lightnvm: read from rrpc write buffer if possible Javier González
2016-02-05 14:54   ` Matias Bjørling
2016-02-04 13:08 ` [RFC 4/4] lightnvm: add debug info for rrpc target Javier González
