* [PATCH 1/4] lightnvm: add sync and close block I/O types
@ 2016-05-04 15:31 Javier González
2016-05-04 15:31 ` [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq Javier González
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Javier González @ 2016-05-04 15:31 UTC (permalink / raw)
To: mb; +Cc: linux-kernel, linux-block, Javier González
Within a target, I/O requests stem from different paths, which may vary
in the data structures they allocate, the context they run in, and so
on. This can affect how the request is treated and how memory is freed
once the bio completes.
Add two different types of I/Os: (i) NVM_IOTYPE_SYNC, which indicates
that the I/O is synchronous; and (ii) NVM_IOTYPE_CLOSE_BLK, which
indicates that the I/O closes the block to which all the ppas on the
request belong.
Signed-off-by: Javier González <javier@cnexlabs.com>
---
include/linux/lightnvm.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 29a6890..6c02209 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -11,6 +11,8 @@ enum {
NVM_IOTYPE_NONE = 0,
NVM_IOTYPE_GC = 1,
+ NVM_IOTYPE_SYNC = 2,
+ NVM_IOTYPE_CLOSE_BLK = 4,
};
#define NVM_BLK_BITS (16)
--
2.5.0
^ permalink raw reply related [flat|nested] 11+ messages in thread* [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq 2016-05-04 15:31 [PATCH 1/4] lightnvm: add sync and close block I/O types Javier González @ 2016-05-04 15:31 ` Javier González 2016-05-05 9:34 ` Matias Bjørling 2016-05-04 15:31 ` [PATCH 3/4] lightnvm: eliminate redundant variable Javier González ` (2 subsequent siblings) 3 siblings, 1 reply; 11+ messages in thread From: Javier González @ 2016-05-04 15:31 UTC (permalink / raw) To: mb; +Cc: linux-kernel, linux-block, Javier González The number of ppas contained on a request is not necessarily the number of pages that it maps to neither on the target nor on the device side. In order to avoid confusion, rename nr_pages to nr_ppas since it is what the variable actually contains. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/core.c | 16 ++++++++-------- drivers/lightnvm/gennvm.c | 2 +- drivers/lightnvm/rrpc.c | 6 +++--- drivers/lightnvm/rrpc.h | 2 +- drivers/lightnvm/sysblk.c | 2 +- drivers/nvme/host/lightnvm.c | 4 ++-- include/linux/lightnvm.h | 2 +- 7 files changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index 32375b6..4cd9803 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -254,8 +254,8 @@ void nvm_addr_to_generic_mode(struct nvm_dev *dev, struct nvm_rq *rqd) { int i; - if (rqd->nr_pages > 1) { - for (i = 0; i < rqd->nr_pages; i++) + if (rqd->nr_ppas > 1) { + for (i = 0; i < rqd->nr_ppas; i++) rqd->ppa_list[i] = dev_to_generic_addr(dev, rqd->ppa_list[i]); } else { @@ -268,8 +268,8 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd) { int i; - if (rqd->nr_pages > 1) { - for (i = 0; i < rqd->nr_pages; i++) + if (rqd->nr_ppas > 1) { + for (i = 0; i < rqd->nr_ppas; i++) rqd->ppa_list[i] = generic_to_dev_addr(dev, rqd->ppa_list[i]); } else { @@ -284,13 +284,13 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct 
nvm_rq *rqd, int i, plane_cnt, pl_idx; if ((!vblk || dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) { - rqd->nr_pages = nr_ppas; + rqd->nr_ppas = nr_ppas; rqd->ppa_addr = ppas[0]; return 0; } - rqd->nr_pages = nr_ppas; + rqd->nr_ppas = nr_ppas; rqd->ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &rqd->dma_ppa_list); if (!rqd->ppa_list) { pr_err("nvm: failed to allocate dma memory\n"); @@ -302,7 +302,7 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd, rqd->ppa_list[i] = ppas[i]; } else { plane_cnt = dev->plane_mode; - rqd->nr_pages *= plane_cnt; + rqd->nr_ppas *= plane_cnt; for (i = 0; i < nr_ppas; i++) { for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) { @@ -423,7 +423,7 @@ int nvm_submit_ppa_list(struct nvm_dev *dev, struct ppa_addr *ppa_list, memset(&rqd, 0, sizeof(struct nvm_rq)); - rqd.nr_pages = nr_ppas; + rqd.nr_ppas = nr_ppas; if (nr_ppas > 1) rqd.ppa_list = ppa_list; else diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c index 211d7f7..dc726d9 100644 --- a/drivers/lightnvm/gennvm.c +++ b/drivers/lightnvm/gennvm.c @@ -446,7 +446,7 @@ static void gennvm_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd) nvm_addr_to_generic_mode(dev, rqd); /* look up blocks and mark them as bad */ - if (rqd->nr_pages == 1) { + if (rqd->nr_ppas == 1) { gennvm_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD); return; } diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c index 48862ead..72aca96 100644 --- a/drivers/lightnvm/rrpc.c +++ b/drivers/lightnvm/rrpc.c @@ -695,7 +695,7 @@ static void rrpc_end_io(struct nvm_rq *rqd) { struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance); struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd); - uint8_t npages = rqd->nr_pages; + uint8_t npages = rqd->nr_ppas; sector_t laddr = rrpc_get_laddr(rqd->bio) - npages; if (bio_data_dir(rqd->bio) == WRITE) @@ -883,7 +883,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio, bio_get(bio); rqd->bio = bio; rqd->ins = &rrpc->instance; - 
rqd->nr_pages = nr_pages; + rqd->nr_ppas = nr_pages; rrq->flags = flags; err = nvm_submit_io(rrpc->dev, rqd); @@ -892,7 +892,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio, bio_put(bio); if (!(flags & NVM_IOTYPE_GC)) { rrpc_unlock_rq(rrpc, rqd); - if (rqd->nr_pages > 1) + if (rqd->nr_ppas > 1) nvm_dev_dma_free(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list); } diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h index 2653484..87e84b5 100644 --- a/drivers/lightnvm/rrpc.h +++ b/drivers/lightnvm/rrpc.h @@ -251,7 +251,7 @@ static inline void rrpc_unlock_laddr(struct rrpc *rrpc, static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd) { struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd); - uint8_t pages = rqd->nr_pages; + uint8_t pages = rqd->nr_ppas; BUG_ON((r->l_start + pages) > rrpc->nr_sects); diff --git a/drivers/lightnvm/sysblk.c b/drivers/lightnvm/sysblk.c index b98ca19..994697a 100644 --- a/drivers/lightnvm/sysblk.c +++ b/drivers/lightnvm/sysblk.c @@ -280,7 +280,7 @@ static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type) nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas, 1); nvm_generic_to_addr_mode(dev, &rqd); - ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_pages, type); + ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type); nvm_free_rqd_ppalist(dev, &rqd); if (ret) { pr_err("nvm: sysblk failed bb mark\n"); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 65de1e5..a0af055 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -471,7 +471,7 @@ static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd, c->ph_rw.spba = cpu_to_le64(rqd->ppa_addr.ppa); c->ph_rw.metadata = cpu_to_le64(rqd->dma_meta_list); c->ph_rw.control = cpu_to_le16(rqd->flags); - c->ph_rw.length = cpu_to_le16(rqd->nr_pages - 1); + c->ph_rw.length = cpu_to_le16(rqd->nr_ppas - 1); if (rqd->opcode == NVM_OP_HBWRITE || rqd->opcode == 
NVM_OP_HBREAD) c->hb_rw.slba = cpu_to_le64(nvme_block_nr(ns, @@ -542,7 +542,7 @@ static int nvme_nvm_erase_block(struct nvm_dev *dev, struct nvm_rq *rqd) c.erase.opcode = NVM_OP_ERASE; c.erase.nsid = cpu_to_le32(ns->ns_id); c.erase.spba = cpu_to_le64(rqd->ppa_addr.ppa); - c.erase.length = cpu_to_le16(rqd->nr_pages - 1); + c.erase.length = cpu_to_le16(rqd->nr_ppas - 1); return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0); } diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 6c02209..272a98b 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -246,7 +246,7 @@ struct nvm_rq { nvm_end_io_fn *end_io; uint8_t opcode; - uint16_t nr_pages; + uint16_t nr_ppas; uint16_t flags; u64 ppa_status; /* ppa media status */ -- 2.5.0 ^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq 2016-05-04 15:31 ` [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq Javier González @ 2016-05-05 9:34 ` Matias Bjørling 0 siblings, 0 replies; 11+ messages in thread From: Matias Bjørling @ 2016-05-05 9:34 UTC (permalink / raw) To: Javier González; +Cc: linux-kernel, linux-block, Javier González On 05/04/2016 05:31 PM, Javier González wrote: > The number of ppas contained on a request is not necessarily the number > of pages that it maps to neither on the target nor on the device side. > In order to avoid confusion, rename nr_pages to nr_ppas since it is what > the variable actually contains. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/core.c | 16 ++++++++-------- > drivers/lightnvm/gennvm.c | 2 +- > drivers/lightnvm/rrpc.c | 6 +++--- > drivers/lightnvm/rrpc.h | 2 +- > drivers/lightnvm/sysblk.c | 2 +- > drivers/nvme/host/lightnvm.c | 4 ++-- > include/linux/lightnvm.h | 2 +- > 7 files changed, 17 insertions(+), 17 deletions(-) > > diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c > index 32375b6..4cd9803 100644 > --- a/drivers/lightnvm/core.c > +++ b/drivers/lightnvm/core.c > @@ -254,8 +254,8 @@ void nvm_addr_to_generic_mode(struct nvm_dev *dev, struct nvm_rq *rqd) > { > int i; > > - if (rqd->nr_pages > 1) { > - for (i = 0; i < rqd->nr_pages; i++) > + if (rqd->nr_ppas > 1) { > + for (i = 0; i < rqd->nr_ppas; i++) > rqd->ppa_list[i] = dev_to_generic_addr(dev, > rqd->ppa_list[i]); > } else { > @@ -268,8 +268,8 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd) > { > int i; > > - if (rqd->nr_pages > 1) { > - for (i = 0; i < rqd->nr_pages; i++) > + if (rqd->nr_ppas > 1) { > + for (i = 0; i < rqd->nr_ppas; i++) > rqd->ppa_list[i] = generic_to_dev_addr(dev, > rqd->ppa_list[i]); > } else { > @@ -284,13 +284,13 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd, > int i, plane_cnt, pl_idx; > > if ((!vblk || 
dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) { > - rqd->nr_pages = nr_ppas; > + rqd->nr_ppas = nr_ppas; > rqd->ppa_addr = ppas[0]; > > return 0; > } > > - rqd->nr_pages = nr_ppas; > + rqd->nr_ppas = nr_ppas; > rqd->ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &rqd->dma_ppa_list); > if (!rqd->ppa_list) { > pr_err("nvm: failed to allocate dma memory\n"); > @@ -302,7 +302,7 @@ int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd, > rqd->ppa_list[i] = ppas[i]; > } else { > plane_cnt = dev->plane_mode; > - rqd->nr_pages *= plane_cnt; > + rqd->nr_ppas *= plane_cnt; > > for (i = 0; i < nr_ppas; i++) { > for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) { > @@ -423,7 +423,7 @@ int nvm_submit_ppa_list(struct nvm_dev *dev, struct ppa_addr *ppa_list, > > memset(&rqd, 0, sizeof(struct nvm_rq)); > > - rqd.nr_pages = nr_ppas; > + rqd.nr_ppas = nr_ppas; > if (nr_ppas > 1) > rqd.ppa_list = ppa_list; > else > diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c > index 211d7f7..dc726d9 100644 > --- a/drivers/lightnvm/gennvm.c > +++ b/drivers/lightnvm/gennvm.c > @@ -446,7 +446,7 @@ static void gennvm_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd) > nvm_addr_to_generic_mode(dev, rqd); > > /* look up blocks and mark them as bad */ > - if (rqd->nr_pages == 1) { > + if (rqd->nr_ppas == 1) { > gennvm_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD); > return; > } > diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c > index 48862ead..72aca96 100644 > --- a/drivers/lightnvm/rrpc.c > +++ b/drivers/lightnvm/rrpc.c > @@ -695,7 +695,7 @@ static void rrpc_end_io(struct nvm_rq *rqd) > { > struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance); > struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd); > - uint8_t npages = rqd->nr_pages; > + uint8_t npages = rqd->nr_ppas; > sector_t laddr = rrpc_get_laddr(rqd->bio) - npages; > > if (bio_data_dir(rqd->bio) == WRITE) > @@ -883,7 +883,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio, > 
bio_get(bio); > rqd->bio = bio; > rqd->ins = &rrpc->instance; > - rqd->nr_pages = nr_pages; > + rqd->nr_ppas = nr_pages; > rrq->flags = flags; > > err = nvm_submit_io(rrpc->dev, rqd); > @@ -892,7 +892,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio, > bio_put(bio); > if (!(flags & NVM_IOTYPE_GC)) { > rrpc_unlock_rq(rrpc, rqd); > - if (rqd->nr_pages > 1) > + if (rqd->nr_ppas > 1) > nvm_dev_dma_free(rrpc->dev, > rqd->ppa_list, rqd->dma_ppa_list); > } > diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h > index 2653484..87e84b5 100644 > --- a/drivers/lightnvm/rrpc.h > +++ b/drivers/lightnvm/rrpc.h > @@ -251,7 +251,7 @@ static inline void rrpc_unlock_laddr(struct rrpc *rrpc, > static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd) > { > struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd); > - uint8_t pages = rqd->nr_pages; > + uint8_t pages = rqd->nr_ppas; > > BUG_ON((r->l_start + pages) > rrpc->nr_sects); > > diff --git a/drivers/lightnvm/sysblk.c b/drivers/lightnvm/sysblk.c > index b98ca19..994697a 100644 > --- a/drivers/lightnvm/sysblk.c > +++ b/drivers/lightnvm/sysblk.c > @@ -280,7 +280,7 @@ static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type) > nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas, 1); > nvm_generic_to_addr_mode(dev, &rqd); > > - ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_pages, type); > + ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type); > nvm_free_rqd_ppalist(dev, &rqd); > if (ret) { > pr_err("nvm: sysblk failed bb mark\n"); > diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c > index 65de1e5..a0af055 100644 > --- a/drivers/nvme/host/lightnvm.c > +++ b/drivers/nvme/host/lightnvm.c > @@ -471,7 +471,7 @@ static inline void nvme_nvm_rqtocmd(struct request *rq, struct nvm_rq *rqd, > c->ph_rw.spba = cpu_to_le64(rqd->ppa_addr.ppa); > c->ph_rw.metadata = cpu_to_le64(rqd->dma_meta_list); > c->ph_rw.control = 
cpu_to_le16(rqd->flags); > - c->ph_rw.length = cpu_to_le16(rqd->nr_pages - 1); > + c->ph_rw.length = cpu_to_le16(rqd->nr_ppas - 1); > > if (rqd->opcode == NVM_OP_HBWRITE || rqd->opcode == NVM_OP_HBREAD) > c->hb_rw.slba = cpu_to_le64(nvme_block_nr(ns, > @@ -542,7 +542,7 @@ static int nvme_nvm_erase_block(struct nvm_dev *dev, struct nvm_rq *rqd) > c.erase.opcode = NVM_OP_ERASE; > c.erase.nsid = cpu_to_le32(ns->ns_id); > c.erase.spba = cpu_to_le64(rqd->ppa_addr.ppa); > - c.erase.length = cpu_to_le16(rqd->nr_pages - 1); > + c.erase.length = cpu_to_le16(rqd->nr_ppas - 1); > > return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0); > } > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 6c02209..272a98b 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -246,7 +246,7 @@ struct nvm_rq { > nvm_end_io_fn *end_io; > > uint8_t opcode; > - uint16_t nr_pages; > + uint16_t nr_ppas; > uint16_t flags; > > u64 ppa_status; /* ppa media status */ > Thanks, applied for 4.7. ^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 3/4] lightnvm: eliminate redundant variable 2016-05-04 15:31 [PATCH 1/4] lightnvm: add sync and close block I/O types Javier González 2016-05-04 15:31 ` [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq Javier González @ 2016-05-04 15:31 ` Javier González 2016-05-05 9:52 ` Matias Bjørling 2016-05-04 15:31 ` [PATCH 4/4] lightnvm: Precalculate max/min sectors per req Javier González 2016-05-05 9:21 ` [PATCH 1/4] lightnvm: add sync and close block I/O types Matias Bjørling 3 siblings, 1 reply; 11+ messages in thread From: Javier González @ 2016-05-04 15:31 UTC (permalink / raw) To: mb; +Cc: linux-kernel, linux-block, Javier González Eliminate redundant variable that has been superseded by the new variables emerging from the Open-Channel SSD spec. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/rrpc.c | 2 +- include/linux/lightnvm.h | 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c index 72aca96..2103e97 100644 --- a/drivers/lightnvm/rrpc.c +++ b/drivers/lightnvm/rrpc.c @@ -1264,7 +1264,7 @@ static sector_t rrpc_capacity(void *private) sector_t reserved, provisioned; /* cur, gc, and two emergency blocks for each lun */ - reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4; + reserved = rrpc->nr_luns * dev->sec_per_blk * 4; provisioned = rrpc->nr_sects - reserved; if (reserved > rrpc->nr_sects) { diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 272a98b..67e72f5 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -365,7 +365,6 @@ struct nvm_dev { unsigned long total_blocks; unsigned long total_secs; int nr_luns; - unsigned max_pages_per_blk; unsigned long *lun_map; void *dma_pool; -- 2.5.0 ^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH 3/4] lightnvm: eliminate redundant variable 2016-05-04 15:31 ` [PATCH 3/4] lightnvm: eliminate redundant variable Javier González @ 2016-05-05 9:52 ` Matias Bjørling 0 siblings, 0 replies; 11+ messages in thread From: Matias Bjørling @ 2016-05-05 9:52 UTC (permalink / raw) To: Javier González; +Cc: linux-kernel, linux-block, Javier González On 05/04/2016 05:31 PM, Javier González wrote: > Eliminate redundant variable that has been superseded by the new > variables emerging from the Open-Channel SSD spec. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/rrpc.c | 2 +- > include/linux/lightnvm.h | 1 - > 2 files changed, 1 insertion(+), 2 deletions(-) > > diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c > index 72aca96..2103e97 100644 > --- a/drivers/lightnvm/rrpc.c > +++ b/drivers/lightnvm/rrpc.c > @@ -1264,7 +1264,7 @@ static sector_t rrpc_capacity(void *private) > sector_t reserved, provisioned; > > /* cur, gc, and two emergency blocks for each lun */ > - reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4; > + reserved = rrpc->nr_luns * dev->sec_per_blk * 4; > provisioned = rrpc->nr_sects - reserved; > > if (reserved > rrpc->nr_sects) { > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 272a98b..67e72f5 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -365,7 +365,6 @@ struct nvm_dev { > unsigned long total_blocks; > unsigned long total_secs; > int nr_luns; > - unsigned max_pages_per_blk; > > unsigned long *lun_map; > void *dma_pool; > Thanks Javier. Applied for 4.7. I also updated the patch description. ^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 4/4] lightnvm: Precalculate max/min sectors per req. 2016-05-04 15:31 [PATCH 1/4] lightnvm: add sync and close block I/O types Javier González 2016-05-04 15:31 ` [PATCH 2/4] lightnvm: rename nr_pages to nr_ppas on nvm_rq Javier González 2016-05-04 15:31 ` [PATCH 3/4] lightnvm: eliminate redundant variable Javier González @ 2016-05-04 15:31 ` Javier González 2016-05-05 9:54 ` Matias Bjørling 2016-05-05 9:21 ` [PATCH 1/4] lightnvm: add sync and close block I/O types Matias Bjørling 3 siblings, 1 reply; 11+ messages in thread From: Javier González @ 2016-05-04 15:31 UTC (permalink / raw) To: mb; +Cc: linux-kernel, linux-block, Javier González Add two precalculated values to nvm_rq: (i) maximum number of sectors per general request; and (ii) minimum number of sectors per write request. Signed-off-by: Javier González <javier@cnexlabs.com> --- drivers/lightnvm/core.c | 3 +++ include/linux/lightnvm.h | 4 +++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index 4cd9803..85682d91 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -573,6 +573,9 @@ static int nvm_core_init(struct nvm_dev *dev) dev->plane_mode = NVM_PLANE_SINGLE; dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size; + dev->max_sec_rq = dev->ops->max_phys_sect; + /* assume max_phys_sect % dev->min_write_pgs == 0 */ + dev->min_sec_w_rq = dev->sec_per_pl * (dev->sec_size / PAGE_SIZE); if (grp->mpos & 0x020202) dev->plane_mode = NVM_PLANE_DOUBLE; diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 67e72f5..c2dfd0c 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -351,7 +351,9 @@ struct nvm_dev { /* Calculated/Cached values. These do not reflect the actual usable * blocks at run-time. */ - int max_rq_size; + int max_rq_size; /* maximum size of a single request */ + int max_sec_rq; /* maximum amount of sectors that fit in one req. 
*/ + int min_sec_w_rq; /* minimum amount of sectors required on write req. */ int plane_mode; /* drive device in single, double or quad mode */ int sec_per_pl; /* all sectors across planes */ -- 2.5.0 ^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH 4/4] lightnvm: Precalculate max/min sectors per req. 2016-05-04 15:31 ` [PATCH 4/4] lightnvm: Precalculate max/min sectors per req Javier González @ 2016-05-05 9:54 ` Matias Bjørling 2016-05-05 10:02 ` Javier González 0 siblings, 1 reply; 11+ messages in thread From: Matias Bjørling @ 2016-05-05 9:54 UTC (permalink / raw) To: Javier González; +Cc: linux-kernel, linux-block, Javier González On 05/04/2016 05:31 PM, Javier González wrote: > Add two precalculated values to nvm_rq: (i) maximum number of sectors > per general request; and (ii) minimum number of sectors per write > request. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > drivers/lightnvm/core.c | 3 +++ > include/linux/lightnvm.h | 4 +++- > 2 files changed, 6 insertions(+), 1 deletion(-) > > diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c > index 4cd9803..85682d91 100644 > --- a/drivers/lightnvm/core.c > +++ b/drivers/lightnvm/core.c > @@ -573,6 +573,9 @@ static int nvm_core_init(struct nvm_dev *dev) > > dev->plane_mode = NVM_PLANE_SINGLE; > dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size; > + dev->max_sec_rq = dev->ops->max_phys_sect; > + /* assume max_phys_sect % dev->min_write_pgs == 0 */ > + dev->min_sec_w_rq = dev->sec_per_pl * (dev->sec_size / PAGE_SIZE); > > if (grp->mpos & 0x020202) > dev->plane_mode = NVM_PLANE_DOUBLE; > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 67e72f5..c2dfd0c 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -351,7 +351,9 @@ struct nvm_dev { > /* Calculated/Cached values. These do not reflect the actual usable > * blocks at run-time. > */ > - int max_rq_size; > + int max_rq_size; /* maximum size of a single request */ > + int max_sec_rq; /* maximum amount of sectors that fit in one req. */ > + int min_sec_w_rq; /* minimum amount of sectors required on write req. 
*/ > int plane_mode; /* drive device in single, double or quad mode */ > > int sec_per_pl; /* all sectors across planes */ > I think this is best kept within pblk for now. The min_sec_w_rq is not enough to describe the pages to be written. It must also follow a specific order, which is not communicated only with minimum page writes. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 4/4] lightnvm: Precalculate max/min sectors per req. 2016-05-05 9:54 ` Matias Bjørling @ 2016-05-05 10:02 ` Javier González 0 siblings, 0 replies; 11+ messages in thread From: Javier González @ 2016-05-05 10:02 UTC (permalink / raw) To: Matias Bjørling Cc: Javier González, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org > On 05 May 2016, at 11:54, Matias Bjørling <mb@lightnvm.io> wrote: > >> On 05/04/2016 05:31 PM, Javier González wrote: >> Add two precalculated values to nvm_rq: (i) maximum number of sectors >> per general request; and (ii) minimum number of sectors per write >> request. >> >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> drivers/lightnvm/core.c | 3 +++ >> include/linux/lightnvm.h | 4 +++- >> 2 files changed, 6 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c >> index 4cd9803..85682d91 100644 >> --- a/drivers/lightnvm/core.c >> +++ b/drivers/lightnvm/core.c >> @@ -573,6 +573,9 @@ static int nvm_core_init(struct nvm_dev *dev) >> >> dev->plane_mode = NVM_PLANE_SINGLE; >> dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size; >> + dev->max_sec_rq = dev->ops->max_phys_sect; >> + /* assume max_phys_sect % dev->min_write_pgs == 0 */ >> + dev->min_sec_w_rq = dev->sec_per_pl * (dev->sec_size / PAGE_SIZE); >> >> if (grp->mpos & 0x020202) >> dev->plane_mode = NVM_PLANE_DOUBLE; >> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >> index 67e72f5..c2dfd0c 100644 >> --- a/include/linux/lightnvm.h >> +++ b/include/linux/lightnvm.h >> @@ -351,7 +351,9 @@ struct nvm_dev { >> /* Calculated/Cached values. These do not reflect the actual usable >> * blocks at run-time. >> */ >> - int max_rq_size; >> + int max_rq_size; /* maximum size of a single request */ >> + int max_sec_rq; /* maximum amount of sectors that fit in one req. */ >> + int min_sec_w_rq; /* minimum amount of sectors required on write req. 
*/ >> int plane_mode; /* drive device in single, double or quad mode */ >> >> int sec_per_pl; /* all sectors across planes */ > > I think this is best kept within pblk for now. The min_sec_w_rq is not enough to describe the pages to be written. It must also follow a specific order, which is not communicated only with minimum page writes. Yes, you're right. Let us wait until we can communicate everything to the target. Thanks! Javier ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 1/4] lightnvm: add sync and close block I/O types 2016-05-04 15:31 [PATCH 1/4] lightnvm: add sync and close block I/O types Javier González ` (2 preceding siblings ...) 2016-05-04 15:31 ` [PATCH 4/4] lightnvm: Precalculate max/min sectors per req Javier González @ 2016-05-05 9:21 ` Matias Bjørling 2016-05-05 9:38 ` Javier González 3 siblings, 1 reply; 11+ messages in thread From: Matias Bjørling @ 2016-05-05 9:21 UTC (permalink / raw) To: Javier González; +Cc: linux-kernel, linux-block, Javier González On 05/04/2016 05:31 PM, Javier González wrote: > Within a target, I/O requests stem from different paths, which might vary > in terms of the data structures being allocated, context, etc. This > might impact how the request is treated, or how memory is freed once > the bio is completed. > > Add two different types of I/Os: (i) NVM_IOTYPE_SYNC, which indicates > that the I/O is synchronous; and (ii) NVM_IOTYPE_CLOSE_BLK, which > indicates that the I/O closes the block to which all the ppas on the > request belong to. > > Signed-off-by: Javier González <javier@cnexlabs.com> > --- > include/linux/lightnvm.h | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h > index 29a6890..6c02209 100644 > --- a/include/linux/lightnvm.h > +++ b/include/linux/lightnvm.h > @@ -11,6 +11,8 @@ enum { > > NVM_IOTYPE_NONE = 0, > NVM_IOTYPE_GC = 1, > + NVM_IOTYPE_SYNC = 2, > + NVM_IOTYPE_CLOSE_BLK = 4, > }; > > #define NVM_BLK_BITS (16) > The sync should not be necessary when the read path is implemented using bio_clone. Similarly for NVM_IOTYPE_CLOSE_BLK. The write completion can be handled in the bio completion path. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 1/4] lightnvm: add sync and close block I/O types 2016-05-05 9:21 ` [PATCH 1/4] lightnvm: add sync and close block I/O types Matias Bjørling @ 2016-05-05 9:38 ` Javier González 2016-05-05 10:08 ` Matias Bjørling 0 siblings, 1 reply; 11+ messages in thread From: Javier González @ 2016-05-05 9:38 UTC (permalink / raw) To: Matias Bjørling; +Cc: linux-kernel, linux-block [-- Attachment #1: Type: text/plain, Size: 2026 bytes --] > On 05 May 2016, at 11:21, Matias Bjørling <mb@lightnvm.io> wrote: > > On 05/04/2016 05:31 PM, Javier González wrote: >> Within a target, I/O requests stem from different paths, which might vary >> in terms of the data structures being allocated, context, etc. This >> might impact how the request is treated, or how memory is freed once >> the bio is completed. >> >> Add two different types of I/Os: (i) NVM_IOTYPE_SYNC, which indicates >> that the I/O is synchronous; and (ii) NVM_IOTYPE_CLOSE_BLK, which >> indicates that the I/O closes the block to which all the ppas on the >> request belong to. >> >> Signed-off-by: Javier González <javier@cnexlabs.com> >> --- >> include/linux/lightnvm.h | 2 ++ >> 1 file changed, 2 insertions(+) >> >> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >> index 29a6890..6c02209 100644 >> --- a/include/linux/lightnvm.h >> +++ b/include/linux/lightnvm.h >> @@ -11,6 +11,8 @@ enum { >> >> NVM_IOTYPE_NONE = 0, >> NVM_IOTYPE_GC = 1, >> + NVM_IOTYPE_SYNC = 2, >> + NVM_IOTYPE_CLOSE_BLK = 4, >> }; >> >> #define NVM_BLK_BITS (16) > > The sync should not be necessary when the read path is implemented > using bio_clone. Similarly for NVM_IOTYPE_CLOSE_BLK. The write > completion can be handled in the bio completion path. We need to know where the request comes from; we cannot do it just from having the bio. This is because we allocate different structures depending on the type of bio we send. It is not only which bio->end_io function we have, but which memory needs to be released. 
Sync is necessary for the read path when we have a partial bio (data both on write buffer and disk) that we need to fill up. Also for GC.. In this case, the bio is to be freed differently. In the case of close the case is similarly; we do not free memory on the end_io path, but on the caller. You can see how these flags are used on pblk. Maybe there is a better way of doing it that I could not see... Javier [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 842 bytes --] ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 1/4] lightnvm: add sync and close block I/O types 2016-05-05 9:38 ` Javier González @ 2016-05-05 10:08 ` Matias Bjørling 0 siblings, 0 replies; 11+ messages in thread From: Matias Bjørling @ 2016-05-05 10:08 UTC (permalink / raw) To: Javier González; +Cc: linux-kernel, linux-block On 05/05/2016 11:38 AM, Javier González wrote: > >> On 05 May 2016, at 11:21, Matias Bjørling <mb@lightnvm.io> wrote: >> >> On 05/04/2016 05:31 PM, Javier González wrote: >>> Within a target, I/O requests stem from different paths, which might vary >>> in terms of the data structures being allocated, context, etc. This >>> might impact how the request is treated, or how memory is freed once >>> the bio is completed. >>> >>> Add two different types of I/Os: (i) NVM_IOTYPE_SYNC, which indicates >>> that the I/O is synchronous; and (ii) NVM_IOTYPE_CLOSE_BLK, which >>> indicates that the I/O closes the block to which all the ppas on the >>> request belong to. >>> >>> Signed-off-by: Javier González <javier@cnexlabs.com> >>> --- >>> include/linux/lightnvm.h | 2 ++ >>> 1 file changed, 2 insertions(+) >>> >>> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h >>> index 29a6890..6c02209 100644 >>> --- a/include/linux/lightnvm.h >>> +++ b/include/linux/lightnvm.h >>> @@ -11,6 +11,8 @@ enum { >>> >>> NVM_IOTYPE_NONE = 0, >>> NVM_IOTYPE_GC = 1, >>> + NVM_IOTYPE_SYNC = 2, >>> + NVM_IOTYPE_CLOSE_BLK = 4, >>> }; >>> >>> #define NVM_BLK_BITS (16) >> >> The sync should not be necessary when the read path is implemented >> using bio_clone. Similarly for NVM_IOTYPE_CLOSE_BLK. The write >> completion can be handled in the bio completion path. > > We need to know where the request comes from; we cannot do it just from > having the bio. This is because we allocate different structures > depending on the type of bio we send. It is not only which bio->end_io > function we have, but which memory needs to be released. 
Sync is > necessary for the read path when we have a partial bio (data both on > write buffer and disk) that we need to fill up. Also for GC.. In this > case, the bio is to be freed differently. In the case of close the case > is similarly; we do not free memory on the end_io path, but on the caller. > > You can see how these flags are used on pblk. Maybe there is a better > way of doing it that I could not see... > Use the bio completion path for both. For bio completion, free the data that is specific to the pblk flow and let the nvm_rq->end_io() completion path clean up the nvm_rq specific data. That will clean up the completion paths. > Javier > ^ permalink raw reply [flat|nested] 11+ messages in thread