linux-block.vger.kernel.org archive mirror
* [PATCH 0/3] Some null_blk cleanups
@ 2024-04-11  8:54 Damien Le Moal
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Damien Le Moal @ 2024-04-11  8:54 UTC (permalink / raw)
  To: Jens Axboe, linux-block

3 patches to clean up the null_blk main code and improve zoned device support.
With the last 2 patches, some performance improvements (up to +1.7%) can
be measured for a null zoned device with no zone resource limits. The
maximum IOPS measured with zone write plugging for a multi-stream 4K
sequential write workload (32 jobs) is:

Before patches:
 - mq-deadline: 596 KIOPS
 - none: 2406 KIOPS

With patches applied:
 - mq-deadline: 600 KIOPS
 - none: 2447 KIOPS

Overall, there is no functional change.

Damien Le Moal (3):
  null_blk: Have all null_handle_xxx() return a blk_status_t
  null_blk: Do zone resource management only if necessary
  null_blk: Simplify null_zone_write()

 drivers/block/null_blk/main.c  |  18 +-
 drivers/block/null_blk/zoned.c | 343 +++++++++++++++++----------------
 2 files changed, 187 insertions(+), 174 deletions(-)

-- 
2.44.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t
  2024-04-11  8:54 [PATCH 0/3] Some null_blk cleanups Damien Le Moal
@ 2024-04-11  8:55 ` Damien Le Moal
  2024-04-11 12:37   ` Nitesh Shetty
                     ` (2 more replies)
  2024-04-11  8:55 ` [PATCH 2/3] null_blk: Do zone resource management only if necessary Damien Le Moal
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 9+ messages in thread
From: Damien Le Moal @ 2024-04-11  8:55 UTC (permalink / raw)
  To: Jens Axboe, linux-block

Modify the null_handle_flush() and null_handle_rq() functions to return
a blk_status_t instead of an errno to simplify the call sites of these
functions and to be consistent with other null_handle_xxx() functions.
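
For reference, errno_to_blk_status() maps a kernel errno to the
corresponding blk_status_t, so with this change the conversion happens
once inside the helpers instead of at every call site. A minimal
illustration (for context only, not part of the patch):

	blk_status_t sts;

	sts = errno_to_blk_status(0);		/* BLK_STS_OK */
	sts = errno_to_blk_status(-ENOMEM);	/* BLK_STS_RESOURCE */
	sts = errno_to_blk_status(-EIO);	/* BLK_STS_IOERR */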

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/block/null_blk/main.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 993e9e8a7e69..4c8089a47850 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1167,7 +1167,7 @@ blk_status_t null_handle_discard(struct nullb_device *dev,
 	return BLK_STS_OK;
 }
 
-static int null_handle_flush(struct nullb *nullb)
+static blk_status_t null_handle_flush(struct nullb *nullb)
 {
 	int err;
 
@@ -1184,7 +1184,7 @@ static int null_handle_flush(struct nullb *nullb)
 
 	WARN_ON(!radix_tree_empty(&nullb->dev->cache));
 	spin_unlock_irq(&nullb->lock);
-	return err;
+	return errno_to_blk_status(err);
 }
 
 static int null_transfer(struct nullb *nullb, struct page *page,
@@ -1222,7 +1222,7 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 {
 	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct nullb *nullb = cmd->nq->dev->nullb;
-	int err;
+	int err = 0;
 	unsigned int len;
 	sector_t sector = blk_rq_pos(rq);
 	struct req_iterator iter;
@@ -1234,15 +1234,13 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(req_op(rq)), sector,
 				     rq->cmd_flags & REQ_FUA);
-		if (err) {
-			spin_unlock_irq(&nullb->lock);
-			return err;
-		}
+		if (err)
+			break;
 		sector += len >> SECTOR_SHIFT;
 	}
 	spin_unlock_irq(&nullb->lock);
 
-	return 0;
+	return errno_to_blk_status(err);
 }
 
 static inline blk_status_t null_handle_throttled(struct nullb_cmd *cmd)
@@ -1289,8 +1287,8 @@ static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
 
 	if (op == REQ_OP_DISCARD)
 		return null_handle_discard(dev, sector, nr_sectors);
-	return errno_to_blk_status(null_handle_rq(cmd));
 
+	return null_handle_rq(cmd);
 }
 
 static void nullb_zero_read_cmd_buffer(struct nullb_cmd *cmd)
@@ -1359,7 +1357,7 @@ static void null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
 	blk_status_t sts;
 
 	if (op == REQ_OP_FLUSH) {
-		cmd->error = errno_to_blk_status(null_handle_flush(nullb));
+		cmd->error = null_handle_flush(nullb);
 		goto out;
 	}
 
-- 
2.44.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/3] null_blk: Do zone resource management only if necessary
  2024-04-11  8:54 [PATCH 0/3] Some null_blk cleanups Damien Le Moal
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
@ 2024-04-11  8:55 ` Damien Le Moal
  2024-04-11  8:55 ` [PATCH 3/3] null_blk: Simplify null_zone_write() Damien Le Moal
  2024-04-17 14:45 ` [PATCH 0/3] Some null_blk cleanups Jens Axboe
  3 siblings, 0 replies; 9+ messages in thread
From: Damien Le Moal @ 2024-04-11  8:55 UTC (permalink / raw)
  To: Jens Axboe, linux-block

For zoned null_blk devices set up without any limit on the maximum number
of open and active zones, there is no need to count the number of zones
that are implicitly open, explicitly open and closed. Whether this
accounting is needed is indicated by the boolean field need_zone_res_mgmt
of struct nullb_device.

Modify the zone management functions null_reset_zone(),
null_finish_zone(), null_open_zone() and null_close_zone() to manage
the zone condition counters only if the device need_zone_res_mgmt field
is true. With this change, the function __null_close_zone() is removed
and its code integrated directly into its two call sites, with the
null_close_imp_open_zone() call site greatly simplified since that
function only closes zones that are known to be in the implicit open
condition.

null_zone_write() is modified in a similar manner to do zone condition
accounting only when the device need_zone_res_mgmt field is true.

With these changes, the inline helpers null_lock_zone_res() and
null_unlock_zone_res() are removed and replaced with direct calls to
spin_lock()/spin_unlock().
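
For context, need_zone_res_mgmt reflects whether the device was created
with open/active zone limits. A sketch of how the flag would be set at
device initialization, assuming the zone_max_open and zone_max_active
parameters of struct nullb_device (shown for illustration only, not part
of this patch):

	/*
	 * Zone resource accounting is only needed when a limit on open
	 * and/or active zones is configured.
	 */
	dev->need_zone_res_mgmt = dev->zone_max_open || dev->zone_max_active;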

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/block/null_blk/zoned.c | 311 +++++++++++++++++----------------
 1 file changed, 165 insertions(+), 146 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 4ddd84752557..c31ad2620eb0 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -19,18 +19,6 @@ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 	return sect >> ilog2(dev->zone_size_sects);
 }
 
-static inline void null_lock_zone_res(struct nullb_device *dev)
-{
-	if (dev->need_zone_res_mgmt)
-		spin_lock_irq(&dev->zone_res_lock);
-}
-
-static inline void null_unlock_zone_res(struct nullb_device *dev)
-{
-	if (dev->need_zone_res_mgmt)
-		spin_unlock_irq(&dev->zone_res_lock);
-}
-
 static inline void null_init_zone_lock(struct nullb_device *dev,
 				       struct nullb_zone *zone)
 {
@@ -251,35 +239,6 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 	return (zone->wp - sector) << SECTOR_SHIFT;
 }
 
-static blk_status_t __null_close_zone(struct nullb_device *dev,
-				      struct nullb_zone *zone)
-{
-	switch (zone->cond) {
-	case BLK_ZONE_COND_CLOSED:
-		/* close operation on closed is not an error */
-		return BLK_STS_OK;
-	case BLK_ZONE_COND_IMP_OPEN:
-		dev->nr_zones_imp_open--;
-		break;
-	case BLK_ZONE_COND_EXP_OPEN:
-		dev->nr_zones_exp_open--;
-		break;
-	case BLK_ZONE_COND_EMPTY:
-	case BLK_ZONE_COND_FULL:
-	default:
-		return BLK_STS_IOERR;
-	}
-
-	if (zone->wp == zone->start) {
-		zone->cond = BLK_ZONE_COND_EMPTY;
-	} else {
-		zone->cond = BLK_ZONE_COND_CLOSED;
-		dev->nr_zones_closed++;
-	}
-
-	return BLK_STS_OK;
-}
-
 static void null_close_imp_open_zone(struct nullb_device *dev)
 {
 	struct nullb_zone *zone;
@@ -296,7 +255,13 @@ static void null_close_imp_open_zone(struct nullb_device *dev)
 			zno = dev->zone_nr_conv;
 
 		if (zone->cond == BLK_ZONE_COND_IMP_OPEN) {
-			__null_close_zone(dev, zone);
+			dev->nr_zones_imp_open--;
+			if (zone->wp == zone->start) {
+				zone->cond = BLK_ZONE_COND_EMPTY;
+			} else {
+				zone->cond = BLK_ZONE_COND_CLOSED;
+				dev->nr_zones_closed++;
+			}
 			dev->imp_close_zone_no = zno;
 			return;
 		}
@@ -392,7 +357,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 	    zone->cond == BLK_ZONE_COND_OFFLINE) {
 		/* Cannot write to the zone */
 		ret = BLK_STS_IOERR;
-		goto unlock;
+		goto unlock_zone;
 	}
 
 	/*
@@ -406,54 +371,57 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 		blk_mq_rq_from_pdu(cmd)->__sector = sector;
 	} else if (sector != zone->wp) {
 		ret = BLK_STS_IOERR;
-		goto unlock;
+		goto unlock_zone;
 	}
 
 	if (zone->wp + nr_sectors > zone->start + zone->capacity) {
 		ret = BLK_STS_IOERR;
-		goto unlock;
+		goto unlock_zone;
 	}
 
 	if (zone->cond == BLK_ZONE_COND_CLOSED ||
 	    zone->cond == BLK_ZONE_COND_EMPTY) {
-		null_lock_zone_res(dev);
+		if (dev->need_zone_res_mgmt) {
+			spin_lock(&dev->zone_res_lock);
 
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK) {
-			null_unlock_zone_res(dev);
-			goto unlock;
-		}
-		if (zone->cond == BLK_ZONE_COND_CLOSED) {
-			dev->nr_zones_closed--;
-			dev->nr_zones_imp_open++;
-		} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
-			dev->nr_zones_imp_open++;
-		}
+			ret = null_check_zone_resources(dev, zone);
+			if (ret != BLK_STS_OK) {
+				spin_unlock(&dev->zone_res_lock);
+				goto unlock_zone;
+			}
+			if (zone->cond == BLK_ZONE_COND_CLOSED) {
+				dev->nr_zones_closed--;
+				dev->nr_zones_imp_open++;
+			} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
+				dev->nr_zones_imp_open++;
+			}
 
-		if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
-			zone->cond = BLK_ZONE_COND_IMP_OPEN;
+			spin_unlock(&dev->zone_res_lock);
+		}
 
-		null_unlock_zone_res(dev);
+		zone->cond = BLK_ZONE_COND_IMP_OPEN;
 	}
 
 	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
 	if (ret != BLK_STS_OK)
-		goto unlock;
+		goto unlock_zone;
 
 	zone->wp += nr_sectors;
 	if (zone->wp == zone->start + zone->capacity) {
-		null_lock_zone_res(dev);
-		if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
-			dev->nr_zones_exp_open--;
-		else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
-			dev->nr_zones_imp_open--;
+		if (dev->need_zone_res_mgmt) {
+			spin_lock(&dev->zone_res_lock);
+			if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
+				dev->nr_zones_exp_open--;
+			else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
+				dev->nr_zones_imp_open--;
+			spin_unlock(&dev->zone_res_lock);
+		}
 		zone->cond = BLK_ZONE_COND_FULL;
-		null_unlock_zone_res(dev);
 	}
 
 	ret = BLK_STS_OK;
 
-unlock:
+unlock_zone:
 	null_unlock_zone(dev, zone);
 
 	return ret;
@@ -467,54 +435,100 @@ static blk_status_t null_open_zone(struct nullb_device *dev,
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
-	null_lock_zone_res(dev);
-
 	switch (zone->cond) {
 	case BLK_ZONE_COND_EXP_OPEN:
-		/* open operation on exp open is not an error */
-		goto unlock;
+		/* Open operation on exp open is not an error */
+		return BLK_STS_OK;
 	case BLK_ZONE_COND_EMPTY:
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK)
-			goto unlock;
-		break;
 	case BLK_ZONE_COND_IMP_OPEN:
-		dev->nr_zones_imp_open--;
-		break;
 	case BLK_ZONE_COND_CLOSED:
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK)
-			goto unlock;
-		dev->nr_zones_closed--;
 		break;
 	case BLK_ZONE_COND_FULL:
 	default:
-		ret = BLK_STS_IOERR;
-		goto unlock;
+		return BLK_STS_IOERR;
 	}
 
-	zone->cond = BLK_ZONE_COND_EXP_OPEN;
-	dev->nr_zones_exp_open++;
+	if (dev->need_zone_res_mgmt) {
+		spin_lock(&dev->zone_res_lock);
 
-unlock:
-	null_unlock_zone_res(dev);
+		switch (zone->cond) {
+		case BLK_ZONE_COND_EMPTY:
+			ret = null_check_zone_resources(dev, zone);
+			if (ret != BLK_STS_OK) {
+				spin_unlock(&dev->zone_res_lock);
+				return ret;
+			}
+			break;
+		case BLK_ZONE_COND_IMP_OPEN:
+			dev->nr_zones_imp_open--;
+			break;
+		case BLK_ZONE_COND_CLOSED:
+			ret = null_check_zone_resources(dev, zone);
+			if (ret != BLK_STS_OK) {
+				spin_unlock(&dev->zone_res_lock);
+				return ret;
+			}
+			dev->nr_zones_closed--;
+			break;
+		default:
+			break;
+		}
 
-	return ret;
+		dev->nr_zones_exp_open++;
+
+		spin_unlock(&dev->zone_res_lock);
+	}
+
+	zone->cond = BLK_ZONE_COND_EXP_OPEN;
+
+	return BLK_STS_OK;
 }
 
 static blk_status_t null_close_zone(struct nullb_device *dev,
 				    struct nullb_zone *zone)
 {
-	blk_status_t ret;
-
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
-	null_lock_zone_res(dev);
-	ret = __null_close_zone(dev, zone);
-	null_unlock_zone_res(dev);
+	switch (zone->cond) {
+	case BLK_ZONE_COND_CLOSED:
+		/* close operation on closed is not an error */
+		return BLK_STS_OK;
+	case BLK_ZONE_COND_IMP_OPEN:
+	case BLK_ZONE_COND_EXP_OPEN:
+		break;
+	case BLK_ZONE_COND_EMPTY:
+	case BLK_ZONE_COND_FULL:
+	default:
+		return BLK_STS_IOERR;
+	}
+
+	if (dev->need_zone_res_mgmt) {
+		spin_lock(&dev->zone_res_lock);
 
-	return ret;
+		switch (zone->cond) {
+		case BLK_ZONE_COND_IMP_OPEN:
+			dev->nr_zones_imp_open--;
+			break;
+		case BLK_ZONE_COND_EXP_OPEN:
+			dev->nr_zones_exp_open--;
+			break;
+		default:
+			break;
+		}
+
+		if (zone->wp > zone->start)
+			dev->nr_zones_closed++;
+
+		spin_unlock(&dev->zone_res_lock);
+	}
+
+	if (zone->wp == zone->start)
+		zone->cond = BLK_ZONE_COND_EMPTY;
+	else
+		zone->cond = BLK_ZONE_COND_CLOSED;
+
+	return BLK_STS_OK;
 }
 
 static blk_status_t null_finish_zone(struct nullb_device *dev,
@@ -525,41 +539,47 @@ static blk_status_t null_finish_zone(struct nullb_device *dev,
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
-	null_lock_zone_res(dev);
+	if (dev->need_zone_res_mgmt) {
+		spin_lock(&dev->zone_res_lock);
 
-	switch (zone->cond) {
-	case BLK_ZONE_COND_FULL:
-		/* finish operation on full is not an error */
-		goto unlock;
-	case BLK_ZONE_COND_EMPTY:
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK)
-			goto unlock;
-		break;
-	case BLK_ZONE_COND_IMP_OPEN:
-		dev->nr_zones_imp_open--;
-		break;
-	case BLK_ZONE_COND_EXP_OPEN:
-		dev->nr_zones_exp_open--;
-		break;
-	case BLK_ZONE_COND_CLOSED:
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK)
-			goto unlock;
-		dev->nr_zones_closed--;
-		break;
-	default:
-		ret = BLK_STS_IOERR;
-		goto unlock;
+		switch (zone->cond) {
+		case BLK_ZONE_COND_FULL:
+			/* Finish operation on full is not an error */
+			spin_unlock(&dev->zone_res_lock);
+			return BLK_STS_OK;
+		case BLK_ZONE_COND_EMPTY:
+			ret = null_check_zone_resources(dev, zone);
+			if (ret != BLK_STS_OK) {
+				spin_unlock(&dev->zone_res_lock);
+				return ret;
+			}
+			break;
+		case BLK_ZONE_COND_IMP_OPEN:
+			dev->nr_zones_imp_open--;
+			break;
+		case BLK_ZONE_COND_EXP_OPEN:
+			dev->nr_zones_exp_open--;
+			break;
+		case BLK_ZONE_COND_CLOSED:
+			ret = null_check_zone_resources(dev, zone);
+			if (ret != BLK_STS_OK) {
+				spin_unlock(&dev->zone_res_lock);
+				return ret;
+			}
+			dev->nr_zones_closed--;
+			break;
+		default:
+			spin_unlock(&dev->zone_res_lock);
+			return BLK_STS_IOERR;
+		}
+
+		spin_unlock(&dev->zone_res_lock);
 	}
 
 	zone->cond = BLK_ZONE_COND_FULL;
 	zone->wp = zone->start + zone->len;
 
-unlock:
-	null_unlock_zone_res(dev);
-
-	return ret;
+	return BLK_STS_OK;
 }
 
 static blk_status_t null_reset_zone(struct nullb_device *dev,
@@ -568,34 +588,33 @@ static blk_status_t null_reset_zone(struct nullb_device *dev,
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
-	null_lock_zone_res(dev);
+	if (dev->need_zone_res_mgmt) {
+		spin_lock(&dev->zone_res_lock);
 
-	switch (zone->cond) {
-	case BLK_ZONE_COND_EMPTY:
-		/* reset operation on empty is not an error */
-		null_unlock_zone_res(dev);
-		return BLK_STS_OK;
-	case BLK_ZONE_COND_IMP_OPEN:
-		dev->nr_zones_imp_open--;
-		break;
-	case BLK_ZONE_COND_EXP_OPEN:
-		dev->nr_zones_exp_open--;
-		break;
-	case BLK_ZONE_COND_CLOSED:
-		dev->nr_zones_closed--;
-		break;
-	case BLK_ZONE_COND_FULL:
-		break;
-	default:
-		null_unlock_zone_res(dev);
-		return BLK_STS_IOERR;
+		switch (zone->cond) {
+		case BLK_ZONE_COND_IMP_OPEN:
+			dev->nr_zones_imp_open--;
+			break;
+		case BLK_ZONE_COND_EXP_OPEN:
+			dev->nr_zones_exp_open--;
+			break;
+		case BLK_ZONE_COND_CLOSED:
+			dev->nr_zones_closed--;
+			break;
+		case BLK_ZONE_COND_EMPTY:
+		case BLK_ZONE_COND_FULL:
+			break;
+		default:
+			spin_unlock(&dev->zone_res_lock);
+			return BLK_STS_IOERR;
+		}
+
+		spin_unlock(&dev->zone_res_lock);
 	}
 
 	zone->cond = BLK_ZONE_COND_EMPTY;
 	zone->wp = zone->start;
 
-	null_unlock_zone_res(dev);
-
 	if (dev->memory_backed)
 		return null_handle_discard(dev, zone->start, zone->len);
 
-- 
2.44.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/3] null_blk: Simplify null_zone_write()
  2024-04-11  8:54 [PATCH 0/3] Some null_blk cleanups Damien Le Moal
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
  2024-04-11  8:55 ` [PATCH 2/3] null_blk: Do zone resource management only if necessary Damien Le Moal
@ 2024-04-11  8:55 ` Damien Le Moal
  2024-04-17  9:42   ` Johannes Thumshirn
  2024-04-17 14:45 ` [PATCH 0/3] Some null_blk cleanups Jens Axboe
  3 siblings, 1 reply; 9+ messages in thread
From: Damien Le Moal @ 2024-04-11  8:55 UTC (permalink / raw)
  To: Jens Axboe, linux-block

In null_zone_write(), we do not need to first check if the target zone
condition is FULL, READONLY or OFFLINE: for these conditions, the check
of the command sector against the zone write pointer will always result
in the command failing. Remove these checks.

However, we still need to check that the target zone write pointer is
not invalid for zone append operations. To do so, add the macro
NULL_ZONE_INVALID_WP and use it in null_set_zone_cond() when changing a
zone to the READONLY or OFFLINE condition.
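
To illustrate the first point with the checks that remain in
null_zone_write() (annotated copy of the code from this patch): a
READONLY or OFFLINE zone has its write pointer set to
NULL_ZONE_INVALID_WP by null_set_zone_cond(), and a FULL zone has its
write pointer at the zone end, so both cases are already rejected:

	/*
	 * READONLY/OFFLINE: wp == NULL_ZONE_INVALID_WP ((sector_t)-1), so
	 * no command sector can match it. FULL: wp == start + len, so
	 * either sector != wp, or wp + nr_sectors exceeds
	 * start + capacity (capacity <= len).
	 */
	if (sector != zone->wp ||
	    zone->wp + nr_sectors > zone->start + zone->capacity) {
		ret = BLK_STS_IOERR;
		goto unlock_zone;
	}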

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/block/null_blk/zoned.c | 36 +++++++++++++++-------------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index c31ad2620eb0..5b5a63adacc1 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -9,6 +9,8 @@
 #undef pr_fmt
 #define pr_fmt(fmt)	"null_blk: " fmt
 
+#define NULL_ZONE_INVALID_WP	((sector_t)-1)
+
 static inline sector_t mb_to_sects(unsigned long mb)
 {
 	return ((sector_t)mb * SZ_1M) >> SECTOR_SHIFT;
@@ -341,9 +343,6 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 	trace_nullb_zone_op(cmd, zno, zone->cond);
 
-	if (WARN_ON_ONCE(append && !dev->zone_append_max_sectors))
-		return BLK_STS_IOERR;
-
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
 		if (append)
 			return BLK_STS_IOERR;
@@ -352,29 +351,26 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 	null_lock_zone(dev, zone);
 
-	if (zone->cond == BLK_ZONE_COND_FULL ||
-	    zone->cond == BLK_ZONE_COND_READONLY ||
-	    zone->cond == BLK_ZONE_COND_OFFLINE) {
-		/* Cannot write to the zone */
-		ret = BLK_STS_IOERR;
-		goto unlock_zone;
-	}
-
 	/*
-	 * Regular writes must be at the write pointer position.
-	 * Zone append writes are automatically issued at the write
-	 * pointer and the position returned using the request or BIO
-	 * sector.
+	 * Regular writes must be at the write pointer position. Zone append
+	 * writes are automatically issued at the write pointer and the position
+	 * returned using the request sector. Note that we do not check the zone
+	 * condition because for FULL, READONLY and OFFLINE zones, the sector
+	 * check against the zone write pointer will always result in failing
+	 * the command.
 	 */
 	if (append) {
+		if (WARN_ON_ONCE(!dev->zone_append_max_sectors) ||
+		    zone->wp == NULL_ZONE_INVALID_WP) {
+			ret = BLK_STS_IOERR;
+			goto unlock_zone;
+		}
 		sector = zone->wp;
 		blk_mq_rq_from_pdu(cmd)->__sector = sector;
-	} else if (sector != zone->wp) {
-		ret = BLK_STS_IOERR;
-		goto unlock_zone;
 	}
 
-	if (zone->wp + nr_sectors > zone->start + zone->capacity) {
+	if (sector != zone->wp ||
+	    zone->wp + nr_sectors > zone->start + zone->capacity) {
 		ret = BLK_STS_IOERR;
 		goto unlock_zone;
 	}
@@ -743,7 +739,7 @@ static void null_set_zone_cond(struct nullb_device *dev,
 		    zone->cond != BLK_ZONE_COND_OFFLINE)
 			null_finish_zone(dev, zone);
 		zone->cond = cond;
-		zone->wp = (sector_t)-1;
+		zone->wp = NULL_ZONE_INVALID_WP;
 	}
 
 	null_unlock_zone(dev, zone);
-- 
2.44.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
@ 2024-04-11 12:37   ` Nitesh Shetty
  2024-04-15 20:59   ` Chaitanya Kulkarni
  2024-04-17  9:42   ` Johannes Thumshirn
  2 siblings, 0 replies; 9+ messages in thread
From: Nitesh Shetty @ 2024-04-11 12:37 UTC (permalink / raw)
  To: Damien Le Moal; +Cc: Jens Axboe, linux-block


On 11/04/24 05:55PM, Damien Le Moal wrote:
>Modify the null_handle_flush() and null_handle_rq() functions to return
>a blk_status_t instead of an errno to simplify the call sites of these
>functions and to be consistent with other null_handle_xxx() functions.
>
>Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
>---
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>




^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
  2024-04-11 12:37   ` Nitesh Shetty
@ 2024-04-15 20:59   ` Chaitanya Kulkarni
  2024-04-17  9:42   ` Johannes Thumshirn
  2 siblings, 0 replies; 9+ messages in thread
From: Chaitanya Kulkarni @ 2024-04-15 20:59 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe, linux-block@vger.kernel.org

On 4/11/2024 1:55 AM, Damien Le Moal wrote:
> Modify the null_handle_flush() and null_handle_rq() functions to return
> a blk_status_t instead of an errno to simplify the call sites of these
> functions and to be consistent with other null_handle_xxx() functions.
> 
> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
> ---


Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/3] null_blk: Simplify null_zone_write()
  2024-04-11  8:55 ` [PATCH 3/3] null_blk: Simplify null_zone_write() Damien Le Moal
@ 2024-04-17  9:42   ` Johannes Thumshirn
  0 siblings, 0 replies; 9+ messages in thread
From: Johannes Thumshirn @ 2024-04-17  9:42 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe, linux-block@vger.kernel.org

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t
  2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
  2024-04-11 12:37   ` Nitesh Shetty
  2024-04-15 20:59   ` Chaitanya Kulkarni
@ 2024-04-17  9:42   ` Johannes Thumshirn
  2 siblings, 0 replies; 9+ messages in thread
From: Johannes Thumshirn @ 2024-04-17  9:42 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe, linux-block@vger.kernel.org

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/3] Some null_blk cleanups
  2024-04-11  8:54 [PATCH 0/3] Some null_blk cleanups Damien Le Moal
                   ` (2 preceding siblings ...)
  2024-04-11  8:55 ` [PATCH 3/3] null_blk: Simplify null_zone_write() Damien Le Moal
@ 2024-04-17 14:45 ` Jens Axboe
  3 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-04-17 14:45 UTC (permalink / raw)
  To: linux-block, Damien Le Moal


On Thu, 11 Apr 2024 17:54:59 +0900, Damien Le Moal wrote:
> 3 patches to clean up the null_blk main code and improve zoned device support.
> With the last 2 patches, some performance improvements (up to +1.7%) can
> be measured for a null zoned device with no zone resource limits. The
> maximum IOPS measured with zone write plugging for a multi-stream 4K
> sequential write workload (32 jobs) is:
> 
> Before patches:
>  - mq-deadline: 596 KIOPS
>  - none: 2406 KIOPS
> 
> [...]

Applied, thanks!

[1/3] null_blk: Have all null_handle_xxx() return a blk_status_t
      commit: cb9e5273f6d97830c44aeecd28cf5f5723ef2ba3
[2/3] null_blk: Do zone resource management only if necessary
      commit: 3bdde0701e5f819df0fa0e32fa8d5df7dc184cb5
[3/3] null_blk: Simplify null_zone_write()
      commit: e994ff5b55e3bf30ecb6ed354eaec77c989aa4ae

Best regards,
-- 
Jens Axboe




^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread

Thread overview: 9+ messages:
2024-04-11  8:54 [PATCH 0/3] Some null_blk cleanups Damien Le Moal
2024-04-11  8:55 ` [PATCH 1/3] null_blk: Have all null_handle_xxx() return a blk_status_t Damien Le Moal
2024-04-11 12:37   ` Nitesh Shetty
2024-04-15 20:59   ` Chaitanya Kulkarni
2024-04-17  9:42   ` Johannes Thumshirn
2024-04-11  8:55 ` [PATCH 2/3] null_blk: Do zone resource management only if necessary Damien Le Moal
2024-04-11  8:55 ` [PATCH 3/3] null_blk: Simplify null_zone_write() Damien Le Moal
2024-04-17  9:42   ` Johannes Thumshirn
2024-04-17 14:45 ` [PATCH 0/3] Some null_blk cleanups Jens Axboe
