public inbox for linux-block@vger.kernel.org
* queue_lock fixups
@ 2018-11-16  8:10 Christoph Hellwig
  2018-11-16  8:10 ` [PATCH 1/6] block: remove the rq_alloc_data request_queue field Christoph Hellwig
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

Hi Jens,

here are a few fixups for the queue_lock conversion: drop a few more
bogus queue_lock uses in drivers, and clean up the mmc use of the
queue_lock as suggested by Ulf.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/6] block: remove the rq_alloc_data request_queue field
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:33   ` Omar Sandoval
  2018-11-16  8:10 ` [PATCH 2/6] floppy: remove queue_lock around floppy_end_request Christoph Hellwig
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/blkdev.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1d185f1fc333..5c5ef461845f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -567,7 +567,6 @@ struct request_queue {
 	bool			mq_sysfs_init_done;
 
 	size_t			cmd_size;
-	void			*rq_alloc_data;
 
 	struct work_struct	release_work;
 
-- 
2.19.1



* [PATCH 2/6] floppy: remove queue_lock around floppy_end_request
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
  2018-11-16  8:10 ` [PATCH 1/6] block: remove the rq_alloc_data request_queue field Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:33   ` Omar Sandoval
  2018-11-16  8:10 ` [PATCH 3/6] pktcdvd: remove queue_lock around blk_queue_max_hw_sectors Christoph Hellwig
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

There is nothing the queue_lock could protect inside floppy_end_request,
so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/floppy.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index eeb4be8d000b..218099dd8e44 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -2254,10 +2254,7 @@ static void request_done(int uptodate)
 		if (block > _floppy->sect)
 			DRS->maxtrack = 1;
 
-		/* unlock chained buffers */
-		spin_lock_irqsave(&q->queue_lock, flags);
 		floppy_end_request(req, 0);
-		spin_unlock_irqrestore(&q->queue_lock, flags);
 	} else {
 		if (rq_data_dir(req) == WRITE) {
 			/* record write error information */
@@ -2269,9 +2266,7 @@ static void request_done(int uptodate)
 			DRWE->last_error_sector = blk_rq_pos(req);
 			DRWE->last_error_generation = DRS->generation;
 		}
-		spin_lock_irqsave(&q->queue_lock, flags);
 		floppy_end_request(req, BLK_STS_IOERR);
-		spin_unlock_irqrestore(&q->queue_lock, flags);
 	}
 }
 
-- 
2.19.1


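The floppy change above relies on blk_mq_end_request() doing its own synchronization, so the caller gains nothing by wrapping the completion in the queue lock. A minimal userspace sketch of that pattern (all names here are hypothetical stand-ins, with a pthread mutex in place of the kernel spinlock):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct request and its completion state. */
struct request_sketch {
	bool done;
	int status;
};

/* Internal lock, analogous to the synchronization that the completion
 * helper (blk_mq_end_request in the kernel) performs itself. */
static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;

/* The completion helper serializes internally, so callers need not
 * (and should not) hold an outer queue lock around it. */
static void end_request(struct request_sketch *rq, int status)
{
	pthread_mutex_lock(&completion_lock);
	rq->done = true;
	rq->status = status;
	pthread_mutex_unlock(&completion_lock);
}
```

This is only an illustration of the locking shape, not the floppy driver's actual completion path.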

* [PATCH 3/6] pktcdvd: remove queue_lock around blk_queue_max_hw_sectors
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
  2018-11-16  8:10 ` [PATCH 1/6] block: remove the rq_alloc_data request_queue field Christoph Hellwig
  2018-11-16  8:10 ` [PATCH 2/6] floppy: remove queue_lock around floppy_end_request Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:33   ` Omar Sandoval
  2018-11-16  8:10 ` [PATCH 4/6] ide: don't acquire queue lock in ide_pm_execute_rq Christoph Hellwig
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

blk_queue_max_hw_sectors can't do anything with queue_lock protection,
so don't hold it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/pktcdvd.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 4adf4c8861cd..f5a71023f76c 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2203,9 +2203,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write)
 		 * Some CDRW drives can not handle writes larger than one packet,
 		 * even if the size is a multiple of the packet size.
 		 */
-		spin_lock_irq(&q->queue_lock);
 		blk_queue_max_hw_sectors(q, pd->settings.size);
-		spin_unlock_irq(&q->queue_lock);
 		set_bit(PACKET_WRITABLE, &pd->flags);
 	} else {
 		pkt_set_speed(pd, MAX_SPEED, MAX_SPEED);
-- 
2.19.1



* [PATCH 4/6] ide: don't acquire queue lock in ide_pm_execute_rq
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
                   ` (2 preceding siblings ...)
  2018-11-16  8:10 ` [PATCH 3/6] pktcdvd: remove queue_lock around blk_queue_max_hw_sectors Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:34   ` Omar Sandoval
  2018-11-16  8:10 ` [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq Christoph Hellwig
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

There is nothing we can synchronize against over a call to
blk_queue_dying.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/ide/ide-pm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 51fe10ac02fa..56690f523100 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -44,15 +44,12 @@ static int ide_pm_execute_rq(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 
-	spin_lock_irq(&q->queue_lock);
 	if (unlikely(blk_queue_dying(q))) {
 		rq->rq_flags |= RQF_QUIET;
 		scsi_req(rq)->result = -ENXIO;
-		spin_unlock_irq(&q->queue_lock);
 		blk_mq_end_request(rq, BLK_STS_OK);
 		return -ENXIO;
 	}
-	spin_unlock_irq(&q->queue_lock);
 	blk_execute_rq(q, NULL, rq, true);
 
 	return scsi_req(rq)->result ? -EIO : 0;
-- 
2.19.1


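The rationale above — that a lock around a blk_queue_dying() test cannot close the race, since the queue may start dying right after the unlock — is a general pattern: a single racy flag read is no safer under a lock. A userspace sketch under that assumption (hypothetical names, C11 atomics in place of the kernel's queue-flag helpers):

```c
#include <stdatomic.h>

/* Hypothetical queue with a "dying" flag that is set once and never
 * cleared.  Readers cannot synchronize against the transition: even
 * with a lock held, the flag may flip right after the unlock, so the
 * lock adds overhead without adding any guarantee. */
struct queue_sketch {
	atomic_bool dying;
};

/* Fail fast if the queue is already dying, otherwise issue; no lock
 * taken, mirroring the simplified ide_pm_execute_rq above. */
static int execute_rq_sketch(struct queue_sketch *q)
{
	if (atomic_load(&q->dying))
		return -1;	/* -ENXIO in the kernel version */
	return 0;
}
```

The atomic load is the userspace analogue of the test_bit-style read behind blk_queue_dying; it orders nothing beyond the flag itself, which is exactly the point.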

* [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
                   ` (3 preceding siblings ...)
  2018-11-16  8:10 ` [PATCH 4/6] ide: don't acquire queue lock in ide_pm_execute_rq Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:37   ` Omar Sandoval
  2018-11-16  8:10 ` [PATCH 6/6] mmc: stop abusing the request queue_lock pointer Christoph Hellwig
  2018-11-16 16:17 ` queue_lock fixups Jens Axboe
  6 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

blk_mq_stop_hw_queues doesn't need any locking, and the ide
dev_flags field isn't protected by it either.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/ide/ide-pm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 56690f523100..192e6c65d34e 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -201,7 +201,6 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq)
 {
 	struct request_queue *q = drive->queue;
 	struct ide_pm_state *pm = ide_req(rq)->special;
-	unsigned long flags;
 
 	ide_complete_power_step(drive, rq);
 	if (pm->pm_step != IDE_PM_COMPLETED)
@@ -211,12 +210,10 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq)
 	printk("%s: completing PM request, %s\n", drive->name,
 	       (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND) ? "suspend" : "resume");
 #endif
-	spin_lock_irqsave(&q->queue_lock, flags);
 	if (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND)
 		blk_mq_stop_hw_queues(q);
 	else
 		drive->dev_flags &= ~IDE_DFLAG_BLOCKED;
-	spin_unlock_irqrestore(&q->queue_lock, flags);
 
 	drive->hwif->rq = NULL;
 
-- 
2.19.1



* [PATCH 6/6] mmc: stop abusing the request queue_lock pointer
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
                   ` (4 preceding siblings ...)
  2018-11-16  8:10 ` [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq Christoph Hellwig
@ 2018-11-16  8:10 ` Christoph Hellwig
  2018-11-16  8:51   ` Omar Sandoval
  2018-11-17 10:12   ` Ulf Hansson
  2018-11-16 16:17 ` queue_lock fixups Jens Axboe
  6 siblings, 2 replies; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Omar Sandoval, linux-block, linux-mmc

Replace the lock in mmc_blk_data, which is only used through a pointer
in struct mmc_queue to protect fields in that structure, with an actual
lock embedded in struct mmc_queue.

Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/mmc/core/block.c | 24 +++++++++++-------------
 drivers/mmc/core/queue.c | 31 +++++++++++++++----------------
 drivers/mmc/core/queue.h |  4 ++--
 3 files changed, 28 insertions(+), 31 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 70ec465beb69..2c329a3e3fdb 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -100,7 +100,6 @@ static DEFINE_IDA(mmc_rpmb_ida);
  * There is one mmc_blk_data per slot.
  */
 struct mmc_blk_data {
-	spinlock_t	lock;
 	struct device	*parent;
 	struct gendisk	*disk;
 	struct mmc_queue queue;
@@ -1483,7 +1482,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 		blk_mq_end_request(req, BLK_STS_OK);
 	}
 
-	spin_lock_irqsave(mq->lock, flags);
+	spin_lock_irqsave(&mq->lock, flags);
 
 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
 
@@ -1491,7 +1490,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 
 	mmc_cqe_check_busy(mq);
 
-	spin_unlock_irqrestore(mq->lock, flags);
+	spin_unlock_irqrestore(&mq->lock, flags);
 
 	if (!mq->cqe_busy)
 		blk_mq_run_hw_queues(q, true);
@@ -1991,13 +1990,13 @@ static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
 	unsigned long flags;
 	bool put_card;
 
-	spin_lock_irqsave(mq->lock, flags);
+	spin_lock_irqsave(&mq->lock, flags);
 
 	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
 
 	put_card = (mmc_tot_in_flight(mq) == 0);
 
-	spin_unlock_irqrestore(mq->lock, flags);
+	spin_unlock_irqrestore(&mq->lock, flags);
 
 	if (put_card)
 		mmc_put_card(mq->card, &mq->ctx);
@@ -2093,11 +2092,11 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq)
 		 * request does not need to wait (although it does need to
 		 * complete complete_req first).
 		 */
-		spin_lock_irqsave(mq->lock, flags);
+		spin_lock_irqsave(&mq->lock, flags);
 		mq->complete_req = req;
 		mq->rw_wait = false;
 		waiting = mq->waiting;
-		spin_unlock_irqrestore(mq->lock, flags);
+		spin_unlock_irqrestore(&mq->lock, flags);
 
 		/*
 		 * If 'waiting' then the waiting task will complete this
@@ -2116,10 +2115,10 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq)
 	/* Take the recovery path for errors or urgent background operations */
 	if (mmc_blk_rq_error(&mqrq->brq) ||
 	    mmc_blk_urgent_bkops_needed(mq, mqrq)) {
-		spin_lock_irqsave(mq->lock, flags);
+		spin_lock_irqsave(&mq->lock, flags);
 		mq->recovery_needed = true;
 		mq->recovery_req = req;
-		spin_unlock_irqrestore(mq->lock, flags);
+		spin_unlock_irqrestore(&mq->lock, flags);
 		wake_up(&mq->wait);
 		schedule_work(&mq->recovery_work);
 		return;
@@ -2142,7 +2141,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
 	 * Wait while there is another request in progress, but not if recovery
 	 * is needed. Also indicate whether there is a request waiting to start.
 	 */
-	spin_lock_irqsave(mq->lock, flags);
+	spin_lock_irqsave(&mq->lock, flags);
 	if (mq->recovery_needed) {
 		*err = -EBUSY;
 		done = true;
@@ -2150,7 +2149,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
 		done = !mq->rw_wait;
 	}
 	mq->waiting = !done;
-	spin_unlock_irqrestore(mq->lock, flags);
+	spin_unlock_irqrestore(&mq->lock, flags);
 
 	return done;
 }
@@ -2327,12 +2326,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		goto err_kfree;
 	}
 
-	spin_lock_init(&md->lock);
 	INIT_LIST_HEAD(&md->part);
 	INIT_LIST_HEAD(&md->rpmbs);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock);
+	ret = mmc_init_queue(&md->queue, card);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4485cf12218c..35cc138b096d 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -89,9 +89,9 @@ void mmc_cqe_recovery_notifier(struct mmc_request *mrq)
 	struct mmc_queue *mq = q->queuedata;
 	unsigned long flags;
 
-	spin_lock_irqsave(mq->lock, flags);
+	spin_lock_irqsave(&mq->lock, flags);
 	__mmc_cqe_recovery_notifier(mq);
-	spin_unlock_irqrestore(mq->lock, flags);
+	spin_unlock_irqrestore(&mq->lock, flags);
 }
 
 static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
@@ -128,14 +128,14 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
 	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(mq->lock, flags);
+	spin_lock_irqsave(&mq->lock, flags);
 
 	if (mq->recovery_needed || !mq->use_cqe)
 		ret = BLK_EH_RESET_TIMER;
 	else
 		ret = mmc_cqe_timed_out(req);
 
-	spin_unlock_irqrestore(mq->lock, flags);
+	spin_unlock_irqrestore(&mq->lock, flags);
 
 	return ret;
 }
@@ -157,9 +157,9 @@ static void mmc_mq_recovery_handler(struct work_struct *work)
 
 	mq->in_recovery = false;
 
-	spin_lock_irq(mq->lock);
+	spin_lock_irq(&mq->lock);
 	mq->recovery_needed = false;
-	spin_unlock_irq(mq->lock);
+	spin_unlock_irq(&mq->lock);
 
 	mmc_put_card(mq->card, &mq->ctx);
 
@@ -258,10 +258,10 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	issue_type = mmc_issue_type(mq, req);
 
-	spin_lock_irq(mq->lock);
+	spin_lock_irq(&mq->lock);
 
 	if (mq->recovery_needed || mq->busy) {
-		spin_unlock_irq(mq->lock);
+		spin_unlock_irq(&mq->lock);
 		return BLK_STS_RESOURCE;
 	}
 
@@ -269,7 +269,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	case MMC_ISSUE_DCMD:
 		if (mmc_cqe_dcmd_busy(mq)) {
 			mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
-			spin_unlock_irq(mq->lock);
+			spin_unlock_irq(&mq->lock);
 			return BLK_STS_RESOURCE;
 		}
 		break;
@@ -294,7 +294,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	get_card = (mmc_tot_in_flight(mq) == 1);
 	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
 
-	spin_unlock_irq(mq->lock);
+	spin_unlock_irq(&mq->lock);
 
 	if (!(req->rq_flags & RQF_DONTPREP)) {
 		req_to_mmc_queue_req(req)->retries = 0;
@@ -328,12 +328,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (issued != MMC_REQ_STARTED) {
 		bool put_card = false;
 
-		spin_lock_irq(mq->lock);
+		spin_lock_irq(&mq->lock);
 		mq->in_flight[issue_type] -= 1;
 		if (mmc_tot_in_flight(mq) == 0)
 			put_card = true;
 		mq->busy = false;
-		spin_unlock_irq(mq->lock);
+		spin_unlock_irq(&mq->lock);
 		if (put_card)
 			mmc_put_card(card, &mq->ctx);
 	} else {
@@ -385,19 +385,18 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
- * @lock: queue lock
  *
  * Initialise a MMC card request queue.
  */
-int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock)
+int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
 	struct mmc_host *host = card->host;
 	int ret;
 
 	mq->card = card;
-	mq->lock = lock;
 	mq->use_cqe = host->cqe_enabled;
+
+	spin_lock_init(&mq->lock);
 
 	memset(&mq->tag_set, 0, sizeof(mq->tag_set));
 	mq->tag_set.ops = &mmc_mq_ops;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 5421f1542e71..fd11491ced9f 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -73,11 +73,11 @@ struct mmc_queue_req {
 
 struct mmc_queue {
 	struct mmc_card		*card;
-	spinlock_t		*lock;
 	struct mmc_ctx		ctx;
 	struct blk_mq_tag_set	tag_set;
 	struct mmc_blk_data	*blkdata;
 	struct request_queue	*queue;
+	spinlock_t		lock;
 	int			in_flight[MMC_ISSUE_MAX];
 	unsigned int		cqe_busy;
 #define MMC_CQE_DCMD_BUSY	BIT(0)
@@ -96,7 +96,7 @@ struct mmc_queue {
 	struct work_struct	complete_work;
 };
 
-extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *);
+extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
-- 
2.19.1


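The mmc conversion above illustrates a broader cleanup: a structure that previously borrowed a lock through a pointer now embeds one, tying the lock's lifetime to the data it protects and removing an indirection on every lock operation. A userspace sketch of the resulting shape (hypothetical names; pthread mutex standing in for spinlock_t):

```c
#include <pthread.h>

/* After the conversion: the lock lives in the queue itself, next to
 * the fields it guards (analogous to mmc_queue::lock protecting
 * in_flight and friends). */
struct mq_sketch {
	pthread_mutex_t lock;
	int in_flight;
};

static void mq_init(struct mq_sketch *mq)
{
	pthread_mutex_init(&mq->lock, NULL);	/* was spin_lock_init() on the embedded lock */
	mq->in_flight = 0;
}

static void mq_inc(struct mq_sketch *mq)
{
	pthread_mutex_lock(&mq->lock);		/* note &mq->lock, not mq->lock */
	mq->in_flight++;
	pthread_mutex_unlock(&mq->lock);
}

/* Returns nonzero when the last in-flight request completes — the
 * point at which the kernel code drops its card reference. */
static int mq_dec_and_test(struct mq_sketch *mq)
{
	pthread_mutex_lock(&mq->lock);
	int remaining = --mq->in_flight;
	pthread_mutex_unlock(&mq->lock);
	return remaining == 0;
}
```

The mechanical part of the patch is exactly the `mq->lock` to `&mq->lock` change visible throughout the diff; the structural part is moving ownership of the lock out of mmc_blk_data.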

* Re: [PATCH 1/6] block: remove the rq_alloc_data request_queue field
  2018-11-16  8:10 ` [PATCH 1/6] block: remove the rq_alloc_data request_queue field Christoph Hellwig
@ 2018-11-16  8:33   ` Omar Sandoval
  0 siblings, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:01AM +0100, Christoph Hellwig wrote:

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/blkdev.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 1d185f1fc333..5c5ef461845f 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -567,7 +567,6 @@ struct request_queue {
>  	bool			mq_sysfs_init_done;
>  
>  	size_t			cmd_size;
> -	void			*rq_alloc_data;
>  
>  	struct work_struct	release_work;
>  
> -- 
> 2.19.1
> 


* Re: [PATCH 2/6] floppy: remove queue_lock around floppy_end_request
  2018-11-16  8:10 ` [PATCH 2/6] floppy: remove queue_lock around floppy_end_request Christoph Hellwig
@ 2018-11-16  8:33   ` Omar Sandoval
  0 siblings, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:02AM +0100, Christoph Hellwig wrote:
> There is nothing the queue_lock could protect inside floppy_end_request,
> so remove it.

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/floppy.c | 5 -----
>  1 file changed, 5 deletions(-)
> 
> diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
> index eeb4be8d000b..218099dd8e44 100644
> --- a/drivers/block/floppy.c
> +++ b/drivers/block/floppy.c
> @@ -2254,10 +2254,7 @@ static void request_done(int uptodate)
>  		if (block > _floppy->sect)
>  			DRS->maxtrack = 1;
>  
> -		/* unlock chained buffers */
> -		spin_lock_irqsave(&q->queue_lock, flags);
>  		floppy_end_request(req, 0);
> -		spin_unlock_irqrestore(&q->queue_lock, flags);
>  	} else {
>  		if (rq_data_dir(req) == WRITE) {
>  			/* record write error information */
> @@ -2269,9 +2266,7 @@ static void request_done(int uptodate)
>  			DRWE->last_error_sector = blk_rq_pos(req);
>  			DRWE->last_error_generation = DRS->generation;
>  		}
> -		spin_lock_irqsave(&q->queue_lock, flags);
>  		floppy_end_request(req, BLK_STS_IOERR);
> -		spin_unlock_irqrestore(&q->queue_lock, flags);
>  	}
>  }
>  
> -- 
> 2.19.1
> 


* Re: [PATCH 3/6] pktcdvd: remove queue_lock around blk_queue_max_hw_sectors
  2018-11-16  8:10 ` [PATCH 3/6] pktcdvd: remove queue_lock around blk_queue_max_hw_sectors Christoph Hellwig
@ 2018-11-16  8:33   ` Omar Sandoval
  0 siblings, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:03AM +0100, Christoph Hellwig wrote:
> blk_queue_max_hw_sectors can't do anything with queue_lock protection,
> so don't hold it.

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/pktcdvd.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
> index 4adf4c8861cd..f5a71023f76c 100644
> --- a/drivers/block/pktcdvd.c
> +++ b/drivers/block/pktcdvd.c
> @@ -2203,9 +2203,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write)
>  		 * Some CDRW drives can not handle writes larger than one packet,
>  		 * even if the size is a multiple of the packet size.
>  		 */
> -		spin_lock_irq(&q->queue_lock);
>  		blk_queue_max_hw_sectors(q, pd->settings.size);
> -		spin_unlock_irq(&q->queue_lock);
>  		set_bit(PACKET_WRITABLE, &pd->flags);
>  	} else {
>  		pkt_set_speed(pd, MAX_SPEED, MAX_SPEED);
> -- 
> 2.19.1
> 


* Re: [PATCH 4/6] ide: don't acquire queue lock in ide_pm_execute_rq
  2018-11-16  8:10 ` [PATCH 4/6] ide: don't acquire queue lock in ide_pm_execute_rq Christoph Hellwig
@ 2018-11-16  8:34   ` Omar Sandoval
  0 siblings, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:34 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:04AM +0100, Christoph Hellwig wrote:
> There is nothing we can synchronize against over a call to
> blk_queue_dying.

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/ide/ide-pm.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
> index 51fe10ac02fa..56690f523100 100644
> --- a/drivers/ide/ide-pm.c
> +++ b/drivers/ide/ide-pm.c
> @@ -44,15 +44,12 @@ static int ide_pm_execute_rq(struct request *rq)
>  {
>  	struct request_queue *q = rq->q;
>  
> -	spin_lock_irq(&q->queue_lock);
>  	if (unlikely(blk_queue_dying(q))) {
>  		rq->rq_flags |= RQF_QUIET;
>  		scsi_req(rq)->result = -ENXIO;
> -		spin_unlock_irq(&q->queue_lock);
>  		blk_mq_end_request(rq, BLK_STS_OK);
>  		return -ENXIO;
>  	}
> -	spin_unlock_irq(&q->queue_lock);
>  	blk_execute_rq(q, NULL, rq, true);
>  
>  	return scsi_req(rq)->result ? -EIO : 0;
> -- 
> 2.19.1
> 


* Re: [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq
  2018-11-16  8:10 ` [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq Christoph Hellwig
@ 2018-11-16  8:37   ` Omar Sandoval
  2018-11-16  8:40     ` Christoph Hellwig
  0 siblings, 1 reply; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:37 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> blk_mq_stop_hw_queues doesn't need any locking, and the ide
> dev_flags field isn't protected by it either.

Is it a bug that dev_flags is no longer protected by queue_lock after
the mq conversion?

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/ide/ide-pm.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
> index 56690f523100..192e6c65d34e 100644
> --- a/drivers/ide/ide-pm.c
> +++ b/drivers/ide/ide-pm.c
> @@ -201,7 +201,6 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq)
>  {
>  	struct request_queue *q = drive->queue;
>  	struct ide_pm_state *pm = ide_req(rq)->special;
> -	unsigned long flags;
>  
>  	ide_complete_power_step(drive, rq);
>  	if (pm->pm_step != IDE_PM_COMPLETED)
> @@ -211,12 +210,10 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq)
>  	printk("%s: completing PM request, %s\n", drive->name,
>  	       (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND) ? "suspend" : "resume");
>  #endif
> -	spin_lock_irqsave(&q->queue_lock, flags);
>  	if (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND)
>  		blk_mq_stop_hw_queues(q);
>  	else
>  		drive->dev_flags &= ~IDE_DFLAG_BLOCKED;
> -	spin_unlock_irqrestore(&q->queue_lock, flags);
>  
>  	drive->hwif->rq = NULL;
>  
> -- 
> 2.19.1
> 


* Re: [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq
  2018-11-16  8:37   ` Omar Sandoval
@ 2018-11-16  8:40     ` Christoph Hellwig
  2018-11-16  8:42       ` Omar Sandoval
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2018-11-16  8:40 UTC (permalink / raw)
  To: Omar Sandoval
  Cc: Christoph Hellwig, Jens Axboe, Omar Sandoval, linux-block,
	linux-mmc

On Fri, Nov 16, 2018 at 12:37:32AM -0800, Omar Sandoval wrote:
> On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> > blk_mq_stop_hw_queues doesn't need any locking, and the ide
> > dev_flags field isn't protected by it either.
> 
> Is it a bug that dev_flags is no longer protected by queue_lock after
> the mq conversion?

Most if not all users weren't under queue_lock before.  As far as I can
tell it generally is set in the probe path, and without any obvious
locking.


* Re: [PATCH 5/6] ide: don't acquire queue_lock in ide_complete_pm_rq
  2018-11-16  8:40     ` Christoph Hellwig
@ 2018-11-16  8:42       ` Omar Sandoval
  0 siblings, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:40:04AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 12:37:32AM -0800, Omar Sandoval wrote:
> > On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> > > blk_mq_stop_hw_queues doesn't need any locking, and the ide
> > > dev_flags field isn't protected by it either.
> > 
> > Is it a bug that dev_flags is no longer protected by queue_lock after
> > the mq conversion?
> 
> Most if not all users weren't under queue_lock before.  As far as I can
> tell it generally is set in the probe path, and without any obvious
> locking.

Good enough for me, you can add

Reviewed-by: Omar Sandoval <osandov@fb.com>


* Re: [PATCH 6/6] mmc: stop abusing the request queue_lock pointer
  2018-11-16  8:10 ` [PATCH 6/6] mmc: stop abusing the request queue_lock pointer Christoph Hellwig
@ 2018-11-16  8:51   ` Omar Sandoval
  2018-11-17 10:12   ` Ulf Hansson
  1 sibling, 0 replies; 17+ messages in thread
From: Omar Sandoval @ 2018-11-16  8:51 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc

On Fri, Nov 16, 2018 at 09:10:06AM +0100, Christoph Hellwig wrote:
> Replace the lock in mmc_blk_data, which is only used through a pointer
> in struct mmc_queue to protect fields in that structure, with an actual
> lock embedded in struct mmc_queue.

Looks sane to me, but I'll let the mmc people ack.

> Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/mmc/core/block.c | 24 +++++++++++-------------
>  drivers/mmc/core/queue.c | 31 +++++++++++++++----------------
>  drivers/mmc/core/queue.h |  4 ++--
>  3 files changed, 28 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 70ec465beb69..2c329a3e3fdb 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -100,7 +100,6 @@ static DEFINE_IDA(mmc_rpmb_ida);
>   * There is one mmc_blk_data per slot.
>   */
>  struct mmc_blk_data {
> -	spinlock_t	lock;
>  	struct device	*parent;
>  	struct gendisk	*disk;
>  	struct mmc_queue queue;
> @@ -1483,7 +1482,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
>  		blk_mq_end_request(req, BLK_STS_OK);
>  	}
>  
> -	spin_lock_irqsave(mq->lock, flags);
> +	spin_lock_irqsave(&mq->lock, flags);
>  
>  	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
>  
> @@ -1491,7 +1490,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
>  
>  	mmc_cqe_check_busy(mq);
>  
> -	spin_unlock_irqrestore(mq->lock, flags);
> +	spin_unlock_irqrestore(&mq->lock, flags);
>  
>  	if (!mq->cqe_busy)
>  		blk_mq_run_hw_queues(q, true);
> @@ -1991,13 +1990,13 @@ static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
>  	unsigned long flags;
>  	bool put_card;
>  
> -	spin_lock_irqsave(mq->lock, flags);
> +	spin_lock_irqsave(&mq->lock, flags);
>  
>  	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
>  
>  	put_card = (mmc_tot_in_flight(mq) == 0);
>  
> -	spin_unlock_irqrestore(mq->lock, flags);
> +	spin_unlock_irqrestore(&mq->lock, flags);
>  
>  	if (put_card)
>  		mmc_put_card(mq->card, &mq->ctx);
> @@ -2093,11 +2092,11 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq)
>  		 * request does not need to wait (although it does need to
>  		 * complete complete_req first).
>  		 */
> -		spin_lock_irqsave(mq->lock, flags);
> +		spin_lock_irqsave(&mq->lock, flags);
>  		mq->complete_req = req;
>  		mq->rw_wait = false;
>  		waiting = mq->waiting;
> -		spin_unlock_irqrestore(mq->lock, flags);
> +		spin_unlock_irqrestore(&mq->lock, flags);
>  
>  		/*
>  		 * If 'waiting' then the waiting task will complete this
> @@ -2116,10 +2115,10 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq)
>  	/* Take the recovery path for errors or urgent background operations */
>  	if (mmc_blk_rq_error(&mqrq->brq) ||
>  	    mmc_blk_urgent_bkops_needed(mq, mqrq)) {
> -		spin_lock_irqsave(mq->lock, flags);
> +		spin_lock_irqsave(&mq->lock, flags);
>  		mq->recovery_needed = true;
>  		mq->recovery_req = req;
> -		spin_unlock_irqrestore(mq->lock, flags);
> +		spin_unlock_irqrestore(&mq->lock, flags);
>  		wake_up(&mq->wait);
>  		schedule_work(&mq->recovery_work);
>  		return;
> @@ -2142,7 +2141,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
>  	 * Wait while there is another request in progress, but not if recovery
>  	 * is needed. Also indicate whether there is a request waiting to start.
>  	 */
> -	spin_lock_irqsave(mq->lock, flags);
> +	spin_lock_irqsave(&mq->lock, flags);
>  	if (mq->recovery_needed) {
>  		*err = -EBUSY;
>  		done = true;
> @@ -2150,7 +2149,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
>  		done = !mq->rw_wait;
>  	}
>  	mq->waiting = !done;
> -	spin_unlock_irqrestore(mq->lock, flags);
> +	spin_unlock_irqrestore(&mq->lock, flags);
>  
>  	return done;
>  }
> @@ -2327,12 +2326,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>  		goto err_kfree;
>  	}
>  
> -	spin_lock_init(&md->lock);
>  	INIT_LIST_HEAD(&md->part);
>  	INIT_LIST_HEAD(&md->rpmbs);
>  	md->usage = 1;
>  
> -	ret = mmc_init_queue(&md->queue, card, &md->lock);
> +	ret = mmc_init_queue(&md->queue, card);
>  	if (ret)
>  		goto err_putdisk;
>  
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 4485cf12218c..35cc138b096d 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -89,9 +89,9 @@ void mmc_cqe_recovery_notifier(struct mmc_request *mrq)
>  	struct mmc_queue *mq = q->queuedata;
>  	unsigned long flags;
>  
> -	spin_lock_irqsave(mq->lock, flags);
> +	spin_lock_irqsave(&mq->lock, flags);
>  	__mmc_cqe_recovery_notifier(mq);
> -	spin_unlock_irqrestore(mq->lock, flags);
> +	spin_unlock_irqrestore(&mq->lock, flags);
>  }
>  
>  static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
> @@ -128,14 +128,14 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
>  	unsigned long flags;
>  	int ret;
>  
> -	spin_lock_irqsave(mq->lock, flags);
> +	spin_lock_irqsave(&mq->lock, flags);
>  
>  	if (mq->recovery_needed || !mq->use_cqe)
>  		ret = BLK_EH_RESET_TIMER;
>  	else
>  		ret = mmc_cqe_timed_out(req);
>  
> -	spin_unlock_irqrestore(mq->lock, flags);
> +	spin_unlock_irqrestore(&mq->lock, flags);
>  
>  	return ret;
>  }
> @@ -157,9 +157,9 @@ static void mmc_mq_recovery_handler(struct work_struct *work)
>  
>  	mq->in_recovery = false;
>  
> -	spin_lock_irq(mq->lock);
> +	spin_lock_irq(&mq->lock);
>  	mq->recovery_needed = false;
> -	spin_unlock_irq(mq->lock);
> +	spin_unlock_irq(&mq->lock);
>  
>  	mmc_put_card(mq->card, &mq->ctx);
>  
> @@ -258,10 +258,10 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
>  
>  	issue_type = mmc_issue_type(mq, req);
>  
> -	spin_lock_irq(mq->lock);
> +	spin_lock_irq(&mq->lock);
>  
>  	if (mq->recovery_needed || mq->busy) {
> -		spin_unlock_irq(mq->lock);
> +		spin_unlock_irq(&mq->lock);
>  		return BLK_STS_RESOURCE;
>  	}
>  
> @@ -269,7 +269,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	case MMC_ISSUE_DCMD:
>  		if (mmc_cqe_dcmd_busy(mq)) {
>  			mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
> -			spin_unlock_irq(mq->lock);
> +			spin_unlock_irq(&mq->lock);
>  			return BLK_STS_RESOURCE;
>  		}
>  		break;
> @@ -294,7 +294,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	get_card = (mmc_tot_in_flight(mq) == 1);
>  	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
>  
> -	spin_unlock_irq(mq->lock);
> +	spin_unlock_irq(&mq->lock);
>  
>  	if (!(req->rq_flags & RQF_DONTPREP)) {
>  		req_to_mmc_queue_req(req)->retries = 0;
> @@ -328,12 +328,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	if (issued != MMC_REQ_STARTED) {
>  		bool put_card = false;
>  
> -		spin_lock_irq(mq->lock);
> +		spin_lock_irq(&mq->lock);
>  		mq->in_flight[issue_type] -= 1;
>  		if (mmc_tot_in_flight(mq) == 0)
>  			put_card = true;
>  		mq->busy = false;
> -		spin_unlock_irq(mq->lock);
> +		spin_unlock_irq(&mq->lock);
>  		if (put_card)
>  			mmc_put_card(card, &mq->ctx);
>  	} else {
> @@ -385,19 +385,18 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
>   * mmc_init_queue - initialise a queue structure.
>   * @mq: mmc queue
>   * @card: mmc card to attach this queue
> - * @lock: queue lock
>   *
>   * Initialise a MMC card request queue.
>   */
> -int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> -		   spinlock_t *lock)
> +int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
>  {
>  	struct mmc_host *host = card->host;
>  	int ret;
>  
>  	mq->card = card;
> -	mq->lock = lock;
>  	mq->use_cqe = host->cqe_enabled;
> +
> +	spin_lock_init(&mq->lock);
>  
>  	memset(&mq->tag_set, 0, sizeof(mq->tag_set));
>  	mq->tag_set.ops = &mmc_mq_ops;
> diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
> index 5421f1542e71..fd11491ced9f 100644
> --- a/drivers/mmc/core/queue.h
> +++ b/drivers/mmc/core/queue.h
> @@ -73,11 +73,11 @@ struct mmc_queue_req {
>  
>  struct mmc_queue {
>  	struct mmc_card		*card;
> -	spinlock_t		*lock;
>  	struct mmc_ctx		ctx;
>  	struct blk_mq_tag_set	tag_set;
>  	struct mmc_blk_data	*blkdata;
>  	struct request_queue	*queue;
> +	spinlock_t		lock;
>  	int			in_flight[MMC_ISSUE_MAX];
>  	unsigned int		cqe_busy;
>  #define MMC_CQE_DCMD_BUSY	BIT(0)
> @@ -96,7 +96,7 @@ struct mmc_queue {
>  	struct work_struct	complete_work;
>  };
>  
> -extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *);
> +extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *);
>  extern void mmc_cleanup_queue(struct mmc_queue *);
>  extern void mmc_queue_suspend(struct mmc_queue *);
>  extern void mmc_queue_resume(struct mmc_queue *);
> -- 
> 2.19.1
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: queue_lock fixups
  2018-11-16  8:10 queue_lock fixups Christoph Hellwig
                   ` (5 preceding siblings ...)
  2018-11-16  8:10 ` [PATCH 6/6] mmc: stop abusing the request queue_lock pointer Christoph Hellwig
@ 2018-11-16 16:17 ` Jens Axboe
  6 siblings, 0 replies; 17+ messages in thread
From: Jens Axboe @ 2018-11-16 16:17 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Omar Sandoval, linux-block, linux-mmc

On 11/16/18 1:10 AM, Christoph Hellwig wrote:
> Hi Jens,
> 
> a few fixups for the queue_lock conversion, drop a few more bogus
> queue_lock uses in drivers, and clean up the mmc use of the queue_lock
> as suggested by Ulf.

Applied 1-5 for now, giving Ulf a chance to check out #6.

-- 
Jens Axboe



* Re: [PATCH 6/6] mmc: stop abusing the request queue_lock pointer
  2018-11-16  8:10 ` [PATCH 6/6] mmc: stop abusing the request queue_lock pointer Christoph Hellwig
  2018-11-16  8:51   ` Omar Sandoval
@ 2018-11-17 10:12   ` Ulf Hansson
  1 sibling, 0 replies; 17+ messages in thread
From: Ulf Hansson @ 2018-11-17 10:12 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Omar Sandoval, linux-block, linux-mmc@vger.kernel.org

On 16 November 2018 at 09:10, Christoph Hellwig <hch@lst.de> wrote:
> Replace the lock in mmc_blk_data that is only used through a pointer
> in struct mmc_queue and to protect fields in that structure with
> an actual lock in struct mmc_queue.
>
> Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good, thanks!

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

Kind regards
Uffe


