From: Alan Stern <stern@rowland.harvard.edu>
To: Tom Yan <tom.ty89@gmail.com>
Cc: linux-usb@vger.kernel.org, gregkh@linuxfoundation.org,
arnd@arndb.de, cyrozap@gmail.com,
Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Subject: Re: [PATCH] usb-storage: always set hw_max_sectors in slave_configure to avoid inappropriate clamping
Date: Tue, 1 Sep 2020 10:55:35 -0400 [thread overview]
Message-ID: <20200901145535.GC587030@rowland.harvard.edu> (raw)
In-Reply-To: <20200901055417.1732-1-tom.ty89@gmail.com>
Patch submissions should have text lines limited to fewer than 80
columns.
On Tue, Sep 01, 2020 at 01:54:17PM +0800, Tom Yan wrote:
> When the scsi request queue is initialized/allocated, the scsi driver
> clamps hw_max_sectors against the dma max mapping size of
> sdev->host->dma_dev. The clamping is apparently inappropriate for
> USB drives.
Wouldn't it be more accurate to say that the clamping _is_ appropriate,
but it should be performed using the sysdev device rather than the
nominal parent? Thus the error lies in allowing shost->dma_dev to be
set incorrectly.
> Either way we are calling blk_queue_max_hw_sectors() in the usb
> drivers for some (but not all) cases, which causes the clamping to be
> overridden (inconsistently) anyway.
>
> Therefore the usb driver should always set hw_max_sectors and do the
> clamping against the right device itself.
How about fixing the dma_dev assignment instead?
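Something like this in usb_stor_probe2() (untested sketch;
scsi_add_host(shost, dev) is just scsi_add_host_with_dma(shost, dev,
dev), so only the third argument changes):

-	result = scsi_add_host(us_to_host(us), dev);
+	result = scsi_add_host_with_dma(us_to_host(us), dev,
+					us->pusb_dev->bus->sysdev);

Then the existing clamp in __scsi_init_queue() would use the right
device, and none of the block-layer calls would need to be touched.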
Alan Stern
> Signed-off-by: Tom Yan <tom.ty89@gmail.com>
> ---
> drivers/usb/storage/scsiglue.c | 37 ++++++++++++++++------------------
> drivers/usb/storage/uas.c      | 23 ++++++++++++++++-----
> 2 files changed, 35 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
> index e5a971b83e3f..804cbc0ba4da 100644
> --- a/drivers/usb/storage/scsiglue.c
> +++ b/drivers/usb/storage/scsiglue.c
> @@ -120,6 +120,23 @@ static int slave_configure(struct scsi_device *sdev)
> * better throughput on most devices.
> */
> blk_queue_max_hw_sectors(sdev->request_queue, 2048);
> + } else {
> + /*
> + * Some devices are known to choke with anything larger. It seems like
> + * the problem stems from the fact that original IDE controllers had
> + * only an 8-bit register to hold the number of sectors in one transfer
> + * and even those couldn't handle a full 256 sectors.
> + *
> + * Because we want to make sure we interoperate with as many devices as
> + * possible, we will maintain a 240 sector transfer size limit for USB
> + * Mass Storage devices.
> + *
> + * Tests show that other operating systems have similar limits, with
> + * Microsoft Windows 7 limiting transfers to 128 sectors for both USB2
> + * and USB3 and Apple Mac OS X 10.11 limiting transfers to 256 sectors
> + * for USB2 and 2048 for USB3 devices.
> + */
> + blk_queue_max_hw_sectors(sdev->request_queue, 240);
> }
>
> /*
> @@ -626,26 +643,6 @@ static const struct scsi_host_template usb_stor_host_template = {
> /* lots of sg segments can be handled */
> .sg_tablesize = SG_MAX_SEGMENTS,
>
> -
> - /*
> - * Limit the total size of a transfer to 120 KB.
> - *
> - * Some devices are known to choke with anything larger. It seems like
> - * the problem stems from the fact that original IDE controllers had
> - * only an 8-bit register to hold the number of sectors in one transfer
> - * and even those couldn't handle a full 256 sectors.
> - *
> - * Because we want to make sure we interoperate with as many devices as
> - * possible, we will maintain a 240 sector transfer size limit for USB
> - * Mass Storage devices.
> - *
> - * Tests show that other operating systems have similar limits, with
> - * Microsoft Windows 7 limiting transfers to 128 sectors for both USB2
> - * and USB3 and Apple Mac OS X 10.11 limiting transfers to 256 sectors
> - * for USB2 and 2048 for USB3 devices.
> - */
> - .max_sectors = 240,
> -
> /* emulated HBA */
> .emulated = 1,
>
> diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
> index 08f9296431e9..cffa435afd84 100644
> --- a/drivers/usb/storage/uas.c
> +++ b/drivers/usb/storage/uas.c
> @@ -827,11 +827,6 @@ static int uas_slave_alloc(struct scsi_device *sdev)
> */
> blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
>
> - if (devinfo->flags & US_FL_MAX_SECTORS_64)
> - blk_queue_max_hw_sectors(sdev->request_queue, 64);
> - else if (devinfo->flags & US_FL_MAX_SECTORS_240)
> - blk_queue_max_hw_sectors(sdev->request_queue, 240);
> -
> return 0;
> }
>
> @@ -839,6 +834,24 @@ static int uas_slave_configure(struct scsi_device *sdev)
> {
> struct uas_dev_info *devinfo = sdev->hostdata;
>
> + struct us_data *us = host_to_us(sdev->host);
> + struct device *dev = us->pusb_dev->bus->sysdev;
> +
> + if (devinfo->flags & US_FL_MAX_SECTORS_64)
> + blk_queue_max_hw_sectors(sdev->request_queue, 64);
> + else if (devinfo->flags & US_FL_MAX_SECTORS_240)
> + blk_queue_max_hw_sectors(sdev->request_queue, 240);
> + else
> + blk_queue_max_hw_sectors(sdev->request_queue, SCSI_DEFAULT_MAX_SECTORS);
> +
> + /*
> + * The max_hw_sectors should be up to the maximum size of a mapping for
> + * the device. Otherwise, the DMA API might fail in a swiotlb environment.
> + */
> + blk_queue_max_hw_sectors(sdev->request_queue,
> + min_t(size_t, queue_max_hw_sectors(sdev->request_queue),
> + dma_max_mapping_size(dev) >> SECTOR_SHIFT));
> +
> if (devinfo->flags & US_FL_NO_REPORT_OPCODES)
> sdev->no_report_opcodes = 1;
>
> --
> 2.28.0
>
Thread overview: 24+ messages
2020-09-01 5:54 [PATCH] usb-storage: always set hw_max_sectors in slave_configure to avoid inappropriate clamping Tom Yan
2020-09-01 14:55 ` Alan Stern [this message]
2020-09-01 23:24 ` Tom Yan
2020-09-01 23:44 ` Tom Yan
2020-09-02 15:24 ` Alan Stern
2020-09-02 0:09 ` [PATCH v2 1/2] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Tom Yan
2020-09-02 0:09 ` [PATCH v2 2/2] usb-storage: always set hw_max_sectors in slave_configure to avoid inappropriate clamping Tom Yan
2020-09-02 0:20 ` Tom Yan
2020-09-02 15:30 ` [PATCH v2 1/2] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Alan Stern
2020-09-03 6:57 ` Tom Yan
2020-09-03 7:56 ` [PATCH v3 1/2] usb-storage: fix sdev->host->dma_dev Tom Yan
2020-09-03 7:56 ` [PATCH v3 2/2] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Tom Yan
2020-09-03 8:20 ` [PATCH v3 1/2] usb-storage: fix sdev->host->dma_dev Greg KH
2020-09-03 8:28 ` Tom Yan
2020-09-03 8:34 ` Greg KH
2020-09-03 8:46 ` [PATCH v4 " Tom Yan
2020-09-03 8:46 ` [PATCH v4 2/2] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Tom Yan
2020-09-03 8:50 ` [PATCH v5 1/2] usb-storage: fix sdev->host->dma_dev Tom Yan
2020-09-03 8:50 ` [PATCH v5 2/2] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Tom Yan
2020-09-03 15:54 ` [PATCH v5 1/2] usb-storage: fix sdev->host->dma_dev Alan Stern
2020-09-03 18:17 ` [PATCH v6 1/3] " Tom Yan
2020-09-03 18:17 ` [PATCH v6 2/3] uas: " Tom Yan
2020-09-03 18:17 ` [PATCH v6 3/3] uas: bump hw_max_sectors to 2048 blocks for SS or faster drives Tom Yan
2020-09-03 18:31 ` [PATCH v6 1/3] usb-storage: fix sdev->host->dma_dev Alan Stern