public inbox for linux-kernel@vger.kernel.org
* remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value
@ 2024-07-23  0:05 Christoph Hellwig
  2024-07-23  0:05 ` [PATCH 1/3] dma-mapping: don't return errors from dma_set_min_align_mask Christoph Hellwig
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-07-23  0:05 UTC (permalink / raw)
  To: Robin Murphy, Marek Szyprowski; +Cc: iommu, linux-kernel

Hi all,

the above three functions can only return errors if the bus code failed
to allocate the dma_parms structure, which is a grave error that won't
get us far.  Thus remove the pointless return values, which so far have
fortunately been mostly ignored, but which the cleanup brigade now wants
to check for, for no good reason.

I'd love to get this in after -rc1 so that we can catch any of those
cleanups that might be queued up for this merge window.
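The driver-side effect of the series can be sketched with a small userspace mock (the struct layout and names below only mirror the kernel API for illustration; this is not kernel code, and the fprintf stands in for a WARN_ON_ONCE() splat):

```c
#include <assert.h>
#include <stdio.h>

/* Minimal stand-ins for the kernel structures involved. */
struct device_dma_parameters {
	unsigned int max_segment_size;
};

struct device {
	struct device_dma_parameters *dma_parms;
};

/* After the series: a void setter that warns instead of returning -EIO. */
void dma_set_max_seg_size(struct device *dev, unsigned int size)
{
	if (!dev->dma_parms) {		/* WARN_ON_ONCE() in the kernel */
		fprintf(stderr, "missing dma_parms: bus bug\n");
		return;
	}
	dev->dma_parms->max_segment_size = size;
}

/* A driver probe path no longer needs to check a return value. */
unsigned int probe_example(struct device *dev)
{
	dma_set_max_seg_size(dev, 0xffffffffu /* UINT_MAX */);
	return dev->dma_parms ? dev->dma_parms->max_segment_size : 0;
}
```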


* [PATCH 1/3] dma-mapping: don't return errors from dma_set_min_align_mask
  2024-07-23  0:05 remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Christoph Hellwig
@ 2024-07-23  0:05 ` Christoph Hellwig
  2024-07-23  0:05 ` [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary Christoph Hellwig
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-07-23  0:05 UTC (permalink / raw)
  To: Robin Murphy, Marek Szyprowski; +Cc: iommu, linux-kernel

If dev->dma_parms is not allocated, that indicates a grave bug in the
implementation of a DMA-capable bus.  There isn't much the driver can
do in terms of error handling, so just warn and continue as DMA
operations will fail anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-mapping.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f693aafe221f2c..cfd6bafec3f944 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -575,13 +575,12 @@ static inline unsigned int dma_get_min_align_mask(struct device *dev)
 	return 0;
 }
 
-static inline int dma_set_min_align_mask(struct device *dev,
+static inline void dma_set_min_align_mask(struct device *dev,
 		unsigned int min_align_mask)
 {
 	if (WARN_ON_ONCE(!dev->dma_parms))
-		return -EIO;
+		return;
 	dev->dma_parms->min_align_mask = min_align_mask;
-	return 0;
 }
 
 #ifndef dma_get_cache_alignment
-- 
2.43.0



* [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary
  2024-07-23  0:05 remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Christoph Hellwig
  2024-07-23  0:05 ` [PATCH 1/3] dma-mapping: don't return errors from dma_set_min_align_mask Christoph Hellwig
@ 2024-07-23  0:05 ` Christoph Hellwig
  2024-08-01 12:00   ` Marek Szyprowski
  2024-07-23  0:05 ` [PATCH 3/3] dma-mapping: don't return errors from dma_set_max_seg_size Christoph Hellwig
  2024-07-30 13:13 ` remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Robin Murphy
  3 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2024-07-23  0:05 UTC (permalink / raw)
  To: Robin Murphy, Marek Szyprowski; +Cc: iommu, linux-kernel

If dev->dma_parms is not allocated, that indicates a grave bug in the
implementation of a DMA-capable bus.  There isn't much the driver can
do in terms of error handling, so just warn and continue as DMA
operations will fail anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-mapping.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index cfd6bafec3f944..6bd1333dbacb9b 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -559,13 +559,11 @@ static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
 	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
 }
 
-static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
+static inline void dma_set_seg_boundary(struct device *dev, unsigned long mask)
 {
-	if (dev->dma_parms) {
-		dev->dma_parms->segment_boundary_mask = mask;
-		return 0;
-	}
-	return -EIO;
+	if (WARN_ON_ONCE(!dev->dma_parms))
+		return;
+	dev->dma_parms->segment_boundary_mask = mask;
 }
 
 static inline unsigned int dma_get_min_align_mask(struct device *dev)
-- 
2.43.0



* [PATCH 3/3] dma-mapping: don't return errors from dma_set_max_seg_size
  2024-07-23  0:05 remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Christoph Hellwig
  2024-07-23  0:05 ` [PATCH 1/3] dma-mapping: don't return errors from dma_set_min_align_mask Christoph Hellwig
  2024-07-23  0:05 ` [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary Christoph Hellwig
@ 2024-07-23  0:05 ` Christoph Hellwig
  2024-07-30 13:13 ` remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Robin Murphy
  3 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-07-23  0:05 UTC (permalink / raw)
  To: Robin Murphy, Marek Szyprowski; +Cc: iommu, linux-kernel

If dev->dma_parms is not allocated, that indicates a grave bug in the
implementation of a DMA-capable bus.  There isn't much the driver can
do in terms of error handling, so just warn and continue as DMA
operations will fail anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/accel/qaic/qaic_drv.c                         |  4 +---
 drivers/dma/idma64.c                                  |  4 +---
 drivers/dma/pl330.c                                   |  5 +----
 drivers/dma/qcom/bam_dma.c                            |  6 +-----
 drivers/dma/sh/rcar-dmac.c                            |  4 +---
 drivers/dma/ste_dma40.c                               |  6 +-----
 drivers/gpu/drm/mediatek/mtk_drm_drv.c                |  6 +-----
 drivers/media/common/videobuf2/videobuf2-dma-contig.c |  3 +--
 drivers/media/pci/intel/ipu6/ipu6.c                   |  4 +---
 drivers/mmc/host/mmci_stm32_sdmmc.c                   |  3 ++-
 drivers/net/ethernet/microsoft/mana/gdma_main.c       |  6 +-----
 drivers/scsi/lpfc/lpfc_init.c                         |  7 +------
 include/linux/dma-mapping.h                           | 10 ++++------
 13 files changed, 17 insertions(+), 51 deletions(-)

diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c
index 580b29ed190217..bf10156c334e71 100644
--- a/drivers/accel/qaic/qaic_drv.c
+++ b/drivers/accel/qaic/qaic_drv.c
@@ -447,9 +447,7 @@ static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev)
 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
 	if (ret)
 		return ret;
-	ret = dma_set_max_seg_size(&pdev->dev, UINT_MAX);
-	if (ret)
-		return ret;
+	dma_set_max_seg_size(&pdev->dev, UINT_MAX);
 
 	qdev->bar_0 = devm_ioremap_resource(&pdev->dev, &pdev->resource[0]);
 	if (IS_ERR(qdev->bar_0))
diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
index e3505e56784b1a..1398814d8fbb63 100644
--- a/drivers/dma/idma64.c
+++ b/drivers/dma/idma64.c
@@ -598,9 +598,7 @@ static int idma64_probe(struct idma64_chip *chip)
 
 	idma64->dma.dev = chip->sysdev;
 
-	ret = dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
-	if (ret)
-		return ret;
+	dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
 
 	ret = dma_async_device_register(&idma64->dma);
 	if (ret)
diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index 60c4de8dac1d2a..82a9fe88ad54c9 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -3163,10 +3163,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	 * This is the limit for transfers with a buswidth of 1, larger
 	 * buswidths will have larger limits.
 	 */
-	ret = dma_set_max_seg_size(&adev->dev, 1900800);
-	if (ret)
-		dev_err(&adev->dev, "unable to set the seg size\n");
-
+	dma_set_max_seg_size(&adev->dev, 1900800);
 
 	init_pl330_debugfs(pl330);
 	dev_info(&adev->dev,
diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 5e7d332731e0c1..368ffaa4003789 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -1325,11 +1325,7 @@ static int bam_dma_probe(struct platform_device *pdev)
 
 	/* set max dma segment size */
 	bdev->common.dev = bdev->dev;
-	ret = dma_set_max_seg_size(bdev->common.dev, BAM_FIFO_SIZE);
-	if (ret) {
-		dev_err(bdev->dev, "cannot set maximum segment size\n");
-		goto err_bam_channel_exit;
-	}
+	dma_set_max_seg_size(bdev->common.dev, BAM_FIFO_SIZE);
 
 	platform_set_drvdata(pdev, bdev);
 
diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
index 40482cb73d798a..1094a2f821649c 100644
--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -1868,9 +1868,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 
 	dmac->dev = &pdev->dev;
 	platform_set_drvdata(pdev, dmac);
-	ret = dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
-	if (ret)
-		return ret;
+	dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
 
 	ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
 	if (ret)
diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
index 2c489299148eee..d52e1685aed53f 100644
--- a/drivers/dma/ste_dma40.c
+++ b/drivers/dma/ste_dma40.c
@@ -3632,11 +3632,7 @@ static int __init d40_probe(struct platform_device *pdev)
 	if (ret)
 		goto destroy_cache;
 
-	ret = dma_set_max_seg_size(base->dev, STEDMA40_MAX_SEG_SIZE);
-	if (ret) {
-		d40_err(dev, "Failed to set dma max seg size\n");
-		goto destroy_cache;
-	}
+	dma_set_max_seg_size(base->dev, STEDMA40_MAX_SEG_SIZE);
 
 	d40_hw_init(base);
 
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index ae5c6ec24a1e61..91f9cc60c6716a 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -559,11 +559,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
 	 * Configure the DMA segment size to make sure we get contiguous IOVA
 	 * when importing PRIME buffers.
 	 */
-	ret = dma_set_max_seg_size(dma_dev, UINT_MAX);
-	if (ret) {
-		dev_err(dma_dev, "Failed to set DMA segment size\n");
-		goto err_component_unbind;
-	}
+	dma_set_max_seg_size(dma_dev, UINT_MAX);
 
 	ret = drm_vblank_init(drm, MAX_CRTC);
 	if (ret < 0)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 3d4fd4ef53107c..bb0b7fa67b539a 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -854,8 +854,7 @@ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size)
 		return -ENODEV;
 	}
 	if (dma_get_max_seg_size(dev) < size)
-		return dma_set_max_seg_size(dev, size);
-
+		dma_set_max_seg_size(dev, size);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(vb2_dma_contig_set_max_seg_size);
diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
index bbd646378ab3ed..83e70c692d957f 100644
--- a/drivers/media/pci/intel/ipu6/ipu6.c
+++ b/drivers/media/pci/intel/ipu6/ipu6.c
@@ -576,9 +576,7 @@ static int ipu6_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (ret)
 		return dev_err_probe(dev, ret, "Failed to set DMA mask\n");
 
-	ret = dma_set_max_seg_size(dev, UINT_MAX);
-	if (ret)
-		return dev_err_probe(dev, ret, "Failed to set max_seg_size\n");
+	dma_set_max_seg_size(dev, UINT_MAX);
 
 	ret = ipu6_pci_config_setup(pdev, isp->hw_ver);
 	if (ret)
diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
index f5da7f9baa52d4..9dc51859c2e51e 100644
--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
+++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
@@ -213,7 +213,8 @@ static int sdmmc_idma_setup(struct mmci_host *host)
 		host->mmc->max_seg_size = host->mmc->max_req_size;
 	}
 
-	return dma_set_max_seg_size(dev, host->mmc->max_seg_size);
+	dma_set_max_seg_size(dev, host->mmc->max_seg_size);
+	return 0;
 }
 
 static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index ddb8f68d80a206..ca4ed58f1206dd 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1496,11 +1496,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (err)
 		goto release_region;
 
-	err = dma_set_max_seg_size(&pdev->dev, UINT_MAX);
-	if (err) {
-		dev_err(&pdev->dev, "Failed to set dma device segment size\n");
-		goto release_region;
-	}
+	dma_set_max_seg_size(&pdev->dev, UINT_MAX);
 
 	err = -ENOMEM;
 	gc = vzalloc(sizeof(*gc));
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index e1dfa96c2a553a..50620918becd59 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -13861,12 +13861,7 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
 	if (sli4_params->sge_supp_len > LPFC_MAX_SGE_SIZE)
 		sli4_params->sge_supp_len = LPFC_MAX_SGE_SIZE;
 
-	rc = dma_set_max_seg_size(&phba->pcidev->dev, sli4_params->sge_supp_len);
-	if (unlikely(rc)) {
-		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-				"6400 Can't set dma maximum segment size\n");
-		return rc;
-	}
+	dma_set_max_seg_size(&phba->pcidev->dev, sli4_params->sge_supp_len);
 
 	/*
 	 * Check whether the adapter supports an embedded copy of the
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 6bd1333dbacb9b..1524da363734af 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -524,13 +524,11 @@ static inline unsigned int dma_get_max_seg_size(struct device *dev)
 	return SZ_64K;
 }
 
-static inline int dma_set_max_seg_size(struct device *dev, unsigned int size)
+static inline void dma_set_max_seg_size(struct device *dev, unsigned int size)
 {
-	if (dev->dma_parms) {
-		dev->dma_parms->max_segment_size = size;
-		return 0;
-	}
-	return -EIO;
+	if (WARN_ON_ONCE(!dev->dma_parms))
+		return;
+	dev->dma_parms->max_segment_size = size;
 }
 
 static inline unsigned long dma_get_seg_boundary(struct device *dev)
-- 
2.43.0



* Re: remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value
  2024-07-23  0:05 remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Christoph Hellwig
                   ` (2 preceding siblings ...)
  2024-07-23  0:05 ` [PATCH 3/3] dma-mapping: don't return errors from dma_set_max_seg_size Christoph Hellwig
@ 2024-07-30 13:13 ` Robin Murphy
  2024-07-30 15:20   ` Christoph Hellwig
  3 siblings, 1 reply; 10+ messages in thread
From: Robin Murphy @ 2024-07-30 13:13 UTC (permalink / raw)
  To: Christoph Hellwig, Marek Szyprowski; +Cc: iommu, linux-kernel

On 23/07/2024 1:05 am, Christoph Hellwig wrote:
> Hi all,
> 
> the above three functions can only return errors if the bus code failed
> to allocate the dma_parms structure, which is a grave error that won't
> get us far.  Thus remove the pointless return values, which so far have
> fortunately been mostly ignored, but which the cleanup brigade now wants
> to check for, for no good reason.
> 
> I'd love to get this in after -rc1 so that we can catch any of those
> cleanups that might be queued up for this merge window.

Indeed, I think this is long overdue ever since the responsibility for 
dma_parms was moved from these callers into bus code. For the series,

Reviewed-by: Robin Murphy <robin.murphy@arm.com>


* Re: remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value
  2024-07-30 13:13 ` remove the dma_set_{max_seg_size,seg_boundary,min_align_mask} return value Robin Murphy
@ 2024-07-30 15:20   ` Christoph Hellwig
  0 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-07-30 15:20 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Christoph Hellwig, Marek Szyprowski, iommu, linux-kernel

I've picked up the first two and plan to send them to Linus before -rc2.
For dma_set_max_seg_size I got a bit of cold feet and plan to send it
out again with patches for every driver touched and queue it up for
6.12 after -rc2.



* Re: [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary
  2024-07-23  0:05 ` [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary Christoph Hellwig
@ 2024-08-01 12:00   ` Marek Szyprowski
  2024-08-01 12:36     ` Robin Murphy
  0 siblings, 1 reply; 10+ messages in thread
From: Marek Szyprowski @ 2024-08-01 12:00 UTC (permalink / raw)
  To: Christoph Hellwig, Robin Murphy; +Cc: iommu, linux-kernel

On 23.07.2024 02:05, Christoph Hellwig wrote:
> If dev->dma_parms is not allocated, that indicates a grave bug in the
> implementation of a DMA-capable bus.  There isn't much the driver can
> do in terms of error handling, so just warn and continue as DMA
> operations will fail anyway.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

What about devices on the platform or USB bus, and subsystems calling this
unconditionally, like scsi_init_limits()? With today's linux-next I got
a bunch of warnings from this call for various USB storage devices.
Before this patch, the errors from dma_set_seg_boundary() were silently
ignored.
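The behavioural difference being reported here can be mocked up in userspace as follows (illustrative stand-ins only, not the kernel implementation; warn_count takes the place of a WARN_ON_ONCE() splat in dmesg):

```c
#include <assert.h>
#include <stddef.h>

struct device_dma_parameters { unsigned long segment_boundary_mask; };
struct device { struct device_dma_parameters *dma_parms; };

int warn_count;	/* counts WARN_ON_ONCE()-style splats */

/* Before the patch: failure was reported via -EIO, which callers
 * (including the unconditional ones) could and did simply ignore. */
int dma_set_seg_boundary_old(struct device *dev, unsigned long mask)
{
	if (dev->dma_parms) {
		dev->dma_parms->segment_boundary_mask = mask;
		return 0;
	}
	return -5;	/* -EIO */
}

/* After the patch: the same no-op on a missing dma_parms, but it now
 * warns, so an unconditional call from a subsystem becomes visible in
 * the log for every device lacking dma_parms. */
void dma_set_seg_boundary_new(struct device *dev, unsigned long mask)
{
	if (!dev->dma_parms) {
		warn_count++;
		return;
	}
	dev->dma_parms->segment_boundary_mask = mask;
}
```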

> ---
>   include/linux/dma-mapping.h | 10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index cfd6bafec3f944..6bd1333dbacb9b 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -559,13 +559,11 @@ static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
>   	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
>   }
>   
> -static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
> +static inline void dma_set_seg_boundary(struct device *dev, unsigned long mask)
>   {
> -	if (dev->dma_parms) {
> -		dev->dma_parms->segment_boundary_mask = mask;
> -		return 0;
> -	}
> -	return -EIO;
> +	if (WARN_ON_ONCE(!dev->dma_parms))
> +		return;
> +	dev->dma_parms->segment_boundary_mask = mask;
>   }
>   
>   static inline unsigned int dma_get_min_align_mask(struct device *dev)

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



* Re: [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary
  2024-08-01 12:00   ` Marek Szyprowski
@ 2024-08-01 12:36     ` Robin Murphy
  2024-08-01 13:33       ` Christoph Hellwig
  0 siblings, 1 reply; 10+ messages in thread
From: Robin Murphy @ 2024-08-01 12:36 UTC (permalink / raw)
  To: Marek Szyprowski, Christoph Hellwig; +Cc: iommu, linux-kernel

On 01/08/2024 1:00 pm, Marek Szyprowski wrote:
> On 23.07.2024 02:05, Christoph Hellwig wrote:
>> If dev->dma_parms is not allocated, that indicates a grave bug in the
>> implementation of a DMA-capable bus.  There isn't much the driver can
>> do in terms of error handling, so just warn and continue as DMA
>> operations will fail anyway.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> What about devices on platform or usb bus and subsystems calling this
> unconditionally, like scsi_init_limits()? With today's linux-next I got
> a bunch of warnings from this call for various USB storage devices.
> Before this patch, the errors from dma_set_seg_boundary() were silently
> ignored.

Maybe we could suppress the warning if the value being set is equal to 
the default, but the real point is that DMA-capable buses should be 
providing dma_parms, while non-DMA-capable devices have no business 
being here at all - if the caller isn't going to get the 
restriction/relaxation they requested, and that's not a problem, then 
why are they requesting it in the first place?
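The warning-suppression idea could look roughly like this (a hedged sketch only, assuming the default boundary is DMA_BIT_MASK(32), which is what dma_get_seg_boundary() falls back to when dma_parms is absent; the names here are illustrative, not a proposed patch):

```c
#include <assert.h>
#include <stddef.h>

#define DEFAULT_SEG_BOUNDARY 0xffffffffUL	/* DMA_BIT_MASK(32) */

struct device_dma_parameters { unsigned long segment_boundary_mask; };
struct device { struct device_dma_parameters *dma_parms; };

int warn_count;	/* stands in for WARN_ON_ONCE() firing */

void dma_set_seg_boundary_sketch(struct device *dev, unsigned long mask)
{
	if (!dev->dma_parms) {
		/* Only warn when the caller asked for something other
		 * than what it would get by default anyway. */
		if (mask != DEFAULT_SEG_BOUNDARY)
			warn_count++;
		return;
	}
	dev->dma_parms->segment_boundary_mask = mask;
}
```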

I guess I assumed that the old block layer bodges in this area had been 
cleaned up already - perhaps it *is* high time for whatever's left to 
grow a proper understanding of whether a block device actually does its 
own DMA or not.

Thanks,
Robin.

>> ---
>>    include/linux/dma-mapping.h | 10 ++++------
>>    1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
>> index cfd6bafec3f944..6bd1333dbacb9b 100644
>> --- a/include/linux/dma-mapping.h
>> +++ b/include/linux/dma-mapping.h
>> @@ -559,13 +559,11 @@ static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
>>    	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
>>    }
>>    
>> -static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
>> +static inline void dma_set_seg_boundary(struct device *dev, unsigned long mask)
>>    {
>> -	if (dev->dma_parms) {
>> -		dev->dma_parms->segment_boundary_mask = mask;
>> -		return 0;
>> -	}
>> -	return -EIO;
>> +	if (WARN_ON_ONCE(!dev->dma_parms))
>> +		return;
>> +	dev->dma_parms->segment_boundary_mask = mask;
>>    }
>>    
>>    static inline unsigned int dma_get_min_align_mask(struct device *dev)
> 
> Best regards


* Re: [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary
  2024-08-01 12:36     ` Robin Murphy
@ 2024-08-01 13:33       ` Christoph Hellwig
  2024-08-02 11:53         ` Robin Murphy
  0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2024-08-01 13:33 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Marek Szyprowski, Christoph Hellwig, iommu, linux-kernel

On Thu, Aug 01, 2024 at 01:36:01PM +0100, Robin Murphy wrote:
> I guess I assumed that the old block layer bodges in this area had been 
> cleaned up already - perhaps it *is* high time for whatever's left to grow 
> a proper understanding of whether a block device actually does its own DMA 
> or not.

Can you point to a discussion on that?  The concepts here don't quite
make sense to me.



* Re: [PATCH 2/3] dma-mapping: don't return errors from dma_set_seg_boundary
  2024-08-01 13:33       ` Christoph Hellwig
@ 2024-08-02 11:53         ` Robin Murphy
  0 siblings, 0 replies; 10+ messages in thread
From: Robin Murphy @ 2024-08-02 11:53 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Marek Szyprowski, iommu, linux-kernel

On 01/08/2024 2:33 pm, Christoph Hellwig wrote:
> On Thu, Aug 01, 2024 at 01:36:01PM +0100, Robin Murphy wrote:
>> I guess I assumed that the old block layer bodges in this area had been
>> cleaned up already - perhaps it *is* high time for whatever's left to grow
>> a proper understanding of whether a block device actually does its own DMA
>> or not.
> 
> Can you point to a discussion on that?  The concepts here don't make
> quite sense to me.

No specific discussion, I just have a vague memory of hacks in the USB 
layer when I first started looking at this stuff 10 years ago... From a 
bit of digging to try to remind myself, I'm fairly sure that must have 
been the block bounce related stuff, which you cleaned up in 
6eb0233ec2d0 ("usb: don't inherity DMA properties for USB devices") a 
while back.

Cheers,
Robin.

