* [PATCH 1/3 for-next] pvscsi: Use coherent memory instead of dma mapping sg lists
@ 2020-08-12 20:54 Jim Gill
2020-08-12 20:59 ` James Bottomley
2020-08-13 7:28 ` Christoph Hellwig
0 siblings, 2 replies; 3+ messages in thread
From: Jim Gill @ 2020-08-12 20:54 UTC (permalink / raw)
To: pv-drivers; +Cc: jgill, jejb, martin.petersen, linux-scsi
Use coherent memory instead of dma mapping sg lists each
time they are used. This becomes important with SEV/swiotlb where
dma mapping otherwise implies bouncing of the data. It also gets rid
of a point of potential failure.
Tested using a "bonnie++" run on an 8GB pvscsi disk on a swiotlb=force
booted kernel.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
[jgill@vmware.com: forwarding patch on behalf of thellstrom]
Acked-by: jgill@vmware.com
---
drivers/scsi/vmw_pvscsi.c | 48 +++++++++++++++++++++++------------------------
1 file changed, 23 insertions(+), 25 deletions(-)
diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
index 8dbb4db..0573e94 100644
--- a/drivers/scsi/vmw_pvscsi.c
+++ b/drivers/scsi/vmw_pvscsi.c
@@ -27,6 +27,7 @@
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <linux/pci.h>
+#include <linux/dmapool.h>
#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
@@ -98,6 +99,8 @@ struct pvscsi_adapter {
struct list_head cmd_pool;
struct pvscsi_ctx *cmd_map;
+
+ struct dma_pool *sg_pool;
};
@@ -372,15 +375,6 @@ static int pvscsi_map_buffers(struct pvscsi_adapter *adapter,
pvscsi_create_sg(ctx, sg, segs);
e->flags |= PVSCSI_FLAG_CMD_WITH_SG_LIST;
- ctx->sglPA = dma_map_single(&adapter->dev->dev,
- ctx->sgl, SGL_SIZE, DMA_TO_DEVICE);
- if (dma_mapping_error(&adapter->dev->dev, ctx->sglPA)) {
- scmd_printk(KERN_ERR, cmd,
- "vmw_pvscsi: Failed to map ctx sglist for DMA.\n");
- scsi_dma_unmap(cmd);
- ctx->sglPA = 0;
- return -ENOMEM;
- }
e->dataAddr = ctx->sglPA;
} else
e->dataAddr = sg_dma_address(sg);
@@ -425,14 +419,9 @@ static void pvscsi_unmap_buffers(const struct pvscsi_adapter *adapter,
if (bufflen != 0) {
unsigned count = scsi_sg_count(cmd);
- if (count != 0) {
+ if (count != 0)
scsi_dma_unmap(cmd);
- if (ctx->sglPA) {
- dma_unmap_single(&adapter->dev->dev, ctx->sglPA,
- SGL_SIZE, DMA_TO_DEVICE);
- ctx->sglPA = 0;
- }
- } else
+ else
dma_unmap_single(&adapter->dev->dev, ctx->dataPA,
bufflen, cmd->sc_data_direction);
}
@@ -1206,7 +1195,9 @@ static void pvscsi_free_sgls(const struct pvscsi_adapter *adapter)
unsigned i;
for (i = 0; i < adapter->req_depth; ++i, ++ctx)
- free_pages((unsigned long)ctx->sgl, get_order(SGL_SIZE));
+ dma_pool_free(adapter->sg_pool, ctx->sgl, ctx->sglPA);
+
+ dma_pool_destroy(adapter->sg_pool);
}
static void pvscsi_shutdown_intr(struct pvscsi_adapter *adapter)
@@ -1225,10 +1216,11 @@ static void pvscsi_release_resources(struct pvscsi_adapter *adapter)
pci_release_regions(adapter->dev);
- if (adapter->cmd_map) {
+ if (adapter->sg_pool)
pvscsi_free_sgls(adapter);
+
+ if (adapter->cmd_map)
kfree(adapter->cmd_map);
- }
if (adapter->rings_state)
dma_free_coherent(&adapter->dev->dev, PAGE_SIZE,
@@ -1268,20 +1260,26 @@ static int pvscsi_allocate_sg(struct pvscsi_adapter *adapter)
struct pvscsi_ctx *ctx;
int i;
+ /* Use a dma pool so that we can impose alignment constraints. */
+ adapter->sg_pool = dma_pool_create("pvscsi_sg", pvscsi_dev(adapter),
+ SGL_SIZE, PAGE_SIZE, 0);
+ if (!adapter->sg_pool)
+ return -ENOMEM;
+
ctx = adapter->cmd_map;
BUILD_BUG_ON(sizeof(struct pvscsi_sg_list) > SGL_SIZE);
for (i = 0; i < adapter->req_depth; ++i, ++ctx) {
- ctx->sgl = (void *)__get_free_pages(GFP_KERNEL,
- get_order(SGL_SIZE));
- ctx->sglPA = 0;
- BUG_ON(!IS_ALIGNED(((unsigned long)ctx->sgl), PAGE_SIZE));
+ ctx->sgl = dma_pool_alloc(adapter->sg_pool, GFP_KERNEL,
+ &ctx->sglPA);
if (!ctx->sgl) {
for (; i >= 0; --i, --ctx) {
- free_pages((unsigned long)ctx->sgl,
- get_order(SGL_SIZE));
+ dma_pool_free(adapter->sg_pool, ctx->sgl,
+ ctx->sglPA);
ctx->sgl = NULL;
}
+ dma_pool_destroy(adapter->sg_pool);
+ adapter->sg_pool = NULL;
return -ENOMEM;
}
}
--
2.7.4
* Re: [PATCH 1/3 for-next] pvscsi: Use coherent memory instead of dma mapping sg lists
2020-08-12 20:54 [PATCH 1/3 for-next] pvscsi: Use coherent memory instead of dma mapping sg lists Jim Gill
@ 2020-08-12 20:59 ` James Bottomley
2020-08-13 7:28 ` Christoph Hellwig
1 sibling, 0 replies; 3+ messages in thread
From: James Bottomley @ 2020-08-12 20:59 UTC (permalink / raw)
To: Jim Gill, pv-drivers; +Cc: jejb, martin.petersen, linux-scsi
This is not the correct format for sending on behalf of someone else.
First, there needs to be a From: line at the beginning, separated by a
blank line, identifying the author:
From: Thomas Hellstrom <thellstrom@vmware.com>
On Wed, 2020-08-12 at 13:54 -0700, Jim Gill wrote:
> Use coherent memory instead of dma mapping sg lists each
> time they are used. This becomes important with SEV/swiotlb where
> dma mapping otherwise implies bouncing of the data. It also gets rid
> of a point of potential failure.
>
> Tested using a "bonnie++" run on an 8GB pvscsi disk on a
> swiotlb=force
> booted kernel.
>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> [jgill@vmware.com: forwarding patch on behalf of thellstrom]
Then we don't need this, because the From: at the beginning both
preserves the author information and makes it obvious.
> Acked-by: jgill@vmware.com
And finally this needs to be a Signed-off-by: to comply with the DCO
section (c).
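Putting those pieces together, a correctly forwarded patch would start and end roughly like this (a sketch assembled from the addresses already in this thread, not an actual resend):

```
From: Thomas Hellstrom <thellstrom@vmware.com>

Use coherent memory instead of dma mapping sg lists each
time they are used. ...

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Jim Gill <jgill@vmware.com>
```

The leading From: line sets the commit author when the maintainer applies the patch, and the forwarder's own Signed-off-by: records the handling chain required by the DCO.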
James
* Re: [PATCH 1/3 for-next] pvscsi: Use coherent memory instead of dma mapping sg lists
2020-08-12 20:54 [PATCH 1/3 for-next] pvscsi: Use coherent memory instead of dma mapping sg lists Jim Gill
2020-08-12 20:59 ` James Bottomley
@ 2020-08-13 7:28 ` Christoph Hellwig
1 sibling, 0 replies; 3+ messages in thread
From: Christoph Hellwig @ 2020-08-13 7:28 UTC (permalink / raw)
To: Jim Gill; +Cc: pv-drivers, jejb, martin.petersen, linux-scsi
On Wed, Aug 12, 2020 at 01:54:04PM -0700, Jim Gill wrote:
> Use coherent memory instead of dma mapping sg lists each
> time they are used. This becomes important with SEV/swiotlb where
> dma mapping otherwise implies bouncing of the data. It also gets rid
> of a point of potential failure.
>
> Tested using a "bonnie++" run on an 8GB pvscsi disk on a swiotlb=force
> booted kernel.
This is the wrong way around. Allocations from the coherent pool put
the system under a pointless constraint on architectures that aren't
DMA coherent. Please don't do that.