* Re: [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned
2010-11-23 16:24 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned DMA Guennadi Liakhovetski
@ 2010-11-26 12:04 ` Samuel Ortiz
2010-12-19 21:16 ` [PATCH] mmc: tmio_mmc: silence compiler warnings Arnd Hannemann
` (2 subsequent siblings)
3 siblings, 0 replies; 16+ messages in thread
From: Samuel Ortiz @ 2010-11-26 12:04 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-mmc, linux-sh, Ian Molton
Hi Guennadi,
On Tue, Nov 23, 2010 at 05:24:15PM +0100, Guennadi Liakhovetski wrote:
> For example, with SDIO WLAN cards, some transfers happen with buffers at odd
> addresses, whereas the SH-Mobile DMA engine requires even addresses for SDHI.
> This patch extends the tmio driver with a bounce buffer, which is used for
> single-entry scatter-gather lists, both for sending and receiving. If we ever
> encounter unaligned transfers with multi-element sg lists, this patch will have
> to be extended. For now it just falls back to PIO in this and other unsupported
> cases.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
I'm not sure who is going to carry those patches, probably not me
though.
So, for the MFD part:
Acked-by: Samuel Ortiz <sameo@linux.intel.com>
Cheers,
Samuel.
--
Intel Open Source Technology Centre
http://oss.intel.com/
* [PATCH] mmc: tmio_mmc: silence compiler warnings
2010-11-23 16:24 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned DMA Guennadi Liakhovetski
2010-11-26 12:04 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned Samuel Ortiz
@ 2010-12-19 21:16 ` Arnd Hannemann
2011-01-05 20:48 ` Chris Ball
2010-12-22 11:02 ` [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for unaligned Guennadi Liakhovetski
2011-01-05 19:56 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned Chris Ball
3 siblings, 1 reply; 16+ messages in thread
From: Arnd Hannemann @ 2010-12-19 21:16 UTC (permalink / raw)
To: g.liakhovetski, linux-mmc
Cc: linux-sh, Ian Molton, Samuel Ortiz, Arnd Hannemann
with "mmc: tmio: implement a bounce buffer for unaligned DMA"
gcc generates the following warnings:
drivers/mmc/host/tmio_mmc.c:654:6: warning: 'ret' may be used uninitialized in this function
drivers/mmc/host/tmio_mmc.c:730:6: warning: 'ret' may be used uninitialized in this function
This patch fixes these by setting ret to -EINVAL in the affected code paths.
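For reference, the problematic pattern boils down to something like the
following standalone sketch (names are made up, this is not the driver code
itself): once a "goto pio" can be taken before ret is assigned, the check at
the pio: label reads an indeterminate value.

#include <errno.h>

/* Reduced model of the tmio_mmc_start_dma_rx()/_tx() control flow */
static int start_dma_sketch(int unsupported_layout, void *desc)
{
	int ret;		/* only assigned on the DMA path below */

	if (unsupported_layout)
		goto pio;	/* jumps over the assignment of 'ret' */

	ret = 1;		/* stands in for the dma_map_sg() result */
	/* descriptor setup would happen here and may leave desc == NULL */

pio:
	if (!desc) {
		if (ret >= 0)	/* gcc: 'ret' may be used uninitialized here */
			ret = -EIO;
	}
	return ret;
}

Initialising ret to -EINVAL on the goto paths, as the hunks below do, gives
every path to the pio: label a defined value.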
This patch applies on top of -rc6 plus the following patches:
mmc: tmio_mmc: allow multi-element scatter-gather lists
mmc: tmio_mmc: fix PIO fallback on DMA descriptor allocation failure
mmc: tmio: merge the private header into the driver
mmc: tmio: implement a bounce buffer for unaligned DMA
Signed-off-by: Arnd Hannemann <arnd@arndnet.de>
---
drivers/mmc/host/tmio_mmc.c | 8 ++++++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
index 57ece9d..61e97d1 100644
--- a/drivers/mmc/host/tmio_mmc.c
+++ b/drivers/mmc/host/tmio_mmc.c
@@ -665,8 +665,10 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
}
if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
- align >= MAX_ALIGN)) || !multiple)
+ align >= MAX_ALIGN)) || !multiple) {
+ ret = -EINVAL;
goto pio;
+ }
/* The only sg element can be unaligned, use our bounce buffer then */
if (!aligned) {
@@ -741,8 +743,10 @@ static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
}
if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
- align >= MAX_ALIGN)) || !multiple)
+ align >= MAX_ALIGN)) || !multiple) {
+ ret = -EINVAL;
goto pio;
+ }
/* The only sg element can be unaligned, use our bounce buffer then */
if (!aligned) {
--
1.7.2.3
* Re: [PATCH] mmc: tmio_mmc: silence compiler warnings
2010-12-19 21:16 ` [PATCH] mmc: tmio_mmc: silence compiler warnings Arnd Hannemann
@ 2011-01-05 20:48 ` Chris Ball
0 siblings, 0 replies; 16+ messages in thread
From: Chris Ball @ 2011-01-05 20:48 UTC (permalink / raw)
To: Arnd Hannemann
Cc: g.liakhovetski, linux-mmc, linux-sh, Ian Molton, Samuel Ortiz
On Sun, Dec 19, 2010 at 09:16:07PM +0000, Arnd Hannemann wrote:
> with "mmc: tmio: implement a bounce buffer for unaligned DMA"
> gcc generates the following warnings:
>
> drivers/mmc/host/tmio_mmc.c:654:6: warning: 'ret' may be used uninitialized in this function
> drivers/mmc/host/tmio_mmc.c:730:6: warning: 'ret' may be used uninitialized in this function
>
> This patch fixes these by setting ret to -EINVAL in the affected code paths.
>
> This patch applies on top of -rc6 plus the following patches:
> mmc: tmio_mmc: allow multi-element scatter-gather lists
> mmc: tmio_mmc: fix PIO fallback on DMA descriptor allocation failure
> mmc: tmio: merge the private header into the driver
> mmc: tmio: implement a bounce buffer for unaligned DMA
>
> Signed-off-by: Arnd Hannemann <arnd@arndnet.de>
Thanks, pushed to mmc-next for .38.
--
Chris Ball <cjb@laptop.org> <http://printf.net/>
One Laptop Per Child
* [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for unaligned
2010-11-23 16:24 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned DMA Guennadi Liakhovetski
2010-11-26 12:04 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned Samuel Ortiz
2010-12-19 21:16 ` [PATCH] mmc: tmio_mmc: silence compiler warnings Arnd Hannemann
@ 2010-12-22 11:02 ` Guennadi Liakhovetski
2010-12-24 11:11 ` [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for Samuel Ortiz
2011-01-05 19:56 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned Chris Ball
3 siblings, 1 reply; 16+ messages in thread
From: Guennadi Liakhovetski @ 2010-12-22 11:02 UTC (permalink / raw)
To: linux-mmc; +Cc: linux-sh, Ian Molton, Samuel Ortiz, Magnus Damm
For example, with SDIO WLAN cards, some transfers happen with buffers at odd
addresses, whereas the SH-Mobile DMA engine requires even addresses for SDHI.
This patch extends the tmio driver with a bounce buffer, which is used for
single-entry scatter-gather lists, both for sending and receiving. If we ever
encounter unaligned transfers with multi-element sg lists, this patch will have
to be extended. For now it just falls back to PIO in this and other unsupported
cases.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
v2:
1. fixed compilation without DMA support. Thanks to Magnus Damm
<damm@opensource.se> for reporting.
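For the curious, the alignment decision below condenses to roughly this
sketch (simplified: the real code also refuses the bounce buffer when the
required alignment exceeds MAX_ALIGN, and the function name here is made up):

#include <linux/pagemap.h>
#include <linux/scatterlist.h>

/*
 * Sketch of the check added to tmio_mmc_start_dma_{rx,tx}(): DMA directly
 * when everything is aligned, bounce a single small unaligned element,
 * otherwise fall back to PIO. Not the literal driver code.
 */
static bool tmio_sg_dma_possible(struct scatterlist *sg, unsigned int sg_len,
				 int alignment_shift, bool *use_bounce)
{
	unsigned int align = (1 << alignment_shift) - 1;
	struct scatterlist *sg_tmp;
	bool aligned = true;
	int i;

	*use_bounce = false;

	for_each_sg(sg, sg_tmp, sg_len, i) {
		if (sg_tmp->offset & align)
			aligned = false;	/* e.g. an odd SDIO buffer address */
		if (sg_tmp->length & align)
			return false;		/* length not a multiple: PIO only */
	}

	if (aligned)
		return true;			/* DMA straight from the caller's sg list */

	/* A single element no larger than the bounce buffer can be copied */
	if (sg_len == 1 && sg->length <= PAGE_CACHE_SIZE) {
		*use_bounce = true;
		return true;
	}

	return false;				/* unaligned multi-element list: PIO */
}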
drivers/mmc/host/tmio_mmc.c | 81 +++++++++++++++++++++++++++++++++++++++----
include/linux/mfd/tmio.h | 1 +
2 files changed, 75 insertions(+), 7 deletions(-)
diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
index 118ad86..eadf951 100644
--- a/drivers/mmc/host/tmio_mmc.c
+++ b/drivers/mmc/host/tmio_mmc.c
@@ -111,6 +111,8 @@
sd_ctrl_write32((host), CTL_STATUS, ~(i)); \
} while (0)
+/* This is arbitrary, just no one needed any higher alignment yet */
+#define MAX_ALIGN 4
struct tmio_mmc_host {
void __iomem *ctl;
@@ -127,6 +129,7 @@ struct tmio_mmc_host {
/* pio related stuff */
struct scatterlist *sg_ptr;
+ struct scatterlist *sg_orig;
unsigned int sg_len;
unsigned int sg_off;
@@ -139,9 +142,13 @@ struct tmio_mmc_host {
struct tasklet_struct dma_issue;
#ifdef CONFIG_TMIO_MMC_DMA
unsigned int dma_sglen;
+ u8 bounce_buf[PAGE_CACHE_SIZE] __attribute__((aligned(MAX_ALIGN)));
+ struct scatterlist bounce_sg;
#endif
};
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host);
+
static u16 sd_ctrl_read16(struct tmio_mmc_host *host, int addr)
{
return readw(host->ctl + (addr << host->bus_shift));
@@ -180,6 +187,7 @@ static void tmio_mmc_init_sg(struct tmio_mmc_host *host, struct mmc_data *data)
{
host->sg_len = data->sg_len;
host->sg_ptr = data->sg;
+ host->sg_orig = data->sg;
host->sg_off = 0;
}
@@ -438,6 +446,8 @@ static void tmio_mmc_do_data_irq(struct tmio_mmc_host *host)
if (data->flags & MMC_DATA_READ) {
if (!host->chan_rx)
disable_mmc_irqs(host, TMIO_MASK_READOP);
+ else
+ tmio_check_bounce_buffer(host);
dev_dbg(&host->pdev->dev, "Complete Rx request %p\n",
host->mrq);
} else {
@@ -529,8 +539,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host,
if (!host->chan_rx)
enable_mmc_irqs(host, TMIO_MASK_READOP);
} else {
- struct dma_chan *chan = host->chan_tx;
- if (!chan)
+ if (!host->chan_tx)
enable_mmc_irqs(host, TMIO_MASK_WRITEOP);
else
tasklet_schedule(&host->dma_issue);
@@ -612,6 +621,16 @@ out:
}
#ifdef CONFIG_TMIO_MMC_DMA
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host)
+{
+ if (host->sg_ptr == &host->bounce_sg) {
+ unsigned long flags;
+ void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags);
+ memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length);
+ tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
+ }
+}
+
static void tmio_mmc_enable_dma(struct tmio_mmc_host *host, bool enable)
{
#if defined(CONFIG_SUPERH) || defined(CONFIG_ARCH_SHMOBILE)
@@ -634,11 +653,36 @@ static void tmio_dma_complete(void *arg)
static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
{
- struct scatterlist *sg = host->sg_ptr;
+ struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_rx;
+ struct mfd_cell *cell = host->pdev->dev.platform_data;
+ struct tmio_mmc_data *pdata = cell->driver_data;
dma_cookie_t cookie;
- int ret;
+ int ret, i;
+ bool aligned = true, multiple = true;
+ unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
+
+ for_each_sg(sg, sg_tmp, host->sg_len, i) {
+ if (sg_tmp->offset & align)
+ aligned = false;
+ if (sg_tmp->length & align) {
+ multiple = false;
+ break;
+ }
+ }
+
+ if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
+ align >= MAX_ALIGN)) || !multiple)
+ goto pio;
+
+ /* The only sg element can be unaligned, use our bounce buffer then */
+ if (!aligned) {
+ /* The first sg element unaligned, use our bounce-buffer */
+ sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
+ host->sg_ptr = &host->bounce_sg;
+ sg = host->sg_ptr;
+ }
ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_FROM_DEVICE);
if (ret > 0) {
@@ -661,6 +705,7 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
__func__, host->sg_len, ret, cookie, host->mrq);
+pio:
if (!desc) {
/* DMA failed, fall back to PIO */
if (ret >= 0)
@@ -684,11 +729,40 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
{
- struct scatterlist *sg = host->sg_ptr;
+ struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_tx;
+ struct mfd_cell *cell = host->pdev->dev.platform_data;
+ struct tmio_mmc_data *pdata = cell->driver_data;
dma_cookie_t cookie;
- int ret;
+ int ret, i;
+ bool aligned = true, multiple = true;
+ unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
+
+ for_each_sg(sg, sg_tmp, host->sg_len, i) {
+ if (sg_tmp->offset & align)
+ aligned = false;
+ if (sg_tmp->length & align) {
+ multiple = false;
+ break;
+ }
+ }
+
+ if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
+ align >= MAX_ALIGN)) || !multiple)
+ goto pio;
+
+ /* The only sg element can be unaligned, use our bounce buffer then */
+ if (!aligned) {
+ unsigned long flags;
+ void *sg_vaddr = tmio_mmc_kmap_atomic(sg, &flags);
+ /* The first sg element unaligned, use our bounce-buffer */
+ sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
+ memcpy(host->bounce_buf, sg_vaddr, host->bounce_sg.length);
+ tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
+ host->sg_ptr = &host->bounce_sg;
+ sg = host->sg_ptr;
+ }
ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_TO_DEVICE);
if (ret > 0) {
@@ -709,6 +783,7 @@ static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
__func__, host->sg_len, ret, cookie, host->mrq);
+pio:
if (!desc) {
/* DMA failed, fall back to PIO */
if (ret >= 0)
@@ -822,6 +897,10 @@ static void tmio_mmc_release_dma(struct tmio_mmc_host *host)
}
}
#else
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host)
+{
+}
+
static void tmio_mmc_start_dma(struct tmio_mmc_host *host,
struct mmc_data *data)
{
diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h
index 085f041..dbfc053 100644
--- a/include/linux/mfd/tmio.h
+++ b/include/linux/mfd/tmio.h
@@ -66,6 +66,7 @@ void tmio_core_mmc_clk_div(void __iomem *cnf, int shift, int state);
struct tmio_mmc_dma {
void *chan_priv_tx;
void *chan_priv_rx;
+ int alignment_shift;
};
/*
--
1.7.2.3
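On the platform side, the new alignment_shift field would be filled in by
the SoC/board glue along these lines (illustrative only, the SH-Mobile SDHI
glue is not part of this series; a 2-byte DMA alignment requirement and the
variable names are assumed here):

#include <linux/mfd/tmio.h>

/* DMA description handed to the tmio driver through its platform data */
static struct tmio_mmc_dma sdhi0_dma_priv = {
	/* .chan_priv_tx / .chan_priv_rx would point at the dmaengine slave configs */
	.alignment_shift = 1,	/* mask = (1 << 1) - 1 = 1: rejects odd addresses/lengths */
};

static struct tmio_mmc_data sdhi0_mmc_data = {
	.dma = &sdhi0_dma_priv,
	/* hclk, capabilities, etc. omitted */
};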
* Re: [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for
2010-12-22 11:02 ` [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for unaligned Guennadi Liakhovetski
@ 2010-12-24 11:11 ` Samuel Ortiz
0 siblings, 0 replies; 16+ messages in thread
From: Samuel Ortiz @ 2010-12-24 11:11 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-mmc, linux-sh, Ian Molton, Magnus Damm
On Wed, Dec 22, 2010 at 12:02:15PM +0100, Guennadi Liakhovetski wrote:
> For example, with SDIO WLAN cards, some transfers happen with buffers at odd
> addresses, whereas the SH-Mobile DMA engine requires even addresses for SDHI.
> This patch extends the tmio driver with a bounce buffer, which is used for
> single-entry scatter-gather lists, both for sending and receiving. If we ever
> encounter unaligned transfers with multi-element sg lists, this patch will have
> to be extended. For now it just falls back to PIO in this and other unsupported
> cases.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Acked-by: Samuel Ortiz <sameo@linux.intel.com>
for the MFD part.
Cheers,
Samuel.
--
Intel Open Source Technology Centre
http://oss.intel.com/
* Re: [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned
2010-11-23 16:24 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned DMA Guennadi Liakhovetski
` (2 preceding siblings ...)
2010-12-22 11:02 ` [PATCH 2/3 v2] mmc: tmio: implement a bounce buffer for unaligned Guennadi Liakhovetski
@ 2011-01-05 19:56 ` Chris Ball
2011-01-05 20:56 ` [PATCH 2/3 v3] " Guennadi Liakhovetski
3 siblings, 1 reply; 16+ messages in thread
From: Chris Ball @ 2011-01-05 19:56 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-mmc, linux-sh, Ian Molton, Samuel Ortiz
Hi Guennadi, could you resend this one with some style changes:
On Tue, Nov 23, 2010 at 05:24:15PM +0100, Guennadi Liakhovetski wrote:
> For example, with SDIO WLAN cards, some transfers happen with buffers at odd
> addresses, whereas the SH-Mobile DMA engine requires even addresses for SDHI.
> This patch extends the tmio driver with a bounce buffer, which is used for
> single-entry scatter-gather lists, both for sending and receiving. If we ever
> encounter unaligned transfers with multi-element sg lists, this patch will have
> to be extended. For now it just falls back to PIO in this and other unsupported
> cases.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Please wrap commit messages at 72 cols, not 80.
> ---
> drivers/mmc/host/tmio_mmc.c | 81 +++++++++++++++++++++++++++++++++++++++----
> include/linux/mfd/tmio.h | 1 +
> 2 files changed, 75 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
> index 118ad86..57ece9d 100644
> --- a/drivers/mmc/host/tmio_mmc.c
> +++ b/drivers/mmc/host/tmio_mmc.c
> @@ -111,6 +111,8 @@
> sd_ctrl_write32((host), CTL_STATUS, ~(i)); \
> } while (0)
>
> +/* This is arbitrary, just no one needed any higher alignment yet */
> +#define MAX_ALIGN 4
>
> struct tmio_mmc_host {
> void __iomem *ctl;
> @@ -127,6 +129,7 @@ struct tmio_mmc_host {
>
> /* pio related stuff */
> struct scatterlist *sg_ptr;
> + struct scatterlist *sg_orig;
> unsigned int sg_len;
> unsigned int sg_off;
>
> @@ -139,6 +142,8 @@ struct tmio_mmc_host {
> struct tasklet_struct dma_issue;
> #ifdef CONFIG_TMIO_MMC_DMA
> unsigned int dma_sglen;
> + u8 bounce_buf[PAGE_CACHE_SIZE] __attribute__((aligned(MAX_ALIGN)));
> + struct scatterlist bounce_sg;
> #endif
> };
>
> @@ -180,6 +185,7 @@ static void tmio_mmc_init_sg(struct tmio_mmc_host *host, struct mmc_data *data)
> {
> host->sg_len = data->sg_len;
> host->sg_ptr = data->sg;
> + host->sg_orig = data->sg;
> host->sg_off = 0;
> }
>
> @@ -436,8 +442,14 @@ static void tmio_mmc_do_data_irq(struct tmio_mmc_host *host)
> */
>
> if (data->flags & MMC_DATA_READ) {
> - if (!host->chan_rx)
> + if (!host->chan_rx) {
> disable_mmc_irqs(host, TMIO_MASK_READOP);
> + } else if (host->sg_ptr == &host->bounce_sg) {
> + unsigned long flags;
> + void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags);
Would be nice to stay within 80-chars where possible.
> + memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length);
> + tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
> + }
> dev_dbg(&host->pdev->dev, "Complete Rx request %p\n",
> host->mrq);
> } else {
> @@ -529,8 +541,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host,
> if (!host->chan_rx)
> enable_mmc_irqs(host, TMIO_MASK_READOP);
> } else {
> - struct dma_chan *chan = host->chan_tx;
> - if (!chan)
> + if (!host->chan_tx)
> enable_mmc_irqs(host, TMIO_MASK_WRITEOP);
> else
> tasklet_schedule(&host->dma_issue);
> @@ -634,11 +645,36 @@ static void tmio_dma_complete(void *arg)
>
> static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
> {
> - struct scatterlist *sg = host->sg_ptr;
> + struct scatterlist *sg = host->sg_ptr, *sg_tmp;
> struct dma_async_tx_descriptor *desc = NULL;
> struct dma_chan *chan = host->chan_rx;
> + struct mfd_cell *cell = host->pdev->dev.platform_data;
> + struct tmio_mmc_data *pdata = cell->driver_data;
> dma_cookie_t cookie;
> - int ret;
> + int ret, i;
> + bool aligned = true, multiple = true;
> + unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
> +
> + for_each_sg(sg, sg_tmp, host->sg_len, i) {
> + if (sg_tmp->offset & align)
> + aligned = false;
> + if (sg_tmp->length & align) {
> + multiple = false;
> + break;
> + }
> + }
> +
> + if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
> + align >= MAX_ALIGN)) || !multiple)
> + goto pio;
> +
> + /* The only sg element can be unaligned, use our bounce buffer then */
> + if (!aligned) {
> + /* The first sg element unaligned, use our bounce-buffer */
> + sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
> + host->sg_ptr = &host->bounce_sg;
> + sg = host->sg_ptr;
> + }
The second comment is missing a verb, and looks redundant with the first.
>
> ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_FROM_DEVICE);
> if (ret > 0) {
> @@ -661,6 +697,7 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
> dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
> __func__, host->sg_len, ret, cookie, host->mrq);
>
> +pio:
> if (!desc) {
> /* DMA failed, fall back to PIO */
> if (ret >= 0)
> @@ -684,11 +721,40 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
>
> static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
> {
> - struct scatterlist *sg = host->sg_ptr;
> + struct scatterlist *sg = host->sg_ptr, *sg_tmp;
> struct dma_async_tx_descriptor *desc = NULL;
> struct dma_chan *chan = host->chan_tx;
> + struct mfd_cell *cell = host->pdev->dev.platform_data;
> + struct tmio_mmc_data *pdata = cell->driver_data;
> dma_cookie_t cookie;
> - int ret;
> + int ret, i;
> + bool aligned = true, multiple = true;
> + unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
> +
> + for_each_sg(sg, sg_tmp, host->sg_len, i) {
> + if (sg_tmp->offset & align)
> + aligned = false;
> + if (sg_tmp->length & align) {
> + multiple = false;
> + break;
> + }
> + }
> +
> + if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
> + align >= MAX_ALIGN)) || !multiple)
> + goto pio;
> +
> + /* The only sg element can be unaligned, use our bounce buffer then */
> + if (!aligned) {
> + unsigned long flags;
> + void *sg_vaddr = tmio_mmc_kmap_atomic(sg, &flags);
> + /* The first sg element unaligned, use our bounce-buffer */
> + sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
> + memcpy(host->bounce_buf, sg_vaddr, host->bounce_sg.length);
> + tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
> + host->sg_ptr = &host->bounce_sg;
> + sg = host->sg_ptr;
> + }
Same here.
>
> ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_TO_DEVICE);
> if (ret > 0) {
> @@ -709,6 +775,7 @@ static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
> dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
> __func__, host->sg_len, ret, cookie, host->mrq);
>
> +pio:
> if (!desc) {
> /* DMA failed, fall back to PIO */
> if (ret >= 0)
> diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h
> index 085f041..dbfc053 100644
> --- a/include/linux/mfd/tmio.h
> +++ b/include/linux/mfd/tmio.h
> @@ -66,6 +66,7 @@ void tmio_core_mmc_clk_div(void __iomem *cnf, int shift, int state);
> struct tmio_mmc_dma {
> void *chan_priv_tx;
> void *chan_priv_rx;
> + int alignment_shift;
> };
>
> /*
Thanks,
--
Chris Ball <cjb@laptop.org> <http://printf.net/>
One Laptop Per Child
* [PATCH 2/3 v3] mmc: tmio: implement a bounce buffer for unaligned
2011-01-05 19:56 ` [PATCH 2/3] mmc: tmio: implement a bounce buffer for unaligned Chris Ball
@ 2011-01-05 20:56 ` Guennadi Liakhovetski
2011-01-05 21:06 ` [PATCH 2/3 v3] mmc: tmio: implement a bounce buffer for Chris Ball
0 siblings, 1 reply; 16+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-05 20:56 UTC (permalink / raw)
To: Chris Ball; +Cc: linux-mmc, linux-sh, Ian Molton, Samuel Ortiz
For example, with SDIO WLAN cards, some transfers happen with buffers at
odd addresses, whereas the SH-Mobile DMA engine requires even addresses
for SDHI. This patch extends the tmio driver with a bounce buffer, which
is used for single-entry scatter-gather lists, both for sending and
receiving. If we ever encounter unaligned transfers with multi-element
sg lists, this patch will have to be extended. For now it just falls
back to PIO in this and other unsupported cases.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
Chris, thanks for your comments, but you picked up the wrong version of this
patch. Please use the links from my "outstanding patches" email.
v3:
1. wrapped patch description at 72 chars instead of 80
2. removed two redundant comment lines
v2:
1. fixed compilation without DMA support. Thanks to Magnus Damm
<damm@opensource.se> for reporting.
drivers/mmc/host/tmio_mmc.c | 89 ++++++++++++++++++++++++++++++++++++++++---
include/linux/mfd/tmio.h | 1 +
2 files changed, 84 insertions(+), 6 deletions(-)
diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
index 118ad86..d8163c1 100644
--- a/drivers/mmc/host/tmio_mmc.c
+++ b/drivers/mmc/host/tmio_mmc.c
@@ -111,6 +111,8 @@
sd_ctrl_write32((host), CTL_STATUS, ~(i)); \
} while (0)
+/* This is arbitrary, just no one needed any higher alignment yet */
+#define MAX_ALIGN 4
struct tmio_mmc_host {
void __iomem *ctl;
@@ -127,6 +129,7 @@ struct tmio_mmc_host {
/* pio related stuff */
struct scatterlist *sg_ptr;
+ struct scatterlist *sg_orig;
unsigned int sg_len;
unsigned int sg_off;
@@ -139,9 +142,13 @@ struct tmio_mmc_host {
struct tasklet_struct dma_issue;
#ifdef CONFIG_TMIO_MMC_DMA
unsigned int dma_sglen;
+ u8 bounce_buf[PAGE_CACHE_SIZE] __attribute__((aligned(MAX_ALIGN)));
+ struct scatterlist bounce_sg;
#endif
};
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host);
+
static u16 sd_ctrl_read16(struct tmio_mmc_host *host, int addr)
{
return readw(host->ctl + (addr << host->bus_shift));
@@ -180,6 +187,7 @@ static void tmio_mmc_init_sg(struct tmio_mmc_host *host, struct mmc_data *data)
{
host->sg_len = data->sg_len;
host->sg_ptr = data->sg;
+ host->sg_orig = data->sg;
host->sg_off = 0;
}
@@ -438,6 +446,8 @@ static void tmio_mmc_do_data_irq(struct tmio_mmc_host *host)
if (data->flags & MMC_DATA_READ) {
if (!host->chan_rx)
disable_mmc_irqs(host, TMIO_MASK_READOP);
+ else
+ tmio_check_bounce_buffer(host);
dev_dbg(&host->pdev->dev, "Complete Rx request %p\n",
host->mrq);
} else {
@@ -529,8 +539,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host,
if (!host->chan_rx)
enable_mmc_irqs(host, TMIO_MASK_READOP);
} else {
- struct dma_chan *chan = host->chan_tx;
- if (!chan)
+ if (!host->chan_tx)
enable_mmc_irqs(host, TMIO_MASK_WRITEOP);
else
tasklet_schedule(&host->dma_issue);
@@ -612,6 +621,16 @@ out:
}
#ifdef CONFIG_TMIO_MMC_DMA
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host)
+{
+ if (host->sg_ptr == &host->bounce_sg) {
+ unsigned long flags;
+ void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags);
+ memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length);
+ tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
+ }
+}
+
static void tmio_mmc_enable_dma(struct tmio_mmc_host *host, bool enable)
{
#if defined(CONFIG_SUPERH) || defined(CONFIG_ARCH_SHMOBILE)
@@ -634,11 +653,35 @@ static void tmio_dma_complete(void *arg)
static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
{
- struct scatterlist *sg = host->sg_ptr;
+ struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_rx;
+ struct mfd_cell *cell = host->pdev->dev.platform_data;
+ struct tmio_mmc_data *pdata = cell->driver_data;
dma_cookie_t cookie;
- int ret;
+ int ret, i;
+ bool aligned = true, multiple = true;
+ unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
+
+ for_each_sg(sg, sg_tmp, host->sg_len, i) {
+ if (sg_tmp->offset & align)
+ aligned = false;
+ if (sg_tmp->length & align) {
+ multiple = false;
+ break;
+ }
+ }
+
+ if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
+ align >= MAX_ALIGN)) || !multiple)
+ goto pio;
+
+ /* The only sg element can be unaligned, use our bounce buffer then */
+ if (!aligned) {
+ sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
+ host->sg_ptr = &host->bounce_sg;
+ sg = host->sg_ptr;
+ }
ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_FROM_DEVICE);
if (ret > 0) {
@@ -661,6 +704,7 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
__func__, host->sg_len, ret, cookie, host->mrq);
+pio:
if (!desc) {
/* DMA failed, fall back to PIO */
if (ret >= 0)
@@ -684,11 +728,39 @@ static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
{
- struct scatterlist *sg = host->sg_ptr;
+ struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_tx;
+ struct mfd_cell *cell = host->pdev->dev.platform_data;
+ struct tmio_mmc_data *pdata = cell->driver_data;
dma_cookie_t cookie;
- int ret;
+ int ret, i;
+ bool aligned = true, multiple = true;
+ unsigned int align = (1 << pdata->dma->alignment_shift) - 1;
+
+ for_each_sg(sg, sg_tmp, host->sg_len, i) {
+ if (sg_tmp->offset & align)
+ aligned = false;
+ if (sg_tmp->length & align) {
+ multiple = false;
+ break;
+ }
+ }
+
+ if ((!aligned && (host->sg_len > 1 || sg->length > PAGE_CACHE_SIZE ||
+ align >= MAX_ALIGN)) || !multiple)
+ goto pio;
+
+ /* The only sg element can be unaligned, use our bounce buffer then */
+ if (!aligned) {
+ unsigned long flags;
+ void *sg_vaddr = tmio_mmc_kmap_atomic(sg, &flags);
+ sg_init_one(&host->bounce_sg, host->bounce_buf, sg->length);
+ memcpy(host->bounce_buf, sg_vaddr, host->bounce_sg.length);
+ tmio_mmc_kunmap_atomic(sg_vaddr, &flags);
+ host->sg_ptr = &host->bounce_sg;
+ sg = host->sg_ptr;
+ }
ret = dma_map_sg(&host->pdev->dev, sg, host->sg_len, DMA_TO_DEVICE);
if (ret > 0) {
@@ -709,6 +781,7 @@ static void tmio_mmc_start_dma_tx(struct tmio_mmc_host *host)
dev_dbg(&host->pdev->dev, "%s(): mapped %d -> %d, cookie %d, rq %p\n",
__func__, host->sg_len, ret, cookie, host->mrq);
+pio:
if (!desc) {
/* DMA failed, fall back to PIO */
if (ret >= 0)
@@ -822,6 +895,10 @@ static void tmio_mmc_release_dma(struct tmio_mmc_host *host)
}
}
#else
+static void tmio_check_bounce_buffer(struct tmio_mmc_host *host)
+{
+}
+
static void tmio_mmc_start_dma(struct tmio_mmc_host *host,
struct mmc_data *data)
{
diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h
index 085f041..dbfc053 100644
--- a/include/linux/mfd/tmio.h
+++ b/include/linux/mfd/tmio.h
@@ -66,6 +66,7 @@ void tmio_core_mmc_clk_div(void __iomem *cnf, int shift, int state);
struct tmio_mmc_dma {
void *chan_priv_tx;
void *chan_priv_rx;
+ int alignment_shift;
};
/*
--
1.7.2.3
* Re: [PATCH 2/3 v3] mmc: tmio: implement a bounce buffer for
2011-01-05 20:56 ` [PATCH 2/3 v3] " Guennadi Liakhovetski
@ 2011-01-05 21:06 ` Chris Ball
0 siblings, 0 replies; 16+ messages in thread
From: Chris Ball @ 2011-01-05 21:06 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-mmc, linux-sh, Ian Molton, Samuel Ortiz
Hi Guennadi,
On Wed, Jan 05, 2011 at 09:56:01PM +0100, Guennadi Liakhovetski wrote:
> For example, with SDIO WLAN cards, some transfers happen with buffers at
> odd addresses, whereas the SH-Mobile DMA engine requires even addresses
> for SDHI. This patch extends the tmio driver with a bounce buffer, that
> is used for single entry scatter-gather lists both for sending and
> receiving. If we ever encounter unaligned transfers with multi-element
> sg lists, this patch will have to be extended. For now it just falls
> back to PIO in this and other unsupported cases.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> ---
>
> Chris, thanks for your comments, but you picked up the wrong version of this
> patch. Please use the links from my "outstanding patches" email.
Thanks. Actually, I got the right patch but then replied to the wrong
e-mail. :) Pushed v3 to mmc-next now.
--
Chris Ball <cjb@laptop.org> <http://printf.net/>
One Laptop Per Child