From: Christian Speich <c.speich@avm.de>
To: qemu-devel@nongnu.org
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Bin Meng" <bmeng.cn@gmail.com>,
qemu-block@nongnu.org, "Christian Speich" <c.speich@avm.de>
Subject: [PATCH 2/4] hw/sd/sdhci: Don't use bounce buffer for ADMA
Date: Fri, 19 Sep 2025 14:34:41 +0200
Message-ID: <20250919-sdcard-performance-b4-v1-2-e1037e481a19@avm.de>
In-Reply-To: <20250919-sdcard-performance-b4-v1-0-e1037e481a19@avm.de>
Currently, ADMA temporarily stores data in a local bounce buffer while
transferring it. This produces unneeded copies of the data and limits
each transfer step to the bounce buffer size.

This patch instead maps the requested DMA address and passes the
resulting buffer directly to sdbus_{read,write}_data. This allows much
larger buffers to be passed down, improving performance.
sdbus_{read,write}_data already handles arbitrary lengths and
alignments, so we do not need to enforce them here.
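As a minimal sketch of the resulting pattern (simplified from the diff
below, with error and block-count handling elided; the names reuse the
ones in sdhci_do_adma): dma_memory_map() may map less than the
requested length and updates dma_len accordingly, which is why the
transfer loops keep iterating on the remaining length.

    dma_addr_t dma_len = length;   /* ask for the whole remainder */
    void *buf = dma_memory_map(s->dma_as, dscr.addr, &dma_len,
                               DMA_DIRECTION_FROM_DEVICE, attrs);
    if (buf) {
        /* dma_len now holds how much was actually mapped */
        sdbus_read_data(&s->sdbus, buf, dma_len);
        dma_memory_unmap(s->dma_as, buf, dma_len,
                         DMA_DIRECTION_FROM_DEVICE, dma_len);
    }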
Signed-off-by: Christian Speich <c.speich@avm.de>
---
hw/sd/sdhci.c | 102 +++++++++++++++++++++++++++++++---------------------------
1 file changed, 55 insertions(+), 47 deletions(-)
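A note on the block-count bookkeeping below (the numbers here are
illustrative only): s->data_count now carries partial-block progress
across descriptors and shortened mappings, rather than indexing into
the bounce buffer. For example, with block_size = 512:

    s->data_count = 0, dma_len = 1280
      -> transferred = 1280, s->blkcnt -= 2, s->data_count = 256
    next mapping: dma_len = 256
      -> transferred = 512,  s->blkcnt -= 1, s->data_count = 0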
diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
index 3c897e54b721075a3ebd215e027fb73a65ff39b2..94ba23a8da990e69fd59c039e4fdd25b98929dfd 100644
--- a/hw/sd/sdhci.c
+++ b/hw/sd/sdhci.c
@@ -774,7 +774,7 @@ static void get_adma_description(SDHCIState *s, ADMADescr *dscr)
 
 static void sdhci_do_adma(SDHCIState *s)
 {
-    unsigned int begin, length;
+    unsigned int length;
     const uint16_t block_size = s->blksize & BLOCK_SIZE_MASK;
     const MemTxAttrs attrs = { .memory = true };
     ADMADescr dscr = {};
@@ -816,66 +816,74 @@ static void sdhci_do_adma(SDHCIState *s)
             if (s->trnmod & SDHC_TRNS_READ) {
                 s->prnsts |= SDHC_DOING_READ;
                 while (length) {
-                    if (s->data_count == 0) {
-                        sdbus_read_data(&s->sdbus, s->fifo_buffer, block_size);
-                    }
-                    begin = s->data_count;
-                    if ((length + begin) < block_size) {
-                        s->data_count = length + begin;
-                        length = 0;
-                    } else {
-                        s->data_count = block_size;
-                        length -= block_size - begin;
-                    }
-                    res = dma_memory_write(s->dma_as, dscr.addr,
-                                           &s->fifo_buffer[begin],
-                                           s->data_count - begin,
-                                           attrs);
-                    if (res != MEMTX_OK) {
+                    dma_addr_t dma_len = length;
+
+                    void *buf = dma_memory_map(s->dma_as, dscr.addr, &dma_len,
+                                               DMA_DIRECTION_FROM_DEVICE,
+                                               attrs);
+
+                    if (buf == NULL) {
+                        res = MEMTX_ERROR;
                         break;
+                    } else {
+                        res = MEMTX_OK;
                     }
-                    dscr.addr += s->data_count - begin;
-                    if (s->data_count == block_size) {
-                        s->data_count = 0;
-                        if (s->trnmod & SDHC_TRNS_BLK_CNT_EN) {
-                            s->blkcnt--;
-                            if (s->blkcnt == 0) {
-                                break;
-                            }
+
+                    sdbus_read_data(&s->sdbus, buf, dma_len);
+                    length -= dma_len;
+                    dscr.addr += dma_len;
+
+                    dma_memory_unmap(s->dma_as, buf, dma_len,
+                                     DMA_DIRECTION_FROM_DEVICE, dma_len);
+
+                    if (s->trnmod & SDHC_TRNS_BLK_CNT_EN) {
+                        size_t transferred = s->data_count + dma_len;
+
+                        s->blkcnt -= transferred / block_size;
+                        s->data_count = transferred % block_size;
+
+                        if (s->blkcnt == 0) {
+                            s->data_count = 0;
+                            break;
                         }
                     }
                 }
             } else {
                 s->prnsts |= SDHC_DOING_WRITE;
                 while (length) {
-                    begin = s->data_count;
-                    if ((length + begin) < block_size) {
-                        s->data_count = length + begin;
-                        length = 0;
-                    } else {
-                        s->data_count = block_size;
-                        length -= block_size - begin;
-                    }
-                    res = dma_memory_read(s->dma_as, dscr.addr,
-                                          &s->fifo_buffer[begin],
-                                          s->data_count - begin,
-                                          attrs);
-                    if (res != MEMTX_OK) {
+                    dma_addr_t dma_len = length;
+
+                    void *buf = dma_memory_map(s->dma_as, dscr.addr, &dma_len,
+                                               DMA_DIRECTION_TO_DEVICE, attrs);
+
+                    if (buf == NULL) {
+                        res = MEMTX_ERROR;
                         break;
+                    } else {
+                        res = MEMTX_OK;
                     }
-                    dscr.addr += s->data_count - begin;
-                    if (s->data_count == block_size) {
-                        sdbus_write_data(&s->sdbus, s->fifo_buffer, block_size);
-                        s->data_count = 0;
-                        if (s->trnmod & SDHC_TRNS_BLK_CNT_EN) {
-                            s->blkcnt--;
-                            if (s->blkcnt == 0) {
-                                break;
-                            }
+
+                    sdbus_write_data(&s->sdbus, buf, dma_len);
+                    length -= dma_len;
+                    dscr.addr += dma_len;
+
+                    dma_memory_unmap(s->dma_as, buf, dma_len,
+                                     DMA_DIRECTION_TO_DEVICE, dma_len);
+
+                    if (s->trnmod & SDHC_TRNS_BLK_CNT_EN) {
+                        size_t transferred = s->data_count + dma_len;
+
+                        s->blkcnt -= transferred / block_size;
+                        s->data_count = transferred % block_size;
+
+                        if (s->blkcnt == 0) {
+                            s->data_count = 0;
+                            break;
                         }
                     }
                 }
             }
+
             if (res != MEMTX_OK) {
                 s->data_count = 0;
                 if (s->errintstsen & SDHC_EISEN_ADMAERR) {
--
2.43.0