From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, "BALATON Zoltan" <balaton@eik.bme.hu>,
"Daniel Henrique Barboza" <danielhb413@gmail.com>,
"Cédric Le Goater" <clg@kaod.org>
Subject: [RFC 1/2] hw/ppc/ppc440_uc: Initialize length passed to cpu_physical_memory_map()
Date: Tue, 26 Jul 2022 19:23:40 +0100
Message-ID: <20220726182341.1888115-2-peter.maydell@linaro.org>
In-Reply-To: <20220726182341.1888115-1-peter.maydell@linaro.org>

In dcr_write_dma(), there is code that uses cpu_physical_memory_map()
to implement a DMA transfer. That function takes a 'plen' argument
pointing to a hwaddr that is used for both input and output: the
caller must set it to the size of the range it wants to map, and on
return it is updated to the length actually mapped. The
dcr_write_dma() code fails to initialize rlen and wlen, so it ends up
mapping an unpredictable amount of memory.

Initialize the length values correctly, and check that we managed to
map the entire range before using the fast-path memmove().

This was spotted by Coverity, which points out that we never
initialize these variables before using them.

Fixes: Coverity CID 1487137
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
This seems totally broken, so I presume we just don't have any
guest code that actually exercises this...
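
For reference, the in/out length convention of cpu_physical_memory_map()
that the fix relies on looks roughly like this. This is only a minimal
sketch of the map/check/unmap pattern, not code from the patch; the
function name dma_copy_sketch() and its parameters are invented for
illustration, and the header locations are assumed:

#include "qemu/osdep.h"       /* assumed header locations */
#include "exec/cpu-common.h"

static void dma_copy_sketch(hwaddr src, hwaddr dst, hwaddr len)
{
    hwaddr rlen = len;   /* in: size we want mapped; out: size actually mapped */
    hwaddr wlen = len;
    uint8_t *rptr = cpu_physical_memory_map(src, &rlen, false);
    uint8_t *wptr = cpu_physical_memory_map(dst, &wlen, true);
    hwaddr copied = 0;

    /* Only take the fast path if both ranges were mapped in full */
    if (rptr && wptr && rlen == len && wlen == len) {
        memmove(wptr, rptr, len);
        copied = len;
    }
    if (wptr) {
        cpu_physical_memory_unmap(wptr, wlen, true, copied);
    }
    if (rptr) {
        cpu_physical_memory_unmap(rptr, rlen, false, copied);
    }
}

In dcr_write_dma() the requested size is count * width, which is what the
patch stores in xferlen before passing &rlen and &wlen to the two map calls.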
---
hw/ppc/ppc440_uc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/hw/ppc/ppc440_uc.c b/hw/ppc/ppc440_uc.c
index a1ecf6dd1c2..11fdb88c220 100644
--- a/hw/ppc/ppc440_uc.c
+++ b/hw/ppc/ppc440_uc.c
@@ -904,14 +904,17 @@ static void dcr_write_dma(void *opaque, int dcrn, uint32_t val)
                 int width, i, sidx, didx;
                 uint8_t *rptr, *wptr;
                 hwaddr rlen, wlen;
+                hwaddr xferlen;
 
                 sidx = didx = 0;
                 width = 1 << ((val & DMA0_CR_PW) >> 25);
+                xferlen = count * width;
+                wlen = rlen = xferlen;
                 rptr = cpu_physical_memory_map(dma->ch[chnl].sa, &rlen,
                                                false);
                 wptr = cpu_physical_memory_map(dma->ch[chnl].da, &wlen,
                                                true);
-                if (rptr && wptr) {
+                if (rptr && rlen == xferlen && wptr && wlen == xferlen) {
                     if (!(val & DMA0_CR_DEC) &&
                         val & DMA0_CR_SAI && val & DMA0_CR_DAI) {
                         /* optimise common case */
--
2.25.1