From: Wei Huang <wei@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: QEMU Trivial <qemu-trivial@nongnu.org>,
Igor Mammedov <imammedo@redhat.com>,
Shannon Zhao <zhaoshenglong@huawei.com>,
QEMU Developers <qemu-devel@nongnu.org>,
Shannon Zhao <shannon.zhao@linaro.org>
Subject: Re: [Qemu-trivial] [PATCH V2 1/2] ARM: PL061: Clear PL061 device state after reset
Date: Wed, 17 Feb 2016 13:09:19 -0600 [thread overview]
Message-ID: <56C4C55F.2050006@redhat.com> (raw)
In-Reply-To: <CAFEAcA-0+HLS4vgG3DqgUqsBNEt=J3M9Fy89uOtTQAtbt5noCQ@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 1638 bytes --]
On 02/17/2016 11:53 AM, Peter Maydell wrote:
> On 17 February 2016 at 17:34, Wei Huang <wei@redhat.com> wrote:
>> On 02/16/2016 08:39 AM, Peter Maydell wrote:
>>> Side note: half our "PL061" behaviour is actually specific
>>> to the TI variant in the Luminary, and for our plain old PL061
>>> we ought to restrict access to the registers that are Stellaris
>>> only. But that's a different bug and not a very major one.
>>
>> Thanks for your suggestion. I was trying to fix it. The plan was to add
>> a new field, rsvd_addr, to "struct PL061State". Then, in pl061_read() and
>> pl061_write(), we could check the offset against [rsvd_addr, 0xfcc] and
>> ignore any access that falls inside that range.
>>
>> While I was working on it, I realized that this is a benign issue. It is
>> true that the PL061 device can access Luminary registers in the reserved
>> memory area. However, QEMU doesn't use these Luminary registers anywhere
>> other than in pl061_read() and pl061_write(); it basically passes the
>> read/write requests through. I don't see how a malicious driver could
>> damage the device state. Thoughts?
>
> It's not a "malicious guest can do bad things" bug, it's a "modelled
> hardware doesn't behave like the real thing" bug. A non-Luminary PL061
> should act like the hardware, which means that the registers that don't
> exist should be RAZ/WI (and should log guest errors if the guest tries
> to access them), the same way we already handle other reserved registers
> in the "default" case of the switch statements.
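For a plain (non-Luminary) PL061 that boils down to treating the whole
reserved window with the usual RAZ/WI-plus-guest-error pattern. Roughly
this (a sketch only, assuming the standard qemu_log_mask()/LOG_GUEST_ERROR
helpers from "qemu/log.h"; the helper names are just for illustration, and
the actual change is in the attached diff):

/* Sketch: reserved offsets read as zero, writes are ignored, both logged. */
static uint64_t pl061_reserved_read(hwaddr offset)
{
    qemu_log_mask(LOG_GUEST_ERROR,
                  "pl061_read: Bad offset %x\n", (int)offset);
    return 0;   /* RAZ: read as zero */
}

static void pl061_reserved_write(hwaddr offset, uint64_t value)
{
    qemu_log_mask(LOG_GUEST_ERROR,
                  "pl061_write: Bad offset %x\n", (int)offset);
    /* WI: the write is ignored and no device state is touched */
}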
How about the attached patch? I can write it up as a new patch, or would
you prefer that I stack it on top of the V3 I just submitted?
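(For reference, the reserved window in that patch differs only in its start
offset between the two variants; the helper name below is purely
illustrative, and the values come from the rsvd_start assignments in the
attached diff.)

/* Illustration: the range check used by both pl061_read() and pl061_write(). */
static bool pl061_offset_reserved(PL061State *s, hwaddr offset)
{
    /* plain PL061: rsvd_start = 0x424; Luminary variant: rsvd_start = 0x52c */
    return offset >= s->rsvd_start && offset <= 0xfcc;
}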
Thanks,
-Wei
>
> thanks
> -- PMM
>
[-- Attachment #2: pl061_boundary_check.txt --]
[-- Type: text/plain, Size: 2368 bytes --]
diff --git a/hw/gpio/pl061.c b/hw/gpio/pl061.c
index 5ece8b0..03a6351 100644
--- a/hw/gpio/pl061.c
+++ b/hw/gpio/pl061.c
@@ -60,6 +60,7 @@ typedef struct PL061State {
qemu_irq irq;
qemu_irq out[8];
const unsigned char *id;
+ uint32_t rsvd_start; /* reserved area: [rsvd_start, 0xfcc] */
} PL061State;
static const VMStateDescription vmstate_pl061 = {
@@ -158,6 +159,9 @@ static uint64_t pl061_read(void *opaque, hwaddr offset,
if (offset < 0x400) {
return s->data & (offset >> 2);
}
+ if (offset >= s->rsvd_start && offset <= 0xfcc) {
+ goto err_out;
+ }
switch (offset) {
case 0x400: /* Direction */
return s->dir;
@@ -198,10 +202,12 @@ static uint64_t pl061_read(void *opaque, hwaddr offset,
case 0x528: /* Analog mode select */
return s->amsel;
default:
- qemu_log_mask(LOG_GUEST_ERROR,
- "pl061_read: Bad offset %x\n", (int)offset);
- return 0;
+ break;
}
+err_out:
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "pl061_read: Bad offset %x\n", (int)offset);
+ return 0;
}
static void pl061_write(void *opaque, hwaddr offset,
@@ -216,6 +222,9 @@ static void pl061_write(void *opaque, hwaddr offset,
pl061_update(s);
return;
}
+ if (offset >= s->rsvd_start && offset <= 0xfcc) {
+ goto err_out;
+ }
switch (offset) {
case 0x400: /* Direction */
s->dir = value & 0xff;
@@ -274,10 +283,14 @@ static void pl061_write(void *opaque, hwaddr offset,
s->amsel = value & 0xff;
break;
default:
- qemu_log_mask(LOG_GUEST_ERROR,
- "pl061_write: Bad offset %x\n", (int)offset);
+ goto err_out;
}
pl061_update(s);
+ return;
+
+err_out:
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "pl061_write: Bad offset %x\n", (int)offset);
}
static void pl061_reset(DeviceState *dev)
@@ -347,6 +360,7 @@ static void pl061_luminary_init(Object *obj)
PL061State *s = PL061(obj);
s->id = pl061_id_luminary;
+ s->rsvd_start = 0x52c;
}
static void pl061_init(Object *obj)
@@ -354,6 +368,7 @@ static void pl061_init(Object *obj)
PL061State *s = PL061(obj);
s->id = pl061_id;
+ s->rsvd_start = 0x424;
}
static void pl061_class_init(ObjectClass *klass, void *data)