From: "Philippe Mathieu-Daudé" <philmd@redhat.com>
To: Guenter Roeck <linux@roeck-us.net>,
qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Cc: Kevin Wolf <kwolf@redhat.com>, qemu-arm <qemu-arm@nongnu.org>,
qemu-block@nongnu.org, Max Reitz <mreitz@redhat.com>
Subject: Re: Question about (and problem with) pflash data access
Date: Wed, 12 Feb 2020 22:39:30 +0100
Message-ID: <504e7722-0b60-ec02-774d-26a7320e5309@redhat.com>
In-Reply-To: <20200212184648.GA584@roeck-us.net>
Cc'ing Jean-Christophe and Peter.
On 2/12/20 7:46 PM, Guenter Roeck wrote:
> Hi,
>
> I have been playing with pflash recently. For the most part it works,
> but I do have an odd problem when trying to instantiate pflash on sx1.
>
> My data file looks as follows.
>
> 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> ...
>
> In the sx1 machine, this becomes:
>
> 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> ...
>
> pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".
>
> I don't have much success with pflash tracing - data accesses don't
> show up there.
>
> I did find a number of problems with the sx1 emulation, but I have no clue
> what is going on with pflash. As far as I can see, pflash works fine on
> other machines. Can someone give me a hint what to look out for?
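For reference, a test image with the layout quoted above can be generated
with something like the sketch below. It is only an illustration: the 1 KiB
marker stride assumes the od(1) offsets in the dump are octal, and a
little-endian host is assumed so the 16-bit words land in the file as shown.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define IMAGE_SIZE  (32 * 1024 * 1024)
    #define BLOCK_SIZE  1024

    int main(void)
    {
        /* "flash.32M.test" matches the -drive option quoted above. */
        FILE *f = fopen("flash.32M.test", "wb");
        uint16_t block[BLOCK_SIZE / 2];
        uint32_t i;

        if (!f) {
            perror("flash.32M.test");
            return 1;
        }
        for (i = 0; i < IMAGE_SIZE / BLOCK_SIZE; i++) {
            memset(block, 0, sizeof(block));
            block[0] = (uint16_t)(i + 1); /* 0001, 0002, 0003, ... */
            block[2] = 0xaaaa;
            block[3] = 0xaaaa;
            block[4] = 0x5555;
            block[5] = 0x5555;
            if (fwrite(block, 1, sizeof(block), f) != sizeof(block)) {
                perror("fwrite");
                return 1;
            }
        }
        fclose(f);
        return 0;
    }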
This is specific to the SX1, introduced in commit 997641a84ff:
    static uint64_t static_read(void *opaque, hwaddr offset,
                                unsigned size)
    {
        uint32_t *val = (uint32_t *) opaque;
        uint32_t mask = (4 / size) - 1;

        return *val >> ((offset & mask) << 3);
    }
Only guessing, but this looks like some hardware parity (the guest sees
0x6000 OR'd into the first word of each block); I imagine you would need to
write those parity bits into your flash.32M file before starting QEMU, and
the data would then appear "normal" within the guest.
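To see what that handler actually computes, here is a stand-alone sketch
built around the static_read() quoted above (the cs0val value is a
placeholder I made up; the real constant lives in hw/arm/omap_sx1.c):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t hwaddr;

    static uint64_t static_read(void *opaque, hwaddr offset, unsigned size)
    {
        uint32_t *val = (uint32_t *) opaque;
        uint32_t mask = (4 / size) - 1;

        return *val >> ((offset & mask) << 3);
    }

    int main(void)
    {
        uint32_t cs0val = 0x12345678; /* placeholder, not the real value */
        hwaddr off;

        /* For size == 2, mask == 1: only bit 0 of the offset matters. */
        for (off = 0; off < 4; off++) {
            printf("16-bit read at offset %" PRIu64 " -> 0x%04x\n",
                   (uint64_t)off,
                   (unsigned)(static_read(&cs0val, off, 2) & 0xffff));
        }
        return 0;
    }

In isolation the handler returns the same 16 bits for every aligned 16-bit
read, whatever the offset, so it never reflects the drive contents.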