From: Peter Xu <peterx@redhat.com>
To: Eric Auger <eric.auger@redhat.com>
Cc: eric.auger.pro@gmail.com, qemu-devel@nongnu.org, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [PATCH] exec: Fix MAP_RAM for cached access
Date: Wed, 13 Jun 2018 11:15:01 +0800 [thread overview]
Message-ID: <20180613031501.GF15344@xz-mi> (raw)
In-Reply-To: <1528830325-5501-1-git-send-email-eric.auger@redhat.com>
On Tue, Jun 12, 2018 at 09:05:25PM +0200, Eric Auger wrote:
> When an IOMMUMemoryRegion is in front of a virtio device,
> address_space_cache_init does not set cache->ptr as the memory
> region is not RAM. However when the device performs an access,
> we end up in glue() which performs the translation and then uses
> MAP_RAM. The latter uses the unset ptr and returns a wrong value,
> which leads to a SIGSEGV in address_space_lduw_internal_cached_slow,
> for instance. Let's test whether cache->ptr is set and, if it is
> not, fall back to the old macro definition. This fixes the use
> cases featuring a vIOMMU (Intel and ARM SMMU) which led to a
> SIGSEGV.
>
> Fixes: 48564041a73a ("exec: reintroduce MemoryRegion caching")
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>
> ---
>
> I am not sure whether this breaks any intended optimization,
> but at least it removes the SIGSEGV.
>
> ---
> exec.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/exec.c b/exec.c
> index f6645ed..46fbd25 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3800,7 +3800,9 @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
> #define SUFFIX _cached_slow
> #define TRANSLATE(...) address_space_translate_cached(cache, __VA_ARGS__)
> #define IS_DIRECT(mr, is_write) memory_access_is_direct(mr, is_write)
> -#define MAP_RAM(mr, ofs) (cache->ptr + (ofs - cache->xlat))
> +#define MAP_RAM(mr, ofs) (cache->ptr ? \
> + (cache->ptr + (ofs - cache->xlat)) : \
> + qemu_map_ram_ptr((mr)->ram_block, ofs))
A pure question: if the MR is not RAM (for virtio, I think the only
such case would be an IOMMU MR), then why would we call MAP_RAM() at
all? A glue() example:
void glue(address_space_stb, SUFFIX)(ARG1_DECL,
    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
{
    uint8_t *ptr;
    MemoryRegion *mr;
    hwaddr l = 1;
    hwaddr addr1;
    MemTxResult r;
    bool release_lock = false;

    RCU_READ_LOCK();
    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
    if (!IS_DIRECT(mr, true)) {                <----------------- [1]
        release_lock |= prepare_mmio_access(mr);
        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
    } else {
        /* RAM case */
        ptr = MAP_RAM(mr, addr1);
        stb_p(ptr, val);
        INVALIDATE(mr, addr1, 1);
        r = MEMTX_OK;
    }
    if (result) {
        *result = r;
    }
    if (release_lock) {
        qemu_mutex_unlock_iothread();
    }
    RCU_READ_UNLOCK();
}
At [1] we first check whether the access is direct after all. AFAIU an
IOMMU MR should not be direct, so it would take the MMIO dispatch path
rather than calling MAP_RAM()?
While at it, I have another (pure) question about the address space
cache. I don't think it's urgent, since I believe it's never a problem
for virtio, but I'm still asking anyway...
Still taking the stb example:
static inline void address_space_stb_cached(MemoryRegionCache *cache,
    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
{
    assert(addr < cache->len);           <----------------------------- [2]
    if (likely(cache->ptr)) {
        stb_p(cache->ptr + addr, val);
    } else {
        address_space_stb_cached_slow(cache, addr, val, attrs, result);
    }
}
Here at [2], what if the cached region is smaller than the length
provided to address_space_cache_init()? AFAIU the "len" passed to
address_space_cache_init() can actually shrink (though for virtio it
never should) when we do:
    l = len;
    ...
    cache->mrs = *address_space_translate_internal(d, addr, &cache->xlat,
                                                   &l, true);
    ...
    cache->len = l;
So I wonder whether, instead of asserting here, we should only take
the fast path when the address falls inside the cached region, say:
static inline void address_space_stb_cached(MemoryRegionCache *cache,
    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
{
    if (likely(cache->ptr && addr < cache->len)) {
        stb_p(cache->ptr + addr, val);
    } else {
        address_space_stb_cached_slow(cache, addr, val, attrs, result);
    }
}
Or we should add a check in address_space_cache_init() to make sure
the region won't shrink.
Regards,
--
Peter Xu
Thread overview: 6+ messages
2018-06-12 19:05 [Qemu-devel] [PATCH] exec: Fix MAP_RAM for cached access Eric Auger
2018-06-13 3:15 ` Peter Xu [this message]
2018-06-13 6:31 ` Auger Eric
2018-06-13 6:53 ` Peter Xu
2018-06-13 9:56 ` Paolo Bonzini
2018-06-13 13:20 ` Auger Eric