From: Igor Mammedov <imammedo@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com
Subject: Re: [Qemu-devel] [RFC v2 3/6] memory: support unmapping of MemoryRegion mapped into HVA parent
Date: Mon, 8 Jun 2015 18:13:14 +0200
Message-ID: <20150608181314.3ab8fc80@nial.brq.redhat.com>
In-Reply-To: <5575B58B.50105@redhat.com>
On Mon, 08 Jun 2015 17:32:27 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 08/06/2015 17:19, Igor Mammedov wrote:
> > +void qemu_ram_unmap_hva(ram_addr_t addr)
> > +{
> > +    RAMBlock *block = find_ram_block(addr);
> > +
> > +    assert(block);
> > +    mmap(block->host, block->used_length, PROT_NONE,
> > +         MAP_FIXED | MAP_NORESERVE | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > +}
> > +
>
> Hmm, this is not good. :( The area at block->host can be in use, for
> example via memory_region_ref/memory_region_unref. This can happen a
> bit after the memory_region_del_subregion. So you can SEGV if you
> simply make a synchronous update. I'm not sure if there is a solution
Yep, that's the problem I haven't found a solution to so far;
any ideas on how to approach this are appreciated.
The issue is that we have to re-reserve the HVA region first, so that no
other allocation can claim the gap, and the only way I've found to do that
is to call mmap() over it, which as a side effect invalidates the
MemoryRegion's backing RAM.
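
To make that concrete, here is a minimal standalone sketch (plain C, outside
QEMU, names and sizes purely illustrative) of the re-reservation trick above:
mapping PROT_NONE anonymous memory with MAP_FIXED keeps the virtual range
reserved so no other allocation can land there, but it throws away whatever
backed the range, so anyone still holding a pointer into it faults on the
next access.

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2 * 1024 * 1024;

    /* back the range with ordinary anonymous RAM and dirty it */
    void *hva = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(hva != MAP_FAILED);
    memset(hva, 0xab, len);

    /*
     * "unmap" as in qemu_ram_unmap_hva() above: map PROT_NONE anonymous
     * memory over the same range in place, keeping the address range
     * reserved but dropping the backing RAM.
     */
    void *r = mmap(hva, len, PROT_NONE,
                   MAP_FIXED | MAP_NORESERVE | MAP_ANONYMOUS | MAP_PRIVATE,
                   -1, 0);
    assert(r == hva);

    /*
     * The range is still reserved (a fresh mmap(NULL, ...) will not be
     * placed here), but any late user of the old mapping, e.g. someone
     * still holding a memory_region_ref(), would now SEGV:
     *
     *     ((char *)hva)[0];    // faults: range is PROT_NONE
     */
    printf("range %p..%p re-reserved with PROT_NONE\n",
           hva, (void *)((char *)hva + len));
    return 0;
}
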
> (but thanks for splitting the patches in a way that made the problem
> clear!).
>
> Paolo
Thread overview: 25+ messages in thread
2015-06-08 15:19 [Qemu-devel] [RFC v2 0/6] Fix QEMU crash during memory hotplug with vhost=on Igor Mammedov
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 1/6] memory: get rid of memory_region_destructor_ram_from_ptr() Igor Mammedov
2015-06-08 15:23 ` Paolo Bonzini
2015-06-08 16:08 ` Igor Mammedov
2015-06-08 16:09 ` Paolo Bonzini
2015-06-08 16:26 ` Igor Mammedov
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 2/6] memory: introduce MemoryRegion container with reserved HVA range Igor Mammedov
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 3/6] memory: support unmapping of MemoryRegion mapped into HVA parent Igor Mammedov
2015-06-08 15:26 ` Paolo Bonzini
2015-06-08 15:32 ` Paolo Bonzini
2015-06-08 16:13 ` Igor Mammedov [this message]
2015-06-08 16:25 ` Michael S. Tsirkin
2015-06-08 17:06 ` Paolo Bonzini
2015-06-09 10:08 ` Igor Mammedov
2015-06-17 8:14 ` Paolo Bonzini
2015-06-17 15:04 ` Igor Mammedov
2015-06-17 15:10 ` Michael S. Tsirkin
2015-06-17 16:15 ` Paolo Bonzini
2015-06-17 16:30 ` Michael S. Tsirkin
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 4/6] hostmem: return recreated MemoryRegion if current can't be reused Igor Mammedov
2015-06-08 15:30 ` Paolo Bonzini
2015-06-08 16:25 ` Igor Mammedov
2015-06-08 16:28 ` Paolo Bonzini
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 5/6] pc: reserve hotpluggable memory range with memory_region_init_hva_range() Igor Mammedov
2015-06-08 15:19 ` [Qemu-devel] [RFC v2 6/6] pc: fix QEMU crashing when more than ~50 memory hotplugged Igor Mammedov