From: "Michael S. Tsirkin" <mst@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged
Date: Tue, 14 Jul 2015 16:14:54 +0300
Message-ID: <20150714161212-mutt-send-email-mst@redhat.com>
In-Reply-To: <20150714150244.44c323eb@nial.brq.redhat.com>
On Tue, Jul 14, 2015 at 03:02:44PM +0200, Igor Mammedov wrote:
> On Mon, 13 Jul 2015 23:14:37 +0300
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>
> > On Mon, Jul 13, 2015 at 08:55:13PM +0200, Igor Mammedov wrote:
> > > On Mon, 13 Jul 2015 09:55:18 +0300
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > >
> > > > On Fri, Jul 10, 2015 at 12:12:36PM +0200, Igor Mammedov wrote:
> > > > > On Thu, 9 Jul 2015 16:46:43 +0300
> > > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > >
> > > > > > On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > > > > > > QEMU asserts in vhost due to hitting vhost backend limit
> > > > > > > > > on number of supported memory regions.
> > > > > > > > >
> > > > > > > > > Describe all hotplugged memory as one continuous range
> > > > > > > > > to vhost, with a linear 1:1 HVA->GPA mapping in the backend.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > > > > > > >
> > > > > > > > Hmm - a bunch of work here to recombine MRs that the memory
> > > > > > > > listener interface breaks up. In particular, KVM could
> > > > > > > > benefit from this too (on workloads that change the table a
> > > > > > > > lot). Can't we teach the memory core to pass the HVA range
> > > > > > > > as a single continuous range to memory listeners?
> > > > > > >
> > > > > > > Memory listeners are based on memory regions, not HVA ranges.
> > > > > > >
> > > > > > > Paolo
> > > > > >
> > > > > > Many listeners care about HVA ranges. I know KVM and vhost do.
> > > > > I'm not sure about KVM: it works just fine with fragmented memory
> > > > > regions, and the same will apply to vhost once the module parameter
> > > > > to increase the region limit is merged.
> > > > >
> > > > > But changing the generic memory listener interface to replace
> > > > > HVA-mapped regions with an HVA container would mean that listeners
> > > > > no longer see the exact layout they might need.
> > > >
> > > > I don't think they care, really.
> > > >
> > > > > In addition, vhost itself would suffer from working with one big
> > > > > HVA range, since it sizes the log from the memory size => a bigger log.
> > > >
> > > > Not really - it allocates the log depending on the PA range.
> > > > Leaving unused holes doesn't reduce its size.
> > > If it used the HVA container instead, it would always allocate the
> > > log for the maximum possible GPA, meaning that -m 1024,maxmem=1T
> > > wastes a lot of memory (roughly 32MiB of log for a 1TiB range,
> > > assuming one log bit per 4KiB page), and more so for bigger maxmem.
> > > It's still possible to induce the worst case by plugging a pc-dimm
> > > at the end of the hotplug-memory area, by specifying its address
> > > explicitly. That problem has existed since memory hot-add was
> > > introduced; I just hadn't noticed it back then.
> >
> > There you are then. Depending on maxmem seems cleaner as it's more
> > predictable.
> >
> > > It's perfectly fine to size the log by the last GPA as long as
> > > memory is nearly contiguous, but memory hot-add makes it possible
> > > to have a sparse layout with huge gaps between guest-mapped RAM,
> > > which makes the current log handling inefficient.
> > >
> > > I wonder how hard it would be to make log_size depend on the
> > > present RAM size rather than the maximum present GPA, so it
> > > wouldn't allocate excess memory for the log.
> >
> > We can simply map the unused parts of the log NORESERVE.
> Meaning that the vhost listener should get the RAM regions, so it
> would know which parts of the log to map NORESERVE and punch out
> with madvise(DONTNEED).
>
> It would also require a custom allocator for the log that could
> manage punching/unpunching holes in it depending on the RAM layout.
Yeah. Anyway, this isn't urgent, I think.
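
If we ever do it, roughly this shape, as an untested sketch (the names
log_alloc, log_punch_hole and LOG_PAGE_BITS are made up for
illustration; this is not the actual vhost log code, and error
handling is omitted):

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

#define LOG_PAGE_BITS 12            /* one log bit per 4KiB guest page */

static uint8_t *log_alloc(uint64_t max_gpa)
{
    size_t log_size = (max_gpa >> LOG_PAGE_BITS) / 8;

    /* NORESERVE: no commit charge for log pages we never touch. */
    return mmap(NULL, log_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* Drop the log pages covering a GPA hole, e.g. after layout changes. */
static void log_punch_hole(uint8_t *log, uint64_t gpa, uint64_t size)
{
    uint64_t start = (gpa >> LOG_PAGE_BITS) / 8;
    uint64_t end = ((gpa + size) >> LOG_PAGE_BITS) / 8;

    /* Round inward so only whole pages inside the hole are dropped. */
    start = (start + 4095) & ~4095ULL;
    end &= ~4095ULL;
    if (end > start) {
        madvise(log + start, end - start, MADV_DONTNEED);
    }
}

MAP_NORESERVE keeps untouched log pages from being charged, and
MADV_DONTNEED gives back whatever a hole had already faulted in.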
> BTW, is it possible for the guest to force the vhost module to access
> a NORESERVE area, and what would happen in that case?
Sure. I think you'll get EFAULT; vhost will stop processing the ring then.
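
A tiny userspace analogy (not the vhost path itself; PROT_NONE stands
in for a page the kernel can't fault in on the caller's behalf):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    /* A range the kernel cannot copy from on the caller's behalf. */
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* write() must copy from p; expect -1 with errno == EFAULT. */
    if (write(STDOUT_FILENO, p, len) < 0) {
        fprintf(stderr, "write: %s\n", strerror(errno));
    }
    munmap(p, len);
    return 0;
}

The syscall fails cleanly with EFAULT instead of crashing anything;
vhost similarly flags the error and stops processing the ring.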
>
> >
> > That can be a natural continuation of this series, but
> > I don't think it needs to block it.
> >
> > >
> > > >
> > > >
> > > > > That's one of the reasons that, in this patch, HVA ranges in the
> > > > > memory map are compacted only for backend consumption; QEMU's side
> > > > > of vhost uses the exact map for internal purposes. The other reason
> > > > > is that I don't know vhost well enough to rewrite it to use one big
> > > > > HVA range for everything.
> > > > >
> > > > > > I guess we could create dummy MRs to fill in the holes left by
> > > > > > memory hotplug?
> > > > > it looks like a nice thing from vhost's POV, but it complicates the other side,
> > > >
> > > > What other side do you have in mind?
> > > >
> > > > > hence I dislike the idea of inventing dummy MRs for vhost's convenience.
> > > The memory core, but let's see what Paolo thinks about it.
> > >
> > > > >
> > > > >
> > > > > > vhost already has logic to recombine
> > > > > > consecutive chunks created by the memory core.
> > > > > Which looks a bit complicated; I was thinking about simplifying
> > > > > it some time in the future.
> > > >
> >
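
For reference, the recombining boils down to interval coalescing:
sort by GPA and merge chunks that are adjacent in both GPA and HVA
space. A generic sketch of the idea (struct and function names
invented here; not the actual QEMU code):

#include <stdint.h>
#include <stdlib.h>

struct region {
    uint64_t gpa;       /* guest physical address */
    uint64_t size;
    uint64_t hva;       /* host virtual address backing it */
};

static int cmp_gpa(const void *a, const void *b)
{
    const struct region *ra = a, *rb = b;
    return ra->gpa < rb->gpa ? -1 : ra->gpa > rb->gpa;
}

/* Merge in place; returns the new region count. */
static size_t coalesce(struct region *r, size_t n)
{
    size_t out = 0;

    if (n == 0) {
        return 0;
    }
    qsort(r, n, sizeof(*r), cmp_gpa);
    for (size_t i = 1; i < n; i++) {
        struct region *prev = &r[out];
        /* Mergeable only if contiguous in both GPA and HVA. */
        if (r[i].gpa == prev->gpa + prev->size &&
            r[i].hva == prev->hva + prev->size) {
            prev->size += r[i].size;
        } else {
            r[++out] = r[i];
        }
    }
    return out + 1;
}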
Thread overview: 24+ messages
2015-07-09 11:47 [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 1/7] memory: get rid of memory_region_destructor_ram_from_ptr() Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 2/7] memory: introduce MemoryRegion container with reserved HVA range Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 3/7] pc: reserve hotpluggable memory range with memory_region_init_hva_range() Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged Igor Mammedov
2015-07-09 13:06 ` Michael S. Tsirkin
2015-07-09 13:43 ` Paolo Bonzini
2015-07-09 13:46 ` Michael S. Tsirkin
2015-07-10 10:12 ` Igor Mammedov
2015-07-13 6:55 ` Michael S. Tsirkin
2015-07-13 18:55 ` Igor Mammedov
2015-07-13 20:14 ` Michael S. Tsirkin
2015-07-14 13:02 ` Igor Mammedov
2015-07-14 13:14 ` Michael S. Tsirkin [this message]
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 5/7] exec: make sure that RAMBlock descriptor won't be leaked Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 6/7] exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 7/7] memory: add support for deleting HVA mapped MemoryRegion Igor Mammedov
2015-07-15 15:12 ` [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on Igor Mammedov
2015-07-15 16:32 ` Michael S. Tsirkin
2015-07-16 7:26 ` Igor Mammedov
2015-07-16 7:35 ` Michael S. Tsirkin
2015-07-16 9:42 ` Igor Mammedov
2015-07-16 10:24 ` Michael S. Tsirkin
2015-07-16 11:11 ` Igor Mammedov