From: Igor Mammedov <imammedo@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Peng Hao <peng.hao2@zte.com.cn>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-devel@nongnu.org, maxime.coquelin@redhat.com,
	marcandre.lureau@redhat.com,
	Wang Yechao <wang.yechao255@zte.com.cn>
Subject: Re: [Qemu-devel] [PATCH] vhost: fix a migration failed because of vhost region merge
Date: Mon, 24 Jul 2017 15:01:28 +0200	[thread overview]
Message-ID: <20170724150128.17b59235@nial.brq.redhat.com> (raw)
In-Reply-To: <20170721232134-mutt-send-email-mst@kernel.org>

Michael,

You once advocated the idea of using MAP_NORESERVE to reserve a contiguous
HVA range up to maxmem in QEMU and then 'allocating' guest RAM from that
range, so that the vhost translation map could consist of only that single
pre-reserved HVA range; if the guest then accessed a page outside of the
actually present memory, it would be acceptable to let the guest misbehave.
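
Roughly this scheme (just to illustrate what I mean, the names are made up
and this is not how QEMU's allocator actually does it):

#include <stddef.h>
#include <sys/mman.h>

/* Reserve a contiguous HVA range sized for maxmem without committing
 * any memory; nothing is populated yet, so accesses would fault. */
static void *reserve_hva(size_t maxmem)
{
    void *base = mmap(NULL, maxmem, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return base == MAP_FAILED ? NULL : base;
}

/* 'Allocate' a block of guest RAM at a fixed offset inside the reserved
 * range by replacing that part of the mapping with accessible backing. */
static void *alloc_guest_ram(void *base, size_t offset, size_t size)
{
    void *p = mmap((char *)base + offset, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

With something like that, vhost could be handed the single
[base, base + maxmem) range as its only translation entry.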

The reason we get so many fragments is the holes poked in the initial RAM
by device memory/MMIO ranges. Maybe we can reuse the 'it would be acceptable
to let the guest misbehave' part in vhost_set_memory(), which tracks the
flat memory map represented by sections.

The idea here is that each section has a reference to the MemoryRegion it
belongs to; for the vhost memory map we could reuse that MemoryRegion's
range instead of the set of sections that belong to it. A well-behaved
guest will keep working, since it only uses accessible RAM pages, while a
malicious guest will misbehave if it asks for a translation of a page
outside of accessible RAM.
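
Untested sketch of what that could look like (the helper name is made up;
a real patch would also have to skip duplicates when several sections
refer to the same MemoryRegion, and this assumes the MR is mapped
contiguously at a single GPA):

#include <stdint.h>
#include <linux/vhost.h>    /* struct vhost_memory_region */
#include "exec/memory.h"    /* MemoryRegion, MemoryRegionSection */

static void vhost_region_from_mr(struct vhost_memory_region *reg,
                                 MemoryRegionSection *section)
{
    MemoryRegion *mr = section->mr;

    /* Cover the whole RAM MemoryRegion the section belongs to,
     * not just the flatview fragment between MMIO holes. */
    reg->guest_phys_addr = section->offset_within_address_space -
                           section->offset_within_region;
    reg->memory_size     = memory_region_size(mr);
    reg->userspace_addr  = (uintptr_t)memory_region_get_ram_ptr(mr);
    reg->flags_padding   = 0;
}
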
This way the vhost memory map will typically have 1 or 2 entries for the
low/high memory ranges plus an entry per dimm. We can thus keep the same
or a smaller number of entries in the vhost map without merging (which in
practice only works for fragmented initial memory and may work for dimm
devices by chance), and at the same time the number of entries stays
constant (the number of RAM memory regions), depending only on the initial
RAM amount and on how many dimm devices are used, regardless of the order
in which they were created or the point at runtime at which they were
created.
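
For example (numbers are just for illustration): a guest with 6G of initial
RAM split by the PCI hole into a below-4G range and an above-4G range, plus
2 hotplugged dimms, would always need 2 + 2 = 4 entries, no matter how many
sections the MMIO holes carve out of the flat view or in which order the
dimms were plugged.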

Thread overview: 14+ messages
2017-07-19 15:17 [Qemu-devel] [PATCH] vhost: fix a migration failed because of vhost region merge Peng Hao
2017-07-19  7:50 ` Igor Mammedov
2017-07-19 11:46   ` Dr. David Alan Gilbert
2017-07-19 13:24     ` Igor Mammedov
2017-07-19 15:52       ` Michael S. Tsirkin
2017-07-20 17:22         ` Dr. David Alan Gilbert
2017-07-21 19:49           ` Michael S. Tsirkin
2017-07-24  8:06             ` Dr. David Alan Gilbert
2017-07-24 10:46               ` Igor Mammedov
2017-07-21 14:41         ` Igor Mammedov
2017-07-21 21:30           ` Michael S. Tsirkin
2017-07-24 10:05             ` Igor Mammedov
2017-07-24 13:01             ` Igor Mammedov [this message]
2017-07-19  8:36 ` no-reply
