From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Thomas Huth <thuth@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, dgibson@redhat.com, wehuang@redhat.com,
drjones@redhat.com, amit.shah@redhat.com, jitendra.kolhe@hpe.com
Subject: Re: [Qemu-devel] [PATCH] hw/virtio/balloon: Fixes for different host page sizes
Date: Thu, 14 Apr 2016 12:47:58 +0100
Message-ID: <20160414114757.GE2252@work-vm>
In-Reply-To: <570E5D05.2030507@redhat.com>
* Thomas Huth (thuth@redhat.com) wrote:
> That would mean a regression compared to what we have today. Currently,
> ballooning works OK for 64k guests on a 64k ppc host - by chance rather
> than on purpose, but it works. The guest always sends all the 4k
> fragments of a 64k page, and QEMU tries to call madvise() for every one
> of them, but the kernel ignores madvise() on non-64k-aligned addresses,
> so we end up in a situation where the single aligned madvise() frees a
> whole 64k page which the guest has indeed declared as free.
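Right - for anyone following along, the kernel behaviour you're relying
on is easy to see with a small standalone test; this is just my sketch
of the madvise(2) semantics on a Linux host, nothing QEMU-specific:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long host_page = sysconf(_SC_PAGESIZE);  /* 64k on a 64k ppc host */
    size_t frag = 4096;                      /* the guest's 4k fragments */
    char *region = mmap(NULL, host_page, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Mimic the guest: one madvise() per 4k fragment of the host page. */
    for (size_t off = 0; off < (size_t)host_page; off += frag) {
        int r = madvise(region + off, frag, MADV_DONTNEED);
        /* On a 64k host only the call at offset 0 is page-aligned; it
         * succeeds and (since the length gets rounded up to a host page)
         * discards the whole 64k page, while the remaining calls fail
         * with EINVAL and are effectively ignored. */
        printf("offset %6zu: %s\n", off, r ? strerror(errno) : "ok");
    }
    munmap(region, host_page);
    return 0;
}

On a 64k host you should see one "ok" followed by fifteen EINVALs; on a
4k host every call succeeds.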
I wouldn't worry about migrating your fragment map; but I wonder if it
needs to be that complex - doesn't the guest normally do something saner,
like releasing the 4k pages in order, so that you only have to track the
last page it sent rather than keeping a full map?
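Something along these lines is what I have in mind - only a sketch, the
names (FragRun, fragment_seen) are made up rather than taken from QEMU,
and it assumes the guest really does send the fragments of each host
page in ascending order:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define GUEST_PAGE 4096u

/* One run of consecutive 4k fragments; it resets whenever the guest
 * jumps to a non-contiguous address. */
typedef struct {
    uint64_t next_gpa;  /* guest address we expect next */
    unsigned seen;      /* fragments seen so far in this run */
} FragRun;

static void fragment_seen(FragRun *run, uint64_t gpa, void *hva,
                          size_t host_page)
{
    unsigned frags_per_page = host_page / GUEST_PAGE;

    if (run->seen > 0 && gpa == run->next_gpa) {
        run->seen++;                        /* run continues in order */
    } else {
        /* only start counting at a host-page boundary */
        run->seen = (gpa % host_page == 0) ? 1 : 0;
    }
    run->next_gpa = gpa + GUEST_PAGE;

    if (run->seen == frags_per_page) {
        /* the whole host page arrived in order: hva points at its
         * last 4k fragment, so step back to the aligned start */
        madvise((char *)hva + GUEST_PAGE - host_page,
                host_page, MADV_DONTNEED);
        run->seen = 0;
    }
}

If a run gets broken the page simply never gets discarded, so it
degrades gracefully rather than corrupting anything.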
A side question is whether what virtio_balloon_handle_output actually sees
is always the full 64k page; it calls balloon_page() once for each
message/element - but if all of those elements add back up to the full
page, perhaps it makes more sense to reassemble it there?
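i.e. roughly this - again just a sketch with invented names (Reassembly,
element_seen), not the real virtio_balloon_handle_output, and it assumes
the guest-to-host mapping of the page is contiguous:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define GUEST_PAGE 4096u

/* State for the host page currently being reassembled.  A 64k host
 * page has 16 4k fragments, so a 32-bit mask covers host pages up
 * to 128k. */
typedef struct {
    uint64_t page_gpa;  /* guest address of the host page */
    uint32_t mask;      /* one bit per 4k fragment received */
} Reassembly;

static void element_seen(Reassembly *ra, uint64_t gpa, void *hva,
                         size_t host_page)
{
    unsigned frags = host_page / GUEST_PAGE;
    uint64_t base = gpa & ~(uint64_t)(host_page - 1);

    if (base != ra->page_gpa) {   /* element for a different host page */
        ra->page_gpa = base;
        ra->mask = 0;
    }
    ra->mask |= 1u << ((gpa - base) / GUEST_PAGE);

    if (ra->mask == (1u << frags) - 1) {
        /* the elements added back up to the full host page: step back
         * from this element's hva to the start of the page */
        madvise((char *)hva - (gpa - base), host_page, MADV_DONTNEED);
        ra->mask = 0;
    }
}

Arrival order stops mattering here, which would be the advantage over
only tracking the last fragment.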
> I think we should either take this patch as it is right now (without
> adding extra code for migration) and later update it to the bitmap code
> by Jitendra Kolhe, or omit it completely (leaving 4k guests broken) and
> fix it properly after the bitmap code has been applied. But disabling
> the balloon code for 64k guests on 64k hosts completely does not sound
> very appealing to me. What do you think?
Yeah, I agree; your existing code should work, and I don't think we should
break 64k-on-64k.
Dave
>
> Thomas
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK