From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 14 Apr 2016 12:47:58 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Message-ID: <20160414114757.GE2252@work-vm>
References: <1460548364-27469-1-git-send-email-thuth@redhat.com> <20160413145835-mutt-send-email-mst@redhat.com> <570E5D05.2030507@redhat.com>
In-Reply-To: <570E5D05.2030507@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] hw/virtio/balloon: Fixes for different host page sizes
To: Thomas Huth <thuth@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org, dgibson@redhat.com, wehuang@redhat.com, drjones@redhat.com, amit.shah@redhat.com, jitendra.kolhe@hpe.com

* Thomas Huth (thuth@redhat.com) wrote:
> That would mean a regression compared to what we have today. Currently,
> the ballooning is working OK for 64k guests on a 64k ppc host - rather
> by chance than on purpose, but it's working. The guest is always sending
> all the 4k fragments of a 64k page, and QEMU is trying to call madvise()
> for every one of them, but the kernel is ignoring madvise() on
> non-64k-aligned addresses, so we end up with a situation where the
> madvise() frees a whole 64k page which is also declared as free by the
> guest.
I wouldn't worry about migrating your fragment map; but I wonder if it
needs to be that complex - does the guest normally do something more
sane, like releasing the 4k pages in order, so that you only need to
track the last page it tried rather than keeping a full map?

A side question is whether the behaviour that's seen by
virtio_balloon_handle_output is always actually the full 64k page; it
calls balloon_page once for each message/element - but if all of those
elements add back up to the full page, perhaps it makes more sense to
reassemble it there?

> I think we should either take this patch as it is right now (without
> adding extra code for migration) and later update it to the bitmap code
> by Jitendra Kolhe, or omit it completely (leaving 4k guests broken) and
> fix it properly after the bitmap code has been applied. But disabling
> the balloon code for 64k guests on 64k hosts completely does not sound
> very appealing to me. What do you think?

Yeah, I agree; your existing code should work, and I don't think we
should break 64k-on-64k.

Dave

>
>  Thomas
>

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK