Date: Sun, 12 Mar 2017 02:05:47 +0200
From: "Michael S. Tsirkin"
To: Matthew Wilcox
Cc: Wei Wang, virtio-dev@lists.oasis-open.org, kvm@vger.kernel.org,
	qemu-devel@nongnu.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
	Liang Li, Paolo Bonzini, Cornelia Huck, Amit Shah, Dave Hansen,
	Andrea Arcangeli, David Hildenbrand, Liang Li
Subject: Re: [PATCH v7 kernel 3/5] virtio-balloon: implementation of VIRTIO_BALLOON_F_CHUNK_TRANSFER
Message-ID: <20170312020203-mutt-send-email-mst@kernel.org>
In-Reply-To: <20170310212541.GC16328@bombadil.infradead.org>

On Fri, Mar 10, 2017 at 01:25:41PM -0800, Matthew Wilcox wrote:
> On Fri, Mar 10, 2017 at 09:35:21PM +0200, Michael S. Tsirkin wrote:
> > > bit 0 clear => bits 1-11 encode a page count, bits 12-63 encode a PFN, page size 4k.
> > > bit 0 set, bit 1 clear => bits 2-12 encode a page count, bits 13-63 encode a PFN, page size 8k.
> > > bits 0+1 set, bit 2 clear => bits 3-13 for page count, bits 14-63 for PFN, page size 16k.
> > > bits 0-2 set, bit 3 clear => bits 4-14 for page count, bits 15-63 for PFN, page size 32k.
> > > bits 0-3 set, bit 4 clear => bits 5-15 for page count, bits 16-63 for PFN, page size 64k.
> > >
> > > That means we can always pass 2048 pages (of whatever page size) in
> > > a single chunk.  And we support arbitrary power-of-two page sizes.
> > > I suggest something like this:
> > >
> > > u64 page_to_chunk(struct page *page)
> > > {
> > > 	u64 chunk = page_to_pfn(page) << PAGE_SHIFT;
> > > 	chunk |= (1UL << compound_order(page)) - 1;
> > > 	return chunk;
> > > }
> >
> > You need to fill in the size, do you not?
>
> I think I did ... (1UL << compound_order(page)) - 1 sets the bottom
> N bits.  Bit N+1 will already be clear.  What am I missing?

This sets the order but not the number of pages.  For that you would do
something like

	chunk |= size << compound_order(page)

right?
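To spell out what I mean with both fields in place -- a rough, untested
userspace sketch.  The field positions follow your table above (low set
bits select the page size, an 11-bit page count just above them, the
PFN above that); the helper names and the standalone form are made up
for illustration:

#include <stdint.h>
#include <stdio.h>

/*
 * Order 'o' => bits 0..o-1 set, bit o clear (page size 4k << o),
 * bits o+1..o+11 carry an 11-bit page count, bits o+12..63 the PFN.
 */
static uint64_t encode_chunk(uint64_t pfn, unsigned int order,
			     uint64_t npages)
{
	uint64_t chunk = pfn << (order + 12);	/* PFN above both fields */

	chunk |= npages << (order + 1);		/* the missing page count */
	chunk |= (1ULL << order) - 1;		/* low set bits pick the size */
	return chunk;
}

static void decode_chunk(uint64_t chunk, uint64_t *pfn,
			 unsigned int *order, uint64_t *npages)
{
	unsigned int o = 0;

	while (chunk & (1ULL << o))		/* count the low set bits */
		o++;
	*order = o;
	*npages = (chunk >> (o + 1)) & 0x7ff;	/* 11-bit count field */
	*pfn = chunk >> (o + 12);
}

int main(void)
{
	uint64_t pfn, npages;
	unsigned int order;

	/* e.g. 32 pages of size 16k (order 2) starting at PFN 0x1234 */
	decode_chunk(encode_chunk(0x1234, 2, 32), &pfn, &order, &npages);
	printf("pfn %llx order %u npages %llu\n",
	       (unsigned long long)pfn, order, (unsigned long long)npages);
	return 0;
}

Note the count field is 11 bits wide at every order, which is where the
2048-pages-per-chunk limit in your scheme comes from.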
> > > > - host should pass its base page size to guest
> > > >   this can be a separate patch and for now we can fall back on
> > > >   12 bit if not there
> > >
> > > With this encoding scheme, I don't think we need to do this?  As long as
> > > it's *at least* 12 bit, then we're fine.
> >
> > I think we will still need something like this down the road.  The point
> > is that not all hosts are able to use 4k pages in a balloon.
> > So it's pointless for the guest to pass 4k pages to such a host,
> > and we need the host to tell the guest the page size it needs.
> >
> > However that's a separate feature that can wait until
> > another day.
>
> Ah, the TRIM/DISCARD debate all over again ... should the guest batch
> up or should the host do that work ... probably easier to account for
> it in the guest.  Might be better to frame it as 'balloon chunk size'
> rather than host page size, as it might have nothing to do with the
> host page size.

Exactly.

> > > What per-chunk flags are you thinking would be useful?
> >
> > Not entirely sure, but I think it would have been prudent to leave some
> > free if possible.  Your encoding seems to use them all up, so be it.
>
> We don't necessarily have to support 2048 pages in a single chunk.
> If it's worth reserving some bits, we can do that at the expense of
> reducing the maximum number of pages per chunk.

Well, we can always change things with a feature bit.  I'll leave this
up to you and Wei.

--
MST