From mboxrd@z Thu Jan  1 00:00:00 1970
From: Takuya Yoshikawa
Subject: Re: [PATCH] KVM: fix the handling of dirty bitmaps to avoid overflows
Date: Tue, 13 Apr 2010 09:52:24 +0900
Message-ID: <4BC3C048.5030704@oss.ntt.co.jp>
References: <20100412193535.6c502695.yoshikawa.takuya@oss.ntt.co.jp> <20100412173951.GA5614@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: avi-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, kvm-ia64-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, kvm-ppc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
To: Marcelo Tosatti
Return-path: <kvm-ppc-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
In-Reply-To: <20100412173951.GA5614-I4X2Mt4zSy4@public.gmane.org>
Sender: kvm-ppc-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-Id: kvm.vger.kernel.org

(2010/04/13 2:39), Marcelo Tosatti wrote:
> On Mon, Apr 12, 2010 at 07:35:35PM +0900, Takuya Yoshikawa wrote:
>> This patch fixes a bug found by Avi during the review process
>> of my dirty bitmap related work.
>>
>> To ppc and ia64 people:
>> The fix is really simple but touches all architectures using
>> dirty bitmaps, so please check that it does not break your part.
>>
>> ===
>>
>> Int is not long enough to store the size of a dirty bitmap.
>>
>> This patch fixes the problem by introducing a wrapper function
>> to calculate the sizes of dirty bitmaps.
>>
>> Note: in mark_page_dirty(), we have to consider the fact that
>> __set_bit() takes the offset as int, not long.
>>
>> Signed-off-by: Takuya Yoshikawa
>
> Applied, thanks.
>

Thanks everyone!

BTW, just out of curiosity, are there any cases in which we currently
use such a huge number of pages?

    ALIGN(memslot->npages, BITS_PER_LONG) / 8;

More than G pages would need a really large amount of memory!
-- or are we assuming some special cases, like a "short" int size?

If so, we may have to care about a lot of things from now on, because
common functions like __set_bit() take the bit offset as int and so
don't support such long buffers.
If not, my patch might be overkill -- especially the following part:

@@ -1183,10 +1183,13 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 
 	memslot = gfn_to_memslot_unaliased(kvm, gfn);
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
+		unsigned long *p = memslot->dirty_bitmap +
+					rel_gfn / BITS_PER_LONG;
+		int offset = rel_gfn % BITS_PER_LONG;
 
 		/* avoid RMW */
-		if (!generic_test_le_bit(rel_gfn, memslot->dirty_bitmap))
-			generic___set_le_bit(rel_gfn, memslot->dirty_bitmap);
+		if (!generic_test_le_bit(offset, p))
+			generic___set_le_bit(offset, p);
 	}
 }