From: Cornelia Huck
Subject: Re: [PATCH 1/1] KVM: s390: fix cmma migration for multiple memory slots
Date: Tue, 19 Dec 2017 11:41:42 +0100
Message-ID: <20171219114142.3490f80b.cohuck@redhat.com>
References: <20171219081921.81670-1-borntraeger@de.ibm.com> <20171219081921.81670-2-borntraeger@de.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
In-Reply-To: <20171219081921.81670-2-borntraeger@de.ibm.com>
Sender: kvm-owner@vger.kernel.org
To: Christian Borntraeger
Cc: KVM, linux-s390, Thomas Huth, Halil Pasic, Janosch Frank, Claudio Imbrenda

On Tue, 19 Dec 2017 09:19:21 +0100
Christian Borntraeger wrote:

> when multiple memory slots are present the cmma migration code

s/when/When/

> does not allocate enough memory for the bitmap. The memory slots
> are sorted in reverse order, so we must use gfn and size of
> slot[0] instead of the last one.

I've spent way too much time looking at the memslot code, but this
seems correct.

> 
> Signed-off-by: Christian Borntraeger
> Reviewed-by: Claudio Imbrenda
> Cc: stable@vger.kernel.org # 4.13+
> Fixes: 190df4a212a7 (KVM: s390: CMMA tracking, ESSA emulation, migration mode)
> ---
>  arch/s390/kvm/kvm-s390.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 966ea611210a..3373d8dff131 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -792,11 +792,12 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
> 
>  	if (kvm->arch.use_cmma) {
>  		/*
> -		 * Get the last slot. They should be sorted by base_gfn, so the
> -		 * last slot is also the one at the end of the address space.
> -		 * We have verified above that at least one slot is present.
> +		 * Get the first slot. They are reverse sorted by base_gfn, so
> +		 * the first slot is also the one at the end of the address
> +		 * space. We have verified above that at least one slot is
> +		 * present.
>  		 */
> -		ms = slots->memslots + slots->used_slots - 1;
> +		ms = slots->memslots;
>  		/* round up so we only use full longs */
>  		ram_pages = roundup(ms->base_gfn + ms->npages, BITS_PER_LONG);
>  		/* allocate enough bytes to store all the bits */

Reviewed-by: Cornelia Huck

As you wrote, this is good as a minimal fix.