From: "Collin L. Walling"
Subject: Re: [PATCH RFC 4/6] KVM: s390: consider epoch index on TOD clock syncs
Date: Wed, 7 Feb 2018 15:08:26 -0500
Message-ID: <438d8539-44fa-2661-ea04-a0642c48c9fa@linux.vnet.ibm.com>
In-Reply-To: <20180207114647.6220-5-david@redhat.com>
References: <20180207114647.6220-1-david@redhat.com> <20180207114647.6220-5-david@redhat.com>
To: David Hildenbrand, linux-s390@vger.kernel.org, kvm@vger.kernel.org
Cc: Christian Borntraeger, Cornelia Huck, Janosch Frank

On 02/07/2018 06:46 AM, David Hildenbrand wrote:
> For now, we don't take care of over/underflows. Especially underflows
> are critical:
>
> Assume the epoch is currently 0 and we get a sync request for delta=1,
> meaning the TOD is moved forward by 1 and we have to fix it up by
> subtracting 1 from the epoch. Right now, this will leave the epoch
> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>
> We have to take care of over and underflows, also for the VSIE case. So
> let's factor out calculation into a separate function.
>
> Signed-off-by: David Hildenbrand
> ---
>  arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>  1 file changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index d007b737cd4d..c2b62379049e 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>  static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>  			      unsigned long end);
>  
> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
> +{
> +	u64 delta_idx = 0;
> +
> +	/*
> +	 * The TOD jumps by delta, we have to compensate this by adding
> +	 * -delta to the epoch.
> +	 */
> +	delta = -delta;
> +
> +	/* sign-extension - we're adding to signed values below */
> +	if ((s64)delta < 0)
> +		delta_idx = 0xff;
> +
> +	scb->epoch += delta;
> +	if (scb->ecd & ECD_MEF) {
> +		scb->epdx += delta_idx;
> +		if (scb->epoch < delta)
> +			scb->epdx += 1;
> +	}
> +}
> +

Is the sync always a jump forward? Do we need to worry about a borrow
from the epdx in case of underflow?

>  /*
>   * This callback is executed during stop_machine(). All CPUs are therefore
>   * temporarily stopped. In order not to change guest behavior, we have to
> @@ -194,13 +216,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
>  	unsigned long long *delta = v;
>  
>  	list_for_each_entry(kvm, &vm_list, vm_list) {
> -		kvm->arch.epoch -= *delta;
>  		kvm_for_each_vcpu(i, vcpu, kvm) {
> -			vcpu->arch.sie_block->epoch -= *delta;
> +			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
> +			if (i == 0) {
> +				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
> +				kvm->arch.epdx = vcpu->arch.sie_block->epdx;

Are we safe by setting the kvm epochs to the sie epochs wrt migration?

> +			}
>  			if (vcpu->arch.cputm_enabled)
>  				vcpu->arch.cputm_start += *delta;
>  			if (vcpu->arch.vsie_block)
> -				kvm_clock_sync_scb(vcpu->arch.vsie_block,
> -						   *delta);
>  		}
>  	}
>  	return NOTIFY_OK;

-- 
- Collin L Walling
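P.S. To make the over/underflow arithmetic concrete, below is a minimal
userspace sketch (not the kernel code; the struct, field widths, and
function names are illustrative assumptions, and the ECD_MEF check is
dropped since this toy model always has an index). It models a 64-bit
epoch plus an 8-bit epoch index and replays the commit message's
epoch=0, delta=1 case: the sign-extended index delta plus the carry
check yields epoch=-1 and epoch index 0xff, i.e. -1 across the combined
72-bit value.

#include <stdint.h>
#include <stdio.h>

/* Toy model of the guest epoch: 64-bit epoch plus an 8-bit epoch index. */
struct epoch_model {
	uint64_t epoch;
	uint8_t  epdx;
};

/*
 * Same arithmetic as the patch: the TOD jumps forward by delta, so add
 * -delta (sign-extended into the index) and propagate any carry out of
 * the low 64 bits into the index.
 */
static void clock_sync_model(struct epoch_model *e, uint64_t delta)
{
	uint8_t delta_idx = 0;

	delta = -delta;
	if ((int64_t)delta < 0)
		delta_idx = 0xff;	/* sign-extension into the index byte */

	e->epoch += delta;
	e->epdx += delta_idx;
	if (e->epoch < delta)		/* carry out of the low 64 bits */
		e->epdx += 1;
}

int main(void)
{
	/* The underflow case from the commit message: epoch=0, delta=1. */
	struct epoch_model e = { .epoch = 0, .epdx = 0 };

	clock_sync_model(&e, 1);
	/* Prints epoch=0xffffffffffffffff epdx=0xff, i.e. -1 as a 72-bit value. */
	printf("epoch=%#llx epdx=%#x\n",
	       (unsigned long long)e.epoch, e.epdx);
	return 0;
}

Compiled and run, this produces the corrected epoch/index pair for the
scenario described in the commit message, which is what the new helper
is intended to guarantee.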