From: Paolo Bonzini <pbonzini@redhat.com>
To: "Radim Krčmář" <rkrcmar@redhat.com>, linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Marcelo Tosatti <mtosatti@redhat.com>,
Luiz Capitulino <lcapitulino@redhat.com>
Subject: Re: [RFC PATCH 0/2] kvmclock: fix ABI breakage from PVCLOCK_COUNTS_FROM_ZERO.
Date: Mon, 28 Sep 2015 13:05:58 +0200
Message-ID: <56091F16.5050503@redhat.com>
In-Reply-To: <1442591670-5216-1-git-send-email-rkrcmar@redhat.com>
On 18/09/2015 17:54, Radim Krčmář wrote:
> This patch series disables the PVCLOCK_COUNTS_FROM_ZERO flag. It is an
> RFC because I haven't explored many potential problems or tested it.
>
> [1/2] uses a different algorithm in the guest to start counting from 0.
> [2/2] stops exposing PVCLOCK_COUNTS_FROM_ZERO in the hypervisor.
>
> A viable alternative would be to implement opt-in features in kvm clock.
>
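One way for the guest to count from zero without the flag, as [1/2] above
aims to do, is to sample kvmclock once when the clock is registered and
subtract that sample from every later read. A minimal sketch of the idea
follows; the identifiers are hypothetical stand-ins, not necessarily the
algorithm or the symbols the actual patch uses:

/*
 * Sketch: guest-side kvmclock that counts from zero without relying on
 * PVCLOCK_COUNTS_FROM_ZERO.  read_pvclock_raw() is a hypothetical
 * stand-in for the guest's raw pvclock reader.
 */
static u64 read_pvclock_raw(void);      /* hypothetical, defined elsewhere */

static u64 kvmclock_base;               /* sampled once at registration */

static void kvmclock_sample_base(void)
{
        kvmclock_base = read_pvclock_raw();
}

static u64 kvmclock_read_from_zero(void)
{
        /* report time relative to the registration-time sample */
        return read_pvclock_raw() - kvmclock_base;
}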
> And because we probably broke only one old user (the infamous SLES 10),
> a workaround like this is also possible (though I'd rather not do that):
Thanks,
applying 2/2 for 4.4 and 1/2 for 4.3.
Paolo
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a60bdbccff51..ae9049248aaf 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2007,7 +2007,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>
> ka->boot_vcpu_runs_old_kvmclock = tmp;
>
> - ka->kvmclock_offset = -get_kernel_ns();
> + if (!ka->boot_vcpu_runs_old_kvmclock)
> + ka->kvmclock_offset = -get_kernel_ns();
> }
>
> vcpu->arch.time = data;
>
>
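For context on the workaround above: boot_vcpu_runs_old_kvmclock is true
when vcpu 0 programmed the clock through the legacy MSR_KVM_SYSTEM_TIME
rather than MSR_KVM_SYSTEM_TIME_NEW, which is what old guests such as
SLES 10 do, so gating the offset reset on it keeps the clock from
restarting at zero for exactly those guests. Roughly, the surrounding
logic in kvm_set_msr_common() looks like this (a paraphrased sketch, not
verbatim kernel source):

        case MSR_KVM_SYSTEM_TIME_NEW:
        case MSR_KVM_SYSTEM_TIME: {
                struct kvm_arch *ka = &vcpu->kvm->arch;

                if (vcpu->vcpu_id == 0 && !msr_info->host_initiated) {
                        /* true only for the legacy MSR used by old guests */
                        bool tmp = (msr_info->index == MSR_KVM_SYSTEM_TIME);

                        ka->boot_vcpu_runs_old_kvmclock = tmp;

                        /* with the workaround: leave the offset alone for
                         * legacy-MSR guests so their clock keeps counting */
                        if (!ka->boot_vcpu_runs_old_kvmclock)
                                ka->kvmclock_offset = -get_kernel_ns();
                }
                ...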
> Radim Krčmář (2):
> x86: kvmclock: abolish PVCLOCK_COUNTS_FROM_ZERO
> Revert "KVM: x86: zero kvmclock_offset when vcpu0 initializes kvmclock
> system MSR"
>
> arch/x86/include/asm/pvclock-abi.h | 1 +
> arch/x86/kernel/kvmclock.c | 46 +++++++++++++++++++++++++++++---------
> arch/x86/kvm/x86.c | 4 ----
> 3 files changed, 36 insertions(+), 15 deletions(-)
>
Thread overview: 20+ messages
2015-09-18 15:54 [RFC PATCH 0/2] kvmclock: fix ABI breakage from PVCLOCK_COUNTS_FROM_ZERO Radim Krčmář
2015-09-18 15:54 ` [PATCH 1/2] x86: kvmclock: abolish PVCLOCK_COUNTS_FROM_ZERO Radim Krčmář
2015-09-22 19:01 ` Marcelo Tosatti
2015-09-28 14:10 ` Paolo Bonzini
2015-09-18 15:54 ` [PATCH 2/2] Revert "KVM: x86: zero kvmclock_offset when vcpu0 initializes kvmclock system MSR" Radim Krčmář
2015-09-22 19:01 ` Marcelo Tosatti
2015-09-22 19:52 ` Paolo Bonzini
2015-09-22 20:23 ` Marcelo Tosatti
2015-09-20 22:57 ` [RFC PATCH 0/2] kvmclock: fix ABI breakage from PVCLOCK_COUNTS_FROM_ZERO Marcelo Tosatti
2015-09-21 15:12 ` Radim Krčmář
2015-09-21 15:43 ` Radim Krčmář
2015-09-21 15:52 ` Marcelo Tosatti
2015-09-21 20:00 ` Radim Krčmář
2015-09-21 20:53 ` Marcelo Tosatti
2015-09-21 22:00 ` Radim Krčmář
2015-09-21 22:37 ` Marcelo Tosatti
2015-09-22 0:40 ` Marcelo Tosatti
2015-09-22 14:33 ` Radim Krčmář
2015-09-22 14:46 ` Radim Krčmář
2015-09-28 11:05 ` Paolo Bonzini [this message]