* KVM: x86: fix kvmclock write race
From: Marcelo Tosatti @ 2015-04-17 23:38 UTC (permalink / raw)
To: kvm; +Cc: Paolo Bonzini, Andy Lutomirski, Radim Krčmář
From: Radim Krčmář <rkrcmar@redhat.com>
As noted by Andy Lutomirski, kvm does not follow the documented version
protocol. Fix it.
Note: this bug results in a race which can occur if the following
conditions are met:

1) There is a KVM guest time update (there is one every 5 minutes).

2) It races with a thread in the guest in the following way: the
execution of these 29 instructions has to take at _least_ 2 seconds
(the rebalance interval is 1 second).
lsl %r9w,%esi
mov %esi,%r8d
and $0x3f,%esi
and $0xfff,%r8d
test $0xfc0,%r8d
jne 0xa12 <vread_pvclock+210>
shl $0x6,%rsi
mov -0xa01000(%rsi),%r10d
data32 xchg %ax,%ax
data32 xchg %ax,%ax
rdtsc
shl $0x20,%rdx
mov %eax,%eax
movsbl -0xa00fe4(%rsi),%ecx
or %rax,%rdx
sub -0xa00ff8(%rsi),%rdx
mov -0xa00fe8(%rsi),%r11d
mov %rdx,%rax
shl %cl,%rax
test %ecx,%ecx
js 0xa08 <vread_pvclock+200>
mov %r11d,%edx
movzbl -0xa00fe3(%rsi),%ecx
mov -0xa00ff0(%rsi),%r11
mul %rdx
shrd $0x20,%rdx,%rax
data32 xchg %ax,%ax
data32 xchg %ax,%ax
lsl %r9w,%edx
3) The scheduler moves the task, while it is executing these 29
instructions, to a destination processor, then back to the source
processor.

4) The source processor, after the task has been moved back from the
destination, perceives out of order the data written with string mov
by the processor performing the guest time update (item 1).

Given the rarity of this condition, and the fact that it was never
observed or reported, reverting the pvclock vsyscall on systems whose
host is susceptible to the race seems unnecessary.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc2c759f69a3..8658599e0024 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1658,12 +1658,24 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
&guest_hv_clock, sizeof(guest_hv_clock))))
return 0;
- /*
- * The interface expects us to write an even number signaling that the
- * update is finished. Since the guest won't see the intermediate
- * state, we just increase by 2 at the end.
+ /* A guest can read other VCPU's kvmclock; specification says that
+ * version is odd if data is being modified and even after it is
+ * consistent.
+ * We write three times to be sure.
+ * 1) update version to odd number
+ * 2) write modified data (version is still odd)
+ * 3) update version to even number
+ *
+ * TODO: optimize
+ * - only two writes should be enough -- version is first
+ * - the second write could update just version
*/
- vcpu->hv_clock.version = guest_hv_clock.version + 2;
+ guest_hv_clock.version += 1;
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &guest_hv_clock,
+ sizeof(guest_hv_clock));
+
+ vcpu->hv_clock.version = guest_hv_clock.version;
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED);
@@ -1684,6 +1696,11 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
sizeof(vcpu->hv_clock));
+
+ vcpu->hv_clock.version += 1;
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &vcpu->hv_clock,
+ sizeof(vcpu->hv_clock));
return 0;
}
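The guest side of this protocol is a seqlock-style read loop: retry while the version is odd or changes across the data reads. A minimal user-space C sketch, with a hypothetical simplified struct layout standing in for the real pvclock structure (this is not the actual vDSO code):

```c
#include <stdint.h>

/* Simplified stand-in for struct pvclock_vcpu_time_info. */
struct hv_clock {
    uint32_t version;        /* odd while the host is mid-update */
    uint64_t tsc_timestamp;
    uint64_t system_time;
};

/* Seqlock-style reader: the snapshot is valid only if the version was
 * even and unchanged across the data reads.  A real guest would also
 * need read barriers (rmb()) around the data accesses. */
static struct hv_clock read_clock(const volatile struct hv_clock *src)
{
    struct hv_clock snap;
    uint32_t v;

    do {
        v = src->version;
        snap.tsc_timestamp = src->tsc_timestamp;
        snap.system_time = src->system_time;
    } while ((v & 1) || v != src->version);

    snap.version = v;
    return snap;
}
```

With no concurrent writer, the loop exits on the first pass whenever the stored version is even.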
* Re: KVM: x86: fix kvmclock write race
From: Andy Lutomirski @ 2015-04-18 0:04 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm list, Paolo Bonzini, Radim Krčmář
On Fri, Apr 17, 2015 at 4:38 PM, Marcelo Tosatti <mtosatti@redhat.com> wrote:
> [...]
>
> - /*
> - * The interface expects us to write an even number signaling that the
> - * update is finished. Since the guest won't see the intermediate
> - * state, we just increase by 2 at the end.
> + /* A guest can read other VCPU's kvmclock; specification says that
> + * version is odd if data is being modified and even after it is
> + * consistent.
> + * We write three times to be sure.
> + * 1) update version to odd number
> + * 2) write modified data (version is still odd)
> + * 3) update version to even number
> + *
> + * TODO: optimize
> + * - only two writes should be enough -- version is first
> + * - the second write could update just version
You're relying on lots of barely-defined behavior here, since I think
that both copies could use fast string operations. Those are
explicitly unordered internally, so I think you really do need three
writes.
Personally, if I wanted to optimize this (I'm not convinced it
matters), I'd add a write-a-single-word primitive and use that for the
version.
Anyway, I think this code looks okay as is.
--Andy
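The write-a-single-word primitive Andy mentions would boil down to writing only the version field at its offset in the guest-visible area. A hypothetical user-space sketch (the real code would go through kvm's cached guest-write path; a plain memcpy stands in here):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the guest-visible clock structure. */
struct hv_clock {
    uint32_t version;
    uint64_t system_time;
};

/* Update just the version word, leaving the rest of the guest copy
 * untouched -- a single small store instead of a full-struct copy. */
static void write_version(struct hv_clock *guest_area, uint32_t version)
{
    memcpy((char *)guest_area + offsetof(struct hv_clock, version),
           &version, sizeof(version));
}
```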
* Re: KVM: x86: fix kvmclock write race
From: Marcelo Tosatti @ 2015-04-18 0:12 UTC (permalink / raw)
To: Andy Lutomirski; +Cc: kvm list, Paolo Bonzini, Radim Krčmář
On Fri, Apr 17, 2015 at 05:04:29PM -0700, Andy Lutomirski wrote:
> [...]
>
> You're relying on lots of barely-defined behavior here, since I think
> that both copies could use fast string operations. Those are
> explicitly unordered internally, so I think you really do need three
> writes.
The memory-ordering model respects the following principles (quoting
the Intel SDM on fast-string operation):

1. Stores within a single string operation may be executed out of
order.

2. Stores from separate string operations (for example, stores from
consecutive string operations) do not execute out of order. All the
stores from an earlier string operation will complete before any store
from a later string operation.
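Under those principles, the three separate copies in the patch translate to the seqlock write side: two version bumps bracketing the payload write. A user-space C sketch with a hypothetical simplified struct (memcpy stands in for kvm_write_guest_cached(); the real code would also need write barriers between the steps):

```c
#include <stdint.h>
#include <string.h>

struct hv_clock {
    uint32_t version;
    uint64_t system_time;
};

/* Host-side update per the documented protocol:
 * 1) bump version to odd, 2) write the payload while version stays odd,
 * 3) bump version back to even.  Each step is a separate copy, so even
 * internally-unordered fast-string stores from different steps cannot
 * be observed out of order by the guest. */
static void update_clock(struct hv_clock *guest, struct hv_clock *data)
{
    guest->version += 1;                  /* odd: update in progress */
    data->version = guest->version;
    memcpy(guest, data, sizeof(*guest));  /* payload, version still odd */
    guest->version += 1;                  /* even: data consistent */
}
```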
> Personally, if I wanted to optimize this (I'm not convinced it
> matters),
It does not matter.
> I'd add a write-a-single-word primitive and use that for the
> version.
>
> Anyway, I think this code looks okay as is.
Looking forward to seeing the patchset that has all vcpus reading
from the vcpu0 pvclock area.
* Re: KVM: x86: fix kvmclock write race
From: Marcelo Tosatti @ 2015-04-18 0:20 UTC (permalink / raw)
To: Andy Lutomirski; +Cc: kvm list, Paolo Bonzini, Radim Krčmář
On Fri, Apr 17, 2015 at 05:04:29PM -0700, Andy Lutomirski wrote:
> [...]
> You're relying on lots of barely-defined behavior here, since I think
> that both copies could use fast string operations. Those are
> explicitly unordered internally, so I think you really do need three
> writes.
Correct, 3 writes are needed.
* KVM: x86: fix kvmclock write race (v2)
From: Marcelo Tosatti @ 2015-04-18 0:23 UTC (permalink / raw)
To: Andy Lutomirski; +Cc: kvm list, Paolo Bonzini, Radim Krčmář
From: Radim Krčmář <rkrcmar@redhat.com>
As noted by Andy Lutomirski, kvm does not follow the documented version
protocol. Fix it.
Note: this bug results in a race which can occur if the following
conditions are met:

1) There is a KVM guest time update (there is one every 5 minutes).

2) It races with a thread in the guest in the following way: the
execution of these 29 instructions has to take at _least_ 2 seconds
(the rebalance interval is 1 second).
lsl %r9w,%esi
mov %esi,%r8d
and $0x3f,%esi
and $0xfff,%r8d
test $0xfc0,%r8d
jne 0xa12 <vread_pvclock+210>
shl $0x6,%rsi
mov -0xa01000(%rsi),%r10d
data32 xchg %ax,%ax
data32 xchg %ax,%ax
rdtsc
shl $0x20,%rdx
mov %eax,%eax
movsbl -0xa00fe4(%rsi),%ecx
or %rax,%rdx
sub -0xa00ff8(%rsi),%rdx
mov -0xa00fe8(%rsi),%r11d
mov %rdx,%rax
shl %cl,%rax
test %ecx,%ecx
js 0xa08 <vread_pvclock+200>
mov %r11d,%edx
movzbl -0xa00fe3(%rsi),%ecx
mov -0xa00ff0(%rsi),%r11
mul %rdx
shrd $0x20,%rdx,%rax
data32 xchg %ax,%ax
data32 xchg %ax,%ax
lsl %r9w,%edx
3) The scheduler moves the task, while it is executing these 29
instructions, to a destination processor, then back to the source
processor.

4) The source processor, after the task has been moved back from the
destination, perceives out of order the data written with string mov
by the processor performing the guest time update (item 1).

Given the rarity of this condition, and the fact that it was never
observed or reported, reverting the pvclock vsyscall on systems whose
host is susceptible to the race seems an excessive measure.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Cc: stable@kernel.org
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc2c759..e75fafd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1658,12 +1658,20 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
&guest_hv_clock, sizeof(guest_hv_clock))))
return 0;
- /*
- * The interface expects us to write an even number signaling that the
- * update is finished. Since the guest won't see the intermediate
- * state, we just increase by 2 at the end.
+ /* A guest can read other VCPU's kvmclock; specification says that
+ * version is odd if data is being modified and even after it is
+ * consistent.
+ * We write three times to be sure.
+ * 1) update version to odd number
+ * 2) write modified data (version is still odd)
+ * 3) update version to even number
*/
- vcpu->hv_clock.version = guest_hv_clock.version + 2;
+ guest_hv_clock.version += 1;
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &guest_hv_clock,
+ sizeof(guest_hv_clock));
+
+ vcpu->hv_clock.version = guest_hv_clock.version;
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED);
@@ -1684,6 +1692,11 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
sizeof(vcpu->hv_clock));
+
+ vcpu->hv_clock.version += 1;
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &vcpu->hv_clock,
+ sizeof(vcpu->hv_clock));
return 0;
}
* Re: KVM: x86: fix kvmclock write race (v2)
From: Radim Krčmář @ 2015-04-20 12:54 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Andy Lutomirski, kvm list, Paolo Bonzini
2015-04-17 21:23-0300, Marcelo Tosatti:
> [...]
Thanks.
Reviewed-or-Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Like most code, I would have written it differently now :)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> + kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
> + &guest_hv_clock,
> + sizeof(guest_hv_clock));
The easiest optimization is replacing sizeof(guest_hv_clock) with

  offsetof(typeof(guest_hv_clock), version) + sizeof(guest_hv_clock.version)

because kvm_write_guest_cached() allows writing of prefixes.
This still won't get optimized to a simple MOV at compile time, but it
saves a few moved bytes.

(The offset of version is 0 now, so using 'sizeof guest_hv_clock.version'
alone is just a minor offence and avoids some hard-to-read code.)