From: Jan Kiszka <jan.kiszka@siemens.com>
To: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH 04/19] qemu-kvm: x86: Drop MSR reset
Date: Thu, 05 May 2011 12:36:43 +0200
Message-ID: <4DC27DBB.5020500@siemens.com>
In-Reply-To: <4DC27A68.3070409@redhat.com>
On 2011-05-05 12:22, Avi Kivity wrote:
> On 05/05/2011 12:32 PM, Jan Kiszka wrote:
>> >> >
>> >> > If the former, we simply do the reset operation per-cpu. It's the
>> >> > natural thing anyway.
>> >>
>> >> Quite wasteful /wrt to memory given that the majority will be
>> >> identical.
>> >
>> > We're talking a few hundred bytes per cpu. If you want to save memory,
>> > look at the PhysPageDesc array, it takes up 0.4% of guest memory, so
>> > 4MB for a 1GB guest.
>>
>> I know (that's fixable, BTW). But that should not excuse needless memory
>> wasting elsewhere.
>
> IMO a few hundred bytes is worth the correctness here.
>
>> >
>> >> >> Nevertheless, the qemu-kvm code is already unneeded today and can
>> >> >> safely be removed IMHO.
>> >> >
>> >> > I don't follow? Won't it cause a regression?
>> >>
>> >> Not at all. We use the "individual care" pattern upstream now,
>> >> specifically for those MSRs (kvmclock) for which the qemu-kvm code
>> >> was introduced.
>> >
>> > I mean a future regression with current+patch qemu and a new kernel.
>>
>> For sane scenarios, such a combination should never expose new (ie.
>> unknown from qemu's POV) MSRs to the guest. Thus not clearing them
>> cannot cause any harm.
>
> The problem is with hardware MSRs (PV MSRs are protected by cpuid, and
> always disable themselves when zeroed).
Well, that doesn't change the problem with the existing code.
>
>> BTW, you also do not know if 0 will be the right reset value for these
>> to-be-invented MSRs. That could cause regression as well.
>
> What I suggested wasn't zeroing them, but writing the value we read just
> after vcpu creation.
>
> We had a regression when we started supporting PAT. Zeroing it causes
> the cache to be disabled, making everything ridiculously slow. We now
> special case it; my proposed solution would have taken care of it.
I'm talking about the current code, not the proper way to do it in the
future. PAT demonstrates how regressions can happen as long as we zero
MSRs, and it's better to stop doing this now, even without the new code
in place.
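To make the proposal concrete, here is a rough sketch of the
snapshot-and-restore approach Avi describes above: capture the MSR values
the kernel reports right after vcpu creation, and write those back on every
reset instead of zero. All names below are illustrative, not actual
qemu-kvm interfaces.

```c
#include <assert.h>
#include <string.h>

#define NR_MSRS 4

/* Hypothetical per-vcpu state; in real qemu-kvm the MSR list would be
 * read from and written to the kernel via ioctls. */
struct vcpu_state {
    unsigned long long msrs[NR_MSRS];       /* current MSR values */
    unsigned long long reset_msrs[NR_MSRS]; /* snapshot taken at creation */
};

/* Called once, immediately after vcpu creation: remember whatever values
 * the kernel initialized (e.g. a sane non-zero PAT default). */
static void vcpu_snapshot_msrs(struct vcpu_state *v)
{
    memcpy(v->reset_msrs, v->msrs, sizeof(v->reset_msrs));
}

/* Called on system reset: restore the snapshot instead of writing zero,
 * so future MSRs with non-zero reset values keep working. */
static void vcpu_reset_msrs(struct vcpu_state *v)
{
    memcpy(v->msrs, v->reset_msrs, sizeof(v->msrs));
}
```

The point is that the reset value comes from the kernel's own
initialization at vcpu creation rather than a hard-coded zero, which is
exactly what went wrong with PAT.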
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux