From: Dor Laor <dlaor@redhat.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: Anthony Liguori <aliguori@us.ibm.com>,
Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>,
Avi Kivity <avi@redhat.com>, kvm-devel <kvm@vger.kernel.org>,
qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Better qemu/kvm defaults (was Re: [RFC PATCH 0/4] Gang scheduling in CFS)
Date: Sun, 01 Jan 2012 12:16:08 +0200 [thread overview]
Message-ID: <4F003268.90906@redhat.com> (raw)
In-Reply-To: <4EFC9277.9040604@codemonkey.ws>
On 12/29/2011 06:16 PM, Anthony Liguori wrote:
> On 12/29/2011 10:07 AM, Dor Laor wrote:
>> On 12/26/2011 11:05 AM, Avi Kivity wrote:
>>> On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:
>>>>>
>>>>> btw you can get an additional speedup by enabling x2apic, for
>>>>> default_send_IPI_mask_logical().
>>>>>
>>>> In the host?
>>>>
>>>
>>> In the host, for the guest:
>>>
>>> qemu -cpu ...,+x2apic
>>>
>>
>> It seems to me that we should improve our default flags.
>> So many times users fail to submit the proper huge command-line
>> options that we
>> require. Honestly, we can't blame them; there are so many flags and so
>> many use cases that it's just too hard for humans to get it right.
>>
>> I propose a basic idea and folks are welcome to discuss it:
>>
>> 1. Improve qemu/kvm defaults
>> Break the current backward compatibility (but add a --default-
>> backward-compat-mode) and set better values for:
>> - rtc slew time
>
> What do you specifically mean?
-rtc localtime,driftfix=slew
>
>> - cache=none
>
> I'm not sure I see this as a "better default" particularly since
> O_DIRECT fails on certain file systems. I think we really need to let
> WCE be toggleable from the guest and then have a caching mode independent
> of WCE. We then need some heuristics to only enable cache=off when we
> know it's safe.
cache=none is still faster when the filesystem supports it.
qemu can test-run O_DIRECT and fall back to a cached mode, or just probe
the filesystem's capabilities.
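The test-run idea could look something like this minimal sketch (Python for
brevity; the real check would live in qemu's block layer, and the function
names and fallback policy here are mine, not existing qemu code):

```python
import os

def supports_o_direct(path):
    """Probe whether the filesystem backing `path` accepts O_DIRECT.
    tmpfs, for example, rejects it with EINVAL at open() time."""
    try:
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    except OSError:
        return False
    os.close(fd)
    return True

def pick_cache_mode(path):
    """Default to cache=none where O_DIRECT works, fall back otherwise."""
    return "none" if supports_o_direct(path) else "writeback"
```

The probe is cheap (a single open/close), so it could run at every guest
start rather than being cached.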
>
>> - x2apic, maybe enhance qemu64 or move to -cpu host?
>
> Alex posted a patch for this. I'm planning on merging it although so far
> no one has chimed up either way.
>
>> - aio=native|threads (auto-sense?)
>
> aio=native is unsafe to default because linux-aio is just fubar. It
> falls back to synchronous I/O if the underlying filesystem doesn't
> support aio. There's no way in userspace to probe whether it's actually
> supported either...
Can we test-run this too? Maybe as a separate qemu mode, or even a binary
that, given a qemu cmdline, tries to suggest better parameters?
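Such a "suggest" mode might be sketched like this (Python; the probe names
and the probe-to-flag mapping are hypothetical, not real qemu options or
interfaces):

```python
def suggest_flags(probes):
    """Map probe results (feature name -> bool) to the faster flag where
    the probe passed and the safe fallback where it didn't."""
    suggestions = []
    suggestions.append("cache=none" if probes.get("o_direct") else "cache=writeback")
    suggestions.append("aio=native" if probes.get("linux_aio") else "aio=threads")
    return suggestions
```

The point is that the probing and the policy stay separate: the probes run
once against the actual image/filesystem, and the mapping encodes our
recommended defaults.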
>> - use virtio devices by default
>
> I don't think this is realistic since appropriately licensed signed
> virtio drivers do not exist for Windows. (Please note the phrase
> "appropriately licensed signed").
What's the percentage of qemu invocations w/ a Windows guest and a short
cmdline? My hunch is that a plain short cmdline indicates a developer,
who will probably be running a Linux guest.
>
>> - more?
>>
>> Different defaults may be picked automatically when TCG|KVM used.
>>
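Picking per-accelerator defaults could be as simple as a lookup table
(a sketch; the flag values below are illustrative, not a proposal for the
exact defaults):

```python
# Hypothetical default table keyed by accelerator; values are illustrative.
ACCEL_DEFAULTS = {
    "kvm": {"cpu": "host", "rtc": "localtime,driftfix=slew", "net": "virtio"},
    "tcg": {"cpu": "qemu64", "rtc": "localtime", "net": "e1000"},
}

def defaults_for(accel):
    """Fall back to conservative TCG defaults for unknown accelerators."""
    return ACCEL_DEFAULTS.get(accel, ACCEL_DEFAULTS["tcg"])
```

Explicit command-line options would still override anything picked from the
table, so --default-backward-compat-mode only has to restore the old table.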
>> 2. External hardening configuration file kept in qemu.git
>> For non qemu/kvm specific definitions like the io scheduler we
>> should maintain a script in our tree that sets/sense the optimal
>> settings of the host kernel (maybe similar one for the guest).
>
> What are "appropriate host settings" and why aren't we suggesting that
> distros and/or upstream just set them by default?
It's hard to set the right defaults for a distribution, since the same
distro has to optimize for many different usages of the same OS. For
example, Fedora has tuned-adm w/ the available profiles:
- desktop-powersave
- server-powersave
- enterprise-storage
- spindown-disk
- laptop-battery-powersave
- default
- throughput-performance
- latency-performance
- laptop-ac-powersave
We need to keep recommending the best profile for virtualization; for
Fedora I think it's either enterprise-storage or maybe
throughput-performance.
If we have such a script, it can call the matching tuned profile instead
of tweaking every /sys option.
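A hardening script along those lines might just delegate to tuned-adm
(a sketch; the preference order is my reading of the list above, while
`tuned-adm profile <name>` is the real Fedora interface):

```python
import subprocess

# Preferred virtualization-host profiles, best first (from the Fedora
# tuned-adm profile list quoted above).
PREFERRED = ["enterprise-storage", "throughput-performance", "default"]

def pick_profile(available):
    """Return the most preferred profile the host actually ships."""
    for profile in PREFERRED:
        if profile in available:
            return profile
    return "default"

def apply_profile(available):
    """Activate the chosen profile instead of tweaking /sys by hand."""
    subprocess.check_call(["tuned-adm", "profile", pick_profile(available)])
```

Keeping the preference list in one place in qemu.git means the recommended
profile can track new tuned releases without changing the script logic.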
>
> Regards,
>
> Anthony Liguori
>
>> HTH,
>> Dor
>>
>
>
Thread overview: 10+ messages
2011-12-29 16:07 ` [Qemu-devel] Better qemu/kvm defaults (was Re: [RFC PATCH 0/4] Gang scheduling in CFS) Dor Laor
2011-12-29 16:13 ` Avi Kivity
2011-12-29 16:16 ` Anthony Liguori
2012-01-01 10:16 ` Dor Laor [this message]
2012-01-01 14:01 ` Ronen Hod
2012-01-02 9:37 ` Dor Laor
2012-01-03 15:48 ` Anthony Liguori
2012-01-03 22:31 ` Dor Laor
2012-01-03 22:45 ` Anthony Liguori
2012-01-03 22:59 ` Dor Laor