* [Qemu-devel] [PATCH 1/2] KVM: Add new -cpu best
  From: Alexander Graf @ 2012-01-08 23:52 UTC
  To: qemu-devel@nongnu.org Developers; +Cc: Avi Kivity, kvm list

During discussions on whether to make -cpu host the default in SLE, I found
myself disagreeing with the thought, because it potentially opens a big can
of worms for potential bugs. But if I already am so opposed to it for SLE, how
can it possibly be reasonable to default to -cpu host in upstream QEMU? And
what would a sane default look like?

So I had this idea of looping through all available CPU definitions. We can
pretty well tell if our host is able to execute any of them by checking the
respective flags and seeing if our host has all features the CPU definition
requires. With that, we can create a -cpu type that would fall back to the
"best known CPU definition" that our host can fulfill. On my Phenom II
system for example, that would be -cpu phenom.

With this approach we can test and verify that CPU types actually work on
any random user setup, because we can always verify that all the -cpu types
we ship actually work. And we only default to a clever mechanism that
chooses from one of these.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 target-i386/cpuid.c |   81 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 81 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpuid.c b/target-i386/cpuid.c
index 91a104b..b2e3420 100644
--- a/target-i386/cpuid.c
+++ b/target-i386/cpuid.c
@@ -550,6 +550,85 @@ static int cpu_x86_fill_host(x86_def_t *x86_cpu_def)
     return 0;
 }
 
+/* Are all guest feature bits present on the host? */
+static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
+{
+    int i;
+
+    for (i = 0; i < 32; i++) {
+        uint32_t mask = 1 << i;
+        if ((guest & mask) && !(host & mask)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+/* Does the host support all the features of the CPU definition? */
+static bool cpu_x86_fits_host(x86_def_t *x86_cpu_def)
+{
+    uint32_t eax = 0, ebx = 0, ecx = 0, edx = 0;
+
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->level > eax) {
+        return false;
+    }
+    if ((x86_cpu_def->vendor1 != ebx) ||
+        (x86_cpu_def->vendor2 != edx) ||
+        (x86_cpu_def->vendor3 != ecx)) {
+        return false;
+    }
+
+    host_cpuid(0x1, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(ecx, x86_cpu_def->ext_features) ||
+        !cpu_x86_feature_subset(edx, x86_cpu_def->features)) {
+        return false;
+    }
+
+    host_cpuid(0x80000000, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->xlevel > eax) {
+        return false;
+    }
+
+    host_cpuid(0x80000001, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(edx, x86_cpu_def->ext2_features) ||
+        !cpu_x86_feature_subset(ecx, x86_cpu_def->ext3_features)) {
+        return false;
+    }
+
+    return true;
+}
+
+/* Returns true when new_def is higher versioned than old_def */
+static int cpu_x86_fits_higher(x86_def_t *new_def, x86_def_t *old_def)
+{
+    int old_fammod = (old_def->family << 24) | (old_def->model << 8)
+                   | (old_def->stepping);
+    int new_fammod = (new_def->family << 24) | (new_def->model << 8)
+                   | (new_def->stepping);
+
+    return new_fammod > old_fammod;
+}
+
+static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
+{
+    x86_def_t *def;
+
+    x86_cpu_def->family = 0;
+    x86_cpu_def->model = 0;
+    for (def = x86_defs; def; def = def->next) {
+        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
+            memcpy(x86_cpu_def, def, sizeof(*def));
+        }
+    }
+
+    if (!x86_cpu_def->family && !x86_cpu_def->model) {
+        fprintf(stderr, "No fitting CPU model found!\n");
+        exit(1);
+    }
+}
+
 static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
 {
     int i;
@@ -617,6 +696,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
             break;
     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
         cpu_x86_fill_host(x86_cpu_def);
+    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
+        cpu_x86_fill_best(x86_cpu_def);
     } else if (!def) {
         goto error;
     } else {
--
1.6.0.2

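A note on the patch above: the bit-by-bit loop in cpu_x86_feature_subset()
is equivalent to the single test (guest & ~host) == 0. The stand-alone
sketch below shows the same host-capability probe outside of QEMU, assuming
GCC's <cpuid.h> on an x86 host; the guest_* masks are made-up examples for
illustration, not real QEMU CPU definitions.

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* Subset test: every feature bit the guest CPU definition requires
 * must also be present on the host. */
static bool feature_subset(unsigned int host, unsigned int guest)
{
    return (guest & ~host) == 0;
}

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: EDX carries the base feature flags, ECX the extended
     * ones; these are the registers cpu_x86_fits_host() compares against. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);

    unsigned int guest_features     = (1u << 25) | (1u << 26); /* SSE, SSE2 */
    unsigned int guest_ext_features = 1u << 0;                 /* SSE3 */

    if (feature_subset(edx, guest_features) &&
        feature_subset(ecx, guest_ext_features)) {
        printf("host can run this CPU definition\n");
    } else {
        printf("host lacks required features\n");
    }
    return 0;
}
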
* [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Alexander Graf @ 2012-01-08 23:52 UTC
  To: qemu-devel@nongnu.org Developers; +Cc: Avi Kivity, kvm list

When running QEMU without a -cpu parameter, the user usually wants a sane
default. So far, we're using the qemu64/qemu32 CPU type, which basically
means "the maximum TCG can emulate".

That's a really good default when using TCG, but when running with KVM
we'd much rather have a default saying "the maximum performance I can get".

Fortunately we just added an option that gives us the best performance
while still staying safe on the testability side of things: -cpu best.
So all we need to do is make -cpu best the default when the user doesn't
define any.

This fixes a lot of subtle breakage in the GNU toolchain (libgmp), which
hiccups on QEMU's non-existent CPU models.

This patch also adds a new pc-1.1 machine type to stay backwards compatible
with older versions of QEMU.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 hw/pc_piix.c |   42 ++++++++++++++++++++++++++++++++++--------
 1 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index 00f525e..3d78ccb 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -79,7 +79,8 @@ static void pc_init1(MemoryRegion *system_memory,
                      const char *initrd_filename,
                      const char *cpu_model,
                      int pci_enabled,
-                     int kvmclock_enabled)
+                     int kvmclock_enabled,
+                     int may_cpu_best)
 {
     int i;
     ram_addr_t below_4g_mem_size, above_4g_mem_size;
@@ -102,6 +103,9 @@ static void pc_init1(MemoryRegion *system_memory,
     MemoryRegion *rom_memory;
     DeviceState *dev;
 
+    if (!cpu_model && kvm_enabled() && may_cpu_best) {
+        cpu_model = "best";
+    }
     pc_cpus_init(cpu_model);
 
     if (kvmclock_enabled) {
@@ -263,7 +267,21 @@ static void pc_init_pci(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 1, 1);
+             initrd_filename, cpu_model, 1, 1, 1);
+}
+
+static void pc_init_pci_oldcpu(ram_addr_t ram_size,
+                               const char *boot_device,
+                               const char *kernel_filename,
+                               const char *kernel_cmdline,
+                               const char *initrd_filename,
+                               const char *cpu_model)
+{
+    pc_init1(get_system_memory(),
+             get_system_io(),
+             ram_size, boot_device,
+             kernel_filename, kernel_cmdline,
+             initrd_filename, cpu_model, 1, 1, 0);
 }
 
 static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
@@ -277,7 +295,7 @@ static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 1, 0);
+             initrd_filename, cpu_model, 1, 0, 0);
 }
 
 static void pc_init_isa(ram_addr_t ram_size,
@@ -293,7 +311,7 @@ static void pc_init_isa(ram_addr_t ram_size,
              get_system_io(),
              ram_size, boot_device,
              kernel_filename, kernel_cmdline,
-             initrd_filename, cpu_model, 0, 1);
+             initrd_filename, cpu_model, 0, 1, 0);
 }
 
 #ifdef CONFIG_XEN
@@ -314,8 +332,8 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
 }
 #endif
 
-static QEMUMachine pc_machine_v1_0 = {
-    .name = "pc-1.0",
+static QEMUMachine pc_machine_v1_1 = {
+    .name = "pc-1.1",
     .alias = "pc",
     .desc = "Standard PC",
     .init = pc_init_pci,
@@ -323,17 +341,24 @@ static QEMUMachine pc_machine_v1_0 = {
     .is_default = 1,
 };
 
+static QEMUMachine pc_machine_v1_0 = {
+    .name = "pc-1.0",
+    .desc = "Standard PC",
+    .init = pc_init_pci_oldcpu,
+    .max_cpus = 255,
+};
+
 static QEMUMachine pc_machine_v0_15 = {
     .name = "pc-0.15",
     .desc = "Standard PC",
-    .init = pc_init_pci,
+    .init = pc_init_pci_oldcpu,
     .max_cpus = 255,
 };
 
 static QEMUMachine pc_machine_v0_14 = {
     .name = "pc-0.14",
     .desc = "Standard PC",
-    .init = pc_init_pci,
+    .init = pc_init_pci_oldcpu,
     .max_cpus = 255,
     .compat_props = (GlobalProperty[]) {
         {
@@ -612,6 +637,7 @@ static QEMUMachine xenfv_machine = {
 
 static void pc_machine_init(void)
 {
+    qemu_register_machine(&pc_machine_v1_1);
     qemu_register_machine(&pc_machine_v1_0);
     qemu_register_machine(&pc_machine_v0_15);
     qemu_register_machine(&pc_machine_v0_14);
--
1.6.0.2

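Condensed, the default-selection policy the two patches establish looks like
the sketch below; pick_default_cpu() is a hypothetical helper written for
illustration, not a function in the QEMU tree.

/* Hypothetical condensation of the new default-CPU policy. */
static const char *pick_default_cpu(const char *cpu_model,
                                    int kvm_on, int may_cpu_best)
{
    if (cpu_model) {
        return cpu_model;   /* an explicit -cpu always wins */
    }
    if (kvm_on && may_cpu_best) {
        return "best";      /* KVM plus a pc-1.1 (or newer) machine type */
    }
    return NULL;            /* TCG and old machine types keep qemu64/qemu32 */
}

In other words, a plain "qemu-system-x86_64 -enable-kvm" on the default
pc-1.1 machine behaves like an explicit -cpu best, while -M pc-1.0 and
older machine types keep the previous default.
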
* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Ryan Harper @ 2012-01-16 19:30 UTC
  To: Alexander Graf; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

* Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
> When running QEMU without a -cpu parameter, the user usually wants a sane
> default. So far, we're using the qemu64/qemu32 CPU type, which basically
> means "the maximum TCG can emulate".

It also means we allow the maximum possible migration targets. Have you
given any thought to migration with -cpu best?

> That's a really good default when using TCG, but when running with KVM
> we'd much rather have a default saying "the maximum performance I can get".
>
> Fortunately we just added an option that gives us the best performance
> while still staying safe on the testability side of things: -cpu best.
> So all we need to do is make -cpu best the default when the user doesn't
> define any.
>
> This fixes a lot of subtle breakage in the GNU toolchain (libgmp), which
> hiccups on QEMU's non-existent CPU models.
>
> This patch also adds a new pc-1.1 machine type to stay backwards compatible
> with older versions of QEMU.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  hw/pc_piix.c |   42 ++++++++++++++++++++++++++++++++++--------
>  1 files changed, 34 insertions(+), 8 deletions(-)
>
> diff --git a/hw/pc_piix.c b/hw/pc_piix.c
> index 00f525e..3d78ccb 100644
> --- a/hw/pc_piix.c
> +++ b/hw/pc_piix.c
> @@ -79,7 +79,8 @@ static void pc_init1(MemoryRegion *system_memory,
>                       const char *initrd_filename,
>                       const char *cpu_model,
>                       int pci_enabled,
> -                     int kvmclock_enabled)
> +                     int kvmclock_enabled,
> +                     int may_cpu_best)
>  {
>      int i;
>      ram_addr_t below_4g_mem_size, above_4g_mem_size;
> @@ -102,6 +103,9 @@ static void pc_init1(MemoryRegion *system_memory,
>      MemoryRegion *rom_memory;
>      DeviceState *dev;
>
> +    if (!cpu_model && kvm_enabled() && may_cpu_best) {
> +        cpu_model = "best";
> +    }
>      pc_cpus_init(cpu_model);
>
>      if (kvmclock_enabled) {
> @@ -263,7 +267,21 @@ static void pc_init_pci(ram_addr_t ram_size,
>               get_system_io(),
>               ram_size, boot_device,
>               kernel_filename, kernel_cmdline,
> -             initrd_filename, cpu_model, 1, 1);
> +             initrd_filename, cpu_model, 1, 1, 1);
> +}
> +
> +static void pc_init_pci_oldcpu(ram_addr_t ram_size,
> +                               const char *boot_device,
> +                               const char *kernel_filename,
> +                               const char *kernel_cmdline,
> +                               const char *initrd_filename,
> +                               const char *cpu_model)
> +{
> +    pc_init1(get_system_memory(),
> +             get_system_io(),
> +             ram_size, boot_device,
> +             kernel_filename, kernel_cmdline,
> +             initrd_filename, cpu_model, 1, 1, 0);
>  }
>
>  static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
> @@ -277,7 +295,7 @@ static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
>               get_system_io(),
>               ram_size, boot_device,
>               kernel_filename, kernel_cmdline,
> -             initrd_filename, cpu_model, 1, 0);
> +             initrd_filename, cpu_model, 1, 0, 0);
>  }
>
>  static void pc_init_isa(ram_addr_t ram_size,
> @@ -293,7 +311,7 @@ static void pc_init_isa(ram_addr_t ram_size,
>               get_system_io(),
>               ram_size, boot_device,
>               kernel_filename, kernel_cmdline,
> -             initrd_filename, cpu_model, 0, 1);
> +             initrd_filename, cpu_model, 0, 1, 0);
>  }
>
>  #ifdef CONFIG_XEN
> @@ -314,8 +332,8 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
>  }
>  #endif
>
> -static QEMUMachine pc_machine_v1_0 = {
> -    .name = "pc-1.0",
> +static QEMUMachine pc_machine_v1_1 = {
> +    .name = "pc-1.1",
>      .alias = "pc",
>      .desc = "Standard PC",
>      .init = pc_init_pci,
> @@ -323,17 +341,24 @@ static QEMUMachine pc_machine_v1_0 = {
>      .is_default = 1,
>  };
>
> +static QEMUMachine pc_machine_v1_0 = {
> +    .name = "pc-1.0",
> +    .desc = "Standard PC",
> +    .init = pc_init_pci_oldcpu,
> +    .max_cpus = 255,
> +};
> +
>  static QEMUMachine pc_machine_v0_15 = {
>      .name = "pc-0.15",
>      .desc = "Standard PC",
> -    .init = pc_init_pci,
> +    .init = pc_init_pci_oldcpu,
>      .max_cpus = 255,
>  };
>
>  static QEMUMachine pc_machine_v0_14 = {
>      .name = "pc-0.14",
>      .desc = "Standard PC",
> -    .init = pc_init_pci,
> +    .init = pc_init_pci_oldcpu,
>      .max_cpus = 255,
>      .compat_props = (GlobalProperty[]) {
>          {
> @@ -612,6 +637,7 @@ static QEMUMachine xenfv_machine = {
>
>  static void pc_machine_init(void)
>  {
> +    qemu_register_machine(&pc_machine_v1_1);
>      qemu_register_machine(&pc_machine_v1_0);
>      qemu_register_machine(&pc_machine_v0_15);
>      qemu_register_machine(&pc_machine_v0_14);
> --
> 1.6.0.2

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Alexander Graf @ 2012-01-16 19:36 UTC
  To: Ryan Harper; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 16.01.2012, at 20:30, Ryan Harper wrote:

> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>> When running QEMU without a -cpu parameter, the user usually wants a sane
>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>> means "the maximum TCG can emulate".
>
> It also means we allow the maximum possible migration targets. Have you
> given any thought to migration with -cpu best?

If you have the same boxes in your cluster, migration just works. If
you don't, you usually use a specific CPU model that is the lowest
common denominator between your boxes either way.

The current kvm64 type is broken. Libgmp just abort()s when we pass it
in. So anything is better than what we do today on AMD hosts :).

Alex

* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Ryan Harper @ 2012-01-16 19:46 UTC
  To: Alexander Graf; +Cc: Ryan Harper, qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

* Alexander Graf <agraf@suse.de> [2012-01-16 13:37]:
>
> On 16.01.2012, at 20:30, Ryan Harper wrote:
>
>> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>>> When running QEMU without a -cpu parameter, the user usually wants a sane
>>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>>> means "the maximum TCG can emulate".
>>
>> It also means we allow the maximum possible migration targets. Have you
>> given any thought to migration with -cpu best?
>
> If you have the same boxes in your cluster, migration just works. If
> you don't, you usually use a specific CPU model that is the lowest
> common denominator between your boxes either way.

Sure, but the idea behind -cpu best is to not have to figure that out;
you had suggested that the qemu64/qemu32 types were just related to TCG,
and what I'm suggesting is that they are also the most compatible w.r.t.
migration.

It sounds like if migration is a requirement, then -cpu best probably
isn't something that would be used. I suppose I'm OK with that, or at
least I don't have a better suggestion on how to carefully push up the
capabilities without at some point breaking migration.

> The current kvm64 type is broken. Libgmp just abort()s when we pass it
> in. So anything is better than what we do today on AMD hosts :).

I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
detection (mplayer).

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Alexander Graf @ 2012-01-16 19:51 UTC
  To: Ryan Harper; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 16.01.2012, at 20:46, Ryan Harper wrote:

> * Alexander Graf <agraf@suse.de> [2012-01-16 13:37]:
>>
>> On 16.01.2012, at 20:30, Ryan Harper wrote:
>>
>>> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>>>> When running QEMU without a -cpu parameter, the user usually wants a sane
>>>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>>>> means "the maximum TCG can emulate".
>>>
>>> It also means we allow the maximum possible migration targets. Have you
>>> given any thought to migration with -cpu best?
>>
>> If you have the same boxes in your cluster, migration just works. If
>> you don't, you usually use a specific CPU model that is the lowest
>> common denominator between your boxes either way.
>
> Sure, but the idea behind -cpu best is to not have to figure that out;
> you had suggested that the qemu64/qemu32 types were just related to TCG,
> and what I'm suggesting is that they are also the most compatible w.r.t.
> migration.

Then the most compatible wrt migration is -cpu kvm64 / kvm32.

> It sounds like if migration is a requirement, then -cpu best probably
> isn't something that would be used. I suppose I'm OK with that, or at
> least I don't have a better suggestion on how to carefully push up the
> capabilities without at some point breaking migration.

Yes, if you're interested in migration, then you're almost guaranteed to
have a toolstack on top that has knowledge of your whole cluster and can
do the lowest-common-denominator detection over all of your nodes. On the
QEMU level we don't know anything about other machines.

>> The current kvm64 type is broken. Libgmp just abort()s when we pass it
>> in. So anything is better than what we do today on AMD hosts :).
>
> I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
> detection (mplayer).

It probably does :). But then again those don't do KVM, do they?

Alex

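The lowest-common-denominator detection such a toolstack performs is, at
its core, an intersection of feature masks across all nodes. A rough sketch,
assuming the stack has already gathered one CPUID feature mask per host;
real toolstacks reason about named CPU models rather than raw masks.

/* Sketch: the feature set every node in a cluster can honor is the
 * bitwise intersection of the per-host CPUID feature masks. */
static unsigned int cluster_common_features(const unsigned int *node_features,
                                            int num_nodes)
{
    unsigned int common = ~0u;            /* start from "every feature" */
    int i;

    for (i = 0; i < num_nodes; i++) {
        common &= node_features[i];       /* drop bits any node lacks */
    }
    return common;                        /* safe to expose on all nodes */
}
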
* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Ryan Harper @ 2012-01-16 20:13 UTC
  To: Alexander Graf; +Cc: Ryan Harper, qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

* Alexander Graf <agraf@suse.de> [2012-01-16 13:52]:
>
> On 16.01.2012, at 20:46, Ryan Harper wrote:
>
>> * Alexander Graf <agraf@suse.de> [2012-01-16 13:37]:
>>>
>>> On 16.01.2012, at 20:30, Ryan Harper wrote:
>>>
>>>> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>>>>> When running QEMU without a -cpu parameter, the user usually wants a sane
>>>>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>>>>> means "the maximum TCG can emulate".
>>>>
>>>> It also means we allow the maximum possible migration targets. Have you
>>>> given any thought to migration with -cpu best?
>>>
>>> If you have the same boxes in your cluster, migration just works. If
>>> you don't, you usually use a specific CPU model that is the lowest
>>> common denominator between your boxes either way.
>>
>> Sure, but the idea behind -cpu best is to not have to figure that out;
>> you had suggested that the qemu64/qemu32 types were just related to TCG,
>> and what I'm suggesting is that they are also the most compatible w.r.t.
>> migration.
>
> Then the most compatible wrt migration is -cpu kvm64 / kvm32.
>
>> It sounds like if migration is a requirement, then -cpu best probably
>> isn't something that would be used. I suppose I'm OK with that, or at
>> least I don't have a better suggestion on how to carefully push up the
>> capabilities without at some point breaking migration.
>
> Yes, if you're interested in migration, then you're almost guaranteed to
> have a toolstack on top that has knowledge of your whole cluster and can
> do the lowest-common-denominator detection over all of your nodes. On the
> QEMU level we don't know anything about other machines.
>
>>> The current kvm64 type is broken. Libgmp just abort()s when we pass it
>>> in. So anything is better than what we do today on AMD hosts :).
>>
>> I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
>> detection (mplayer).
>
> It probably does :). But then again those don't do KVM, do they?

Not following; mplayer issues SSE2, 3 and 4 instructions to see what
works to figure out how to optimize; it doesn't care if the CPU is
called QEMU64 or Cyrix or AMD. I'm not saying that we can't do better
than qemu64 w.r.t. the best CPU to select by default, but there are
plenty of applications that want to optimize their code based on what's
available, and this is done via code execution instead of string
comparison.

>
> Alex

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Alexander Graf @ 2012-01-16 20:51 UTC
  To: Ryan Harper; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 16.01.2012, at 21:13, Ryan Harper wrote:

> * Alexander Graf <agraf@suse.de> [2012-01-16 13:52]:
>>
>> On 16.01.2012, at 20:46, Ryan Harper wrote:
>>
>>> * Alexander Graf <agraf@suse.de> [2012-01-16 13:37]:
>>>>
>>>> On 16.01.2012, at 20:30, Ryan Harper wrote:
>>>>
>>>>> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>>>>>> When running QEMU without a -cpu parameter, the user usually wants a sane
>>>>>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>>>>>> means "the maximum TCG can emulate".
>>>>>
>>>>> It also means we allow the maximum possible migration targets. Have you
>>>>> given any thought to migration with -cpu best?
>>>>
>>>> If you have the same boxes in your cluster, migration just works. If
>>>> you don't, you usually use a specific CPU model that is the lowest
>>>> common denominator between your boxes either way.
>>>
>>> Sure, but the idea behind -cpu best is to not have to figure that out;
>>> you had suggested that the qemu64/qemu32 types were just related to TCG,
>>> and what I'm suggesting is that they are also the most compatible w.r.t.
>>> migration.
>>
>> Then the most compatible wrt migration is -cpu kvm64 / kvm32.
>>
>>> It sounds like if migration is a requirement, then -cpu best probably
>>> isn't something that would be used. I suppose I'm OK with that, or at
>>> least I don't have a better suggestion on how to carefully push up the
>>> capabilities without at some point breaking migration.
>>
>> Yes, if you're interested in migration, then you're almost guaranteed to
>> have a toolstack on top that has knowledge of your whole cluster and can
>> do the lowest-common-denominator detection over all of your nodes. On the
>> QEMU level we don't know anything about other machines.
>>
>>>> The current kvm64 type is broken. Libgmp just abort()s when we pass it
>>>> in. So anything is better than what we do today on AMD hosts :).
>>>
>>> I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
>>> detection (mplayer).
>>
>> It probably does :). But then again those don't do KVM, do they?
>
> Not following; mplayer issues SSE2, 3 and 4 instructions to see what
> works to figure out how to optimize; it doesn't care if the CPU is
> called QEMU64 or Cyrix or AMD. I'm not saying that we can't do better
> than qemu64 w.r.t. the best CPU to select by default, but there are
> plenty of applications that want to optimize their code based on what's
> available, and this is done via code execution instead of string
> comparison.

The problem with -cpu kvm64 is that we choose a family/model that doesn't
exist in the real world, and then glue AuthenticAMD or GenuineIntel into
the vendor string. Libgmp checks for existing CPUs, finds that this CPU
doesn't match any real-world IDs, and abort()s.

The problem is that there is not a single CPU on this planet in silicon
that has the same model+family numbers but exists in AuthenticAMD _and_
GenuineIntel flavors. We need to pass the host vendor in though, because
the guest uses it to detect whether it should execute SYSCALL or SYSENTER,
because Intel and AMD screwed up heavily on that one.

It's not about feature flags, which is what mplayer uses. Those are fine :).

Alex

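For reference, the identification data such checks key on can be read with
two CPUID leaves. A sketch assuming GCC's <cpuid.h>; a kvm64 guest would
report a real vendor string here combined with a family/model that never
shipped in silicon, which is exactly the mismatch described above.

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    /* Leaf 0: the vendor string lives in EBX, EDX, ECX (in that order). */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(&vendor[0], &ebx, 4);
    memcpy(&vendor[4], &edx, 4);
    memcpy(&vendor[8], &ecx, 4);
    vendor[12] = '\0';

    /* Leaf 1: family/model/stepping, decoded per the usual CPUID rules. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    unsigned int base_family = (eax >> 8) & 0xf;
    unsigned int family = base_family;
    unsigned int model = (eax >> 4) & 0xf;

    if (base_family == 0xf) {
        family += (eax >> 20) & 0xff;           /* extended family */
    }
    if (base_family == 0x6 || base_family == 0xf) {
        model |= ((eax >> 16) & 0xf) << 4;      /* extended model */
    }

    printf("%s, family %u, model %u, stepping %u\n",
           vendor, family, model, eax & 0xf);
    return 0;
}
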
* Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
  From: Ryan Harper @ 2012-01-16 21:33 UTC
  To: Alexander Graf; +Cc: Ryan Harper, qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

* Alexander Graf <agraf@suse.de> [2012-01-16 14:52]:
>
> On 16.01.2012, at 21:13, Ryan Harper wrote:
>
>> * Alexander Graf <agraf@suse.de> [2012-01-16 13:52]:
>>>
>>> On 16.01.2012, at 20:46, Ryan Harper wrote:
>>>
>>>> * Alexander Graf <agraf@suse.de> [2012-01-16 13:37]:
>>>>>
>>>>> On 16.01.2012, at 20:30, Ryan Harper wrote:
>>>>>
>>>>>> * Alexander Graf <agraf@suse.de> [2012-01-08 17:53]:
>>>>>>> When running QEMU without a -cpu parameter, the user usually wants a sane
>>>>>>> default. So far, we're using the qemu64/qemu32 CPU type, which basically
>>>>>>> means "the maximum TCG can emulate".
>>>>>>
>>>>>> It also means we allow the maximum possible migration targets. Have you
>>>>>> given any thought to migration with -cpu best?
>>>>>
>>>>> If you have the same boxes in your cluster, migration just works. If
>>>>> you don't, you usually use a specific CPU model that is the lowest
>>>>> common denominator between your boxes either way.
>>>>
>>>> Sure, but the idea behind -cpu best is to not have to figure that out;
>>>> you had suggested that the qemu64/qemu32 types were just related to TCG,
>>>> and what I'm suggesting is that they are also the most compatible w.r.t.
>>>> migration.
>>>
>>> Then the most compatible wrt migration is -cpu kvm64 / kvm32.
>>>
>>>> It sounds like if migration is a requirement, then -cpu best probably
>>>> isn't something that would be used. I suppose I'm OK with that, or at
>>>> least I don't have a better suggestion on how to carefully push up the
>>>> capabilities without at some point breaking migration.
>>>
>>> Yes, if you're interested in migration, then you're almost guaranteed to
>>> have a toolstack on top that has knowledge of your whole cluster and can
>>> do the lowest-common-denominator detection over all of your nodes. On the
>>> QEMU level we don't know anything about other machines.
>>>
>>>>> The current kvm64 type is broken. Libgmp just abort()s when we pass it
>>>>> in. So anything is better than what we do today on AMD hosts :).
>>>>
>>>> I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
>>>> detection (mplayer).
>>>
>>> It probably does :). But then again those don't do KVM, do they?
>>
>> Not following; mplayer issues SSE2, 3 and 4 instructions to see what
>> works to figure out how to optimize; it doesn't care if the CPU is
>> called QEMU64 or Cyrix or AMD. I'm not saying that we can't do better
>> than qemu64 w.r.t. the best CPU to select by default, but there are
>> plenty of applications that want to optimize their code based on what's
>> available, and this is done via code execution instead of string
>> comparison.
>
> The problem with -cpu kvm64 is that we choose a family/model that doesn't
> exist in the real world, and then glue AuthenticAMD or GenuineIntel into
> the vendor string. Libgmp checks for existing CPUs, finds that this CPU
> doesn't match any real-world IDs, and abort()s.
>
> The problem is that there is not a single CPU on this planet in silicon
> that has the same model+family numbers but exists in AuthenticAMD _and_
> GenuineIntel flavors. We need to pass the host vendor in though, because
> the guest uses it to detect whether it should execute SYSCALL or SYSENTER,
> because Intel and AMD screwed up heavily on that one.

I forgot about this one. =(

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

* Re: [Qemu-devel] [PATCH 1/2] KVM: Add new -cpu best
  From: Peter Maydell @ 2012-01-09 0:02 UTC
  To: Alexander Graf; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 8 January 2012 23:52, Alexander Graf <agraf@suse.de> wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the thought, because it potentially opens a big can
> of worms for potential bugs. But if I already am so opposed to it for SLE, how
> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
> what would a sane default look like?
>
> So I had this idea of looping through all available CPU definitions. We can
> pretty well tell if our host is able to execute any of them by checking the
> respective flags and seeing if our host has all features the CPU definition
> requires. With that, we can create a -cpu type that would fall back to the
> "best known CPU definition" that our host can fulfill. On my Phenom II
> system for example, that would be -cpu phenom.

...shouldn't this be supported on at least all hosts with KVM support,
not just x86?

Also I don't see any documentation updates in this patchset :-)

-- PMM

* Re: [Qemu-devel] [PATCH 1/2] KVM: Add new -cpu best
  From: Alexander Graf @ 2012-01-09 0:06 UTC
  To: Peter Maydell; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 09.01.2012, at 01:02, Peter Maydell wrote:

> On 8 January 2012 23:52, Alexander Graf <agraf@suse.de> wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the thought, because it potentially opens a big can
>> of worms for potential bugs. But if I already am so opposed to it for SLE, how
>> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
>> what would a sane default look like?
>>
>> So I had this idea of looping through all available CPU definitions. We can
>> pretty well tell if our host is able to execute any of them by checking the
>> respective flags and seeing if our host has all features the CPU definition
>> requires. With that, we can create a -cpu type that would fall back to the
>> "best known CPU definition" that our host can fulfill. On my Phenom II
>> system for example, that would be -cpu phenom.
>
> ...shouldn't this be supported on at least all hosts with KVM support,
> not just x86?

I don't think it makes sense on any other platform. For PPC, -cpu host is
good enough, since it's basically doing the same as -cpu best. We only
have a single 32-bit number as the identifier for -cpu host there.

> Also I don't see any documentation updates in this patchset :-)

Do we have documentation for -cpu host?

Alex

* Re: [Qemu-devel] [PATCH 1/2] KVM: Add new -cpu best
  From: Anthony Liguori @ 2012-01-16 19:33 UTC
  To: Alexander Graf; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 01/08/2012 05:52 PM, Alexander Graf wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the thought, because it potentially opens a big can
> of worms for potential bugs. But if I already am so opposed to it for SLE, how
> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
> what would a sane default look like?

What are the arguments against -cpu host?

Regards,

Anthony Liguori

* Re: [Qemu-devel] [PATCH 1/2] KVM: Add new -cpu best
  From: Alexander Graf @ 2012-01-16 19:42 UTC
  To: Anthony Liguori; +Cc: qemu-devel@nongnu.org Developers, kvm list, Avi Kivity

On 16.01.2012, at 20:33, Anthony Liguori wrote:

> On 01/08/2012 05:52 PM, Alexander Graf wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the thought, because it potentially opens a big can
>> of worms for potential bugs. But if I already am so opposed to it for SLE, how
>> can it possibly be reasonable to default to -cpu host in upstream QEMU? And
>> what would a sane default look like?
>
> What are the arguments against -cpu host?

It's hard to test. New CPUs have new features, and we're having a hard time
catching up. With -cpu best we only select from a pool of known-good CPU
types. If you want to check that everything works, go to a box that has the
maximum available features, go through all -cpu options that users could
run into, and you're good. With -cpu host you can't really test (unless you
own all possible CPUs there are). We expose CPUID information that doesn't
exist that way in the real world.

A small example from today's code: there are a bunch of CPUID leaves. On
Nehalem, one of them is a list of possible C-states to go into. With -cpu
host we sync feature bits, CPU name, CPU family and some other bits of
information, but not the C-state information. So we end up with a CPU
inside the guest that looks and feels like a Nehalem CPU, but doesn't
expose any C-state information.

Linux now boots, goes in, checks that it's running on Nehalem, sets the
powersave mechanism to the respective model and fills an internal callback
table with the C-state information, with a loop that ends without any
action, since we expose 0 C-state bits. When the guest then calls the idle
callback, it dereferences that table, which contains a NULL pointer. Oops.

That is just one example from current Linux. Another one would be my
development AMD box, which wasn't on the market yet at the time, so guests
would just refuse to boot at all, since they'd say the CPUID is unknown.

Overall, I used to be a big fan of -cpu host, but it's a maintainability
nightmare. It can be great for testing stuff, so we should definitely keep
it around. But after thinking about it again, I don't think it should be
the default. The default should be something safe.

Alex

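The guest-side crash described above can be pictured with a toy model of
such a driver table. This is purely illustrative C, not actual Linux code;
the point is that the model check and the table fill draw on two different
CPUID sources, and -cpu host only synchronizes one of them.

#include <stddef.h>
#include <stdio.h>

typedef void (*idle_fn)(void);

#define MAX_CSTATES 8
static idle_fn idle_table[MAX_CSTATES];   /* static storage: all NULL */

/* What a guest sees when the host's C-state leaf is not forwarded. */
static unsigned int cpuid_cstate_count(void)
{
    return 0;
}

int main(void)
{
    unsigned int i, n = cpuid_cstate_count();

    /* The fill loop ends immediately: zero advertised C-states. */
    for (i = 0; i < n; i++) {
        /* a real driver would register one idle handler per C-state */
    }

    /* The idle path trusts the family/model check, not the table. */
    if (idle_table[0] == NULL) {
        printf("idle handler is NULL; a real guest would oops here\n");
        return 1;
    }
    idle_table[0]();
    return 0;
}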