From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Roedel, Joerg"
Subject: Re: [PATCH 3/4] test: Add mode-switch test for nested svm
Date: Mon, 2 Aug 2010 16:11:46 +0200
Message-ID: <20100802141146.GB25471@amd.com>
References: <1280756016-11330-1-git-send-email-joerg.roedel@amd.com>
 <1280756016-11330-4-git-send-email-joerg.roedel@amd.com>
 <4C56CE5E.2080908@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Cc: Marcelo Tosatti, "kvm@vger.kernel.org"
To: Avi Kivity
Content-Disposition: inline
In-Reply-To: <4C56CE5E.2080908@redhat.com>

On Mon, Aug 02, 2010 at 09:55:42AM -0400, Avi Kivity wrote:
> On 08/02/2010 04:33 PM, Joerg Roedel wrote:
> > +static void test_mode_switch(struct test *test)
> > +{
> > +    asm volatile("  cli\n"
> > +                 "  ljmp *1f\n"    /* jump to 32-bit code segment */
> > +                 "1:\n"
> > +                 "  .long 2f\n"
> > +                 "  .long 40\n"
> > +                 ".code32\n"
> > +                 "2:\n"
> > +                 "  movl %%cr0, %%eax\n"
> > +                 "  btcl $31, %%eax\n"          /* clear PG */
> > +                 "  movl %%eax, %%cr0\n"
> > +                 "  movl $0xc0000080, %%ecx\n"  /* EFER */
> > +                 "  rdmsr\n"
> > +                 "  btcl $8, %%eax\n"           /* clear LME */
> > +                 "  wrmsr\n"
> > +                 "  movl %%cr4, %%eax\n"
> > +                 "  btcl $5, %%eax\n"           /* clear PAE */
> > +                 "  movl %%eax, %%cr4\n"
> > +                 "  movw $64, %%ax\n"
> > +                 "  movw %%ax, %%ds\n"
> > +                 "  ljmpl $56, $3f\n"  /* jump to 16 bit protected-mode */
> > +                 ".code16\n"
> > +                 "3:\n"
> > +                 "  movl %%cr0, %%eax\n"
> > +                 "  btcl $0, %%eax\n"           /* clear PE */
> > +                 "  movl %%eax, %%cr0\n"
> > +                 "  ljmpl $0, $4f\n"            /* jump to real-mode */
> > +                 "4:\n"
> > +                 "  vmmcall\n"
> > +                 "  movl %%cr0, %%eax\n"
> > +                 "  btsl $0, %%eax\n"           /* set PE */
> > +                 "  movl %%eax, %%cr0\n"
> > +                 "  ljmpl $40, $5f\n"           /* back to protected mode */
> > +                 ".code32\n"
> > +                 "5:\n"
> > +                 "  movl %%cr4, %%eax\n"
> > +                 "  btsl $5, %%eax\n"           /* set PAE */
> > +                 "  movl %%eax, %%cr4\n"
> > +                 "  movl $0xc0000080, %%ecx\n"  /* EFER */
> > +                 "  rdmsr\n"
> > +                 "  btsl $8, %%eax\n"           /* set LME */
> > +                 "  wrmsr\n"
> > +                 "  movl %%cr0, %%eax\n"
> > +                 "  btsl $31, %%eax\n"          /* set PG */
> > +                 "  movl %%eax, %%cr0\n"
> > +                 "  ljmpl $8, $6f\n"            /* back to long mode */
> > +                 ".code64\n\t"
> > +                 "6:\n"
> > +                 "  vmmcall\n"
> > +                 ::: "rax", "rbx", "rcx", "rdx", "memory");
> > +}
> > +
>
> What is this testing exactly?  There is no svm function directly
> associated with mode switch.  In fact, most L1s will intercept cr and
> efer access and emulate the mode switch, rather than letting L2 perform
> the mode switch directly.

This is testing the failure case without the nested-svm efer patch I
submitted last week. The sequence above (which switches from long mode
to real mode and back to long mode) fails without this patch.

	Joerg

--
AMD Operating System Research Center

Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632
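
As an illustration of the interception behaviour Avi describes (an L1
trapping CR and EFER writes and emulating the mode switch itself), here is
a minimal sketch of how such intercepts could be requested from a test
harness like kvm-unit-tests. It is not part of the patch or the thread:
the struct and field names (struct vmcb, intercept_cr_write, intercept,
msrpm_base_pa, INTERCEPT_MSR_PROT) are assumed to match the svm.h shipped
with these tests, the MSR permission map is assumed to be identity-mapped
by the harness, and the bitmap layout follows the AMD APM (2 bits per MSR,
MSRs 0xc0000000 and up in the second 2KB, odd bit = write intercept).

    #include "svm.h"    /* assumed: kvm-unit-tests SVM definitions */

    /*
     * Sketch only: make L2's mode-switch attempts exit to L1 instead of
     * taking effect directly.  CR0 writes are trapped via the CR-write
     * intercept bitmap; EFER is MSR 0xc0000080, so it is trapped via the
     * MSR permission map.
     */
    static void intercept_mode_switch(struct vmcb *vmcb)
    {
        /* Assumes the MSRPM is identity-mapped, as the harness sets it up. */
        u8 *msrpm = (u8 *)(unsigned long)vmcb->control.msrpm_base_pa;
        unsigned int idx = 0x80;    /* EFER (0xc0000080) - 0xc0000000 */

        /* Exit on guest writes to CR0, i.e. on PE/PG toggles. */
        vmcb->control.intercept_cr_write |= 1 << 0;

        /* Enable MSR protection and set EFER's write-intercept bit. */
        vmcb->control.intercept |= 1ULL << INTERCEPT_MSR_PROT;
        msrpm[2048 + idx / 4] |= 2 << ((idx % 4) * 2);
    }

With such bits set, the wrmsr to EFER and the movl to %cr0 in the quoted
sequence would exit to L1 as SVM_EXIT_MSR and SVM_EXIT_WRITE_CR0 rather
than taking effect in L2, which is why the test above deliberately leaves
those intercepts clear and lets L2 perform the switch directly.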