From: bugzilla-daemon@bugzilla.kernel.org
To: kvm@vger.kernel.org
Subject: [Bug 53611] New: nVMX: Add nested EPT
Date: Mon, 11 Feb 2013 12:49:06 +0000 (UTC) [thread overview]
Message-ID: <bug-53611-28872@https.bugzilla.kernel.org/> (raw)
https://bugzilla.kernel.org/show_bug.cgi?id=53611
Summary: nVMX: Add nested EPT
Product: Virtualization
Version: unspecified
Platform: All
OS/Version: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: kvm
AssignedTo: virtualization_kvm@kernel-bugs.osdl.org
ReportedBy: nyh@math.technion.ac.il
Regression: No
Created an attachment (id=93101)
--> (https://bugzilla.kernel.org/attachment.cgi?id=93101)
Nested EPT patches, v2
Nested EPT means emulating EPT for an L1 guest, allowing it to use EPT when
running a nested guest L2. When L1 uses EPT, the L2 guest can set its own cr3
and take its own page faults without either L0 or L1 getting involved. In many
workloads this significantly improves L2's performance over the previous two
alternatives (shadow page tables over EPT, and shadow page tables over shadow
page tables). As an example, I measured a single-threaded
"make", which has a lot of context switches and page faults, on the three
options:
shadow over shadow: 105 seconds
shadow over EPT: 87 seconds (this is the default currently)
EPT over EPT: 29 seconds
single-level virtualization (with EPT): 25 seconds
So clearly nested EPT would be a big win for such workloads.
I attach the patch set I worked on, which allowed me to measure the above
results. This is the same patch set I sent to the KVM mailing list on August
1st, 2012, titled "nEPT v2: Nested EPT support for Nested VMX".
This patch set still needs some work: it is known to work in some setups but
not in others, and the file "announce" in the attached tar lists 5 things
which definitely need to be done. There were a few additional comments in the
mailing list - see
http://comments.gmane.org/gmane.comp.emulators.kvm.devel/95395
--
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
Thread overview: 26+ messages
2013-02-11 12:49 bugzilla-daemon [this message]
2013-02-11 12:50 ` [Bug 53611] nVMX: Add nested EPT bugzilla-daemon
2013-02-11 13:16 ` [Bug 53611] New: " Jan Kiszka
2013-02-11 13:27 ` Nadav Har'El
2013-02-11 14:53 ` Jan Kiszka
2013-02-12 19:13 ` Nakajima, Jun
2013-02-13 7:43 ` Jan Kiszka
2013-02-15 2:07 ` Nakajima, Jun
2013-02-26 14:11 ` Nadav Har'El
2013-02-26 19:43 ` Jan Kiszka
2013-02-26 20:14 ` Gleb Natapov
2013-03-05 4:45 ` Nakajima, Jun
2013-03-05 8:28 ` Jan Kiszka
2013-03-22 6:23 ` Nakajima, Jun
2013-03-22 16:45 ` Jan Kiszka
2013-04-24 7:25 ` Jan Kiszka
2013-04-24 15:55 ` Nakajima, Jun
2013-04-24 15:57 ` Jan Kiszka
2013-04-25 8:00 ` Nakajima, Jun
2013-04-25 9:19 ` Gleb Natapov
2013-04-26 6:26 ` Jan Kiszka
2013-04-26 16:07 ` Nakajima, Jun
2013-04-28 10:03 ` Jan Kiszka
2013-02-27 8:14 ` [Bug 53611] " bugzilla-daemon
2015-03-17 3:53 ` bugzilla-daemon
2015-04-08 9:02 ` bugzilla-daemon