* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-05 18:44 Xen.org security team
From: Xen.org security team @ 2018-01-05 18:44 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 7718 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 3
Information leak via side effects of speculative execution
UPDATES IN VERSION 3
====================
Add information about ARM vulnerability.
Correct description of SP2 difficulty.
Mention that resolutions for SP1 and SP3 may be available in the
future.
Move description of the PV-in-PVH shim from Mitigation to Resolution.
(When available and deployed, it will eliminate the SP3
vulnerability.)
Add colloquial names and CVEs to the relevant paragraphs in Issue
Description.
Add a URL.
Say explicitly in Vulnerable Systems that HVM guests cannot exploit
SP3.
Clarify that SP1 and SP2 can be exploited against other victims
besides operating systems and hypervisors.
Grammar fixes.
Remove erroneous detail about when Xen direct maps the whole of
physical memory.
State in Description that Xen ARM guests run in a separate address
space.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to use CPU resources most efficiently,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) goes up with the numbers. For
SP1, an attacker is limited to windows of code after bounds checks of
user-supplied indexes. For SP2, the attacker will in many cases be
limited to executing arbitrary pre-existing code inside of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
For guests with legacy PV kernels which cannot be run in HVM mode, we
have developed a "shim" hypervisor that allows PV guests to run in PVH
mode. Unfortunately, due to the accelerated schedule, this is not yet
ready to release. We expect to have it ready for 4.10, as well as PVH
backports to 4.9 and 4.8, available over the next few days.
When we have useful information we will send an update.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaT8eJAAoJEIP+FMlX6CvZpHsIAMd+oeUvMIDyGwMSDL93KAqJ
TPKV9Qi5FxTfW+dkfJ5GRR/IPHbxr9yHfbUpU33QfLYDmyMzL3oNokOR3R6jSpFE
dgqHIoS04EXsy7fSZ777YWwZoGBsAfbDZ5sJnFWxLTcLx6440N03LJC0wsLFyRET
6wPF7Ml9ZsWfkd3VvMDUc4PRhjbzGio1eP+ZUS4HfRk01DYmv/NTnUZIdY01sFFE
PVSTxO3iO0ptiTlqd+PPsjlqswNu0gmvW7jkc/MaLPLUhKcUG7tat0yDapxCf0Hv
xJZ6eNsjhTVJitINISyGYR5ZZESpfhXzig6znex6nr7r1/Ey4w6ud90pSV9j2/o=
=VIt1
-----END PGP SIGNATURE-----
[-- Attachment #2: Type: text/plain, Size: 157 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-11 20:09 Xen.org security team
From: Xen.org security team @ 2018-01-11 20:09 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 7391 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 4
Information leak via side effects of speculative execution
UPDATES IN VERSION 4
====================
Added a README for determining which shim to use, as well as
instructions for using "Vixen" (the HVM shim) and the required
conversion script.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to use CPU resources most efficiently,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) goes up with the numbers. For
SP1, an attacker is limited to windows of code after bounds checks of
user-supplied indexes. For SP2, the attacker will in many cases be
limited to executing arbitrary pre-existing code inside of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. The HVM shim (codenamed
"Vixen") is available now. We expect to have the PVH shim (codenamed
"Comet") available within a few days. Please read README.which-shim
to determine which shim is suitable for you.
$ sha256sum xsa254*/*
2df6b811ec7a377a9cc717f7a8ed497f3a90928c21cba81182eb4a802e32ecd7 xsa254/README.vixen
bc04385fd3ec899e1b8c1c001b6169587a8a8b20d5d0d584ff749b7ed67d7e70 xsa254/README.which-shim
36e825118fa8fca30158e50607580ddf64f6c62e5c5127d87d0042fbe2ff37b2 xsa254/pvshim-converter.pl
$
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaV8ReAAoJEIP+FMlX6CvZWoUH/joZJ3sMPCs5EHlDcKMcoWXx
YMsZuypqVyotc9WbvBdh3QfdfCEOqouJatHUBkl3Me8bzkJY1IEzcE4BlG0Ku1Bv
s2DKEcUDbEtA7zuJuQukeuYdx4QaqfVr93fnW48P2Ax2X7kBl1cvr5isxjBaPqC2
dHVMqXgwPGPwOzPW7GZjmzDikyPAHgsNxdH/rXdAHSJ8hLVUeQv3zhMaoUmvQiNb
xq7+mSIoVAZr82fXKGKApX2XTxmwq7SgyzAVVfGySID9GGjnGGoSpirpMtkD+7io
rpe0W+KD/muukgzvRd5+eHbx+dIq5MN0VnQiFbc2WmM8HNoJF/R8k/kvLtQfiZ4=
=2xGF
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2499 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl and be able to use an alternative version of your guest config
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to add the following:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #3: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 3423 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibilities. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect AMD processors.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
You might consider this approach if:
- You want to deploy a fix immediately
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features, which are not available in Vixen, are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
You might consider this approach if:
- You're on 4.8 or later already
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
Unfortunately this solution is not yet available. We expect to have
it available within a few working days.
[-- Attachment #4: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6402 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
    print STDERR "+ @_\n" if $debug;
    $!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('sidecars-directory=s' => \$sidecars_dir,
           'shim=s' => \$shim,
           'debug' => \$debug)
    or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
    open I, '<', "$in" or die "open input config file: $!\n";
} else {
    open I, '<&STDIN' or die $!;
}
{
    local $/;
    $indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
    if k.startswith("_"):
        del l[k]
print json.dumps(l)
END
our $c;
{
    local $/;
    $_ = <P>;
    $!=0; $?=0; close P or die "$! $?";
    $c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
# assumed key name: the original read pvshim_sidecar_path again here,
# which would make the wrapper path collide with the sidecar .iso
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
    insmod gzio
    insmod xzio
    multiboot (cd)/shim placeholder $shim_cmdline
    module (cd)/kernel placeholder $kernel_cmdline
    module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
    $!==ENOENT or die "$sidecar.new: stat: $!";
    print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
    my $missing;
    foreach my $check (qw(xorriso mformat)) {
        $missing |= system qw(sh -c), "type $check";
    }
    if ($missing) {
        print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
    } else {
        print STDERR <<END;
An older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
    }
    die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP';
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
    newargs+=("$1")
}
while [ $# -gt 1 ]; do
    case "$1" in
    -no-shutdown|-nodefaults|-no-user-config)
        newarg "$1"; shift
        ;;
    -xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
        newarg "$1"; shift
        newarg "$1"; shift
        ;;
    -name)
        newarg "$1"; shift
        name="$1"; shift
        newarg "$name"
        ;;
    -netdev|-cdrom)
        : fixme
        newarg "$1"; shift
        newarg "$1"; shift
        ;;
    -drive|-kernel|-initrd|-append|-vnc)
        shift; shift
        ;;
    -device)
        shift
        case "$1" in
        XXXrtl8139*)
            newarg "-device"
            newarg "$1"; shift
            ;;
        *)
            shift
            ;;
        esac
        ;;
    *)
        echo >&2 "warning: unexpected argument $1 being passed through"
        newarg "$1"; shift
        ;;
    esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
set -x
for path in /usr/local/lib /usr/lib; do
    $path/xen/bin/qemu-system-i386 "${newargs[@]}" ||:
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
    open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
    open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
    rename "$out.tmp", $out or die "install output: $!";
    print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
    print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-12 12:15 Xen.org security team
From: Xen.org security team @ 2018-01-12 12:15 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 7982 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 5
Information leak via side effects of speculative execution
UPDATES IN VERSION 5
====================
The PV-in-PVH/HVM shim approach leaves the *guest* vulnerable to
Meltdown attacks from its own unprivileged users, even if the guest
has KPTI patches. That is, guest userspace can use Meltdown to read
all memory in the same guest.
In the Vixen shim sidecar creator script, look for qemu in some more
places, and provide a command line option to specify the
qemu-system-i386 binary to use in case the default search doesn't
find it.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to use CPU resources most efficiently,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) increases with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside
of Xen. For SP3 (and in other cases for SP2), an attacker can write
arbitrary code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
(Within-guest attacks are still possible unless the guest OS has also
been updated with an SP3 mitigation series such as KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now. We expect to have
the PVH shim (codenamed "Comet") available within a few days. Please
read README.which-shim to determine which shim is suitable for you.
$ sha256sum xsa254*/*
2df6b811ec7a377a9cc717f7a8ed497f3a90928c21cba81182eb4a802e32ecd7 xsa254/README.vixen
4c30295513ad82debe04845248b5baac0b3d0c151b80fdca32f2df8b9aa0b541 xsa254/README.which-shim
6210615c1384e13da953452e6f47066f8837e2b2c7f671280902e32e96763b54 xsa254/pvshim-converter.pl
$
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaWKbzAAoJEIP+FMlX6CvZtl4H/RKmXpS1fL51efZbrhYaDBTF
nLSHxfPdmi+MLaJ8Y7hS9w061ovK7OYTcvi9xlAhE6yC0b4lX5NToc1CPkX6pjGV
atOh0q4QyxDQm9JGW1aL9pZa3ZSF/Y7ad/zv5OlU97ZmDEwuEVvOTSsGj+jMFB08
gJ+VfQ0F2R+sjdh9BIScbUedLEz+M5so2wGaOJObr/ybRfLyAobxwiIc+yPniBoi
c4eNLSdzBjmg0YrRGeMToVziNH6YXmHD+VLSj23SbVYOjgSS/vnbpRtw7DbcwGXy
jhwK8WheInGUsCe+Nz0VU54MXtRhkV+JtsB/g2h4flr49mUm8kt2VY3P0NO7dcE=
=jGQH
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2499 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl, and be able to use an alternative config file
for your guest
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Build a Xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to add the following:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) If the guest currently boots with pygrub, you must first switch
either to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file) or to pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #3: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 3949 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
NB that these approaches leave the guest vulnerable to within-guest
information leaks based on Meltdown, *even if the guest OS has
KPTI/Kaiser or a similar Meltdown mitigation*.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibility issues. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each shim, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
Unfortunately this solution is not yet available. We expect to have
it available within a few working days.
[-- Attachment #4: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6701 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
An older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
    if [ -x "$path/qemu-system-i386" ]; then
        exec "$path/qemu-system-i386" "${newargs[@]}"
    fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-12 17:36 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-01-12 17:36 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 7928 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 6
Information leak via side effects of speculative execution
UPDATES IN VERSION 6
====================
PVH shim ("Comet") for 4.10 is available.
Mention within-guest attack in README.vixen as well as
README.which-shim.
Vixen shim converter script "exec"s qemu, avoiding stale qemu
processes (and, therefore, avoiding stale domains).
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause the logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) increases with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside
of Xen. For SP3 (and in other cases for SP2), an attacker can write
arbitrary code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
(Within-guest attacks are still possible unless the guest OS has also
been updated with an SP3 mitigation series such as KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10. We expect to have Comet for 4.8 and
4.9 within a few days. Please read README.which-shim to determine
which shim is suitable for you.
$ sha256sum xsa254*/*
f81c4624f8b188a2c33efa8687d3442bbd17c476e1a10761ef70c0aa99f6c659 xsa254/README.comet
1c594822dbd95998951203f6094bc77586d5720788de15897784d20bacb2ef08 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaWPIbAAoJEIP+FMlX6CvZQuoH/0A21scnQhrQPmFjtBO0b0Ai
/xQ7VCf2t3iKeZYJJGzj2atE1Hj91H6sZe6t6tLFbfPeYv2Gbfpl/09EE8ONSpSj
ae69fgwQN/EvpkCVec+QWQ0pWj7tLYgkT4IwQJSW+6VrTWjEV8PzQgkfjgclJEOk
J7EhaauI0qZVPEC2QZoMGJlgwfoS4xJalpCUGflrvgtmPhYbGGYDP8bP7WbVtqYS
I9nIoqndBdeWeyyu1O+cnMquV5BX2Nq7BDOTB3SMwNBHsnKudRQQRc3yNdmvQa2C
jvUMs/U7rqfK5pgOfimvLSDLR0TSnzNC8ahuI9Tv6TSwIl+AVt4xg0DZzhMjiqQ=
=aOVG
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 1828 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
At the moment, only 4.10 is available. We hope to have 4.8 and 4.9 in
the coming few days.
What you will need
------------------
* You will need xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-1
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-1.1
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
[-- Attachment #3: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2736 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl, and be able to use an alternative config file
for your guest
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Build a Xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to add the following:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) If the guest currently boots with pygrub, you must first switch
either to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file) or to pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #4: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibility issues. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each shim, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features, not available in Vixen, are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10. We expect to have
backports to 4.8 and 4.9 available within a few working days.
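For a side-by-side feel of what each shim does to a guest config:
Vixen's converter appends an HVM stanza along the lines below (the
paths shown are the converter's defaults for a guest named GUEST),
while Comet needs only two hand-added lines.  This is a sketch for
comparison, not a drop-in config:

```
# Vixen: stanza appended automatically by pvshim-converter.pl
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='/var/lib/xen/pvshim-sidecars/GUEST.dm'
device_model_args_hvm=['-cdrom','/var/lib/xen/pvshim-sidecars/GUEST.iso']
boot='c'
serial='pty'

# Comet: added by hand (with any 'builder=' line removed)
type="pvh"
pvshim=1
```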
[-- Attachment #5: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
Also, older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
[-- Attachment #6: Type: text/plain, Size: 157 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-12 17:46 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-01-12 17:46 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 7778 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 7
Information leak via side effects of speculative execution
UPDATES IN VERSION 7
====================
PVH shim ("Comet") for 4.10 tag correction: please use tag
4.10.0-shim-comet-1.1.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
can cause logic to reliably 'guess' the way the attacker chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) increases from SP1 to SP3.  For
SP1, an attacker is limited to windows of code after bound checks of
user-supplied indexes.  For SP2, the attacker will in many cases
be limited to executing arbitrary pre-existing code inside of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD processors are vulnerable.  Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
(Within-guest attacks are still possible unless the guest OS has also
been updated with an SP3 mitigation series such as KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10. We expect to have Comet for 4.8 and
4.9 within a few days. Please read README.which-shim to determine
which shim is suitable for you.
$ sha256sum xsa254*/*
34749c1169c5c8a1c0f7457184998e17ae54d5b262984150286db74ac1a82d22 xsa254/README.comet
1c594822dbd95998951203f6094bc77586d5720788de15897784d20bacb2ef08 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaWPSKAAoJEIP+FMlX6CvZkicH/2H/Nn8eN90XeK6cXXTnz4Nx
OhDM1Rr9K0Sdnw84T5azKbtpEjPhiM762oRMRgO6uAYHs4cbCHemDLvruqS65Se5
0+Gs6V0b7nqXPremlulqe81A2rTBlmqtFTCQf2VWg2uLLHXwMVtbqCtCCdzmMA+w
XyiVQUO/MfgEOjbgM2XJSfmA0TcZfTClDW3FCvb9LhYLgdOGioxpGQ+SGsSNiZOL
0acn2eocI+Lihr0o/bX6tkhePTzThVOniah/AfIOcKD6WqEeN0NXdHZQUOOXCMMq
Js8tlwCu1ixrg8IFngUxFAKrD3Ge0pEmtCw90yWdhY/vsS6eE80Ixj+ZqaKUATE=
=FHIM
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 1830 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
At the moment, only 4.10 is available. We hope to have 4.8 and 4.9 in
the coming few days.
What you will need
------------------
* You will need the xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-1.1
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-1.1
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
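The conversion above can also be scripted.  A minimal sketch,
operating on a scratch copy of a config (the config contents here are
invented for illustration; back up real configs before editing them):

```shell
# Make a scratch PV config to convert (contents are illustrative).
cfg=$(mktemp)
cat >"$cfg" <<'EOF'
name = "guest1"
builder = "generic"
kernel = "/boot/vmlinuz-guest"
memory = 1024
EOF

# Remove any reference to 'builder', then add the two PVH-shim lines.
grep -v '^builder' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
printf 'type="pvh"\npvshim=1\n' >> "$cfg"

cat "$cfg"
```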
[-- Attachment #3: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2736 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl and able to use an alternative guest config
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to add the following:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #4: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
incompatibilities.  As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who might want to use it.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features, not available in Vixen, are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10. We expect to have
backports to 4.8 and 4.9 available within a few working days.
[-- Attachment #5: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
Also, older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-16 17:43 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-01-16 17:43 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 8130 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 8
Information leak via side effects of speculative execution
UPDATES IN VERSION 8
====================
PVH shim ("Comet") is now available for Xen 4.8.
Fixes for two bugs in PVH shim "Comet": one relating to shim
initialisation, which can cause hangs during guest boot shortly after
host boot(!), and one to make qemu PV backends work in PVH mode.
Thanks to the respective contributors.
We are no longer inclined to port the "Comet" patches to Xen 4.9. If
this causes you a problem, please let us know by contacting us:
To: security@xenproject.org; CC: xen-devel@lists.xenproject.org
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) goes up with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside
of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by running guests in HVM or PVH mode.
(Within-guest attacks are still possible unless the guest OS has also
been updated with an SP3 mitigation series such as KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10 and Xen 4.8. Please read
README.which-shim to determine which shim is suitable for you.
$ sha256sum xsa254*/*
2f830fede5d58d3d90fe942ec2d8c4ef65cd14c4d565f9a1b9817847662ebba1 xsa254/README.comet
1c594822dbd95998951203f6094bc77586d5720788de15897784d20bacb2ef08 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
RESOLUTION
==========
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaXjm9AAoJEIP+FMlX6CvZ5VwH/1KQOIRXgsfYILMkdYIR4mG4
VGFcPT7l6egTndGOxPUUDcjxchP1guyyAucSMX+OzoK+SNJReqlSM/mjIN9Vvka4
BQiTr2Xh0y6GcyB+ldd29YTYAv45FYaIiMzrWUfATdkswezraW/uv3AKFkIrmwt3
LRNMGws0fyXLYfLAISdUJtlLN5pfuQ6jKNGXQTnAbmJ+PbGuOBJcOrJZjf+estGK
ptIp3jLwjBPuKwO8IR8jSYEAP7vOTRwOES1+TNeMyU9vPqWIa6D0L1wyjt4uTrjz
OPeAgD52v/Xh4nekFDaAZYaezqhLuzQqpIJKAtGbAUMxJkzFhevgCcBzOu/1/vM=
=F+76
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 2854 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
Versions for Xen 4.8 and 4.10 are available.
What you will need
------------------
* You will need the xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-2
- For 4.8: 4.8.3pre-shim-comet-2 and 4.10.0-shim-comet-2
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-2
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Build instructions: 4.8
-----------------------
The code for the shim itself is not backported to 4.8. 4.8 users should
use a shim built from 4.10-based source code; this can be simply
dropped into a Xen 4.8 installation.
1. Build a 4.8+ system with support for running PVH, and for pvshim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.8.3pre-shim-comet-2
Do a build and install as normal.
2. Build a 4.10+ system to be the shim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-2
./configure
make -C tools/firmware/xen-dir
And then install the shim executable where
the 4.8 pv shim mode tools expect to find it
cp tools/firmware/xen-dir/xen-shim /usr/lib/xen/boot/xen-shim
cp tools/firmware/xen-dir/xen-shim /usr/local/lib/xen/boot/xen-shim
This step is only needed to boot guests in "PVH with PV shim"
mode; it is not needed when booting PVH-supporting guests as PVH.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
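The two conversions above can be sketched as a small shell helper.
This is a minimal sketch only, assuming a simple one-option-per-line
config; `convert_to_pvh_shim` and the demo paths are hypothetical and
not part of the shipped tooling, and complex configs may still need
hand editing:

```shell
# Minimal sketch: turn a PV xl config into a PVH-shim config by
# dropping any builder= line and appending the two options above.
convert_to_pvh_shim () {
    sed '/^[[:space:]]*builder[[:space:]]*=/d' "$1"
    printf '%s\n' 'type="pvh"' 'pvshim=1'
}

# demonstrate on a throwaway config
cat > /tmp/demo-guest.cfg <<'EOF'
name = "demo"
builder = "generic"
kernel = "/boot/vmlinuz-demo"
EOF
convert_to_pvh_shim /tmp/demo-guest.cfg
```

For the plain PVH conversion (the second recipe above), drop the
`pvshim=1` line.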
[-- Attachment #3: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2736 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of two mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl, and able to use an alternative config for your guest
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to build with:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
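For hosts with several guests, steps (ii) and (iv) can be planned in a
loop. The sketch below is a dry run with hypothetical guest config
paths: it only prints the commands for review, and leaves the clean
shutdown (iii) and boot checks (v) to the operator:

```shell
# Dry-run sketch: for each guest config, print the conversion and
# creation commands from steps (ii) and (iv). Nothing is executed.
SHIM=/usr/lib/xen/boot/xen-vixen.gz
plan=""
for cfg in /etc/xen/guest-a.cfg /etc/xen/guest-b.cfg; do
    newcfg="${cfg%.cfg}.with-shim-cfg"
    plan="${plan}./pvshim-converter.pl --shim=$SHIM $cfg $newcfg
xl create $newcfg
"
done
printf '%s' "$plan"
```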
[-- Attachment #4: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibilities. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Within each guest, guest userspace can read all of that guest's
memory; a within-guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Within each guest, guest userspace can read all of that guest's
memory; a within-guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10. We expect to have
backports to 4.9 and 4.8 available within a few working days.
[-- Attachment #5: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print STDERR "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
An older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
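The converter above shells out to python2 (the `open P` pipeline)
because xl config files are Python syntax. On hosts without python2,
the same parse can be sketched with python3; `parse_xl_cfg` here is a
hypothetical helper, not part of the shipped script:

```shell
# Sketch: evaluate an xl config with python3 and emit it as JSON,
# dropping internal names such as __builtins__, as the python2
# snippet in the script does.
parse_xl_cfg () {
    python3 -c '
import sys, json
scope = {}
exec(sys.argv[1], scope)  # xl config files are Python syntax
print(json.dumps({k: v for k, v in scope.items()
                  if not k.startswith("_")}))
' "$(cat "$1")"
}

# demonstrate on a throwaway config
cat > /tmp/demo-xl.cfg <<'EOF'
name = "demo"
kernel = "/boot/vmlinuz-demo"
EOF
parse_xl_cfg /tmp/demo-xl.cfg
```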
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-17 17:13 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-01-17 17:13 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 9157 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 9
Information leak via side effects of speculative execution
UPDATES IN VERSION 9
====================
"Stage 1" pagetable isolation (PTI) Meltdown fixes for Xen are
available.
"Comet" updates to shim code (4.10 branch):
* Include >32vcpu workaround in shim branch so that all shim
guests can boot without hypervisor changes.
* Fix shim build on systems whose find(1) lacks -printf
* Place shim trampoline at page 0x1 to avoid having 0 mapped
(4.8 "Comet" users are using the 4.10 shim and may want to update.)
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) goes up with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside
of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by page-table isolation ("PTI"); see Resolution
below. Alternatively, SP3 can be mitigated by running guests in HVM
or PVH mode.
(Within-guest attacks are still possible unless the guest OS has also
been updated with an SP3 mitigation series such as KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10 and Xen 4.8. Please read
README.which-shim to determine which shim is suitable for you.
$ sha256sum xsa254*/*
1cba14ff83844d001d6c8a74afc3f764f49182cc7a06bb4463548450ac96cc2f xsa254/README.comet
cddd78cd7a00df9fa254156993f0309cea825d600f5ad8b36243148cf686bc9b xsa254/README.pti
3ef42381879befc84aa78b67d3a9b7b0cd862a2ffa445810466e90be6c6a5e86 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
RESOLUTION
==========
These are hardware bugs, so technically speaking they cannot be
properly fixed in software. However, it is possible in many cases to
provide patches to software to work around the problems.
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
SP3 can be mitigated by page-table isolation ("PTI").
We have a "stage 1" implementation. It allows 64-bit PV guests to be
run natively while restricting what can be accessed via SP3 to the Xen
stack of the current pcpu (which may contain remnants of information
from other guests, but should be much more difficult to attack
reliably).
Unfortunately these "stage 1" patches incur a non-negligible
performance overhead; about equivalent to the "PV shim" approaches
above. Moving to plain HVM or PVH guests is recommended where
possible. For more information on that, see below.
Patches for the "stage-1" PTI implementation are available in the Xen
staging-NN branches for each Xen revision. See README.pti for
specific revisions.
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaX4QSAAoJEIP+FMlX6CvZubQH/iuxfjnW24mzMX+hVughCH5Q
PKoZiNDnKMoWCzztrRjMNNcXRFcLAo+IU/+jWdjytJr5ISvNtICPtU6mzRTduqRe
KwfvOxrX8bfkoxJWdM7g4ux6sGTNKGS27+HaJYHNBypPexmwQwb/GBJnp+Yj+TRJ
0p+OGvN/F+gVBrOm17rD2/NE2jwDLa3WAX/oS12WaTJtwvnnFjTKmNAKj4XU3FRs
PMZdmE6Iimix5rA6YlYLmmsVrS+kD9B7SSU2CRX0wqOQcFpLn1ZM1QXQ7ux7p9+I
bAE7EMrA28ZJ+TS8H+1AYYL8e8xvo2/KIXPjEKsEAEr1nXIEOciSuVjHByvTGbQ=
=2SAx
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 2896 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
Versions for Xen 4.8 and 4.10 are available.
What you will need
------------------
* You will need the xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-3
- For 4.8: 4.8.3pre-shim-comet-2 and 4.10.0-shim-comet-3
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Build instructions: 4.8
-----------------------
The code for the shim itself is not backported to 4.8. 4.8 users
should use a shim built from 4.10-based source code; this can simply
be dropped into a Xen 4.8 installation.
1. Build a 4.8+ system with support for running PVH, and for pvshim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.8.3pre-shim-comet-2
Do a build and install as normal.
2. Build a 4.10+ system to be the shim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
./configure
make -C tools/firmware/xen-dir
Then install the shim executable where the 4.8 PV shim mode tools
expect to find it:
cp tools/firmware/xen-dir/xen-shim /usr/lib/xen/boot/xen-shim
cp tools/firmware/xen-dir/xen-shim /usr/local/lib/xen/boot/xen-shim
This step is only needed to boot guests in "PVH with PV shim"
mode; it is not needed when booting PVH-supporting guests as PVH.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
* There is no need to reboot the host.
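The config edits above can be scripted. A minimal sketch (the helper
name and file paths here are hypothetical, and real configs may format
the 'builder' key differently, so review the output before use):

```shell
# Hypothetical helper: turn a PV xl config into a PVH (or PVH-shim)
# config by dropping any 'builder' line and appending the new keys.
pv_to_pvh () {
    # $1 = input config, $2 = output config, $3 = "shim" for pvshim mode
    grep -v '^[[:space:]]*builder' "$1" > "$2"
    printf 'type="pvh"\n' >> "$2"
    if [ "$3" = shim ]; then
        printf 'pvshim=1\n' >> "$2"
    fi
}

# Example with a stand-in config:
printf 'name="guest1"\nbuilder="generic"\nmemory=512\n' > /tmp/guest1.cfg
pv_to_pvh /tmp/guest1.cfg /tmp/guest1-pvh.cfg
pv_to_pvh /tmp/guest1.cfg /tmp/guest1-shim.cfg shim
```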
[-- Attachment #3: xsa254/README.pti --]
[-- Type: application/octet-stream, Size: 2280 bytes --]
Xen page-table isolation (XPTI)
===============================
Summary
-------
This README gives references for one of three mitigation strategies
for Meltdown.
This series is a first-class pagetable isolation mitigation series for
Xen. It is available for Xen 4.6 through Xen 4.10 (and later).
Precise git commits are as follows:
4.10:
7cccd6f748ec724cf9408cec6b3ec8e54a8a2c1f x86: allow Meltdown band-aid to be disabled
234f481337ea1a93db968d614649a6bdfdc8418a x86: Meltdown band-aid against malicious 64-bit PV guests
57dc197cf0d36c56ba1d9d32c6a1454bb52605bb x86/mm: Always set _PAGE_ACCESSED on L4e updates
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
dc7d46580d9c633a59be1c3776f79c01dd0cb98b x86: allow Meltdown band-aid to be disabled
1e0974638d65d9b8acf9ac7511d747188f38bcc3 x86: Meltdown band-aid against malicious 64-bit PV guests
87ea7816247090e8e5bc5653b16c412943a058b5 x86/mm: Always set _PAGE_ACCESSED on L4e updates
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
31d38d633a306b2b06767b5a5f5a8a00269f3c92 x86: allow Meltdown band-aid to be disabled
1ba477bde737bf9b28cc455bef1e9a6bc76d66fc x86: Meltdown band-aid against malicious 64-bit PV guests
049e2f45bfa488967494466ec6506c3ecae5fe0e x86/mm: Always set _PAGE_ACCESSED on L4e updates
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
e19d0af4ee2ae9e42a85db639fd6848e72f5658b x86: allow Meltdown band-aid to be disabled
e19517a3355acaaa2ff83018bc41e7fd044161e5 x86: Meltdown band-aid against malicious 64-bit PV guests
9b76908e6e074d7efbeafe6bad066ecc5f3c3c43 x86/mm: Always set _PAGE_ACCESSED on L4e updates
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
44ad7f6895da9861042d7a41e635d42d83cb2660 x86: allow Meltdown band-aid to be disabled
91dc902fdf41659c210329d6f6578f8132ee4770 x86: Meltdown band-aid against malicious 64-bit PV guests
a065841b3ae9f0ef49b9823cd205c79ee0c22b9c x86/mm: Always set _PAGE_ACCESSED on L4e updates
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
[-- Attachment #4: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2738 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl and able to use an alternative guest config
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
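Before building, it may help to confirm that the sidecar toolchain is
present. A sketch that only reports what is installed (package names
vary by distribution; "mformat" is normally in the package "mtools"):

```shell
# Report which of the tools needed for sidecar generation are
# installed; this only reports, it does not install anything.
report=/tmp/vixen-prereqs.txt
: > "$report"
for tool in grub-mkrescue xorriso mformat; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool" >> "$report"
    else
        echo "MISSING: $tool" >> "$report"
    fi
done
# pvshim-converter.pl additionally needs the perl JSON module.
if perl -MJSON -e 1 2>/dev/null; then
    echo "found: perl JSON" >> "$report"
else
    echo "MISSING: perl JSON" >> "$report"
fi
cat "$report"
```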
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need a bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to use the following instead:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #5: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibilities. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Within each guest, guest userspace can read all of guest memory,
and a within-guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Within each guest, guest userspace can read all of guest memory,
and a within-guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10. We expect to have
backports to 4.9 and 4.8 available within a few working days.
[-- Attachment #6: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
An older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
[-- Attachment #7: Type: text/plain, Size: 157 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-01-18 18:38 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-01-18 18:38 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 13074 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 10
Information leak via side effects of speculative execution
UPDATES IN VERSION 10
=====================
Provided a summary table for the various Meltdown options.
Note that in XSA-254 v9's Updates section we said
* Include >32vcpu workaround in shim branch ...
but this workaround is for guests with 32 or *fewer* vcpus; guests
with more will still need the L0 hypervisor patched and rebooted.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause the logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) increases with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside
of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, processors from both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1 and SP2.
SP3 can be mitigated by page-table isolation ("PTI").
See Resolution below.
SP3 can, alternatively, be mitigated by running guests in HVM or PVH
mode. (Within-guest attacks are still possible unless the guest OS
has also been updated with an SP3 mitigation series such as
KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10 and Xen 4.8. Please read
README.which-shim to determine which shim is suitable for you.
RESOLUTION
==========
These are hardware bugs, so technically speaking they cannot be
properly fixed in software. However, it is possible in many cases to
provide patches to software to work around the problems.
There is no available resolution for SP1. A solution may be available
in the future.
We are working on patches which mitigate SP2 but these are not
currently available. Given that the vulnerabilities are now public,
these will be developed and published in public, initially via
xen-devel.
SP3 can be mitigated by page-table isolation ("PTI").
We have a "stage 1" implementation. It allows 64-bit PV guests to be
run natively while restricting what can be accessed via SP3 to the Xen
stack of the current pcpu (which may contain remnants of information
from other guests, but should be much more difficult to attack
reliably).
Unfortunately these "stage 1" patches incur a non-negligible
performance overhead; about equivalent to the "PV shim" approaches
above. Moving to plain HVM or PVH guests is recommended where
possible. For more information on that, see below.
Patches for the "stage-1" PTI implementation are available in the Xen
staging-NN branches for each Xen revision. See README.pti for
specific revisions.
SP3 MITIGATION OPTIONS SUMMARY TABLE FOR 64-bit X86 PV GUESTS
=============================================================
Everything in this section applies to 64-bit PV x86 guests only.
              Xen PTI        Use PVH         Use HVM        PVH shim       HVM shim
              "stage 1"                                     "Comet"        "Vixen"

How to use    README.pti     type="pvh"      type="hvm"     README.comet   README.vixen

Guest         All            Linux 4.11+     Most[4]        All            All
support                      ?unikernels?[3]

Xen           4.6+           4.10+           All            4.10, 4.8      All
versions                                                    4.8-comet[1]

Testing       Limited        4.10: Good      Very good      Moderate       Very good
status        Very new       4.8: Moderate

Performance   Fair           Excellent       Varies[4]      Fair           Fair

Hypervisor    Needed         No need         No need        No need        No need
changes

SP3 guest     Substantially  Protected       Protected      Protected      Protected
to host       protected

SP3 within    Protected      Guest           Guest          Vulnerable     Vulnerable
guest                        patches         patches        [5]            [5]

SP3 from      Protected      n/a; vuln.      n/a; vuln.     n/a; vuln.     n/a; vuln.
dom0 user                    [9]             [9]            [9]            [9]

Device model  No dm          No dm           Qemu           No dm          Qemu

Config        None           type="pvh"      type="hvm"/    type="pvh"     Tool to rewrite
change                                       builder="hvm"  pvshim=1       Needs "sidecar"

Within-guest  None           Should be       Disks+net      None           None
changes?                     none            may change

Extra RAM     V. slight      None            ~9Mb/guest     >=~20Mb/guest  >=~29Mb/guest
use

Migration     OK             OK              OK[4]          OK             Unsupported[2]

Guest mem     OK             OK              OK             Broken[2]      Unsupported[2]
adj

vcpu hotplug  OK             OK              OK             OK             Unsupported[2]

Solution      Indefinite     Indefinite      Indefinite     Indefinite     Limited
lifetime                                                    [7]            [6]
[1] PVH is supported in Xen 4.8 only with the 4.8 "Comet" security
release branch.
[2] Some features in PVH/HVM shim guests are not inherently broken,
but buggy in the currently available versions. These may be fixed in
future proper releases of the same feature.
[3] Most unikernels have Xen support based on a version of mini-os.
mini-os master can boot PVH, but this support is very recent.
[4] Some guests which have support for Xen PV fail to boot properly in
Xen HVM. Some such guests can be made to boot HVM by disabling the
PV-on-HVM support entirely in the guest or in Xen; in that case the
guest may work but IO performance will be poor. Some PV-supporting
guests can boot as HVM, with PV drivers, but fail when migrated.
[5] The Comet and Vixen shim hypervisors direct-map all of their
"physical" memory, and that direct-map can be accessed using Meltdown
by unprivileged processes in the guest. So the guest is vulnerable to
within-guest Meltdown attacks and the guest operating system cannot
protect itself.
[6] "Vixen" HVM shim is not expected to be incorporated in future Xen
stable releases. At some point, support for it will be withdrawn.
However, HVM shim functionality may be available in a future Xen 4.10
stable point release and would then probably be usable with the
existing conversion script provided in this advisory.
[7] The lifetime of the special Comet branches is limited, but we will
not desupport them until some time after the same functionality is in
appropriate Xen stable point releases.
[8] The 64-bit x86 PV guest ABI precludes a guest from mapping its
kernel and userspace in the same address space. So these guests are
inherently immune to within-guest Meltdown attacks, without
within-guest patching. (This applies to 64-bit x86 PV guests only.)
[9] It is not possible to run dom0 as HVM. dom0 PVH is a planned
enhancement which is not yet available even in preview form.
ATTACHMENTS
===========
$ sha256sum xsa254*/*
1cba14ff83844d001d6c8a74afc3f764f49182cc7a06bb4463548450ac96cc2f xsa254/README.comet
cddd78cd7a00df9fa254156993f0309cea825d600f5ad8b36243148cf686bc9b xsa254/README.pti
3ef42381879befc84aa78b67d3a9b7b0cd862a2ffa445810466e90be6c6a5e86 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJaYOmqAAoJEIP+FMlX6CvZ9yQH/RrybJAcL4F48T8OoNIsPjz7
YCdKxAWLSugLM0oQ1AcWvF6oSoKrqzJndInmRlpK2WFxu3xsRSZepgwpLQ8uyr5J
BGfyqdT5JbswvaO9xCnl679Hi6iPnKsVEOtOQWHHT5h8B6A1kP5B80bW0u2Y6VP4
EiTF4UbGy/jrpfLLiNG4p5fmQxC5QCuUEUm4jKRzMq9DzAZTMQVnSzMyPruwGYeP
3UjgIQ1crMRdeBsUts6AF8FW355w53I1vwXnXZqVq+V65jlwurXaC6n5CJRKiItu
PYWVSdOBKCrUbvBf6hOPMBrz5259IXVBcukzsuobEP2S/yK9AyVG+bjXU3fdZLY=
=FFWp
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 2896 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
Versions for Xen 4.8 and 4.10 are available.
What you will need
------------------
* You will need the xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-3
- For 4.8: 4.8.3pre-shim-comet-2 and 4.10.0-shim-comet-3
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Build instructions: 4.8
-----------------------
The code for the shim itself is not backported to 4.8. 4.8 users should
use a shim built from 4.10-based source code; this can be simply
dropped into a Xen 4.8 installation.
1. Build a 4.8+ system with support for running PVH, and for pvshim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.8.3pre-shim-comet-2
Do a build and install as normal.
2. Build a 4.10+ system to be the shim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
./configure
make -C tools/firmware/xen-dir
And then install the shim executable where
the 4.8 pv shim mode tools expect to find it
cp tools/firmware/xen-dir/xen-shim /usr/lib/xen/boot/xen-shim
cp tools/firmware/xen-dir/xen-shim /usr/local/lib/xen/boot/xen-shim
This step is only needed to boot guests in "PVH with PV shim"
mode; it is not needed when booting PVH-supporting guests as PVH.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
* There is no need to reboot the host.
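As a concrete sketch of the first conversion described above, a minimal
converted config might look like this (the guest name and kernel path
are illustrative, not from this advisory):

```
# PV guest config converted to PVH-with-shim ("Comet") mode
name   = "guest1"
kernel = "/boot/vmlinuz-guest"
memory = 1024
# builder="generic" removed; replaced by:
type   = "pvh"
pvshim = 1
```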
[-- Attachment #3: xsa254/README.pti --]
[-- Type: application/octet-stream, Size: 2280 bytes --]
Xen page-table isolation (XPTI)
===============================
Summary
-------
This README gives references for one of three mitigation strategies
for Meltdown.
This series is a first-class mitigation: a pagetable isolation series
for Xen. It is available for Xen 4.6 to Xen 4.10 and later.
Precise git commits are as follows:
4.10:
7cccd6f748ec724cf9408cec6b3ec8e54a8a2c1f x86: allow Meltdown band-aid to be disabled
234f481337ea1a93db968d614649a6bdfdc8418a x86: Meltdown band-aid against malicious 64-bit PV guests
57dc197cf0d36c56ba1d9d32c6a1454bb52605bb x86/mm: Always set _PAGE_ACCESSED on L4e updates
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
dc7d46580d9c633a59be1c3776f79c01dd0cb98b x86: allow Meltdown band-aid to be disabled
1e0974638d65d9b8acf9ac7511d747188f38bcc3 x86: Meltdown band-aid against malicious 64-bit PV guests
87ea7816247090e8e5bc5653b16c412943a058b5 x86/mm: Always set _PAGE_ACCESSED on L4e updates
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
31d38d633a306b2b06767b5a5f5a8a00269f3c92 x86: allow Meltdown band-aid to be disabled
1ba477bde737bf9b28cc455bef1e9a6bc76d66fc x86: Meltdown band-aid against malicious 64-bit PV guests
049e2f45bfa488967494466ec6506c3ecae5fe0e x86/mm: Always set _PAGE_ACCESSED on L4e updates
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
e19d0af4ee2ae9e42a85db639fd6848e72f5658b x86: allow Meltdown band-aid to be disabled
e19517a3355acaaa2ff83018bc41e7fd044161e5 x86: Meltdown band-aid against malicious 64-bit PV guests
9b76908e6e074d7efbeafe6bad066ecc5f3c3c43 x86/mm: Always set _PAGE_ACCESSED on L4e updates
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
44ad7f6895da9861042d7a41e635d42d83cb2660 x86: allow Meltdown band-aid to be disabled
91dc902fdf41659c210329d6f6578f8132ee4770 x86: Meltdown band-aid against malicious 64-bit PV guests
a065841b3ae9f0ef49b9823cd205c79ee0c22b9c x86/mm: Always set _PAGE_ACCESSED on L4e updates
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
[-- Attachment #4: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2738 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl and be able to use an alternative version of your guest config
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to build with:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
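For reference, the converter does not alter the guest's existing
settings; it appends an HVM stanza to the copy of the config. Based on
the pvshim-converter.pl script attached to this advisory, the appended
lines look like this (the sidecar and device-model wrapper paths are
derived from the guest name):

```
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='/var/lib/xen/pvshim-sidecars/GUEST.dm'
device_model_args_hvm=['-cdrom','/var/lib/xen/pvshim-sidecars/GUEST.iso']
boot='c'
serial='pty'
```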
[-- Attachment #5: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibilities. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10. We expect to have
backports to 4.9 and 4.8 available within a few working days.
[-- Attachment #6: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
Also, older versions of grub-mkrescue have a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-02-23 19:17 Xen.org security team
0 siblings, 0 replies; 10+ messages in thread
From: Xen.org security team @ 2018-02-23 19:17 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 14515 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 11
Information leak via side effects of speculative execution
UPDATES IN VERSION 11
=====================
Information provided about the mitigation for Spectre variant 2.
Mention whether CPU hardware virtualisation extensions are required
in the SP3 mitigations summary table.
An additional patch "x86: fix GET_STACK_END" is required to fix a
possible build failure in the PTI patches. README.pti updated
accordingly.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause the logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) increases with the variant
number. For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes. For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1.
SP2 can be mitigated by a combination of new microcode and compiler
and hypervisor changes. See Resolution below.
SP3 can be mitigated by page-table isolation ("PTI").
See Resolution below.
SP3 can, alternatively, be mitigated by running guests in HVM or PVH
mode. (Within-guest attacks are still possible unless the guest OS
has also been updated with an SP3 mitigation series such as
KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10 and Xen 4.8. Please read
README.which-shim to determine which shim is suitable for you.
RESOLUTION
==========
These are hardware bugs, so technically speaking they cannot be
properly fixed in software. However, it is possible in many cases to
provide patches to software to work around the problems.
There is no available resolution for SP1. A solution may be available
in the future.
SP2 can be mitigated on x86 by combinations of new CPU microcode and
new hypervisor code. The required hypervisor changes for Xen 4.6,
4.7, 4.8, 4.9 and 4.10 are detailed in the attached README.bti.
For AMD hardware, and for Intel hardware pre-dating the Skylake
microarchitecture, the hypervisor changes alone are sufficient to
mitigate the issue for Xen itself. No microcode updates are required.
For the Intel Skylake microarchitecture the hypervisor changes are
insufficient to protect Xen without appropriate new microcode.
Microcode updates are required in any event to guard against one guest
attacking another.
Consult Intel, your hardware vendor, or your dom0 OS distributor for the
microcode updates.
Additionally, compiler support for `indirect thunk' is required.
Again, without appropriate compiler support, the hypervisor patches
are insufficient. Consult your compiler distributor.
SP2 is mitigated on ARM 32-bit by a set of changes to the hypervisor
alone. SP2 can be mitigated on ARM 64-bit (aarch64) by a combination
of new PSCI firmware and new hypervisor code. The required hypervisor
changes for Xen 4.6, 4.7, 4.8, 4.9 and 4.10 are detailed in the
attached README.bti.
For ARM 32-bit these changes are complete.
For ARM 64-bit the hypervisor changes are still in development and are
expected to be available soon.
SP3 can be mitigated by page-table isolation ("PTI").
We have a "stage 1" implementation. It allows 64-bit PV guests to be
run natively while restricting what can be accessed via SP3 to the Xen
stack of the current pcpu (which may contain remnants of information
from other guests, but should be much more difficult to attack
reliably).
Unfortunately these "stage 1" patches incur a non-negligible
performance overhead; about equivalent to the "PV shim" approaches
above. Moving to plain HVM or PVH guests is recommended where
possible. For more information on that, see below.
Patches for the "stage-1" PTI implementation are available in the Xen
staging-NN branches for each Xen revision. See README.pti for
specific revisions.
SP3 MITIGATION OPTIONS SUMMARY TABLE FOR 64-bit X86 PV GUESTS
=============================================================
Everything in this section applies to 64-bit PV x86 guests only.
               Xen PTI        Use PVH          Use HVM       PVH shim      HVM shim
               "stage 1"                                     "Comet"       "Vixen"
How to use     README.pti     type="pvh"       type="hvm"    README.comet  README.vixen
Guest          All            Linux 4.11+      Most[4]       All           All
support                       ?unikernels?[3]
Xen            4.6+           4.10+            All           4.10, 4.8     All
versions                      4.8-comet[1]
Testing        Limited        4.10: Good       Very good     Moderate      Very good
status         Very new       4.8: Moderate
Performance    Fair           Excellent        Varies[4]     Fair          Fair
Hypervisor     Needed         No need          No need       No need       No need
changes
SP3 guest      Substantially  Protected        Protected     Protected     Protected
to host        protected
SP3 within     Protected      Guest            Guest         Vulnerable    Vulnerable
guest                         patches          patches       [5]           [5]
SP3 from       Protected      n/a; vuln.       n/a; vuln.    n/a; vuln.    n/a; vuln.
dom0 user                     [9]              [9]           [9]           [9]
Device model   No dm          No dm            Qemu          No dm         Qemu
Config change  None           type="pvh"       type="hvm"/   type="pvh"    Tool to rewrite
                                               builder="hvm" pvshim=1      Needs "sidecar"
Within-guest   None           Should be        Disks+net     None          None
changes?                      none             may change
CPU hw virt    Not needed     Needed           Needed        Needed        Needed
feature (VT-x)
Extra RAM use  V. slight      None             ~9MiB/guest   >=~20MiB/guest >=~29MiB/guest
Migration      OK             OK               OK[4]         OK            Unsupported[2]
Guest mem adj  OK             OK               OK            Broken[2]     Unsupported[2]
vcpu hotplug   OK             OK               OK            OK            Unsupported[2]
Solution       Indefinite     Indefinite       Indefinite    Indefinite    Limited
lifetime                                                     [7]           [6]
[1] PVH is supported in Xen 4.8 only with the 4.8 "Comet" security
release branch.
[2] Some features in PVH/HVM shim guests are not inherently broken,
but buggy in the currently available versions. These may be fixed in
future proper releases of the same feature.
[3] Most unikernels have Xen support based on a version of mini-os.
mini-os master can boot PVH. But this is very recent.
[4] Some guests which have support for Xen PV fail to boot properly in
Xen HVM. Some such guests can be made to boot HVM by disabling the
PV-on-HVM support entirely in the guest or in Xen; in that case the
guest may work but IO performance will be poor. Some PV-supporting
guests can boot as HVM, with PV drivers, but fail when migrated.
[5] The Comet and Vixen shim hypervisors direct-map all of their
"physical" memory, and that direct-map can be accessed using Meltdown
by unprivileged processes in the guest. So the guest is vulnerable to
within-guest Meltdown attacks and the guest operating system cannot
protect itself.
[6] "Vixen" HVM shim is not expected to be incorporated in future Xen
stable releases. At some point, support for it will be withdrawn.
However, HVM shim functionality may be available in a future Xen 4.10
stable point release and would then probably be useable with the
existing conversion script provided in this advisory.
[7] The lifetime of the special Comet branches is limited, but we will
not desupport them until some time after the same functionality is in
appropriate Xen stable point releases.
[8] The 64-bit x86 PV guest ABI precludes a guest from mapping its
kernel and userspace in the same address space. So these guests are
inherently immune to within-guest Meltdown attacks, without
within-guest patching. (This applies to 64-bit x86 PV guests only.)
[9] It is not possible to run dom0 as HVM. dom0 PVH is a planned
enhancement which is not yet available even in preview form.
ATTACHMENTS
===========
$ sha256sum xsa254*/*
c5f2d8f87169edc9be890416a4d261cfc11d9f8d898d83a8922360b210676015 xsa254/README.bti
1cba14ff83844d001d6c8a74afc3f764f49182cc7a06bb4463548450ac96cc2f xsa254/README.comet
208453583ee3c7bb427aa2f70fc5fdc687ba084341129624e511eb6c064fb801 xsa254/README.pti
3ef42381879befc84aa78b67d3a9b7b0cd862a2ffa445810466e90be6c6a5e86 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
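The hash list above can be checked mechanically with `sha256sum -c`
after saving the attachments. A minimal sketch (the `xsa254.sums`
filename is an example, not part of the advisory):

```shell
# Save the "<hash>  <file>" lines from the list above into a file
# (here called xsa254.sums), then verify the extracted attachments.
# sha256sum prints "<file>: OK" per file and exits non-zero on any
# mismatch or missing file.
sha256sum -c xsa254.sums
```

Run this from the directory containing the xsa254/ attachment
directory, so the relative paths in the hash list resolve.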
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJakGiYAAoJEIP+FMlX6CvZTo0H/jmtssoZhVRYDbi5UP07eWla
ZefMHnwagNUeMEf4rZgWoGSuftiRPMXH73V4r02SDfIauC/7qTPJTxg3ozBLP6RK
d3bQtdb+Hr/i5mtYnD/ubjmg+VgB04Q4CF5Ikgc8Yx8qiUuSxo5HTHQV72a175eZ
ze6xRBvUSt4hw25X7kNGYpkpN1Hoyydv2/pHPdkuAfP90ZTlxPq+UWDwa37Z55ON
E/hVjBcvsnpvmgfztablVz5kFA+6O1aXzFuouNCQz0x62necQCrRgz9T173dlB1+
uQlvNN8gXV513ePaYjVP3B7c7P3QjMszX4WlK498KZTwo4ck+h0XtYdLtPAAZrg=
=2SNf
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.bti --]
[-- Type: application/octet-stream, Size: 22223 bytes --]
Branch Target Injection (BTI)
===============================
Summary
-------
This README gives references for the mitigation for Spectre v2.
Determining whether the mitigation is enabled on x86
----------------------------------------------------
In general, compiler and CPU microcode updates are also required.
When the mitigation is fully active, on AMD hardware,
Xen prints at least the following messages:
Speculative mitigation facilities:
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk LFENCE
On pre-Skylake Intel hardware:
Speculative mitigation facilities:
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk RETPOLINE
On Skylake (or later) Intel hardware:
Speculative mitigation facilities:
Hardware features: IBRS/IBPB STIBP
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk JMP, Others: IBRS+ IBPB
Note, however, that on release builds none of these messages are
visible by default; "loglvl=all" needs to be passed to see all of
them. Production systems should not be run with "loglvl=all", as that
exposes a log spew (denial of service) vulnerability to guests.
"loglvl=info" (which is generally preferable) is sufficient to see
BTI mitigations: ...
listing the mitigations Xen actually uses.
If you are not sure whether your Intel CPU is pre- or post-Skylake,
look up your CPU model number (printed in /proc/cpuinfo on Linux) on
Wikipedia.
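As a quick check, the relevant line can be pulled out of a saved copy
of the hypervisor log; on a live host, `xl dmesg` run as root is the
usual way to capture it. The `show_bti` helper name below is made up
for illustration, and the sample log text is the pre-Skylake Intel
message quoted above:

```shell
# Print the "BTI mitigations" line from a Xen log read on stdin.
show_bti() {
  grep -o 'BTI mitigations:.*' || echo 'no BTI mitigation line found'
}

# Example with the sample log text; on a real host: xl dmesg | show_bti
printf '%s\n' \
  'Speculative mitigation facilities:' \
  '  Compiled-in support: INDIRECT_THUNK' \
  '  BTI mitigations: Thunk RETPOLINE' | show_bti
# prints: BTI mitigations: Thunk RETPOLINE
```

Remember that the line is only present in the log at "loglvl=info" or
higher, as described above.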
Precise git commits
-------------------
4.10:
3181472a5ca45ae5e77abbcf024d025d9ba79ced x86/idle: Clear SPEC_CTRL while idle
5644514050b9ae7d75cdd95fd07912b9930cae08 x86/cpuid: Offer Indirect Branch Controls to guests
db12743f2d24fc59d5b9cefc15eb3d56cdaf549d x86/ctxt: Issue a speculation barrier between vcpu contexts
bc0e599a83d17f06ec7da1708721cede2df8274e x86/boot: Calculate the most appropriate BTI mitigation to use
fc81946ceaae2c27fce2ba0f3f29fa9df3975951 x86/entry: Avoid using alternatives in NMI/#MC paths
ce7d7c01685569d9ff1f971c0f0622573bfe8bf3 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
a695f8dce7c3f137f61c8c8a880b24b1b4cf319c x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
92efbe865813d84873a0e7262b1fa414842306b6 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
8baba874d6c76c1d6dd69b1d9aa06abdc344a1f5 x86/migrate: Move MSR_SPEC_CTRL on migrate
79891ef9442acb998f354b969e7302d81245ab0b x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
641c11ef293c7f3a58c1856138835c06e09d6b07 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
65ee6e043a6dc61bece75a9dfe24c7ee70c6597c x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
129880dd8f28bc728f93e3aad4675622c1ee2aad x86/feature: Definitions for Indirect Branch Controls
c513244d8e5b8aa0326c6f2d5fb2382811c97d6d x86: Introduce alternative indirect thunks
0e12c2c881aa12016bb659ab1eb4c7289244b3e7 x86/amd: Try to set lfence as being Dispatch Serialising
6aaf353f2ecbe8ae57e16812a6d74a4f089def3a x86/boot: Report details of speculative mitigations
32babfc19ad3a3123f8ed4466df3c79492a2212b x86: Support indirect thunks from assembly code
47bbcb2dd1291d61062fe58da807010631fe1b3a x86: Support compiling with indirect branch thunks
8743fc2ef7d107104c17b773eadee15fefa64e53 common/wait: Clarifications to wait infrastructure
1830b20b6b83be38738784ea162d62fcf85f3178 x86/entry: Erase guest GPR state on entry to Xen
ab95cb0d948fdc9fcda215fec0526ac902340b14 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
d02ef3d27485e1429ac480cca78ab3636387df23 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
e32f814160c95094da83fbc813b45eca42d5397a x86: Introduce a common cpuid_policy_updated()
c534ab4e940ae3fbddf0b4840c3549c03654921f x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
be3138b6f65955196d67c1d54aea3d6a3bf33934 x86/alt: Introduce ALTERNATIVE{,_2} macros
79012ead937f0533ec591c4ece925e4d23568874 x86/alt: Break out alternative-asm into a separate header file
bbd093c5033d87c0043cf90aa782efdc141dc0e7 xen/arm32: entry: Document the purpose of r11 in the traps handler
a69a8b5fdc9cc90aa4faf522c355abd849f11001 xen/arm32: Invalidate icache on guest exist for Cortex-A15
f167ebf6b33c4dbdb0135c350c0d927980191ac5 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
c4c0187839bacadc82a5729cea739e8c485f6c60 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
19ad8a7287298f701b557e55e4be689a702194c0 xen/arm32: entry: Add missing trap_reset entry
3caf32c470f2f7eb3452c8a61d6224d10e56f9a3 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
df7be94f26757a77747bf4fbfb84bbe2a3da3b4f xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
728fadb586a2a14a244dabd70463bcc1654ecc85 xen/arm: cpuerrata: Remove percpu.h include
928112900e5b4a92ccebb2eea11665fd76aa0f0d xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
cae6e1572f39a1906be0fc3bdaf49fe514c6a9c0 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
d1f4283a1d8405a480b4121e1efcfaec8bbdbffa xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
0f7a4faafb2d79920cc63457cfca3e03990af4cc xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
b829d42829c1ff626a02756acae4dd482fc20c9a xen/arm: Introduce enable callback to enable a capabilities on each online CPU
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
88fbabc49158b0b858248fa124ef590c5df7782f x86/PV: correctly count MSRs to migrate
7d5f8b36be149c169215b3afe20e1cfba8456170 x86/idle: Clear SPEC_CTRL while idle
59999aecdad6fc4f446958b65e2869e02530b1a6 x86/cpuid: Offer Indirect Branch Controls to guests
79d519795231110f222a24379e3a43243db6e55f x86/ctxt: Issue a speculation barrier between vcpu contexts
68c76d71e045a4e8510704270fc570fb9d797dfd x86/boot: Calculate the most appropriate BTI mitigation to use
bda328363ffef58c3475105e93016fcac486c5d5 x86/entry: Avoid using alternatives in NMI/#MC paths
a24b7553f92517b3d81cad1ad4798ef74b42055b x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
13a30ba54caa1b33f707137279d27d5cd39e8844 x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
0177bf5d25c66e700e15024913a3bc71c7cf507d x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
2fdee60ec12c238358bff209378c7d91e4817fa7 x86/migrate: Move MSR_SPEC_CTRL on migrate
e57d4d043b0df8f9953b3d211feacc3a54401817 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
1dcfd3951999e875f911fb0513391774af8d5fb4 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
764804938c69b69e1ee369a9b5480e89b18e453a x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
602633eb73ed2d9918da2dae7bebf279a057ea20 x86/feature: Definitions for Indirect Branch Controls
6fef46d6fb9fa4578b97f8d6a0cb240abec48587 x86: Introduce alternative indirect thunks
30b99299d6ea0c5008f5e4f41eb1f48e1ae566ce x86/amd: Try to set lfence as being Dispatch Serialising
447dce891f05c0585ec67c47ed22eb2e073ce0ab x86/boot: Report details of speculative mitigations
29df8a5c4d6271d52231bbecc52a7c3eb38aac13 x86: Support indirect thunks from assembly code
6403b5048d6f1ac5bc8524937b7975f96b597046 x86: Support compiling with indirect branch thunks
628b6af24f9727f201f677a4ad98104c00cc76c1 common/wait: Clarifications to wait infrastructure
237a58b1d0c35201e1e9ed7c32deacf9cd804229 x86/entry: Erase guest GPR state on entry to Xen
f0f7ce5e82b5bd511ef3eed8fe8b8b27a23f4365 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
d6e972508ed6ae84c5a46580af12ebdcb88de702 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
9aaa2088863d63168986f9e69c0f482839a24d80 x86: Introduce a common cpuid_policy_updated()
40f9ae9d0532a3c7dbb2a1e740c2cebe2aeb1d72 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
ade9554f87262b0c6dcc21aca194f3139a31fcfa x86/alt: Introduce ALTERNATIVE{,_2} macros
a0ed0349ff212b41dbfab37141cccb71bc1c3031 x86/alt: Break out alternative-asm into a separate header file
4d01dbc7133e0c55aecb31d95cd461580241c576 xen/arm32: entry: Document the purpose of r11 in the traps handler
22379b6adce0249ffc05a3a7870f2293368337e1 xen/arm32: Invalidate icache on guest exist for Cortex-A15
6e13ad777d331cd534928df720dbf542497231ba xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
0d32237d5f4db419f84da891761abb4f6b1a8f52 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
4ba59bdc26bd69bdd84bcb2bd597fee144e845d9 xen/arm32: entry: Add missing trap_reset entry
2997c5e628dd588ff4adb3733b7f48bb0521a243 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
751c8791d086831f2038fe18217e553f612a5600 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
a2567d6b54b7b187ecc0165021b6dd07dafaf06a xen/arm: cpuerrata: Remove percpu.h include
9f79e8d846e8413c828f5fc7cc6ac733728dff00 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
fba48eff18c02d716c95b92df804a755620be82e xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
3790833ef16b95653424ec9b145e460ec1a56d16 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
50450c1f33dc72f2138a671d738934f796be3318 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
2ec7ccbffc6b788f65e55498e4347c1ee3a44b01 xen/arm: Introduce enable callback to enable a capabilities on each online CPU
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
5938aa17b49595150cade3ddc2c1929ecd0df39a x86/PV: correctly count MSRs to migrate
99ed7863b29ea170e50749fe22991b964cbce6ba x86/idle: Clear SPEC_CTRL while idle
76bdfe894ab2205f597e52448d620982b84565c4 x86/cpuid: Offer Indirect Branch Controls to guests
fee4689c5c60b699f4dea21a21a2ba17887d2f49 x86/ctxt: Issue a speculation barrier between vcpu contexts
c0bfde68ccd941b14a2f0ca54c61a83796156ea6 x86/boot: Calculate the most appropriate BTI mitigation to use
64c1742b206344c51db130b0bb47fc299a1462ca x86/entry: Avoid using alternatives in NMI/#MC paths
86153856f857f786b95ecc4f81260477d75dc15c x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
e09a5c2917506cf9d95d85f65b2df158a494649c x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
ff570a3ee0b42a036df1e8c2b05730192ad4bd90 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
e6bcb416a5f5489366fc20f45fd92a703ad96e15 x86/migrate: Move MSR_SPEC_CTRL on migrate
29e7171e9dd0aa8e35f790157d781dff22f6a970 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
c3d195cd91385531ed12af2576bfedcab3118211 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
532ccf4fd55cfd916f56279a71852585d726ab23 x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
da49e518d79ca6c405a244889cab57ac8ed097cb x86/feature: Definitions for Indirect Branch Controls
ca9583d9e705aaa74da121e920ebf77d9f7995b2 x86: Introduce alternative indirect thunks
479b879a7dd0bbf02920d2f6053d9bee271797ce x86/amd: Try to set lfence as being Dispatch Serialising
2eefd926bbc8217cf511bc096c897ae4c56dd0c2 x86/boot: Report details of speculative mitigations
60c50f2b0bf5d3f894ca428cf4b4374fbea2d082 x86: Support indirect thunks from assembly code
1838e21521497cdfa6d3b1dfac0374bcce717eba x86: Support compiling with indirect branch thunks
5732a8ef2885633cdffc56fe9d8df40f76bfb2c2 common/wait: Clarifications to wait infrastructure
987b08d56cd8d439bdf435099218b96de901199d x86/entry: Erase guest GPR state on entry to Xen
eadcd8318c46f53ed8ee6516ca876271f75930fa x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
ef2464c56e8dab194cd956498c3d5215f1b6b97b x86/entry: Rearrange RESTORE_ALL to restore register in stack order
17bfbc8289c487bcb5f446f79de54869f12786cb x86: Introduce a common cpuid_policy_updated()
499391b50b85d31fa3dd4c427a816e10facb1fe4 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
87cb0e2090fce317c4e6775f343d5caba66f61f1 x86/alt: Introduce ALTERNATIVE{,_2} macros
3efcd7fb40a900bc7d4f9063f2d43ee27b0a5270 x86/alt: Break out alternative-asm into a separate header file
11875b7d5706f8aef86d306a43d7fe3b7011aaa2 xen/arm32: entry: Document the purpose of r11 in the traps handler
1105f3a92df83f3bfcda78d66c4d28458123e1bb xen/arm32: Invalidate icache on guest exist for Cortex-A15
754345c01933f1eed3d1601fa8fdbf62f52c9d80 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
7336d0d2a719d6135b8d02801401e449b0dbbfb6 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
cf95bba7b7406ef1929ea4c6c36388ed43b4f9bb xen/arm32: entry: Add missing trap_reset entry
a586cbd9f0cbb3835de1f8ab4d9a105e08b2ac5a xen/arm32: Add missing MIDR values for Cortex-A17 and A12
6082e3ba8941b3d10c3cb73f445759c19e89afc9 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
6f6786ef0d7f7025860d360f6b1267193ffd1b27 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
44139fed7c794eb4e47a9bb93061e325bd57fe8c xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
cf0b584c8c5030588bc47a3614ad860af7482c53 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
85990bf53addcdb0ce8e458a3d8fad199710ac59 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
946dd2eefae2faeecbeb9662e66935c8070f64f5 xen/arm: Introduce enable callback to enable a capabilities on each online CPU
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
ade3bcafd25883130fc234121ed7416d531e456d x86/PV: correctly count MSRs to migrate
aac4cbe3644738d485d38bd551046d63c00cc670 x86: fix build with older tool chain
68420b47d9b813ca48891b604fab379d40aa594e x86/idle: Clear SPEC_CTRL while idle
e09548d28a1cffafc0fa5ed9f97ac58514491ab8 x86/cpuid: Offer Indirect Branch Controls to guests
be261bd97f7b4fc76db7c11bb3366974f5635a04 x86/ctxt: Issue a speculation barrier between vcpu contexts
327a7836744ca8d7e1cfc6dc476d51d7c63f68ea x86/boot: Calculate the most appropriate BTI mitigation to use
9f08fce3b942180d62bc773cab840fa4533d0a51 x86/entry: Avoid using alternatives in NMI/#MC paths
4a38ec26bafde70f2af36d7bc2bec7f218145982 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
65c9e06429f629249a84d01231be5fa643460547 x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
84d47acc05af516d813f1952e853c4ca2be2adba x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
b7dae55c0eaae6d5a34bfdd3a62fe938673f53cf x86/migrate: Move MSR_SPEC_CTRL on migrate
b2b7fe128f6fbecf54e97cdd2d71923d0a852535 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
c947e1e23d1db17da0dd211b9410f311248b6c13 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
e9220b40c67a6c1eab6b3613f6054adfacea65eb x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
f9616884e16b8028c257c8b01fb12daff7fe3454 x86/feature: Definitions for Indirect Branch Controls
91f7e4627b6597536ded5b8326da3ca504b1772f x86: Introduce alternative indirect thunks
f291c01cd6d405927ceb022bdef6479de8b9fb9a x86/amd: Try to set lfence as being Dispatch Serialising
3cf4e29f8df5fc18f65baa08408a3d7cf3269d03 x86/boot: Report details of speculative mitigations
88602190f698aeace6d7e028954a1349997ee0be x86: Support indirect thunks from assembly code
62a2624e3c6250c6be8a9248c8fe5a3211834d4d x86: Support compiling with indirect branch thunks
c3f8df3df224eeac0e78533644010ed096de7a34 common/wait: Clarifications to wait infrastructure
3877c024ea4916ede177ef0067a081f73ee16c4d x86/entry: Erase guest GPR state on entry to Xen
f0ed5f95cb373fb55d9eb2eb3fe0cba442e80eb2 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
160b53c824011b9ddb89e67f0f682f471335747d x86/entry: Rearrange RESTORE_ALL to restore register in stack order
e1313098e43c41598d5b378e6344d691dcf29f2f x86: Introduce a common cpuid_policy_updated()
9ede1acbe91cb127b23d5e711470025b462f5d50 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
d0cfbe81d01b2ac1dc9d02d70d3249249d5cb5bc x86/alt: Introduce ALTERNATIVE{,_2} macros
d596e6a0a6ddfebbe657d07d0d64159cc4eb7a68 x86/alt: Break out alternative-asm into a separate header file
f50ea840b9a860927c7aca5fa64eb34e14f17164 xen/arm32: entry: Document the purpose of r11 in the traps handler
de3bdaa717002e4ec917bd0494943eb1660d71b8 xen/arm32: Invalidate icache on guest exist for Cortex-A15
766990b0b64336d1b859b6caa36033ec5338d563 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
4ac0229bc5312a01664b747261ee1cc7ea52c4b5 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
bafd63f8be2e8a78c0e85444e4c255e679303282 xen/arm32: entry: Add missing trap_reset entry
d5bb425dac6718d3fba64b863b07d7314c857067 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
003ec3e00a05935ea6a31430da65ee62363900f9 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
fd884d61991cd0de588ae51728cd0602375dfa71 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
50c68df8182bf332525ebf6120d3b1e0fdf77545 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
1bdcc9f7ef438ab9c219a5099726b112b93a4fbe xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
2914ef5753c9328889df314f33bb12ece1bd4fbe xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
62b9706dba3b6a3d9881329bca604216313c82dc xen/arm: Introduce enable callback to enable a capabilities on each online CPU
624abdcf2d30ae48e0653fb511b4c90d3ccdd2af xen/arm: Detect silicon revision and set cap bits accordingly
d7b73edd0fe6bb0c46aa883229f900643b4726e9 xen/arm: cpufeature: Provide an helper to check if a capability is supported
112c49c114ffe37e068fc9f13e960a8f275379d2 xen/arm: Add cpu_hwcap bitmap
a5b0fa4871b0895da203fb2dac16840d24c6be21 xen/arm: Add macros to handle the MIDR
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
0fbf30a7f863139dd0ac556e44f92f5787654847 x86/hvm: Don't corrupt the HVM context stream when writing the MSR record
7e20b9b2ddbb04c6ebb60613b1117e05edc8a5ea x86/PV: correctly count MSRs to migrate
75bdd693033e6dbd6fe5ae235f79961d2f0aa84d x86/idle: Clear SPEC_CTRL while idle
8994cf3cf730422ded6596ecb18dc0d8b6579493 x86/ctxt: Issue a speculation barrier between vcpu contexts
642c6037bba310538b00c0cbb5d91525bd1eed0a x86/boot: Calculate the most appropriate BTI mitigation to use
c25ea9a1393c1eb5d6732ec366baa1091db5e7db x86/entry: Avoid using alternatives in NMI/#MC paths
feba571a5d9586778e0978b8df5b9166275b8680 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
0163087ed6175b00966f4ee991d8c424ad7eb59d x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
44c2666589fefc13049edc874c7ef063823bad90 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
db743b04998a9cbf6866b5f328855239a73220e5 x86/migrate: Move MSR_SPEC_CTRL on migrate
41a5ccec99e81a768a66995f483f424f848f5b5e x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
4e1b9e98dffbc2f29a0a90a4ae43b9e19f323089 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
4d2154914e3f44bae123dc6a93fbb3f1b39c0fee x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
ff4800cac63756f7755e6c251571cd29fd5171eb x86/feature: Definitions for Indirect Branch Controls
2613a1bc709ed4b46af36b0bab3200ed9d3c86d0 x86: Introduce alternative indirect thunks
8335c8aedacd9a50b4796afb533dc8205f2129e4 x86/amd: Try to set lfence as being Dispatch Serialising
ab20c5c804ae814de9bed5f85d55fecc894dc78f x86/boot: Report details of speculative mitigations
9089da9cd06875be6c1022d59a6651cf3919da2e x86: Support indirect thunks from assembly code
8edfc82f67f25137909dda13e6658cba4d1e5d26 x86: Support compiling with indirect branch thunks
af5b61af9e350bcc2c8b0f053682e3c7a700b46f common/wait: Clarifications to wait infrastructure
ec05090403ef4d760fbe701e31afd0f0edc414d5 x86/entry: Erase guest GPR state on entry to Xen
75263f7908a02f5673c25df9bcdaed9fe5f9de5c x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
f7e273a07ccf993063727675589f10da206f1683 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
03c7d2cd1b4bb9868c10c4a3db2b092d211d055a x86/alt: Introduce ALTERNATIVE{,_2} macros
9ce1a7180050353c07321980cf1ed0b0baebf38a x86/alt: Break out alternative-asm into a separate header file
a735c7ae8046024925927406747d4a6ca5bf7fcc x86/microcode: Add support for fam17h microcode loading
9d534c12bf71babb76f1338029841f757191f729 xen/arm32: entry: Document the purpose of r11 in the traps handler
dbb3553130241ae99d444a6a08b7dc32ce90a272 xen/arm32: Invalidate icache on guest exist for Cortex-A15
e54a8c617ceb5ba3481e6aa122ad3f835c1915b8 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
8005ed3ef14c6c8b31a9e1a5ae2576a4b4c66528 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
9a852e0eebc6300585db89669dbade625be18a12 xen/arm32: entry: Add missing trap_reset entry
d779cc1f9c6a5f1d40db9e85f779a79c8eed2ccf xen/arm32: Add missing MIDR values for Cortex-A17 and A12
c93bcf9409e0da14cbc4bf43bf138bfaaecefa2c xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
15adcf395923499eb1eaaca1e67c032956428191 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
d7b8190d3222156e89ccefb7ac74ad0410337097 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
2b1457f955a98007cd51be67f78d1690711e8849 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
a3578802a2882afbbfe730f0227e075b5f42b4a6 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
ee23fcc2539ce8143ae4ce58a7c140fa46a4359b xen/arm: Introduce enable callback to enable a capabilities on each online CPU
56510154bbd21f10080993b7888c1a47a802c3e2 xen/arm: Detect silicon revision and set cap bits accordingly
225e9c7050e8f2694df3dc92c95b06a46e57130e xen/arm: cpufeature: Provide an helper to check if a capability is supported
3c706195565910b961eb5a7e64f34948deb2a545 xen/arm: Add cpu_hwcap bitmap
1222333a8220638747e77b40b6418daa85270265 xen/arm: Add macros to handle the MIDR
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
[-- Attachment #3: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 2896 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
Versions for Xen 4.8 and 4.10 are available.
What you will need
------------------
* You will need the xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-3
- For 4.8: 4.8.3pre-shim-comet-2 and 4.10.0-shim-comet-3
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Build instructions: 4.8
-----------------------
The code for the shim itself is not backported to 4.8. 4.8 users
should use a shim built from 4.10-based source code; this can simply
be dropped into a Xen 4.8 installation.
1. Build a 4.8+ system with support for running PVH, and for pvshim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.8.3pre-shim-comet-2
Do a build and install as normal.
2. Build a 4.10+ system to be the shim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
./configure
make -C tools/firmware/xen-dir
And then install the shim executable where the 4.8 PV shim mode tools
expect to find it:
cp tools/firmware/xen-dir/xen-shim /usr/lib/xen/boot/xen-shim
cp tools/firmware/xen-dir/xen-shim /usr/local/lib/xen/boot/xen-shim
This step is only needed to boot guests in "PVH with PV shim"
mode; it is not needed when booting PVH-supporting guests as PVH.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
* There is no need to reboot the host.
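The PV-to-PVH-shim config edits above can be sketched as a small shell
helper. This is only a sketch (the `pv_to_pvh_shim` name is made up);
always inspect the resulting config by hand before using it:

```shell
# Emit a PVH-shim variant of an xl PV guest config on stdout:
# drop any 'builder' line, then append the two required settings.
pv_to_pvh_shim() {
  grep -v '^[[:space:]]*builder' "$1" || true
  printf 'type="pvh"\npvshim=1\n'
}

# Usage: pv_to_pvh_shim /etc/xen/GUEST.cfg > /etc/xen/GUEST.pvh.cfg
```

For a plain PVH (non-shim) config, the same applies with the
`pvshim=1` line omitted.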
[-- Attachment #4: xsa254/README.pti --]
[-- Type: application/octet-stream, Size: 2536 bytes --]
Xen page-table isolation (XPTI)
===============================
Summary
-------
This README gives references for one of three mitigation strategies
for Meltdown.
This series is a first-class pagetable isolation mitigation for Xen.
It is available for Xen 4.6 through Xen 4.10 and later.
Precise git commits are as follows:
4.10:
05eba93a0a344ec189e71722bd542cdc7949a8a5 x86: fix GET_STACK_END
7cccd6f748ec724cf9408cec6b3ec8e54a8a2c1f x86: allow Meltdown band-aid to be disabled
234f481337ea1a93db968d614649a6bdfdc8418a x86: Meltdown band-aid against malicious 64-bit PV guests
57dc197cf0d36c56ba1d9d32c6a1454bb52605bb x86/mm: Always set _PAGE_ACCESSED on L4e updates
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
f11cf29f274e90e6451aaaa5ab52df2ed63eb30d x86: fix GET_STACK_END
dc7d46580d9c633a59be1c3776f79c01dd0cb98b x86: allow Meltdown band-aid to be disabled
1e0974638d65d9b8acf9ac7511d747188f38bcc3 x86: Meltdown band-aid against malicious 64-bit PV guests
87ea7816247090e8e5bc5653b16c412943a058b5 x86/mm: Always set _PAGE_ACCESSED on L4e updates
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
2cd189eb55af8b04185b473ac2885f76b3d87efe x86: fix GET_STACK_END
31d38d633a306b2b06767b5a5f5a8a00269f3c92 x86: allow Meltdown band-aid to be disabled
1ba477bde737bf9b28cc455bef1e9a6bc76d66fc x86: Meltdown band-aid against malicious 64-bit PV guests
049e2f45bfa488967494466ec6506c3ecae5fe0e x86/mm: Always set _PAGE_ACCESSED on L4e updates
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
b1ae1264baf8617df036a298461a1bb43eae79c1 x86: fix GET_STACK_END
e19d0af4ee2ae9e42a85db639fd6848e72f5658b x86: allow Meltdown band-aid to be disabled
e19517a3355acaaa2ff83018bc41e7fd044161e5 x86: Meltdown band-aid against malicious 64-bit PV guests
9b76908e6e074d7efbeafe6bad066ecc5f3c3c43 x86/mm: Always set _PAGE_ACCESSED on L4e updates
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
44ad7f6895da9861042d7a41e635d42d83cb2660 x86: allow Meltdown band-aid to be disabled
91dc902fdf41659c210329d6f6578f8132ee4770 x86: Meltdown band-aid against malicious 64-bit PV guests
a065841b3ae9f0ef49b9823cd205c79ee0c22b9c x86/mm: Always set _PAGE_ACCESSED on L4e updates
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
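To check whether a checkout already contains one of these fixes,
`git merge-base --is-ancestor` can be used from inside the tree. The
hash in the example is the 4.10 "allow Meltdown band-aid to be
disabled" commit from the list above; the `has_commit` helper name is
made up for illustration:

```shell
# Succeeds (exit status 0) iff the named commit is an ancestor of HEAD.
has_commit() {
  git merge-base --is-ancestor "$1" HEAD
}

# Example, inside a xen.git checkout of a 4.10 branch:
#   has_commit 7cccd6f748ec724cf9408cec6b3ec8e54a8a2c1f \
#     && echo "XPTI band-aid present"
```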
[-- Attachment #5: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2738 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl, and be able to use an alternative config file
for your guest
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to run the following instead:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
[-- Attachment #6: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibility issues. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
The sections below describe the properties of each approach, and who
might want to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory, within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10.  We expect to have
backports to 4.9 and 4.8 available within a few working days.
[-- Attachment #7: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm";
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
Also, older grub-mkrescue versions have a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
chmod 0755, "$dmwrap.new" or die "$dmwrap: chmod: $!";
close Q or die $!;
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}
[-- Attachment #8: Type: text/plain, Size: 157 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* Xen Security Advisory 254 (CVE-2017-5753, CVE-2017-5715, CVE-2017-5754) - Information leak via side effects of speculative execution
@ 2018-02-23 19:35 Xen.org security team
From: Xen.org security team @ 2018-02-23 19:35 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 14600 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-5753,CVE-2017-5715,CVE-2017-5754 / XSA-254
version 12
Information leak via side effects of speculative execution
UPDATES IN VERSION 12
=====================
Corrections to ARM SP2 information:
* ARM 32-bit requires new firmware on some CPUs.
* Provide link to the ARM firmware page, accordingly.
* ARM 32-bit mitigations are complete for Cortex-A CPUs.
We do not have information for other ARM CPUs at this time.
ISSUE DESCRIPTION
=================
Processors give the illusion of a sequence of instructions executed
one-by-one. However, in order to most efficiently use cpu resources,
modern superscalar processors actually begin executing many
instructions in parallel. In cases where instructions depend on the
result of previous instructions or checks which have not yet
completed, execution happens based on guesses about what the outcome
will be. If the guess is correct, execution has been sped up. If the
guess is incorrect, partially-executed instructions are cancelled and
architectural state changes (to registers, memory, and so on)
reverted; but the whole process is no slower than if no guess had been
made at all. This is sometimes called "speculative execution".
Unfortunately, although architectural state is rolled back, there are
other side effects, such as changes to TLB or cache state, which are
not rolled back. These side effects can subsequently be detected by
an attacker to determine information about what happened during the
speculative execution phase. If an attacker can cause speculative
execution to access sensitive memory areas, they may be able to infer
what that sensitive memory contained.
Furthermore, these guesses can often be 'poisoned', such that an
attacker can cause logic to reliably 'guess' the way the attacker
chooses.
This advisory discusses three ways to cause speculative execution to
access sensitive memory areas (named here according to the
discoverer's naming scheme):
"Bounds-check bypass" (aka SP1, "Variant 1", Spectre CVE-2017-5753):
Poison the branch predictor, such that victim code is speculatively
executed past boundary and security checks. This would allow an
attacker to, for instance, cause speculative code in the normal
hypercall / emulation path to execute with wild array indexes.
"Branch Target Injection" (aka SP2, "Variant 2", Spectre CVE-2017-5715):
Poison the branch predictor. Well-abstracted code often involves
calling function pointers via indirect branches; reading these
function pointers may involve a (slow) memory access, so the CPU
attempts to guess where indirect branches will lead. Poisoning this
enables an attacker to speculatively branch to any code that is
executable by the victim (eg, anywhere in the hypervisor).
"Rogue Data Load" (aka SP3, "Variant 3", Meltdown, CVE-2017-5754):
On some processors, certain pagetable permission checks only happen
when the instruction is retired; effectively meaning that speculative
execution is not subject to pagetable permission checks. On such
processors, an attacker can speculatively execute arbitrary code in
userspace with, effectively, the highest privilege level.
More information is available here:
https://meltdownattack.com/
https://spectreattack.com/
https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
Additional Xen-specific background:
Xen hypervisors on most systems map all of physical RAM, so code
speculatively executed in a hypervisor context can read all of system
RAM.
When running PV guests, the guest and the hypervisor share the address
space; guest kernels run in a lower privilege level, and Xen runs in
the highest privilege level. (x86 HVM and PVH guests, and ARM guests,
run in a separate address space to the hypervisor.) However, only
64-bit PV guests can generate addresses large enough to point to
hypervisor memory.
IMPACT
======
Xen guests may be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.
An attacker's choice of code to speculatively execute (and thus the
ease of extracting useful information) goes up with the variant
number.  For SP1, an attacker is limited to windows of code after
bounds checks of user-supplied indexes.  For SP2, the attacker will in
many cases be limited to executing arbitrary pre-existing code inside of Xen.
For SP3 (and other cases for SP2), an attacker can write arbitrary
code to speculatively execute.
Additionally, in general, attacks within a guest (from guest user to
guest kernel) will be the same as on real hardware. Consult your
operating system provider for more information.
NOTE ON TIMING
==============
This vulnerability was originally scheduled to be made public on 9
January. It was accelerated at the request of the discloser due to
one of the issues being made public.
VULNERABLE SYSTEMS
==================
Systems running all versions of Xen are affected.
For SP1 and SP2, both Intel and AMD are vulnerable. Vulnerability of
ARM processors to SP1 and SP2 varies by model and manufacturer. ARM
has information on affected models on the following website:
https://developer.arm.com/support/security-update
For SP3, only Intel processors are vulnerable. (The hypervisor cannot
be attacked using SP3 on any ARM processors, even those that are
listed as affected by SP3.)
Furthermore, only 64-bit PV guests can exploit SP3 against Xen. PVH,
HVM, and 32-bit PV guests cannot exploit SP3.
MITIGATION
==========
There is no mitigation for SP1.
SP2 can be mitigated by a combination of new microcode and compiler
and hypervisor changes. See Resolution below.
SP3 can be mitigated by page-table isolation ("PTI").
See Resolution below.
SP3 can, alternatively, be mitigated by running guests in HVM or PVH
mode. (Within-guest attacks are still possible unless the guest OS
has also been updated with an SP3 mitigation series such as
KPTI/Kaiser.)
For guests with legacy PV kernels which cannot be run in HVM or PVH
mode directly, we have developed two "shim" hypervisors that allow PV
guests to run in HVM mode or PVH mode. This prevents attacks on the
host, but it leaves the guest vulnerable to Meltdown attacks by its
own unprivileged processes, even if the guest OS has KPTI or similar
Meltdown mitigation.
The HVM shim (codenamed "Vixen") is available now, as is the PVH shim
(codenamed "Comet") for Xen 4.10 and Xen 4.8. Please read
README.which-shim to determine which shim is suitable for you.
RESOLUTION
==========
These are hardware bugs, so technically speaking they cannot be
properly fixed in software. However, it is possible in many cases to
provide patches to software to work around the problems.
There is no available resolution for SP1. A solution may be available
in the future.
SP2 can be mitigated on x86 by combinations of new CPU microcode and
new hypervisor code. The required hypervisor changes for Xen 4.6,
4.7, 4.8, 4.9 and 4.10 are detailed in the attached README.bti.
For AMD hardware, and for Intel hardware pre-dating the Skylake
microarchitecture, the hypervisor changes alone are sufficient to
mitigate the issue for Xen itself. No microcode updates are required.
For the Intel Skylake microarchitecture the hypervisor changes are
insufficient to protect Xen without appropriate new microcode.
Microcode updates are required in any event to guard against one guest
attacking another.
Consult Intel, your hardware vendor, or your dom0 OS distributor for the
microcode updates.
Additionally, compiler support for `indirect thunk' is required.
Again, without appropriate compiler support, the hypervisor patches
are insufficient. Consult your compiler distributor.
SP2 is mitigated on ARM 32-bit by a set of changes to the hypervisor;
on some processors, in combination with new firmware. SP2 can be
mitigated on ARM 64-bit (aarch64) by a combination of new PSCI
firmware and new hypervisor code. The required hypervisor changes for
Xen 4.6, 4.7, 4.8, 4.9 and 4.10 are detailed in the attached
README.bti.
For ARM 32-bit these changes are complete for Cortex-A processors.
For other processors, please contact the vendor for information.
For ARM 64-bit the hypervisor changes are still in development and are
expected to be available soon.
SP3 can be mitigated by page-table isolation ("PTI").
We have a "stage 1" implementation. It allows 64-bit PV guests to be
run natively while restricting what can be accessed via SP3 to the Xen
stack of the current pcpu (which may contain remnants of information
from other guests, but should be much more difficult to attack
reliably).
Unfortunately these "stage 1" patches incur a non-negligible
performance overhead; about equivalent to the "PV shim" approaches
above. Moving to plain HVM or PVH guests is recommended where
possible. For more information on that, see below.
Patches for the "stage-1" PTI implementation are available in the Xen
staging-NN branches for each Xen revision. See README.pti for
specific revisions.
SP3 MITIGATION OPTIONS SUMMARY TABLE FOR 64-bit X86 PV GUESTS
=============================================================
Everything in this section applies to 64-bit PV x86 guests only.
Xen PTI Use PVH Use HVM PVH shim HVM shim
"stage 1" "Comet" "Vixen"
How to use README.pti type="pvh" type="hvm" README.comet README.vixen
Guest All Linux 4.11+ Most[4] All All
support ?unikernels?[3]
Xen 4.6+ 4.10+ All 4.10, 4.8 All
versions 4.8-comet[1]
Testing Limited 4.10: Good Very good Moderate Very good
status Very new 4.8: Moderate
Performance Fair Excellent Varies[4] Fair Fair
Hypervisor Needed No need No need No need No need
changes
SP3 guest Substantially Protected Protected Protected Protected
to host protected
SP3 within Protected Guest Guest Vulnerable Vulnerable
guest patches patches [5] [5]
SP3 from Protected n/a; vuln. n/a; vuln. n/a; vuln. n/a; vuln.
dom0 user [9] [9] [9] [9]
Device model No dm No dm Qemu No dm Qemu
Config change None type="pvh" type="hvm"/ type="pvh" Tool to rewrite
builder="hvm" pvshim=1 Needs "sidecar"
Within-guest None Should be Disks+net None None
changes? none may change
CPU hw virt Not needed Needed Needed Needed Needed
feature (VT-x)
Extra RAM use V. slight None ~9Mb/guest >=~20Mb/guest >=~29Mb/guest
Migration OK OK OK[4] OK Unsupported[2]
Guest mem adj OK OK OK Broken[2] Unsupported[2]
vcpu hotplug OK OK OK OK Unsupported[2]
Solution Indefinite Indefinite Indefinite Indefinite Limited
lifetime [7] [6]
[1] PVH is supported in Xen 4.8 only with the 4.8 "Comet" security
release branch.
[2] Some features in PVH/HVM shim guests are not inherently broken,
but buggy in the currently available versions. These may be fixed in
future proper releases of the same feature.
[3] Most unikernels have Xen support based on a version of mini-os.
mini-os master can boot PVH. But this is very recent.
[4] Some guests which have support for Xen PV fail to boot properly in
Xen HVM.  Some such guests can be made to boot HVM by disabling the
PV-on-HVM support entirely in the guest or in Xen; in that case the
guest may work but IO performance will be poor. Some PV-supporting
guests can boot as HVM, with PV drivers, but fail when migrated.
[5] The Comet and Vixen shim hypervisors direct-map all of their
"physical" memory, and that direct-map can be accessed using Meltdown
by unprivileged processes in the guest. So the guest is vulnerable to
within-guest Meltdown attacks and the guest operating system cannot
protect itself.
[6] "Vixen" HVM shim is not expected to be incorporated in future Xen
stable releases. At some point, support for it will be withdrawn.
However, HVM shim functionality may be available in a future Xen 4.10
stable point release and would then probably be useable with the
existing conversion script provided in this advisory.
[7] The lifetime of the special Comet branches is limited, but we will
not desupport them until some time after the same functionality is in
appropriate Xen stable point releases.
[8] The 64-bit x86 PV guest ABI precludes a guest from mapping its
kernel and userspace in the same address space. So these guests are
inherently immune to within-guest Meltdown attacks, without
within-guest patching. (This applies to 64-bit x86 PV guests only.)
[9] It is not possible to run dom0 as HVM. dom0 PVH is a planned
enhancement which is not yet available even in preview form.
ATTACHMENTS
===========
$ sha256sum xsa254*/*
c5f2d8f87169edc9be890416a4d261cfc11d9f8d898d83a8922360b210676015 xsa254/README.bti
1cba14ff83844d001d6c8a74afc3f764f49182cc7a06bb4463548450ac96cc2f xsa254/README.comet
208453583ee3c7bb427aa2f70fc5fdc687ba084341129624e511eb6c064fb801 xsa254/README.pti
3ef42381879befc84aa78b67d3a9b7b0cd862a2ffa445810466e90be6c6a5e86 xsa254/README.vixen
7e816160c1c1d1cd93ec3c3dd9753c8f3957fefe86b7aa967e9e77833828f849 xsa254/README.which-shim
1d2098ad3890a5be49444560406f8f271c716e9f80e7dfe11ff5c818277f33f8 xsa254/pvshim-converter.pl
$
NOTE ON LACK OF EMBARGO
=======================
The timetable and process were set by the discloser.
After the intensive initial response period for these vulnerabilities
is over, we will prepare and publish a full timeline, as we have done
in a handful of other cases of significant public interest where we
saw opportunities for process improvement.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJakG0FAAoJEIP+FMlX6CvZDR0H/0P2j85tnOXt1ipeT7UUVY8P
0bkWJ1OhKcSZFwPkuybK0xcfsyyPYX8HjFcRlacPgq8r6AY16RIh/ZpAhC2F6DJu
UrFhMVW9bdApBNaKMDo1/QCcLnloOcEXx65+Nn29qTX+IKdkdlzUTLpjZRianMPQ
AJnSumiP1RXyi/FDWbNfxlChHonCIEwYurA8z9KIqq3qeGF1tT7BB+oSFvHoICoX
Q0CX3StuHMFK53X+BKbvJy62MOjJIHRWx8lBBF/VQxfFQp3LPjGALeSBhn1BlZUF
KpXguxQAici4mj9yM7LUZ9lV2OrCQLTiWwSMAMOvjs5eHSS3tU2CZvJ+Xg711ZM=
=Kl89
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa254/README.bti --]
[-- Type: application/octet-stream, Size: 22223 bytes --]
Branch Target Injection (BTI)
===============================
Summary
-------
This README gives references for the mitigation for Spectre v2.
Determining whether the mitigation is enabled on x86
----------------------------------------------------
In general, compiler and CPU microcode updates are also required.
When the mitigation is fully active, on AMD hardware,
Xen prints at least the following messages:
Speculative mitigation facilities:
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk LFENCE
On pre-Skylake Intel hardware:
Speculative mitigation facilities:
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk RETPOLINE
On Skylake (or later) Intel hardware:
Speculative mitigation facilities:
Hardware features: IBRS/IBPB STIBP
Compiled-in support: INDIRECT_THUNK
BTI mitigations: Thunk JMP, Others: IBRS+ IBPB
Note however that on release builds none of these messages are visible
by default; "loglvl=all" needs to be passed to see all of them.
However production systems should not be run with "loglvl=all" as that
exposes a log spew (denial of service) vulnerability to guests.
"loglvl=info" (which is perhaps better) is sufficient to see
BTI mitigations: ...
listing the mitigations Xen actually uses.
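The presence of that line can also be checked mechanically.  This is a
sketch, not a Xen tool: the helper below (the name "bti_status" is our
own) reads log text from stdin, so on a live dom0 you would pipe
"xl dmesg" into it.

```shell
# Sketch: report the "BTI mitigations" line from Xen's boot log.
# Reads log text from stdin; on a live dom0: xl dmesg | bti_status
bti_status() {
    grep -m1 'BTI mitigations:' \
        || echo 'BTI mitigations line not found (is loglvl=info set?)'
}
```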
If you are not sure whether your Intel CPU is pre- or post-Skylake,
please look up your CPU model number (printed in /proc/cpuinfo on
Linux) on Wikipedia.
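As a convenience, the fields that Wikipedia's microarchitecture tables
are keyed on can be extracted like this (a sketch; "cpu_ids" is our
own helper name, and it reads /proc/cpuinfo-formatted text from stdin
so it can be tried on a sample):

```shell
# Sketch: print the CPU identification lines from /proc/cpuinfo-style
# input.  On a live Linux host: cpu_ids < /proc/cpuinfo
cpu_ids() {
    grep -E '^(cpu family|model|model name)[[:space:]]*:' | head -6
}
```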
Precise git commits
-------------------
4.10:
3181472a5ca45ae5e77abbcf024d025d9ba79ced x86/idle: Clear SPEC_CTRL while idle
5644514050b9ae7d75cdd95fd07912b9930cae08 x86/cpuid: Offer Indirect Branch Controls to guests
db12743f2d24fc59d5b9cefc15eb3d56cdaf549d x86/ctxt: Issue a speculation barrier between vcpu contexts
bc0e599a83d17f06ec7da1708721cede2df8274e x86/boot: Calculate the most appropriate BTI mitigation to use
fc81946ceaae2c27fce2ba0f3f29fa9df3975951 x86/entry: Avoid using alternatives in NMI/#MC paths
ce7d7c01685569d9ff1f971c0f0622573bfe8bf3 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
a695f8dce7c3f137f61c8c8a880b24b1b4cf319c x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
92efbe865813d84873a0e7262b1fa414842306b6 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
8baba874d6c76c1d6dd69b1d9aa06abdc344a1f5 x86/migrate: Move MSR_SPEC_CTRL on migrate
79891ef9442acb998f354b969e7302d81245ab0b x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
641c11ef293c7f3a58c1856138835c06e09d6b07 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
65ee6e043a6dc61bece75a9dfe24c7ee70c6597c x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
129880dd8f28bc728f93e3aad4675622c1ee2aad x86/feature: Definitions for Indirect Branch Controls
c513244d8e5b8aa0326c6f2d5fb2382811c97d6d x86: Introduce alternative indirect thunks
0e12c2c881aa12016bb659ab1eb4c7289244b3e7 x86/amd: Try to set lfence as being Dispatch Serialising
6aaf353f2ecbe8ae57e16812a6d74a4f089def3a x86/boot: Report details of speculative mitigations
32babfc19ad3a3123f8ed4466df3c79492a2212b x86: Support indirect thunks from assembly code
47bbcb2dd1291d61062fe58da807010631fe1b3a x86: Support compiling with indirect branch thunks
8743fc2ef7d107104c17b773eadee15fefa64e53 common/wait: Clarifications to wait infrastructure
1830b20b6b83be38738784ea162d62fcf85f3178 x86/entry: Erase guest GPR state on entry to Xen
ab95cb0d948fdc9fcda215fec0526ac902340b14 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
d02ef3d27485e1429ac480cca78ab3636387df23 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
e32f814160c95094da83fbc813b45eca42d5397a x86: Introduce a common cpuid_policy_updated()
c534ab4e940ae3fbddf0b4840c3549c03654921f x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
be3138b6f65955196d67c1d54aea3d6a3bf33934 x86/alt: Introduce ALTERNATIVE{,_2} macros
79012ead937f0533ec591c4ece925e4d23568874 x86/alt: Break out alternative-asm into a separate header file
bbd093c5033d87c0043cf90aa782efdc141dc0e7 xen/arm32: entry: Document the purpose of r11 in the traps handler
a69a8b5fdc9cc90aa4faf522c355abd849f11001 xen/arm32: Invalidate icache on guest exist for Cortex-A15
f167ebf6b33c4dbdb0135c350c0d927980191ac5 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
c4c0187839bacadc82a5729cea739e8c485f6c60 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
19ad8a7287298f701b557e55e4be689a702194c0 xen/arm32: entry: Add missing trap_reset entry
3caf32c470f2f7eb3452c8a61d6224d10e56f9a3 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
df7be94f26757a77747bf4fbfb84bbe2a3da3b4f xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
728fadb586a2a14a244dabd70463bcc1654ecc85 xen/arm: cpuerrata: Remove percpu.h include
928112900e5b4a92ccebb2eea11665fd76aa0f0d xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
cae6e1572f39a1906be0fc3bdaf49fe514c6a9c0 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
d1f4283a1d8405a480b4121e1efcfaec8bbdbffa xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
0f7a4faafb2d79920cc63457cfca3e03990af4cc xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
b829d42829c1ff626a02756acae4dd482fc20c9a xen/arm: Introduce enable callback to enable a capabilities on each online CPU
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
88fbabc49158b0b858248fa124ef590c5df7782f x86/PV: correctly count MSRs to migrate
7d5f8b36be149c169215b3afe20e1cfba8456170 x86/idle: Clear SPEC_CTRL while idle
59999aecdad6fc4f446958b65e2869e02530b1a6 x86/cpuid: Offer Indirect Branch Controls to guests
79d519795231110f222a24379e3a43243db6e55f x86/ctxt: Issue a speculation barrier between vcpu contexts
68c76d71e045a4e8510704270fc570fb9d797dfd x86/boot: Calculate the most appropriate BTI mitigation to use
bda328363ffef58c3475105e93016fcac486c5d5 x86/entry: Avoid using alternatives in NMI/#MC paths
a24b7553f92517b3d81cad1ad4798ef74b42055b x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
13a30ba54caa1b33f707137279d27d5cd39e8844 x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
0177bf5d25c66e700e15024913a3bc71c7cf507d x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
2fdee60ec12c238358bff209378c7d91e4817fa7 x86/migrate: Move MSR_SPEC_CTRL on migrate
e57d4d043b0df8f9953b3d211feacc3a54401817 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
1dcfd3951999e875f911fb0513391774af8d5fb4 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
764804938c69b69e1ee369a9b5480e89b18e453a x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
602633eb73ed2d9918da2dae7bebf279a057ea20 x86/feature: Definitions for Indirect Branch Controls
6fef46d6fb9fa4578b97f8d6a0cb240abec48587 x86: Introduce alternative indirect thunks
30b99299d6ea0c5008f5e4f41eb1f48e1ae566ce x86/amd: Try to set lfence as being Dispatch Serialising
447dce891f05c0585ec67c47ed22eb2e073ce0ab x86/boot: Report details of speculative mitigations
29df8a5c4d6271d52231bbecc52a7c3eb38aac13 x86: Support indirect thunks from assembly code
6403b5048d6f1ac5bc8524937b7975f96b597046 x86: Support compiling with indirect branch thunks
628b6af24f9727f201f677a4ad98104c00cc76c1 common/wait: Clarifications to wait infrastructure
237a58b1d0c35201e1e9ed7c32deacf9cd804229 x86/entry: Erase guest GPR state on entry to Xen
f0f7ce5e82b5bd511ef3eed8fe8b8b27a23f4365 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
d6e972508ed6ae84c5a46580af12ebdcb88de702 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
9aaa2088863d63168986f9e69c0f482839a24d80 x86: Introduce a common cpuid_policy_updated()
40f9ae9d0532a3c7dbb2a1e740c2cebe2aeb1d72 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
ade9554f87262b0c6dcc21aca194f3139a31fcfa x86/alt: Introduce ALTERNATIVE{,_2} macros
a0ed0349ff212b41dbfab37141cccb71bc1c3031 x86/alt: Break out alternative-asm into a separate header file
4d01dbc7133e0c55aecb31d95cd461580241c576 xen/arm32: entry: Document the purpose of r11 in the traps handler
22379b6adce0249ffc05a3a7870f2293368337e1 xen/arm32: Invalidate icache on guest exist for Cortex-A15
6e13ad777d331cd534928df720dbf542497231ba xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
0d32237d5f4db419f84da891761abb4f6b1a8f52 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
4ba59bdc26bd69bdd84bcb2bd597fee144e845d9 xen/arm32: entry: Add missing trap_reset entry
2997c5e628dd588ff4adb3733b7f48bb0521a243 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
751c8791d086831f2038fe18217e553f612a5600 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
a2567d6b54b7b187ecc0165021b6dd07dafaf06a xen/arm: cpuerrata: Remove percpu.h include
9f79e8d846e8413c828f5fc7cc6ac733728dff00 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
fba48eff18c02d716c95b92df804a755620be82e xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
3790833ef16b95653424ec9b145e460ec1a56d16 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
50450c1f33dc72f2138a671d738934f796be3318 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
2ec7ccbffc6b788f65e55498e4347c1ee3a44b01 xen/arm: Introduce enable callback to enable a capabilities on each online CPU
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
5938aa17b49595150cade3ddc2c1929ecd0df39a x86/PV: correctly count MSRs to migrate
99ed7863b29ea170e50749fe22991b964cbce6ba x86/idle: Clear SPEC_CTRL while idle
76bdfe894ab2205f597e52448d620982b84565c4 x86/cpuid: Offer Indirect Branch Controls to guests
fee4689c5c60b699f4dea21a21a2ba17887d2f49 x86/ctxt: Issue a speculation barrier between vcpu contexts
c0bfde68ccd941b14a2f0ca54c61a83796156ea6 x86/boot: Calculate the most appropriate BTI mitigation to use
64c1742b206344c51db130b0bb47fc299a1462ca x86/entry: Avoid using alternatives in NMI/#MC paths
86153856f857f786b95ecc4f81260477d75dc15c x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
e09a5c2917506cf9d95d85f65b2df158a494649c x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
ff570a3ee0b42a036df1e8c2b05730192ad4bd90 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
e6bcb416a5f5489366fc20f45fd92a703ad96e15 x86/migrate: Move MSR_SPEC_CTRL on migrate
29e7171e9dd0aa8e35f790157d781dff22f6a970 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
c3d195cd91385531ed12af2576bfedcab3118211 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
532ccf4fd55cfd916f56279a71852585d726ab23 x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
da49e518d79ca6c405a244889cab57ac8ed097cb x86/feature: Definitions for Indirect Branch Controls
ca9583d9e705aaa74da121e920ebf77d9f7995b2 x86: Introduce alternative indirect thunks
479b879a7dd0bbf02920d2f6053d9bee271797ce x86/amd: Try to set lfence as being Dispatch Serialising
2eefd926bbc8217cf511bc096c897ae4c56dd0c2 x86/boot: Report details of speculative mitigations
60c50f2b0bf5d3f894ca428cf4b4374fbea2d082 x86: Support indirect thunks from assembly code
1838e21521497cdfa6d3b1dfac0374bcce717eba x86: Support compiling with indirect branch thunks
5732a8ef2885633cdffc56fe9d8df40f76bfb2c2 common/wait: Clarifications to wait infrastructure
987b08d56cd8d439bdf435099218b96de901199d x86/entry: Erase guest GPR state on entry to Xen
eadcd8318c46f53ed8ee6516ca876271f75930fa x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
ef2464c56e8dab194cd956498c3d5215f1b6b97b x86/entry: Rearrange RESTORE_ALL to restore register in stack order
17bfbc8289c487bcb5f446f79de54869f12786cb x86: Introduce a common cpuid_policy_updated()
499391b50b85d31fa3dd4c427a816e10facb1fe4 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
87cb0e2090fce317c4e6775f343d5caba66f61f1 x86/alt: Introduce ALTERNATIVE{,_2} macros
3efcd7fb40a900bc7d4f9063f2d43ee27b0a5270 x86/alt: Break out alternative-asm into a separate header file
11875b7d5706f8aef86d306a43d7fe3b7011aaa2 xen/arm32: entry: Document the purpose of r11 in the traps handler
1105f3a92df83f3bfcda78d66c4d28458123e1bb xen/arm32: Invalidate icache on guest exist for Cortex-A15
754345c01933f1eed3d1601fa8fdbf62f52c9d80 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
7336d0d2a719d6135b8d02801401e449b0dbbfb6 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
cf95bba7b7406ef1929ea4c6c36388ed43b4f9bb xen/arm32: entry: Add missing trap_reset entry
a586cbd9f0cbb3835de1f8ab4d9a105e08b2ac5a xen/arm32: Add missing MIDR values for Cortex-A17 and A12
6082e3ba8941b3d10c3cb73f445759c19e89afc9 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
6f6786ef0d7f7025860d360f6b1267193ffd1b27 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
44139fed7c794eb4e47a9bb93061e325bd57fe8c xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
cf0b584c8c5030588bc47a3614ad860af7482c53 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
85990bf53addcdb0ce8e458a3d8fad199710ac59 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
946dd2eefae2faeecbeb9662e66935c8070f64f5 xen/arm: Introduce enable callback to enable a capabilities on each online CPU
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
ade3bcafd25883130fc234121ed7416d531e456d x86/PV: correctly count MSRs to migrate
aac4cbe3644738d485d38bd551046d63c00cc670 x86: fix build with older tool chain
68420b47d9b813ca48891b604fab379d40aa594e x86/idle: Clear SPEC_CTRL while idle
e09548d28a1cffafc0fa5ed9f97ac58514491ab8 x86/cpuid: Offer Indirect Branch Controls to guests
be261bd97f7b4fc76db7c11bb3366974f5635a04 x86/ctxt: Issue a speculation barrier between vcpu contexts
327a7836744ca8d7e1cfc6dc476d51d7c63f68ea x86/boot: Calculate the most appropriate BTI mitigation to use
9f08fce3b942180d62bc773cab840fa4533d0a51 x86/entry: Avoid using alternatives in NMI/#MC paths
4a38ec26bafde70f2af36d7bc2bec7f218145982 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
65c9e06429f629249a84d01231be5fa643460547 x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
84d47acc05af516d813f1952e853c4ca2be2adba x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
b7dae55c0eaae6d5a34bfdd3a62fe938673f53cf x86/migrate: Move MSR_SPEC_CTRL on migrate
b2b7fe128f6fbecf54e97cdd2d71923d0a852535 x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
c947e1e23d1db17da0dd211b9410f311248b6c13 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
e9220b40c67a6c1eab6b3613f6054adfacea65eb x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
f9616884e16b8028c257c8b01fb12daff7fe3454 x86/feature: Definitions for Indirect Branch Controls
91f7e4627b6597536ded5b8326da3ca504b1772f x86: Introduce alternative indirect thunks
f291c01cd6d405927ceb022bdef6479de8b9fb9a x86/amd: Try to set lfence as being Dispatch Serialising
3cf4e29f8df5fc18f65baa08408a3d7cf3269d03 x86/boot: Report details of speculative mitigations
88602190f698aeace6d7e028954a1349997ee0be x86: Support indirect thunks from assembly code
62a2624e3c6250c6be8a9248c8fe5a3211834d4d x86: Support compiling with indirect branch thunks
c3f8df3df224eeac0e78533644010ed096de7a34 common/wait: Clarifications to wait infrastructure
3877c024ea4916ede177ef0067a081f73ee16c4d x86/entry: Erase guest GPR state on entry to Xen
f0ed5f95cb373fb55d9eb2eb3fe0cba442e80eb2 x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
160b53c824011b9ddb89e67f0f682f471335747d x86/entry: Rearrange RESTORE_ALL to restore register in stack order
e1313098e43c41598d5b378e6344d691dcf29f2f x86: Introduce a common cpuid_policy_updated()
9ede1acbe91cb127b23d5e711470025b462f5d50 x86/hvm: Rename update_guest_vendor() callback to cpuid_policy_changed()
d0cfbe81d01b2ac1dc9d02d70d3249249d5cb5bc x86/alt: Introduce ALTERNATIVE{,_2} macros
d596e6a0a6ddfebbe657d07d0d64159cc4eb7a68 x86/alt: Break out alternative-asm into a separate header file
f50ea840b9a860927c7aca5fa64eb34e14f17164 xen/arm32: entry: Document the purpose of r11 in the traps handler
de3bdaa717002e4ec917bd0494943eb1660d71b8 xen/arm32: Invalidate icache on guest exist for Cortex-A15
766990b0b64336d1b859b6caa36033ec5338d563 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
4ac0229bc5312a01664b747261ee1cc7ea52c4b5 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
bafd63f8be2e8a78c0e85444e4c255e679303282 xen/arm32: entry: Add missing trap_reset entry
d5bb425dac6718d3fba64b863b07d7314c857067 xen/arm32: Add missing MIDR values for Cortex-A17 and A12
003ec3e00a05935ea6a31430da65ee62363900f9 xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
fd884d61991cd0de588ae51728cd0602375dfa71 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
50c68df8182bf332525ebf6120d3b1e0fdf77545 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
1bdcc9f7ef438ab9c219a5099726b112b93a4fbe xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
2914ef5753c9328889df314f33bb12ece1bd4fbe xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
62b9706dba3b6a3d9881329bca604216313c82dc xen/arm: Introduce enable callback to enable a capabilities on each online CPU
624abdcf2d30ae48e0653fb511b4c90d3ccdd2af xen/arm: Detect silicon revision and set cap bits accordingly
d7b73edd0fe6bb0c46aa883229f900643b4726e9 xen/arm: cpufeature: Provide an helper to check if a capability is supported
112c49c114ffe37e068fc9f13e960a8f275379d2 xen/arm: Add cpu_hwcap bitmap
a5b0fa4871b0895da203fb2dac16840d24c6be21 xen/arm: Add macros to handle the MIDR
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
0fbf30a7f863139dd0ac556e44f92f5787654847 x86/hvm: Don't corrupt the HVM context stream when writing the MSR record
7e20b9b2ddbb04c6ebb60613b1117e05edc8a5ea x86/PV: correctly count MSRs to migrate
75bdd693033e6dbd6fe5ae235f79961d2f0aa84d x86/idle: Clear SPEC_CTRL while idle
8994cf3cf730422ded6596ecb18dc0d8b6579493 x86/ctxt: Issue a speculation barrier between vcpu contexts
642c6037bba310538b00c0cbb5d91525bd1eed0a x86/boot: Calculate the most appropriate BTI mitigation to use
c25ea9a1393c1eb5d6732ec366baa1091db5e7db x86/entry: Avoid using alternatives in NMI/#MC paths
feba571a5d9586778e0978b8df5b9166275b8680 x86/entry: Organise the clobbering of the RSB/RAS on entry to Xen
0163087ed6175b00966f4ee991d8c424ad7eb59d x86/entry: Organise the use of MSR_SPEC_CTRL at each entry/exit point
44c2666589fefc13049edc874c7ef063823bad90 x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL,PRED_CMD}
db743b04998a9cbf6866b5f328855239a73220e5 x86/migrate: Move MSR_SPEC_CTRL on migrate
41a5ccec99e81a768a66995f483f424f848f5b5e x86/msr: Emulation of MSR_{SPEC_CTRL,PRED_CMD} for guests
4e1b9e98dffbc2f29a0a90a4ae43b9e19f323089 x86/cpuid: Handling of IBRS/IBPB, STIBP and IBRS for guests
4d2154914e3f44bae123dc6a93fbb3f1b39c0fee x86/cmdline: Introduce a command line option to disable IBRS/IBPB, STIBP and IBPB
ff4800cac63756f7755e6c251571cd29fd5171eb x86/feature: Definitions for Indirect Branch Controls
2613a1bc709ed4b46af36b0bab3200ed9d3c86d0 x86: Introduce alternative indirect thunks
8335c8aedacd9a50b4796afb533dc8205f2129e4 x86/amd: Try to set lfence as being Dispatch Serialising
ab20c5c804ae814de9bed5f85d55fecc894dc78f x86/boot: Report details of speculative mitigations
9089da9cd06875be6c1022d59a6651cf3919da2e x86: Support indirect thunks from assembly code
8edfc82f67f25137909dda13e6658cba4d1e5d26 x86: Support compiling with indirect branch thunks
af5b61af9e350bcc2c8b0f053682e3c7a700b46f common/wait: Clarifications to wait infrastructure
ec05090403ef4d760fbe701e31afd0f0edc414d5 x86/entry: Erase guest GPR state on entry to Xen
75263f7908a02f5673c25df9bcdaed9fe5f9de5c x86/hvm: Use SAVE_ALL to construct the cpu_user_regs frame after VMExit
f7e273a07ccf993063727675589f10da206f1683 x86/entry: Rearrange RESTORE_ALL to restore register in stack order
03c7d2cd1b4bb9868c10c4a3db2b092d211d055a x86/alt: Introduce ALTERNATIVE{,_2} macros
9ce1a7180050353c07321980cf1ed0b0baebf38a x86/alt: Break out alternative-asm into a separate header file
a735c7ae8046024925927406747d4a6ca5bf7fcc x86/microcode: Add support for fam17h microcode loading
9d534c12bf71babb76f1338029841f757191f729 xen/arm32: entry: Document the purpose of r11 in the traps handler
dbb3553130241ae99d444a6a08b7dc32ce90a272 xen/arm32: Invalidate icache on guest exist for Cortex-A15
e54a8c617ceb5ba3481e6aa122ad3f835c1915b8 xen/arm32: Invalidate BTB on guest exit for Cortex A17 and 12
8005ed3ef14c6c8b31a9e1a5ae2576a4b4c66528 xen/arm32: Add skeleton to harden branch predictor aliasing attacks
9a852e0eebc6300585db89669dbade625be18a12 xen/arm32: entry: Add missing trap_reset entry
d779cc1f9c6a5f1d40db9e85f779a79c8eed2ccf xen/arm32: Add missing MIDR values for Cortex-A17 and A12
c93bcf9409e0da14cbc4bf43bf138bfaaecefa2c xen/arm32: entry: Consolidate DEFINE_TRAP_ENTRY_* macros
15adcf395923499eb1eaaca1e67c032956428191 xen/arm64: Implement branch predictor hardening for affected Cortex-A CPUs
d7b8190d3222156e89ccefb7ac74ad0410337097 xen/arm64: Add skeleton to harden the branch predictor aliasing attacks
2b1457f955a98007cd51be67f78d1690711e8849 xen/arm: cpuerrata: Add MIDR_ALL_VERSIONS
a3578802a2882afbbfe730f0227e075b5f42b4a6 xen/arm64: Add missing MIDR values for Cortex-A72, A73 and A75
ee23fcc2539ce8143ae4ce58a7c140fa46a4359b xen/arm: Introduce enable callback to enable a capabilities on each online CPU
56510154bbd21f10080993b7888c1a47a802c3e2 xen/arm: Detect silicon revision and set cap bits accordingly
225e9c7050e8f2694df3dc92c95b06a46e57130e xen/arm: cpufeature: Provide an helper to check if a capability is supported
3c706195565910b961eb5a7e64f34948deb2a545 xen/arm: Add cpu_hwcap bitmap
1222333a8220638747e77b40b6418daa85270265 xen/arm: Add macros to handle the MIDR
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
[-- Attachment #3: xsa254/README.comet --]
[-- Type: application/octet-stream, Size: 2896 bytes --]
PV-in-PVH shim
==============
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as PVH guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Comet".
Unlike Vixen, Comet requires modifications to the toolstack and host
hypervisor.
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
Versions for Xen 4.8 and 4.10 are available.
What you will need
------------------
* You will need xen.git with the following tags:
- For 4.10: 4.10.0-shim-comet-3
- For 4.8: 4.8.3pre-shim-comet-2 and 4.10.0-shim-comet-3
Build instructions: 4.10
------------------------
1. Build a 4.10+ system
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
Do a build and install as normal. The shim will be built as part of the
normal build process, and placed with other 'system' binaries where the
toolstack knows how to find it.
Build instructions: 4.8
-----------------------
The code for the shim itself has not been backported to 4.8.  4.8 users
should use a shim built from 4.10-based source code; the resulting
binary can simply be dropped into a Xen 4.8 installation.
1. Build a 4.8+ system with support for running PVH, and for pvshim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.8.3pre-shim-comet-2
Do a build and install as normal.
2. Build a 4.10+ system to be the shim:
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.10.0-shim-comet-3
./configure
make -C tools/firmware/xen-dir
Then install the shim executable where the 4.8 PV shim mode tools
expect to find it (use whichever path matches your installation prefix):
cp tools/firmware/xen-dir/xen-shim /usr/lib/xen/boot/xen-shim
cp tools/firmware/xen-dir/xen-shim /usr/local/lib/xen/boot/xen-shim
This step is only needed to boot guests in "PVH with PV shim"
mode; it is not needed when booting PVH-supporting guests as PVH.
Usage instructions
------------------
* Converting a PV config to a PVH shim config
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following two lines:
type="pvh"
pvshim=1
* Converting a PV config to a PVH config
If you have a kernel capable of booting PVH, then PVH mode is both
faster and more secure than PV or PVH-shim mode.
- Remove any reference to 'builder' (e.g., `builder="generic"`)
- Add the following line:
type="pvh"
* There is no need to reboot the host.
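As a concrete sketch, a minimal PV config converted to PVH-shim mode
might look like this (the guest name and kernel path are illustrative,
not taken from this advisory):

```
# before: PV guest
#   name    = "guest1"
#   builder = "generic"
#   kernel  = "/boot/vmlinuz-guest1"

# after: PVH with PV shim (builder line removed)
name   = "guest1"
kernel = "/boot/vmlinuz-guest1"
type   = "pvh"
pvshim = 1
```

For a plain PVH conversion, simply omit the pvshim=1 line.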
[-- Attachment #4: xsa254/README.pti --]
[-- Type: application/octet-stream, Size: 2536 bytes --]
Xen page-table isolation (XPTI)
===============================
Summary
-------
This README gives references for one of three mitigation strategies
for Meltdown.
This series is a first-class page-table isolation mitigation for Xen.
It is available for Xen 4.6 through Xen 4.10 and later.
Precise git commits are as follows:
4.10:
05eba93a0a344ec189e71722bd542cdc7949a8a5 x86: fix GET_STACK_END
7cccd6f748ec724cf9408cec6b3ec8e54a8a2c1f x86: allow Meltdown band-aid to be disabled
234f481337ea1a93db968d614649a6bdfdc8418a x86: Meltdown band-aid against malicious 64-bit PV guests
57dc197cf0d36c56ba1d9d32c6a1454bb52605bb x86/mm: Always set _PAGE_ACCESSED on L4e updates
910dd005da20f27f3415b7eccdf436874989506b x86/entry: Remove support for partial cpu_user_regs frames
4.9:
f11cf29f274e90e6451aaaa5ab52df2ed63eb30d x86: fix GET_STACK_END
dc7d46580d9c633a59be1c3776f79c01dd0cb98b x86: allow Meltdown band-aid to be disabled
1e0974638d65d9b8acf9ac7511d747188f38bcc3 x86: Meltdown band-aid against malicious 64-bit PV guests
87ea7816247090e8e5bc5653b16c412943a058b5 x86/mm: Always set _PAGE_ACCESSED on L4e updates
2213ffe1a2d82c3c9c4a154ea6ee252395aa8693 x86/entry: Remove support for partial cpu_user_regs frames
4.8:
2cd189eb55af8b04185b473ac2885f76b3d87efe x86: fix GET_STACK_END
31d38d633a306b2b06767b5a5f5a8a00269f3c92 x86: allow Meltdown band-aid to be disabled
1ba477bde737bf9b28cc455bef1e9a6bc76d66fc x86: Meltdown band-aid against malicious 64-bit PV guests
049e2f45bfa488967494466ec6506c3ecae5fe0e x86/mm: Always set _PAGE_ACCESSED on L4e updates
a7cf0a3b818377a8a49baed3606bfa2f214cd645 x86/entry: Remove support for partial cpu_user_regs frames
4.7:
b1ae1264baf8617df036a298461a1bb43eae79c1 x86: fix GET_STACK_END
e19d0af4ee2ae9e42a85db639fd6848e72f5658b x86: allow Meltdown band-aid to be disabled
e19517a3355acaaa2ff83018bc41e7fd044161e5 x86: Meltdown band-aid against malicious 64-bit PV guests
9b76908e6e074d7efbeafe6bad066ecc5f3c3c43 x86/mm: Always set _PAGE_ACCESSED on L4e updates
0e6c6fc449000d97f9fa87ed1fbe23f0cf21406b x86/entry: Remove support for partial cpu_user_regs frames
4.6:
44ad7f6895da9861042d7a41e635d42d83cb2660 x86: allow Meltdown band-aid to be disabled
91dc902fdf41659c210329d6f6578f8132ee4770 x86: Meltdown band-aid against malicious 64-bit PV guests
a065841b3ae9f0ef49b9823cd205c79ee0c22b9c x86/mm: Always set _PAGE_ACCESSED on L4e updates
c6e9e6095669b3c63b92d21fddb326441c73712c x86/entry: Remove support for partial cpu_user_regs frames
[-- Attachment #5: xsa254/README.vixen --]
[-- Type: application/octet-stream, Size: 2738 bytes --]
PV-in-HVM shim with "sidecar" ISO
=================================
Summary
-------
This README describes one of three mitigation strategies for Meltdown.
The basic principle is to run PV guests (which can read all of host
memory due to the hardware bugs) as HVM guests (which cannot, at least
not due to Meltdown). The PV environment is still provided to the
guest by an embedded copy of Xen, the "shim". This version of the
shim is codenamed "Vixen".
In order to boot the shim with an unmodified toolstack, you also
provide a special disk containing the shim and the guest kernel (or
pvgrub); this is called the "sidecar".
Note that both of these shim-based approaches prevent attacks on the
host, but leave the guest vulnerable to Meltdown attacks by its own
unprivileged processes; this is true even if the guest OS has KPTI or
similar Meltdown mitigation.
What you will need
------------------
* Your host must be able to run grub-mkrescue to generate a .iso
* You will therefore need xorriso and mtools
* You must be using xl, and be able to use an alternative config file for your guest
* You will need the script "pvshim-converter.pl"
- This relies on perl-json
* You will need the xen.git tag 4.9.1-shim-vixen-1
Instructions
------------
1. On a suitable system (perhaps a different host)
git clone git://xenbits.xenproject.org/xen.git xen.git
cd xen.git
git checkout 4.9.1-shim-vixen-1
If you need bi-directional console and don't mind a less-tested patch,
you can apply the patch found in this email:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
build a xen hypervisor binary as usual:
make xen
If your default version of python is python 3, you may need to use the following instead:
make PYTHON=python2 xen
This will build a file
xen/xen.gz
2. Copy that file to your dom0.
Ideally someplace like /usr/lib/xen/boot/xen-vixen.gz
3. Copy the script pvshim-converter.pl to your dom0 and make
it executable:
chmod +x pvshim-converter.pl
4. For each guest
(i) if the guest is currently booted with pygrub you must first
switch to direct kernel boot (by manually copying the kernel and
initramfs out of the guest, and configuring the command line in the
domain configuration file), or pvgrub.
(ii) run
./pvshim-converter.pl --shim=/usr/lib/xen/boot/xen-vixen.gz /etc/xen/GUEST.cfg /etc/xen/GUEST.with-shim-cfg
(iii) shut the guest down cleanly
(iv) create the guest with the new config
xl create /etc/xen/GUEST.with-shim-cfg
(v) Check that it boots properly. xl console should work.
(vi) Make arrangements so that autostarting of the guest will use
the new config file rather than the old one
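For hosts with many guests, steps (ii)-(iv) can be scripted.  The
sketch below assumes direct-boot guests (step (i) already done); the
paths and the DRY_RUN guard are our illustrative additions, not part
of the advisory:

```shell
#!/bin/sh
# Batch-convert every PV guest in /etc/xen to run under the Vixen shim.
SHIM=/usr/lib/xen/boot/xen-vixen.gz
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually run the commands

run() {
    # Print the command in dry-run mode; execute it otherwise.
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for cfg in /etc/xen/*.cfg; do
    guest=$(basename "$cfg" .cfg)
    newcfg="/etc/xen/$guest.with-shim-cfg"
    run ./pvshim-converter.pl --shim="$SHIM" "$cfg" "$newcfg"
    run xl shutdown -w "$guest"   # shut the guest down cleanly
    run xl create "$newcfg"       # recreate it with the shim config
done
```

Remember to check each guest's console afterwards, and to point any
autostart arrangements at the new config files.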
[-- Attachment #6: xsa254/README.which-shim --]
[-- Type: application/octet-stream, Size: 4010 bytes --]
How to decide which shim to use
===============================
A work-around to Meltdown (aka "SP3" or "Variant 3") on Intel
processors is to run guests in HVM or PVH mode.
Note this shim-based approach prevents attacks on the host, but leaves
the guest vulnerable to Meltdown attacks by its own unprivileged
processes; this is true even if the guest OS has KPTI or similar
Meltdown mitigation.
Some guests are difficult to convert to running in HVM or PVH mode,
either due to lack of partitioning / MBR, or due to kernel
compatibilities. As an emergency backstop, there are two approaches,
which we've codenamed "Vixen" and "Comet". Both involve running an
embedded copy of Xen (called a "shim") within the HVM or PVH guest to
provide the native PV interface.
Below we describe the properties of each approach, and who might want
to use each one.
NOTE: Both shims require host patches to boot on AMD hosts. This
shouldn't be an issue, as SP3 does not affect systems running on AMD.
Vixen
-----
Vixen has the following properties:
* Runs the shim in an HVM guest.
* It requires no hypervisor or toolstack changes, nor does it require
a host reboot.
* It has been extensively tested in Amazon's deployment for versions
of Xen going back to 3.4
* Guest reboots are required
* Guest configs must be fed through a converter program
* The converter program spits out a small guest-specific .iso
image (we call this a "sidecar") used for booting
* Because the result is an HVM guest, this approach involves
running qemu as a PC emulator (this is done automatically)
* Some common features are not supported:
- Ballooning
- Migration
- vcpu hotplug
- bidirectional console support (console is write-only)
* Direct-boot kernels and pvgrub (both pvgrub1 and pvgrub2) are
supported by the conversion program. 'pygrub' is not supported.
* xl and xm domain configs can be converted; libvirt domain
configuration arrangements are not supported.
* Guest userspace can read all of guest memory within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You want to deploy a fix immediately
- You can tolerate the loss of within-guest security
- You can't, or would like to avoid, updating to Xen 4.8 or newer
- You'd like to avoid patching and rebooting your host
- You are able to:
- Run a script to modify each domain config
- Afford an extra 80MiB per guest
- Tolerate having an extra QEMU around
- You don't need migration, memory ballooning, vcpu hotplug,
or a bi-directional console
To use this solution, see README.vixen.
Bi-directional console is available as an extra patch, but hasn't been
widely tested:
marc.info/?i=<1515604552-9205-1-git-send-email-srn@prgmr.com>
Comet
-----
Comet has the following properties:
* Runs the shim in a PVH guest.
* PVH mode is available in Xen 4.10, and will be backported to Xen
4.9 and 4.8 but no farther
* Requires host hypervisor and toolstack patches (and host reboot),
even for Xen 4.10
* Requires minimal guest config changes, and no "sidecar"
* Bootloading is identical to native PV guests; direct-boot, pvgrub,
and pygrub all work equally well
* Because the result is a PVH guest, this approach involves no PC emulator.
* The following features not available in Vixen are supported:
- Memory ballooning
- Guest migration
- vcpu hotplug
- bidirectional console support
* Guest userspace can read all of guest memory within each guest,
and a guest mitigation for this is not possible.
You might consider this approach if:
- You're on 4.8 or later already
- You can tolerate the loss of within-guest security
- You can patch and reboot your host
- You don't want an extra QEMU around
- You need migration, memory ballooning, or vcpu hotplug, or a
bidirectional console
- You need pygrub
- You need to use libvirt
At the moment, Comet is available for 4.10.  We expect to have
backports to 4.9 and 4.8 available within a few working days.
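The checklists above can be distilled into a rough rule of thumb; this
helper is purely illustrative, paraphrases the criteria in this README,
and is not part of the advisory:

```shell
#!/bin/sh
# Rough shim-selection rule of thumb (illustrative only).
#   $1: can you patch and reboot the host?  (yes/no)
#   $2: do you need migration, ballooning, vcpu hotplug, pygrub,
#       libvirt, or a bidirectional console?  (yes/no)
recommend_shim() {
    can_reboot=$1
    needs_feats=$2
    if [ "$can_reboot" != yes ]; then
        # Comet needs host patches and a reboot, so Vixen is the
        # only immediate option.
        echo "Vixen"
    elif [ "$needs_feats" = yes ]; then
        echo "Comet"
    else
        echo "either (Comet avoids the extra QEMU)"
    fi
}
```

For example, a host that cannot be rebooted is steered to Vixen, while
one that needs migration or pygrub is steered to Comet.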
[-- Attachment #7: xsa254/pvshim-converter.pl --]
[-- Type: application/octet-stream, Size: 6762 bytes --]
#!/usr/bin/perl -w
#
# usage:
# pvshim-converter [OPTIONS] OLD-CONFIG NEW-CONFIG
#
# options:
# --qemu PATH-TO-QEMU filename of qemu-system-i386
# --sidecars-directory DIR default is /var/lib/xen/pvshim-sidecars
# --shim SHIM overrides domain config file
# --debug verbose, and leaves sidecar prep dir around
#
# What we do
#
# read existing config file using python
# determine kernel, ramdisk and cmdline
# use them to produce sidecar and save it under domain name
# mess with the things that need to be messed with
# spit out new config file
use strict;
use Getopt::Long;
use JSON;
use IO::Handle;
use POSIX;
use Fcntl qw(:flock);
our $debug;
sub runcmd {
print STDERR "+ @_\n" if $debug;
$!=0; $?=0; system @_ and die "$_[0]: $! $?";
}
our $qemu;
our $shim;
our $sidecars_dir = '/var/lib/xen/pvshim-sidecars';
GetOptions('qemu=s' => \$qemu,
'sidecars-directory=s' => \$sidecars_dir,
'shim=s' => \$shim,
'debug' => \$debug)
or die "pvshim-converter: bad options\n";
@ARGV==2 or die "pvshim-converter: need old and new config filenames";
our ($in,$out) = @ARGV;
our $indata;
if ($in ne '-') {
open I, '<', "$in" or die "open input config file: $!\n";
} else {
open I, '<&STDIN' or die $!;
}
{
local $/;
$indata = <I>;
}
I->error and die $!;
close I;
open P, "-|", qw(python2 -c), <<END, $indata or die $!;
import sys
import json
l = {}
exec sys.argv[1] in l
for k in l.keys():
if k.startswith("_"):
del l[k]
print json.dumps(l)
END
our $c;
{
local $/;
$_ = <P>;
$!=0; $?=0; close P or die "$! $?";
$c = decode_json $_;
}
die "no domain name ?" unless exists $c->{name};
die "bootloader not yet supported" if $c->{bootloader};
die "no kernel" unless $c->{kernel};
our $sidecar = $c->{pvshim_sidecar_path} || "$sidecars_dir/$c->{name}.iso";
our $dmwrap = $c->{pvshim_dmwrap_path} || "$sidecars_dir/$c->{name}.dm"; # key name assumed; the shipped script reused pvshim_sidecar_path here
$shim ||= $c->{pvshim_path};
$shim ||= '/usr/local/lib/xen/boot/xen-shim';
our $shim_cmdline = $c->{pvshim_cmdline} || 'console=com1 com1=115200n1';
$shim_cmdline .= ' '.$c->{pvshim_extra} if $c->{pvshim_extra};
our $kernel_cmdline = $c->{cmdline} || '';
$kernel_cmdline .= ' root='.$c->{root} if $c->{root};
$kernel_cmdline .= ' '.$c->{extra} if $c->{extra};
print "pvshim-converter: creating sidecar in $sidecar\n";
runcmd qw(mkdir -m700 -p --), $sidecars_dir;
open L, ">", "$sidecar.lock" or die "$sidecar.lock: open $!";
flock L, LOCK_EX or die "$sidecar.lock: lock: $!";
my $sd = "$sidecar.dir";
system qw(rm -rf --), $sd;
mkdir $sd, 0700;
runcmd qw(cp --), $shim, "$sd/shim";
runcmd qw(cp --), $c->{kernel}, "$sd/kernel";
runcmd qw(cp --), $c->{ramdisk}, "$sd/ramdisk" if $c->{ramdisk};
my $grubcfg = <<END;
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
set timeout=0
menuentry 'Xen shim' {
insmod gzio
insmod xzio
multiboot (cd)/shim placeholder $shim_cmdline
module (cd)/kernel placeholder $kernel_cmdline
module (cd)/ramdisk
}
END
runcmd qw(mkdir -p --), "$sd/boot/grub";
open G, ">", "$sd/boot/grub/grub.cfg" or die "$sd, grub.cfg: $!";
print G $grubcfg or die $!;
close G or die $!;
unlink "$sidecar.new" or $!==ENOENT or die "$sidecar.new: rm: $!";
runcmd qw(grub-mkrescue -o), "$sidecar.new", "$sidecar.dir";
if (!stat "$sidecar.new") {
$!==ENOENT or die "$sidecar.new: stat: $!";
print STDERR <<END;
pvshim-converter: grub-mkrescue exited with status zero but failed to make iso.
NB that grub-mkrescue has a tendency to lie in its error messages.
END
my $missing;
foreach my $check (qw(xorriso mformat)) {
$missing |= system qw(sh -c), "type $check";
}
if ($missing) {
print STDERR <<END;
You seem to have some program(s) missing which grub-mkrescue depends on,
see above. ("mformat" is normally in the package "mtools".)
Installing those programs will probably help.
END
} else {
print STDERR <<END;
An older grub-mkrescue has a tendency not to notice certain problems.
Maybe strace will tell you what is wrong. :-/
END
}
die "pvshim-converter: grub-mkrescue did not make iso\n";
}
runcmd qw(rm -rf --), "$sidecar.dir" unless $debug;
open Q, ">", "$dmwrap.new" or die "$dmwrap: $!";
print Q <<'END_DMWRAP' or die $!;
#!/bin/bash
set -x
: "$@"
set +x
newargs=()
newarg () {
newargs+=("$1")
}
while [ $# -gt 1 ]; do
case "$1" in
-no-shutdown|-nodefaults|-no-user-config)
newarg "$1"; shift
;;
-xen-domid|-chardev|-mon|-display|-boot|-m|-machine)
newarg "$1"; shift
newarg "$1"; shift
;;
-name)
newarg "$1"; shift
name="$1"; shift
newarg "$name"
;;
-netdev|-cdrom)
: fixme
newarg "$1"; shift
newarg "$1"; shift
;;
-drive|-kernel|-initrd|-append|-vnc)
shift; shift
;;
-device)
shift
case "$1" in
XXXrtl8139*)
newarg "-device"
newarg "$1"; shift
;;
*)
shift
;;
esac
;;
*)
echo >&2 "warning: unexpected argument $1 being passed through"
newarg "$1"; shift
;;
esac
done
#if [ "x$name" != x ]; then
# logdir=/var/log/xen
# logfile="$logdir/shim-$name.log"
# savelog "$logfile" ||:
# newarg -serial
# newarg "file:$logfile"
#fi
END_DMWRAP
if ($qemu) {
printf Q <<'END_DMWRAP', $qemu or die $!;
exec '%s' "${newargs[@]}"
END_DMWRAP
} else {
print Q <<'END_DMWRAP' or die $!;
set -x
for path in /usr/local/lib/xen/bin /usr/lib/xen/bin /usr/local/bin /usr/bin; do
if test -e $path/qemu-system-i386; then
exec $path/qemu-system-i386 "${newargs[@]}"
fi
done
echo >&2 'could not exec qemu'
exit 127
END_DMWRAP
}
close Q or die $!;
chmod 0755, "$dmwrap.new" or die "$dmwrap.new: chmod: $!";
rename "$sidecar.new", $sidecar or die "$sidecar: install: $!";
rename "$dmwrap.new", $dmwrap or die "$dmwrap: install: $!";
print STDERR <<END;
pvshim-converter: wrote qemu wrapper to $dmwrap
pvshim-converter: wrote sidecar to $sidecar
END
my $append = <<END;
builder='hvm'
type='hvm'
device_model_version='qemu-xen'
device_model_override='$dmwrap'
device_model_args_hvm=['-cdrom','$sidecar']
boot='c'
serial='pty'
END
if ($out ne '-') {
open O, ">", "$out.tmp" or die "open output config temp: $out.tmp: $!\n";
} else {
open O, ">&STDOUT" or die $!;
}
print O $indata, "\n", $append or die "write output: $!";
close O or die "close output: $!";
if ($out ne '-') {
rename "$out.tmp", $out or die "install output: $!";
print STDERR "pvshim-converter: wrote new guest config to $out\n";
} else {
print STDERR "pvshim-converter: wrote new guest config to stdout\n";
}