From: "Alex Bennée" <alex.bennee@linaro.org>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, "David Woodhouse" <dwmw@amazon.co.uk>,
"Cleber Rosa" <crosa@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Wainer dos Santos Moschetta" <wainersm@redhat.com>,
"Beraldo Leal" <bleal@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"open list:Overall KVM CPUs" <kvm@vger.kernel.org>
Subject: Re: [RFC PATCH] tests/avocado: Test Xen guest support under KVM
Date: Wed, 29 Mar 2023 21:56:04 +0100 [thread overview]
Message-ID: <87y1nfp98n.fsf@linaro.org> (raw)
In-Reply-To: <20230324160719.1790792-1-alex.bennee@linaro.org>
Alex Bennée <alex.bennee@linaro.org> writes:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Exercise guests with a few different modes for interrupt delivery. In
> particular we want to cover:
>
> • Xen event channel delivery via GSI to the I/O APIC
> • Xen event channel delivery via GSI to the i8259 PIC
> • MSIs routed to PIRQ event channels
> • GSIs routed to PIRQ event channels
>
> As well as some variants of normal non-Xen stuff like MSI to vAPIC and
> PCI INTx going to the I/O APIC and PIC, which ought to still work even
> in Xen mode.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>
> ---
> v2 (ajb)
> - switch to plain QemuSystemTest + LinuxSSHMixIn
> - switch from fedora to custom kernel and buildroot
> - removed some unused code
> TODO:
> - properly probe for host support to skip test
So, any ideas on the best thing to check for here?
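One rough option (a sketch only — the helper name and the exact capability value are my suggestions, not from the patch): ask /dev/kvm directly via the KVM_CHECK_EXTENSION ioctl whether KVM_CAP_XEN_HVM is available, and cancel the test when it isn't:

```python
import fcntl
import os

KVM_CHECK_EXTENSION = 0xAE03  # _IO(KVMIO, 0x03), KVMIO == 0xAE
KVM_CAP_XEN_HVM = 121         # from linux/kvm.h; worth double-checking

def kvm_xen_supported():
    """Return True if the host KVM reports Xen HVM support."""
    try:
        fd = os.open("/dev/kvm", os.O_RDWR)
    except OSError:
        return False  # no KVM at all, or no permission
    try:
        # KVM_CHECK_EXTENSION returns a non-zero flags value when the
        # capability is present, 0 when it is not.
        return fcntl.ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_XEN_HVM) != 0
    except OSError:
        return False
    finally:
        os.close(fd)
```

That could slot into common_vm_setup() next to require_accelerator("kvm"), cancelling via self.cancel() when the capability is missing. It would still not catch the minimum xen-version the host supports, but it beats letting the VM fail to launch.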
> ---
> tests/avocado/kvm_xen_guest.py | 160 +++++++++++++++++++++++++++++++++
> 1 file changed, 160 insertions(+)
> create mode 100644 tests/avocado/kvm_xen_guest.py
>
> diff --git a/tests/avocado/kvm_xen_guest.py b/tests/avocado/kvm_xen_guest.py
> new file mode 100644
> index 0000000000..1b4524d31c
> --- /dev/null
> +++ b/tests/avocado/kvm_xen_guest.py
> @@ -0,0 +1,160 @@
> +# KVM Xen guest functional tests
> +#
> +# Copyright © 2021 Red Hat, Inc.
> +# Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
> +#
> +# Author:
> +# David Woodhouse <dwmw2@infradead.org>
> +# Alex Bennée <alex.bennee@linaro.org>
> +#
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +
> +import os
> +
> +from avocado_qemu import LinuxSSHMixIn
> +from avocado_qemu import QemuSystemTest
> +from avocado_qemu import wait_for_console_pattern
> +
> +class KVMXenGuest(QemuSystemTest, LinuxSSHMixIn):
> + """
> + :avocado: tags=arch:x86_64
> + :avocado: tags=machine:q35
> + :avocado: tags=accel:kvm
> + :avocado: tags=kvm_xen_guest
> + """
> +
> + KERNEL_DEFAULT = 'printk.time=0 root=/dev/xvda console=ttyS0'
> +
> + kernel_path = None
> + kernel_params = None
> +
> + # Fetch assets from the kvm-xen-guest subdir of my shared test
> + # images directory on fileserver.linaro.org where you can find
> +    # build instructions for how they were assembled.
> + def get_asset(self, name, sha1):
> + base_url = ('https://fileserver.linaro.org/s/'
> + 'kE4nCFLdQcoBF9t/download?'
> +                    'path=%2Fkvm-xen-guest&files=')
> + url = base_url + name
> + # use explicit name rather than failing to neatly parse the
> + # URL into a unique one
> +        return self.fetch_asset(name=name, locations=(url,), asset_hash=sha1)
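One gotcha worth flagging in the hunk above: for the `locations` argument, `(url)` without a trailing comma is just a parenthesised string, not a one-element tuple, so anything that iterates over it would see individual characters rather than one URL. A quick illustration (plain Python semantics, nothing avocado-specific):

```python
url = "https://fileserver.linaro.org/download"

not_a_tuple = (url)   # parentheses alone: still a str
one_tuple = (url,)    # the trailing comma is what makes the tuple

assert isinstance(not_a_tuple, str)
assert isinstance(one_tuple, tuple) and len(one_tuple) == 1
```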
> +
> + def common_vm_setup(self):
> +
> + # TODO: we also need to check host kernel version/support
> + self.require_accelerator("kvm")
> +
> + self.vm.set_console()
> +
> + self.vm.add_args("-accel", "kvm,xen-version=0x4000a,kernel-irqchip=split")
> + self.vm.add_args("-smp", "2")
> +
> + self.kernel_path = self.get_asset("bzImage",
> + "367962983d0d32109998a70b45dcee4672d0b045")
> + self.rootfs = self.get_asset("rootfs.ext4",
> + "f1478401ea4b3fa2ea196396be44315bab2bb5e4")
> +
> + def run_and_check(self):
> + self.vm.add_args('-kernel', self.kernel_path,
> + '-append', self.kernel_params,
> + '-drive', f"file={self.rootfs},if=none,id=drv0",
> + '-device', 'xen-disk,drive=drv0,vdev=xvda',
> + '-device', 'virtio-net-pci,netdev=unet',
> + '-netdev', 'user,id=unet,hostfwd=:127.0.0.1:0-:22')
> +
> + self.vm.launch()
> + self.log.info('VM launched, waiting for sshd')
> + console_pattern = 'Starting dropbear sshd: OK'
> + wait_for_console_pattern(self, console_pattern, 'Oops')
> + self.log.info('sshd ready')
> + self.ssh_connect('root', '', False)
> +
> + self.ssh_command('cat /proc/cmdline')
> + self.ssh_command('dmesg | grep -e "Grant table initialized"')
> +
> + def test_kvm_xen_guest(self):
> + """
> + :avocado: tags=kvm_xen_guest
> + """
> +
> + self.common_vm_setup()
> +
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks')
> + self.run_and_check()
> + self.ssh_command('grep xen-pirq.*msi /proc/interrupts')
> +
> + def test_kvm_xen_guest_nomsi(self):
> + """
> + :avocado: tags=kvm_xen_guest_nomsi
> + """
> +
> + self.common_vm_setup()
> +
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks pci=nomsi')
> + self.run_and_check()
> + self.ssh_command('grep xen-pirq.* /proc/interrupts')
> +
> + def test_kvm_xen_guest_noapic_nomsi(self):
> + """
> + :avocado: tags=kvm_xen_guest_noapic_nomsi
> + """
> +
> + self.common_vm_setup()
> +
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks noapic pci=nomsi')
> + self.run_and_check()
> + self.ssh_command('grep xen-pirq /proc/interrupts')
> +
> + def test_kvm_xen_guest_vapic(self):
> + """
> + :avocado: tags=kvm_xen_guest_vapic
> + """
> +
> + self.common_vm_setup()
> + self.vm.add_args('-cpu', 'host,+xen-vapic')
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks')
> + self.run_and_check()
> + self.ssh_command('grep xen-pirq /proc/interrupts')
> + self.ssh_command('grep PCI-MSI /proc/interrupts')
> +
> + def test_kvm_xen_guest_novector(self):
> + """
> + :avocado: tags=kvm_xen_guest_novector
> + """
> +
> + self.common_vm_setup()
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks' +
> + ' xen_no_vector_callback')
> + self.run_and_check()
> + self.ssh_command('grep xen-platform-pci /proc/interrupts')
> +
> + def test_kvm_xen_guest_novector_nomsi(self):
> + """
> + :avocado: tags=kvm_xen_guest_novector_nomsi
> + """
> +
> + self.common_vm_setup()
> +
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks pci=nomsi' +
> + ' xen_no_vector_callback')
> + self.run_and_check()
> + self.ssh_command('grep xen-platform-pci /proc/interrupts')
> +
> + def test_kvm_xen_guest_novector_noapic(self):
> + """
> + :avocado: tags=kvm_xen_guest_novector_noapic
> + """
> +
> + self.common_vm_setup()
> + self.kernel_params = (self.KERNEL_DEFAULT +
> + ' xen_emul_unplug=ide-disks' +
> + ' xen_no_vector_callback noapic')
> + self.run_and_check()
> + self.ssh_command('grep xen-platform-pci /proc/interrupts')
--
Alex Bennée
Virtualisation Tech Lead @ Linaro