* [PATCH] KVM test: Add PCI device assignment support
@ 2009-12-27 23:55 Lucas Meneghel Rodrigues
2010-01-12 10:17 ` Yolkfull Chow
0 siblings, 1 reply; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-12-27 23:55 UTC (permalink / raw)
To: autotest; +Cc: kvm
Add support for PCI device assignment to the KVM test. It supports
both SR-IOV virtual functions and physical NIC card device
assignment.
Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
be shared amongst multiple virtual machines while retaining the
performance benefit of assigning a PCI device to a virtual machine.
A common example is where a single SR-IOV capable NIC - with perhaps
only a single physical network port - might be shared with multiple
virtual machines by assigning a virtual function to each VM.
SR-IOV support is implemented in the kernel. The core implementation
is contained in the PCI subsystem, but there must also be driver support
for both the Physical Function (PF) and Virtual Function (VF) devices.
With an SR-IOV capable device one can allocate VFs from a PF. The VFs
surface as PCI devices which are backed on the physical PCI device by
resources (queues, and register sets).
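As an illustration of how VFs surface as PCI devices, they can be spotted in
`lspci` output by their "Virtual Function" description. A minimal,
self-contained sketch (the sample output and slot numbers are fabricated for
illustration):

```python
def find_virtual_functions(lspci_output):
    """Return the PCI slot of every device whose lspci description
    marks it as an SR-IOV Virtual Function."""
    vfs = []
    for line in lspci_output.splitlines():
        if "Virtual Function" in line:
            # lspci lines start with the slot, e.g. "0b:10.0 Ethernet ..."
            vfs.append(line.split()[0])
    return vfs

# Fabricated sample lspci output:
sample = """\
0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection
0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function
0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function
"""

print(find_virtual_functions(sample))  # ['0b:10.0', '0b:10.2']
```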
Device support:
In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
SR-IOV capable device supported. The igb driver has PF support and the
igbvf has VF support.
In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
the same vxge driver for the PF as well as the VFs.
In order to configure the test:
* For SR-IOV virtual function assignment, specify the
module parameter 'max_vfs' in the config file.
* For physical NIC card assignment, specify the device
name(s).
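For illustration, once devices are requested their IDs end up on the qemu
command line; a sketch of that flag-building step (the helper name is ours,
not part of the patch):

```python
def pci_assign_flags(pci_ids):
    """Build the qemu -pcidevice arguments for a list of host PCI IDs,
    mirroring the flags make_qemu_command() appends in this patch."""
    return "".join(" -pcidevice host=%s" % pci_id for pci_id in pci_ids)

print(pci_assign_flags(["00:19.0", "0b:10.0"]))
# " -pcidevice host=00:19.0 -pcidevice host=0b:10.0"
```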
4th try: Implemented Yolkfull's suggestion of keeping 'max_vfs' and
'assignable_devices' as separate parameters. Yolkfull, please test this
in your environment. Thank you!
* Naming is consistent with "PCI assignment" instead of
"PCI passthrough", as it's a more correct term.
* No more device database file, as all information about devices
is stored in an attribute of the VM class (an instance of the
PciAssignable class), so we don't have to bother dumping this
info to a file.
* Code simplified to avoid duplication
As it's a fairly involved feature, the more reviews we get the better.
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_utils.py | 281 ++++++++++++++++++++++++++++++++
client/tests/kvm/kvm_vm.py | 59 +++++++
client/tests/kvm/tests_base.cfg.sample | 20 +++
3 files changed, 360 insertions(+), 0 deletions(-)
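For reviewers, the sysfs write sequence that request_devs() performs below
can be summarized as a pure helper (a sketch of the patch's logic with
example IDs; not importable from the patch itself):

```python
import os

def stub_bind_writes(full_id, vendor_id, base_dir="/sys/bus/pci"):
    """Return the (content, file) pairs written to bind one device to
    pci-stub: register the vendor/device ID with the stub, unbind the
    current driver, then bind to the stub."""
    stub_path = os.path.join(base_dir, "drivers/pci-stub")
    drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
    return [(vendor_id, os.path.join(stub_path, "new_id")),
            (full_id, os.path.join(drv_path, "unbind")),
            (full_id, os.path.join(stub_path, "bind"))]

# Example device/vendor IDs for illustration:
for content, path in stub_bind_writes("0000:00:19.0", "8086 10c9"):
    print("echo '%s' > %s" % (content, path))
```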
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 2bbbe22..59c72a9 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -924,3 +924,284 @@ def create_report(report_dir, results_dir):
reporter = os.path.join(report_dir, 'html_report.py')
html_file = os.path.join(results_dir, 'results.html')
os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def get_full_pci_id(pci_id):
+ """
+ Get full PCI ID of pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
+ status, full_id = commands.getstatusoutput(cmd)
+ if status != 0:
+ return None
+ return full_id
+
+
+def get_vendor_from_pci_id(pci_id):
+ """
+ Check out the device vendor ID according to pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
+ return re.sub(":", " ", commands.getoutput(cmd))
+
+
+class PciAssignable(object):
+ """
+ Request PCI assignable devices on host. It will check whether to request
+ PFs (Physical Functions) or VFs (Virtual Functions).
+ """
+ def __init__(self, type="nic_vf", driver=None, driver_option=None,
+ names=None, devices_requested=None):
+ """
+ Initialize parameter 'type' which could be:
+ nic_vf: Virtual Functions
+ nic_pf: Physical Function (actual hardware)
+ mixed: Both includes VFs and PFs
+
+ When assigning physical NIC cards, we need to specify which devices
+ are to be assigned, e.g. 'eth1 eth2'.
+
+ When assigning Virtual Functions, we need to specify how many VFs
+ are going to be assigned, e.g. devices_requested = 8 and max_vfs in
+ the config file.
+
+ @param type: PCI device type.
+ @param driver: Kernel module for the PCI assignable device.
+ @param driver_option: Module option to specify the maximum number of
+ VFs (eg 'max_vfs=7')
+ @param names: Physical NIC cards correspondent network interfaces,
+ e.g.'eth1 eth2 ...'
+ """
+ self.type = type
+ self.driver = driver
+ self.driver_option = driver_option
+ if names:
+ self.name_list = names.split()
+ if devices_requested:
+ self.devices_requested = int(devices_requested)
+
+
+ def _get_pf_pci_id(self, name, search_str):
+ """
+ Get the PF PCI ID according to name.
+
+ @param name: Name of the PCI device.
+ @param search_str: Search string to be used on lspci.
+ """
+ cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
+ s, pci_id = commands.getstatusoutput(cmd)
+ if not (s or "Cannot get driver information" in pci_id):
+ return pci_id[5:]
+ cmd = "lspci | awk '/%s/ {print $1}'" % search_str
+ pci_ids = commands.getoutput(cmd).splitlines()
+ nic_id = int(re.search('[0-9]+', name).group(0))
+ if (len(pci_ids) - 1) < nic_id:
+ return None
+ return pci_ids[nic_id]
+
+
+ def _release_dev(self, pci_id):
+ """
+ Release a single PCI device.
+
+ @param pci_id: PCI ID of a given PCI device.
+ """
+ base_dir = "/sys/bus/pci"
+ full_id = get_full_pci_id(pci_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+ if 'pci-stub' in os.readlink(drv_path):
+ cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
+ if os.system(cmd):
+ return False
+
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
+ if os.system(cmd):
+ return False
+
+ driver = self.dev_drivers[pci_id]
+ cmd = "echo '%s' > %s/bind" % (full_id, driver)
+ if os.system(cmd):
+ return False
+
+ return True
+
+
+ def get_vf_devs(self):
+ """
+ Catch all VFs PCI IDs.
+
+ @return: List with all PCI IDs for the Virtual Functions available
+ """
+ if not self.sr_iov_setup():
+ return []
+
+ cmd = "lspci | awk '/Virtual Function/ {print $1}'"
+ return commands.getoutput(cmd).split()
+
+
+ def get_pf_devs(self):
+ """
+ Catch all PFs PCI IDs.
+
+ @return: List with all PCI IDs for the physical hardware requested
+ """
+ pf_ids = []
+ for name in self.name_list:
+ pf_id = self._get_pf_pci_id(name, "Ethernet")
+ if not pf_id:
+ continue
+ pf_ids.append(pf_id)
+ return pf_ids
+
+
+ def get_devs(self, count):
+ """
+ Check out all devices' PCI IDs according to their name.
+
+ @param count: count number of PCI devices needed for pass through
+ @return: a list of all devices' PCI IDs
+ """
+ vf_ids = []
+ if self.type == "nic_vf":
+ vf_ids = self.get_vf_devs()
+ elif self.type == "nic_pf":
+ vf_ids = self.get_pf_devs()
+ elif self.type == "mixed":
+ vf_ids = self.get_vf_devs()
+ vf_ids.extend(self.get_pf_devs())
+ return vf_ids[0:count]
+
+
+ def get_vfs_count(self):
+ """
+ Get VFs count number according to lspci.
+ """
+ cmd = "lspci | grep 'Virtual Function' | wc -l"
+ # For each VF we'll see 2 occurrences of 'Virtual Function', so
+ # divide the result by 2
+ return int(commands.getoutput(cmd)) / 2
+
+
+ def check_vfs_count(self):
+ """
+ Check whether the VF count matches the parameter devices_requested.
+ """
+ return (self.get_vfs_count() == self.devices_requested)
+
+
+ def is_binded_to_stub(self, full_id):
+ """
+ Verify whether the device with full_id is already bound to pci-stub.
+
+ @param full_id: Full ID for the given PCI device
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ if os.path.exists(os.path.join(stub_path, full_id)):
+ return True
+ return False
+
+
+ def sr_iov_setup(self):
+ """
+ Ensure the PCI device is working in SR-IOV mode.
+
+ Check if the PCI hardware device driver is loaded with the appropriate
+ parameters (number of VFs), and if it's not, perform setup.
+
+ @return: True if the setup was completed successfully, False otherwise.
+ """
+ re_probe = False
+ s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
+ if s:
+ re_probe = True
+ elif not self.check_vfs_count():
+ os.system("modprobe -r %s" % self.driver)
+ re_probe = True
+
+ # Re-probe driver with proper number of VFs
+ if re_probe:
+ cmd = "modprobe %s %s" % (self.driver, self.driver_option)
+ s, o = commands.getstatusoutput(cmd)
+ if s:
+ return False
+ if not self.check_vfs_count():
+ return False
+ return True
+
+
+ def request_devs(self):
+ """
+ Implement setup process: unbind the PCI device and then bind it
+ to the pci-stub driver.
+
+ @return: a list of successfully requested devices' PCI IDs.
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+
+ self.pci_ids = self.get_devs(self.devices_requested)
+ logging.debug("The following pci_ids were found: %s" % self.pci_ids)
+ requested_pci_ids = []
+ self.dev_drivers = {}
+
+ # Setup all devices specified for assignment to guest
+ for pci_id in self.pci_ids:
+ full_id = get_full_pci_id(pci_id)
+ if not full_id:
+ continue
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+ dev_prev_driver = os.path.realpath(drv_path)
+ self.dev_drivers[pci_id] = dev_prev_driver
+
+ # Check whether the device driver has already been bound to the stub
+ if not self.is_binded_to_stub(full_id):
+ logging.debug("Binding device %s to stub" % full_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ stub_new_id = os.path.join(stub_path, 'new_id')
+ unbind_dev = os.path.join(drv_path, 'unbind')
+ stub_bind = os.path.join(stub_path, 'bind')
+
+ info_write_to_files = [(vendor_id, stub_new_id),
+ (full_id, unbind_dev),
+ (full_id, stub_bind)]
+
+ for content, file in info_write_to_files:
+ try:
+ utils.open_write_close(content, file)
+ except IOError:
+ logging.debug("Failed to write %s to file %s" %
+ (content, file))
+ continue
+
+ if not self.is_binded_to_stub(full_id):
+ logging.error("Binding device %s to stub failed" %
+ pci_id)
+ continue
+ else:
+ logging.debug("Device %s already bound to stub" % pci_id)
+ requested_pci_ids.append(pci_id)
+ self.pci_ids = requested_pci_ids
+ return self.pci_ids
+
+
+ def release_devs(self):
+ """
+ Release all PCI devices currently assigned to VMs back to the
+ virtualization host.
+ """
+ try:
+ for pci_id in self.dev_drivers:
+ if not self._release_dev(pci_id):
+ logging.error("Failed to release device %s to host" %
+ pci_id)
+ else:
+ logging.info("Released device %s successfully" % pci_id)
+ except Exception:
+ return
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index cc314d4..a86c124 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -304,6 +304,12 @@ class VM:
elif params.get("uuid"):
qemu_cmd += " -uuid %s" % params.get("uuid")
+ # If the PCI assignment step went OK, add each one of the PCI assigned
+ # devices to the qemu command line.
+ if self.pci_assignable:
+ for pci_id in self.pa_pci_ids:
+ qemu_cmd += " -pcidevice host=%s" % pci_id
+
return qemu_cmd
@@ -392,6 +398,50 @@ class VM:
self.uuid = f.read().strip()
f.close()
+ if not params.get("pci_assignable") == "no":
+ pa_type = params.get("pci_assignable")
+ pa_devices_requested = params.get("devices_requested")
+
+ # Virtual Functions (VF) assignable devices
+ if pa_type == "vf":
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ devices_requested=pa_devices_requested)
+ # Physical NIC (PF) assignable devices
+ elif pa_type == "pf":
+ pa_device_names = params.get("device_names")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+ # Working with both VF and PF
+ elif pa_type == "mixed":
+ pa_device_names = params.get("device_names")
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+
+ self.pa_pci_ids = self.pci_assignable.request_devs()
+
+ if self.pa_pci_ids:
+ logging.debug("Successfully assigned devices: %s" %
+ self.pa_pci_ids)
+ else:
+ logging.error("No PCI assignable devices were assigned "
+ "and 'pci_assignable' is set to '%s' "
+ "in the config file. Aborting VM creation." %
+ pa_type)
+ return False
+
+ else:
+ self.pci_assignable = None
+
# Make qemu command
qemu_command = self.make_qemu_command()
@@ -537,6 +587,8 @@ class VM:
# Is it already dead?
if self.is_dead():
logging.debug("VM is already down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.debug("Destroying VM with PID %d..." %
@@ -557,6 +609,9 @@ class VM:
return
finally:
session.close()
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
+
# Try to destroy with a monitor command
logging.debug("Trying to kill VM with monitor command...")
@@ -566,6 +621,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
# If the VM isn't dead yet...
@@ -575,6 +632,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.error("Process %s is a zombie!" % self.process.get_pid())
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index a403399..b7ee2e1 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -884,3 +884,23 @@ variants:
pre_command = "/usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage"
extra_params += " -mem-path /mnt/kvm_hugepage"
+
+variants:
+ - @no_pci_assignable:
+ pci_assignable = no
+ - pf_assignable:
+ pci_assignable = pf
+ device_names = eth1
+ - vf_assignable:
+ pci_assignable = vf
+ # Driver (kernel module) that supports SR-IOV hardware.
+ # As of today (30-11-2009), we have 2 drivers for this type of hardware:
+ # Intel® 82576 Gigabit Ethernet Controller - igb
+ # Neterion® X3100™ - vxge
+ driver = igb
+ # Driver option to specify the maximum number of virtual functions
+ # (on vxge, for example, the option is max_config_dev);
+ # the default below is for the igb driver
+ driver_option = "max_vfs=7"
+ # Number of devices that are going to be requested.
+ devices_requested = 7
--
1.6.5.2
_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
* Re: [PATCH] KVM test: Add PCI device assignment support
2009-12-27 23:55 [PATCH] KVM test: Add PCI device assignment support Lucas Meneghel Rodrigues
@ 2010-01-12 10:17 ` Yolkfull Chow
0 siblings, 0 replies; 7+ messages in thread
From: Yolkfull Chow @ 2010-01-12 10:17 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: autotest, kvm
On Sun, Dec 27, 2009 at 09:55:56PM -0200, Lucas Meneghel Rodrigues wrote:
> Add support to PCI device assignment on the kvm test. It supports
> both SR-IOV virtual functions and physical NIC card device
> assignment.
>
> Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
> be shared amongst multiple virtual machines while retaining the
> performance benefit of assigning a PCI device to a virtual machine.
> A common example is where a single SR-IOV capable NIC - with perhaps
> only a single physical network port - might be shared with multiple
> virtual machines by assigning a virtual function to each VM.
>
> SR-IOV support is implemented in the kernel. The core implementation
> is contained in the PCI subsystem, but there must also be driver support
> for both the Physical Function (PF) and Virtual Function (VF) devices.
> With an SR-IOV capable device one can allocate VFs from a PF. The VFs
> surface as PCI devices which are backed on the physical PCI device by
> resources (queues, and register sets).
>
> Device support:
>
> In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
> SR-IOV capable device supported. The igb driver has PF support and the
> igbvf has VF support.
>
> In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
> the same vxge driver for the PF as well as the VFs.
>
> In order to configure the test:
>
> * For SR-IOV virtual functions passthrough, we could specify the
> module parameter 'max_vfs' in config file.
> * For physical NIC card pass through, we should specify the device
> name(s).
>
> 4th try: Implemented Yolkfull's suggestion of keeping 'max_vfs' and
> 'assignable_devices' as sepparated parameters. Yolkfull, please test this
> on your environment. Thank you!
Hi Lucas,
Sorry for the late reply. I just tested this patch and found some problems,
please see comments below:
[...]
> + @param type: PCI device type.
> + @param driver: Kernel module for the PCI assignable device.
> + @param driver_option: Module option to specify the maximum number of
> + VFs (eg 'max_vfs=7')
> + @param names: Physical NIC cards correspondent network interfaces,
> + e.g.'eth1 eth2 ...'
Add parameter interpretation for 'devices_requested'.
> + """
> + self.type = type
> + self.driver = driver
> + self.driver_option = driver_option
> + if names:
> + self.name_list = names.split()
> + if devices_requested:
> + self.devices_requested = int(devices_requested)
We need to initialize 'self.devices_requested' unconditionally, since the
following code uses this attribute:
...
def check_vfs_count(self):
> + """
> + Check VFs count number according to the parameter driver_options.
> + """
> + return (self.get_vfs_count == self.devices_requested)
...
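The suggestion amounts to giving every attribute a safe default in
__init__(); a minimal sketch (simplified from the real PciAssignable class):

```python
class PciAssignableSketch(object):
    """Defensive-initialization sketch; not the full PciAssignable."""
    def __init__(self, type="nic_vf", driver=None, driver_option=None,
                 names=None, devices_requested=None):
        self.type = type
        self.driver = driver
        self.driver_option = driver_option
        # Always initialize, so methods such as check_vfs_count() can
        # rely on the attributes existing
        self.name_list = names.split() if names else []
        self.devices_requested = (int(devices_requested)
                                  if devices_requested else 0)

pa = PciAssignableSketch()
print(pa.name_list, pa.devices_requested)  # [] 0
```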
[...]
> + else:
> + self.pci_assignable = None
Oddly, even though we initialize 'self.pci_assignable' when
params.get("pci_assignable") == "no", we still get this traceback:
03:27:38 ERROR| child process failed
03:27:38 INFO | FAIL kvm.Fedora.11.64.boot kvm.Fedora.11.64.boot timestamp=1263284858 localtime=Jan 12 03:27:38 Unhandled AttributeError: VM instance has no attribute 'pci_assignable'
Traceback (most recent call last):
File "/root/pci_assign/client/common_lib/test.py", line 595, in _call_test_function
return func(*args, **dargs)
File "/root/pci_assign/client/common_lib/test.py", line 281, in execute
postprocess_profiled_run, args, dargs)
File "/root/pci_assign/client/common_lib/test.py", line 202, in _call_run_once
self.run_once(*args, **dargs)
File "/root/pci_assign/client/tests/kvm/kvm.py", line 69, in run_once
kvm_preprocessing.postprocess(self, params, env)
File "/root/pci_assign/client/tests/kvm/kvm_preprocessing.py", line 271, in postprocess
process(test, params, env, postprocess_image, postprocess_vm)
File "/root/pci_assign/client/tests/kvm/kvm_preprocessing.py", line 178, in process
vm_func(test, vm_params, env, vm_name)
File "/root/pci_assign/client/tests/kvm/kvm_preprocessing.py", line 126, in postprocess_vm
vm.destroy(gracefully = params.get("kill_vm_gracefully") == "yes")
File "/root/pci_assign/client/tests/kvm/kvm_vm.py", line 590, in destroy
if self.pci_assignable:
AttributeError: VM instance has no attribute 'pci_assignable'
03:27:38 INFO | END FAIL kvm.Fedora.11.64.boot kvm.Fedora.11.64.boot timestamp=1263284858 localtime=Jan 12 03:27:38
So I added 'self.pci_assignable = None' in __init__() and fixed the problem.
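The failure mode and the fix can be reproduced in isolation (the VM classes
below are stubs for illustration, not the real class):

```python
class BrokenVM(object):
    """Stub reproducing the bug: 'pci_assignable' only set on some paths."""
    def destroy(self):
        if self.pci_assignable:  # AttributeError if create() never set it
            self.pci_assignable.release_devs()

class FixedVM(object):
    """Stub with the fix: give the attribute a default in __init__()."""
    def __init__(self):
        self.pci_assignable = None
    def destroy(self):
        if self.pci_assignable:
            self.pci_assignable.release_devs()

try:
    BrokenVM().destroy()
except AttributeError as e:
    print("without default:", e)

FixedVM().destroy()
print("with default: destroy() is safe")
```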
> +
> # Make qemu command
> qemu_command = self.make_qemu_command()
>
> @@ -537,6 +587,8 @@ class VM:
> # Is it already dead?
> if self.is_dead():
> logging.debug("VM is already down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> logging.debug("Destroying VM with PID %d..." %
> @@ -557,6 +609,9 @@ class VM:
> return
> finally:
> session.close()
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> +
>
> # Try to destroy with a monitor command
> logging.debug("Trying to kill VM with monitor command...")
> @@ -566,6 +621,8 @@ class VM:
> # Wait for the VM to be really dead
> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
> logging.debug("VM is down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> # If the VM isn't dead yet...
> @@ -575,6 +632,8 @@ class VM:
> # Wait for the VM to be really dead
> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
> logging.debug("VM is down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> logging.error("Process %s is a zombie!" % self.process.get_pid())
> diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
> index a403399..b7ee2e1 100644
> --- a/client/tests/kvm/tests_base.cfg.sample
> +++ b/client/tests/kvm/tests_base.cfg.sample
> @@ -884,3 +884,23 @@ variants:
> pre_command = "/usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage"
> extra_params += " -mem-path /mnt/kvm_hugepage"
>
> +
> +variants:
> + - @no_pci_assignable:
> + pci_assignable = no
> + - pf_assignable:
> + pci_assignable = pf
> + device_names = eth1
> + - vf_assignable:
> + pci_assignable = vf
> + # Driver (kernel module) that supports SR-IOV hardware.
> + # As of today (30-11-2009), we have 2 drivers for this type of hardware:
> + # Intel® 82576 Gigabit Ethernet Controller - igb
> + # Neterion® X3100™ - vxge
> + driver = igb
> + # Driver option to specify the maximum number of virtual functions
> + # (on vxge, for example, the option is max_config_dev)
> + # the default below is for the igb driver
> + driver_option = "max_vfs=7"
> + # Number of devices that are going to be requested.
> + devices_requested = 7
> --
> 1.6.5.2
After fixing the two problems above, the patch runs well:
# ./scan_results.py
Test Status Seconds Info
---- ------ ------- ----
(Result file: ../../results/default/status)
Fedora.11.64.boot GOOD 51 completed successfully
pf_assignable.Fedora.11.64.boot GOOD 19 completed successfully
vf_assignable.Fedora.11.64.boot GOOD 14 completed successfully
---- GOOD 117
Thanks very much for improving this patch. :-)
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH] KVM test: Add PCI device assignment support
@ 2010-01-12 18:28 Lucas Meneghel Rodrigues
2010-01-13 2:19 ` Yolkfull Chow
0 siblings, 1 reply; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2010-01-12 18:28 UTC (permalink / raw)
To: autotest; +Cc: kvm, mgoldish, Lucas Meneghel Rodrigues, Yolkfull Chow
Add support for PCI device assignment to the kvm test. It supports
both SR-IOV virtual functions and physical NIC card device
assignment.
Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
be shared amongst multiple virtual machines while retaining the
performance benefit of assigning a PCI device to a virtual machine.
A common example is where a single SR-IOV capable NIC - with perhaps
only a single physical network port - might be shared with multiple
virtual machines by assigning a virtual function to each VM.
SR-IOV support is implemented in the kernel. The core implementation
is contained in the PCI subsystem, but there must also be driver support
for both the Physical Function (PF) and Virtual Function (VF) devices.
With an SR-IOV capable device one can allocate VFs from a PF. The VFs
surface as PCI devices which are backed on the physical PCI device by
resources (queues, and register sets).
Device support:
In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
SR-IOV capable device supported. The igb driver has PF support and the
igbvf has VF support.
In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
the same vxge driver for the PF as well as the VFs.
In order to configure the test:
* For SR-IOV virtual function passthrough, specify the module
parameter 'max_vfs' in the config file.
* For physical NIC card passthrough, specify the device
name(s).
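The VF side of the test discovers devices by grepping lspci; the counting logic can be sketched in pure Python (the sample lspci output below is invented for illustration — on the targeted hardware each VF shows up twice, which is why the patch divides by 2):

```python
def count_vfs(lspci_output):
    """Count SR-IOV Virtual Functions in `lspci` output.

    Mirrors get_vfs_count() from the patch: each VF appears twice in
    the output, so the raw match count is divided by 2.
    """
    hits = sum("Virtual Function" in line
               for line in lspci_output.splitlines())
    return hits // 2


# Invented sample output: 2 VFs, each printed twice
SAMPLE = """\
0b:10.0 Ethernet controller: Intel 82576 Virtual Function (rev 01)
0b:10.0 Ethernet controller: Intel 82576 Virtual Function (rev 01)
0b:10.1 Ethernet controller: Intel 82576 Virtual Function (rev 01)
0b:10.1 Ethernet controller: Intel 82576 Virtual Function (rev 01)
"""

print(count_vfs(SAMPLE))  # -> 2
```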
Signed-off-by: Yolkfull Chow <yzhou@redhat.com>
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_utils.py | 284 ++++++++++++++++++++++++++++++++
client/tests/kvm/kvm_vm.py | 60 +++++++
client/tests/kvm/tests_base.cfg.sample | 22 +++-
3 files changed, 365 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 2bbbe22..a2d9607 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -924,3 +924,287 @@ def create_report(report_dir, results_dir):
reporter = os.path.join(report_dir, 'html_report.py')
html_file = os.path.join(results_dir, 'results.html')
os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def get_full_pci_id(pci_id):
+ """
+ Get full PCI ID of pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
+ status, full_id = commands.getstatusoutput(cmd)
+ if status != 0:
+ return None
+ return full_id
+
+
+def get_vendor_from_pci_id(pci_id):
+ """
+ Check out the device vendor ID according to pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
+ return re.sub(":", " ", commands.getoutput(cmd))
+
+
+class PciAssignable(object):
+ """
+ Request PCI assignable devices on the host. It will check whether to
+ request PFs (Physical Functions) or VFs (Virtual Functions).
+ """
+ def __init__(self, type="nic_vf", driver=None, driver_option=None,
+ names=None, devices_requested=None):
+ """
+ Initialize parameter 'type' which could be:
+ nic_vf: Virtual Functions
+ nic_pf: Physical Function (actual hardware)
+ mixed: Both VFs and PFs
+
+ If passing through physical NIC cards, we need to specify which
+ devices are to be assigned, e.g. 'eth1 eth2'.
+
+ If passing through Virtual Functions, we need to specify how many VFs
+ are going to be assigned, e.g. devices_requested = 8 and max_vfs in
+ the config file.
+
+ @param type: PCI device type.
+ @param driver: Kernel module for the PCI assignable device.
+ @param driver_option: Module option to specify the maximum number of
+ VFs (eg 'max_vfs=7')
+ @param names: Network interfaces corresponding to the physical NIC
+ cards, e.g. 'eth1 eth2 ...'
+ @param devices_requested: Number of devices being requested.
+ """
+ self.type = type
+ self.driver = driver
+ self.driver_option = driver_option
+ if names:
+ self.name_list = names.split()
+ if devices_requested:
+ self.devices_requested = int(devices_requested)
+ else:
+ self.devices_requested = None
+
+
+ def _get_pf_pci_id(self, name, search_str):
+ """
+ Get the PF PCI ID according to name.
+
+ @param name: Name of the PCI device.
+ @param search_str: Search string to be used on lspci.
+ """
+ cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
+ s, pci_id = commands.getstatusoutput(cmd)
+ if not (s or "Cannot get driver information" in pci_id):
+ return pci_id[5:]
+ cmd = "lspci | awk '/%s/ {print $1}'" % search_str
+ pci_ids = [id for id in commands.getoutput(cmd).splitlines()]
+ nic_id = int(re.search('[0-9]+', name).group(0))
+ if (len(pci_ids) - 1) < nic_id:
+ return None
+ return pci_ids[nic_id]
+
+
+ def _release_dev(self, pci_id):
+ """
+ Release a single PCI device.
+
+ @param pci_id: PCI ID of a given PCI device.
+ """
+ base_dir = "/sys/bus/pci"
+ full_id = get_full_pci_id(pci_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+ if 'pci-stub' in os.readlink(drv_path):
+ cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
+ if os.system(cmd):
+ return False
+
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
+ if os.system(cmd):
+ return False
+
+ driver = self.dev_drivers[pci_id]
+ cmd = "echo '%s' > %s/bind" % (full_id, driver)
+ if os.system(cmd):
+ return False
+
+ return True
+
+
+ def get_vf_devs(self):
+ """
+ Get the PCI IDs of all VFs.
+
+ @return: List with all PCI IDs for the Virtual Functions available
+ """
+ if not self.sr_iov_setup():
+ return []
+
+ cmd = "lspci | awk '/Virtual Function/ {print $1}'"
+ return commands.getoutput(cmd).split()
+
+
+ def get_pf_devs(self):
+ """
+ Get the PCI IDs of all requested PFs.
+
+ @return: List with all PCI IDs for the physical hardware requested
+ """
+ pf_ids = []
+ for name in self.name_list:
+ pf_id = self._get_pf_pci_id(name, "Ethernet")
+ if not pf_id:
+ continue
+ pf_ids.append(pf_id)
+ return pf_ids
+
+
+ def get_devs(self, count):
+ """
+ Check out all devices' PCI IDs according to their name.
+
+ @param count: count number of PCI devices needed for pass through
+ @return: a list of all devices' PCI IDs
+ """
+ if self.type == "nic_vf":
+ vf_ids = self.get_vf_devs()
+ elif self.type == "nic_pf":
+ vf_ids = self.get_pf_devs()
+ elif self.type == "mixed":
+ vf_ids = self.get_vf_devs()
+ vf_ids.extend(self.get_pf_devs())
+ return vf_ids[0:count]
+
+
+ def get_vfs_count(self):
+ """
+ Get the number of VFs according to lspci.
+ """
+ cmd = "lspci | grep 'Virtual Function' | wc -l"
+ # For each VF we'll see 2 prints of 'Virtual Function', so let's
+ # divide the result by 2
+ return int(commands.getoutput(cmd)) / 2
+
+
+ def check_vfs_count(self):
+ """
+ Check whether the VF count matches the parameter devices_requested.
+ """
+ return (self.get_vfs_count() == self.devices_requested)
+
+
+ def is_binded_to_stub(self, full_id):
+ """
+ Verify whether the device with full_id is already bound to pci-stub.
+
+ @param full_id: Full ID for the given PCI device
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ if os.path.exists(os.path.join(stub_path, full_id)):
+ return True
+ return False
+
+
+ def sr_iov_setup(self):
+ """
+ Ensure the PCI device is working in sr_iov mode.
+
+ Check if the PCI hardware device driver is loaded with the appropriate
+ parameters (number of VFs), and if it's not, perform setup.
+
+ @return: True, if the setup was completed successfully, False otherwise.
+ """
+ re_probe = False
+ s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
+ if s:
+ re_probe = True
+ elif not self.check_vfs_count():
+ os.system("modprobe -r %s" % self.driver)
+ re_probe = True
+
+ # Re-probe driver with proper number of VFs
+ if re_probe:
+ cmd = "modprobe %s %s" % (self.driver, self.driver_option)
+ s, o = commands.getstatusoutput(cmd)
+ if s:
+ return False
+ if not self.check_vfs_count():
+ return False
+ return True
+
+
+ def request_devs(self):
+ """
+ Implement setup process: unbind the PCI device and then bind it
+ to the pci-stub driver.
+
+ @return: a list of successfully requested devices' PCI IDs.
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+
+ self.pci_ids = self.get_devs(self.devices_requested)
+ logging.debug("The following pci_ids were found: %s" % self.pci_ids)
+ requested_pci_ids = []
+ self.dev_drivers = {}
+
+ # Setup all devices specified for assignment to guest
+ for pci_id in self.pci_ids:
+ full_id = get_full_pci_id(pci_id)
+ if not full_id:
+ continue
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+ dev_prev_driver = os.path.realpath(os.path.join(drv_path,
+ os.readlink(drv_path)))
+ self.dev_drivers[pci_id] = dev_prev_driver
+
+ # Check whether the device driver is already bound to the stub
+ if not self.is_binded_to_stub(full_id):
+ logging.debug("Binding device %s to stub" % full_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ stub_new_id = os.path.join(stub_path, 'new_id')
+ unbind_dev = os.path.join(drv_path, 'unbind')
+ stub_bind = os.path.join(stub_path, 'bind')
+
+ info_write_to_files = [(vendor_id, stub_new_id),
+ (full_id, unbind_dev),
+ (full_id, stub_bind)]
+
+ for content, file in info_write_to_files:
+ try:
+ utils.open_write_close(content, file)
+ except IOError:
+ logging.debug("Failed to write %s to file %s" %
+ (content, file))
+ continue
+
+ if not self.is_binded_to_stub(full_id):
+ logging.error("Binding device %s to stub failed" %
+ pci_id)
+ continue
+ else:
+ logging.debug("Device %s already bound to stub" % pci_id)
+ requested_pci_ids.append(pci_id)
+ self.pci_ids = requested_pci_ids
+ return self.pci_ids
+
+
+ def release_devs(self):
+ """
+ Release all PCI devices currently assigned to VMs back to the
+ virtualization host.
+ """
+ try:
+ for pci_id in self.dev_drivers:
+ if not self._release_dev(pci_id):
+ logging.error("Failed to release device %s to host" %
+ pci_id)
+ else:
+ logging.info("Released device %s successfully" % pci_id)
+ except:
+ return
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 7229b79..78a16ac 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -119,6 +119,7 @@ class VM:
self.params = params
self.root_dir = root_dir
self.address_cache = address_cache
+ self.pci_assignable = None
# Find available monitor filename
while True:
@@ -304,6 +305,12 @@ class VM:
elif params.get("uuid"):
qemu_cmd += " -uuid %s" % params.get("uuid")
+ # If the PCI assignment step went OK, add each one of the PCI assigned
+ # devices to the qemu command line.
+ if self.pci_assignable:
+ for pci_id in self.pa_pci_ids:
+ qemu_cmd += " -pcidevice host=%s" % pci_id
+
return qemu_cmd
@@ -392,6 +399,50 @@ class VM:
self.uuid = f.read().strip()
f.close()
+ if not params.get("pci_assignable") == "no":
+ pa_type = params.get("pci_assignable")
+ pa_devices_requested = params.get("devices_requested")
+
+ # Virtual Functions (VF) assignable devices
+ if pa_type == "vf":
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ devices_requested=pa_devices_requested)
+ # Physical NIC (PF) assignable devices
+ elif pa_type == "pf":
+ pa_device_names = params.get("device_names")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+ # Working with both VF and PF
+ elif pa_type == "mixed":
+ pa_device_names = params.get("device_names")
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+
+ self.pa_pci_ids = self.pci_assignable.request_devs()
+
+ if self.pa_pci_ids:
+ logging.debug("Successfully assigned devices: %s" %
+ self.pa_pci_ids)
+ else:
+ logging.error("No PCI assignable devices were assigned "
+ "and 'pci_assignable' is set to %s "
+ "in your config file. Aborting VM creation." %
+ pa_type)
+ return False
+
+ else:
+ self.pci_assignable = None
+
# Make qemu command
qemu_command = self.make_qemu_command()
@@ -537,6 +588,8 @@ class VM:
# Is it already dead?
if self.is_dead():
logging.debug("VM is already down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.debug("Destroying VM with PID %d..." %
@@ -557,6 +610,9 @@ class VM:
return
finally:
session.close()
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
+
# Try to destroy with a monitor command
logging.debug("Trying to kill VM with monitor command...")
@@ -566,6 +622,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
# If the VM isn't dead yet...
@@ -575,6 +633,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.error("Process %s is a zombie!" % self.process.get_pid())
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index b8f25f4..49b6d6b 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -889,7 +889,27 @@ variants:
extra_params += " -mem-path /mnt/kvm_hugepage"
+variants:
+ - @no_pci_assignable:
+ pci_assignable = no
+ - pf_assignable:
+ pci_assignable = pf
+ device_names = eth1
+ - vf_assignable:
+ pci_assignable = vf
+ # Driver (kernel module) that supports SR-IOV hardware.
+ # As of today (30-11-2009), we have 2 drivers for this type of hardware:
+ # Intel® 82576 Gigabit Ethernet Controller - igb
+ # Neterion® X3100™ - vxge
+ driver = igb
+ # Driver option to specify the maximum number of virtual functions
+ # (on vxge, for example, the option is max_config_dev)
+ # The default below is for the igb driver
+ driver_option = "max_vfs=7"
+ # Number of devices that are going to be requested.
+ devices_requested = 7
+
+
image_name.* ?<= images/
cdrom.* ?<= isos/
steps ?<= steps/
-
--
1.6.5.2
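For reference, the bind/unbind sequence that request_devs() performs boils down to three sysfs writes per device. A sketch of just that write plan (paths follow the standard /sys/bus/pci layout; the IDs below are invented):

```python
def stub_bind_writes(full_id, vendor_id,
                     stub_path="/sys/bus/pci/drivers/pci-stub"):
    """Return the (content, path) pairs that hand a device to pci-stub.

    Same order as request_devs(): register the vendor/device ID with
    pci-stub, unbind from the current driver, then bind to pci-stub.
    """
    drv_path = "/sys/bus/pci/devices/%s/driver" % full_id
    return [(vendor_id, stub_path + "/new_id"),
            (full_id, drv_path + "/unbind"),
            (full_id, stub_path + "/bind")]


# Invented full PCI ID and vendor:device pair
for content, path in stub_bind_writes("0000:0b:10.0", "8086 10ca"):
    print("echo '%s' > %s" % (content, path))
```

Running it prints the three echo commands one would otherwise type by hand to assign a device.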
* Re: [PATCH] KVM test: Add PCI device assignment support
2010-01-12 18:28 Lucas Meneghel Rodrigues
@ 2010-01-13 2:19 ` Yolkfull Chow
2010-01-18 16:49 ` Lucas Meneghel Rodrigues
0 siblings, 1 reply; 7+ messages in thread
From: Yolkfull Chow @ 2010-01-13 2:19 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: autotest, kvm
On Tue, Jan 12, 2010 at 04:28:13PM -0200, Lucas Meneghel Rodrigues wrote:
> Add support to PCI device assignment on the kvm test. It supports
> both SR-IOV virtual functions and physical NIC card device
> assignment.
>
> Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
> be shared amongst multiple virtual machines while retaining the
> performance benefit of assigning a PCI device to a virtual machine.
> A common example is where a single SR-IOV capable NIC - with perhaps
> only a single physical network port - might be shared with multiple
> virtual machines by assigning a virtual function to each VM.
>
> SR-IOV support is implemented in the kernel. The core implementation
> is contained in the PCI subsystem, but there must also be driver support
> for both the Physical Function (PF) and Virtual Function (VF) devices.
> With an SR-IOV capable device one can allocate VFs from a PF. The VFs
> surface as PCI devices which are backed on the physical PCI device by
> resources (queues, and register sets).
>
> Device support:
>
> In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
> SR-IOV capable device supported. The igb driver has PF support and the
> igbvf has VF support.
>
> In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
> the same vxge driver for the PF as well as the VFs.
>
> In order to configure the test:
>
> * For SR-IOV virtual functions passthrough, we could specify the
> module parameter 'max_vfs' in config file.
> * For physical NIC card pass through, we should specify the device
> name(s).
Looks good for me.
Lucas, thank you so much for improving this patch. :-)
_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH] KVM test: Add PCI device assignment support
2010-01-13 2:19 ` Yolkfull Chow
@ 2010-01-18 16:49 ` Lucas Meneghel Rodrigues
0 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2010-01-18 16:49 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: autotest, kvm
Ok, after some consideration I have applied the patch:
http://autotest.kernel.org/changeset/4144
Thank you for your work on this test!
On Wed, Jan 13, 2010 at 12:19 AM, Yolkfull Chow <yzhou@redhat.com> wrote:
> On Tue, Jan 12, 2010 at 04:28:13PM -0200, Lucas Meneghel Rodrigues wrote:
>> Add support to PCI device assignment on the kvm test. It supports
>> both SR-IOV virtual functions and physical NIC card device
>> assignment.
>>
>> Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
>> be shared amongst multiple virtual machines while retaining the
>> performance benefit of assigning a PCI device to a virtual machine.
>> A common example is where a single SR-IOV capable NIC - with perhaps
>> only a single physical network port - might be shared with multiple
>> virtual machines by assigning a virtual function to each VM.
>>
>> SR-IOV support is implemented in the kernel. The core implementation
>> is contained in the PCI subsystem, but there must also be driver support
>> for both the Physical Function (PF) and Virtual Function (VF) devices.
>> With an SR-IOV capable device one can allocate VFs from a PF. The VFs
>> surface as PCI devices which are backed on the physical PCI device by
>> resources (queues, and register sets).
>>
>> Device support:
>>
>> In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
>> SR-IOV capable device supported. The igb driver has PF support and the
>> igbvf has VF support.
>>
>> In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
>> the same vxge driver for the PF as well as the VFs.
>>
>> In order to configure the test:
>>
>> * For SR-IOV virtual functions passthrough, we could specify the
>> module parameter 'max_vfs' in config file.
>> * For physical NIC card pass through, we should specify the device
>> name(s).
>
> Looks good to me.
> Lucas, thank you so much for improving this patch. :-)
>
>>
>> Signed-off-by: Yolkfull Chow <yzhou@redhat.com>
>> Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
>> ---
>> client/tests/kvm/kvm_utils.py | 284 ++++++++++++++++++++++++++++++++
>> client/tests/kvm/kvm_vm.py | 60 +++++++
>> client/tests/kvm/tests_base.cfg.sample | 22 +++-
>> 3 files changed, 365 insertions(+), 1 deletions(-)
>>
>> diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
>> index 2bbbe22..a2d9607 100644
>> --- a/client/tests/kvm/kvm_utils.py
>> +++ b/client/tests/kvm/kvm_utils.py
>> @@ -924,3 +924,287 @@ def create_report(report_dir, results_dir):
>> reporter = os.path.join(report_dir, 'html_report.py')
>> html_file = os.path.join(results_dir, 'results.html')
>> os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
>> +
>> +
>> +def get_full_pci_id(pci_id):
>> + """
>> + Get full PCI ID of pci_id.
>> +
>> + @param pci_id: PCI ID of a device.
>> + """
>> + cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
>> + status, full_id = commands.getstatusoutput(cmd)
>> + if status != 0:
>> + return None
>> + return full_id
>> +
>> +
>> +def get_vendor_from_pci_id(pci_id):
>> + """
>> + Check out the device vendor ID according to pci_id.
>> +
>> + @param pci_id: PCI ID of a device.
>> + """
>> + cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
>> + return re.sub(":", " ", commands.getoutput(cmd))
>> +
>> +
>> +class PciAssignable(object):
>> + """
>> +    Request PCI assignable devices on the host. It will check whether to
>> +    request PFs (Physical Functions) or VFs (Virtual Functions).
>> + """
>> + def __init__(self, type="nic_vf", driver=None, driver_option=None,
>> + names=None, devices_requested=None):
>> + """
>> +        Initialize parameter 'type', which can be:
>> +        nic_vf: Virtual Functions
>> +        nic_pf: Physical Function (actual hardware)
>> +        mixed: both VFs and PFs
>> +
>> +        When passing through physical NIC cards, we need to specify which
>> +        devices are to be assigned, e.g. 'eth1 eth2'.
>> +
>> +        When passing through Virtual Functions, we need to specify how many
>> +        VFs are going to be assigned, e.g. devices_requested = 8 and the
>> +        'max_vfs' driver option in the config file.
>> +
>> + @param type: PCI device type.
>> + @param driver: Kernel module for the PCI assignable device.
>> + @param driver_option: Module option to specify the maximum number of
>> + VFs (eg 'max_vfs=7')
>> +    @param names: Network interfaces corresponding to the physical NIC
>> +            cards, e.g. 'eth1 eth2 ...'
>> + @param devices_requested: Number of devices being requested.
>> + """
>> + self.type = type
>> + self.driver = driver
>> + self.driver_option = driver_option
>> + if names:
>> + self.name_list = names.split()
>> + if devices_requested:
>> + self.devices_requested = int(devices_requested)
>> + else:
>> + self.devices_requested = None
>> +
>> +
>> + def _get_pf_pci_id(self, name, search_str):
>> + """
>> + Get the PF PCI ID according to name.
>> +
>> + @param name: Name of the PCI device.
>> + @param search_str: Search string to be used on lspci.
>> + """
>> + cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
>> + s, pci_id = commands.getstatusoutput(cmd)
>> + if not (s or "Cannot get driver information" in pci_id):
>> + return pci_id[5:]
>> + cmd = "lspci | awk '/%s/ {print $1}'" % search_str
>> +        pci_ids = commands.getoutput(cmd).splitlines()
>> + nic_id = int(re.search('[0-9]+', name).group(0))
>> + if (len(pci_ids) - 1) < nic_id:
>> + return None
>> + return pci_ids[nic_id]
>> +
>> +
>> + def _release_dev(self, pci_id):
>> + """
>> + Release a single PCI device.
>> +
>> + @param pci_id: PCI ID of a given PCI device.
>> + """
>> + base_dir = "/sys/bus/pci"
>> + full_id = get_full_pci_id(pci_id)
>> + vendor_id = get_vendor_from_pci_id(pci_id)
>> + drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
>> + if 'pci-stub' in os.readlink(drv_path):
>> + cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
>> + if os.system(cmd):
>> + return False
>> +
>> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
>> + cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
>> + if os.system(cmd):
>> + return False
>> +
>> + driver = self.dev_drivers[pci_id]
>> + cmd = "echo '%s' > %s/bind" % (full_id, driver)
>> + if os.system(cmd):
>> + return False
>> +
>> + return True
>> +
>> +
>> + def get_vf_devs(self):
>> + """
>> + Catch all VFs PCI IDs.
>> +
>> +        @return: List with all PCI IDs for the Virtual Functions available
>> + """
>> + if not self.sr_iov_setup():
>> + return []
>> +
>> + cmd = "lspci | awk '/Virtual Function/ {print $1}'"
>> + return commands.getoutput(cmd).split()
>> +
>> +
>> + def get_pf_devs(self):
>> + """
>> + Catch all PFs PCI IDs.
>> +
>> + @return: List with all PCI IDs for the physical hardware requested
>> + """
>> + pf_ids = []
>> + for name in self.name_list:
>> + pf_id = self._get_pf_pci_id(name, "Ethernet")
>> + if not pf_id:
>> + continue
>> + pf_ids.append(pf_id)
>> + return pf_ids
>> +
>> +
>> + def get_devs(self, count):
>> + """
>> + Check out all devices' PCI IDs according to their name.
>> +
>> + @param count: count number of PCI devices needed for pass through
>> + @return: a list of all devices' PCI IDs
>> + """
>> + if self.type == "nic_vf":
>> + vf_ids = self.get_vf_devs()
>> + elif self.type == "nic_pf":
>> + vf_ids = self.get_pf_devs()
>> + elif self.type == "mixed":
>> + vf_ids = self.get_vf_devs()
>> + vf_ids.extend(self.get_pf_devs())
>> + return vf_ids[0:count]
>> +
>> +
>> + def get_vfs_count(self):
>> + """
>> + Get VFs count number according to lspci.
>> + """
>> + cmd = "lspci | grep 'Virtual Function' | wc -l"
>> + # For each VF we'll see 2 prints of 'Virtual Function', so let's
>> +        # divide the result by 2
>> + return int(commands.getoutput(cmd)) / 2
>> +
>> +
>> + def check_vfs_count(self):
>> + """
>> +        Check that the VFs count matches the 'devices_requested' parameter.
>> +        """
>> +        return (self.get_vfs_count() == self.devices_requested)
>> +
>> +
>> + def is_binded_to_stub(self, full_id):
>> + """
>> +        Verify whether the device with full_id is already bound to pci-stub.
>> +
>> + @param full_id: Full ID for the given PCI device
>> + """
>> + base_dir = "/sys/bus/pci"
>> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
>> + if os.path.exists(os.path.join(stub_path, full_id)):
>> + return True
>> + return False
>> +
>> +
>> + def sr_iov_setup(self):
>> + """
>> + Ensure the PCI device is working in sr_iov mode.
>> +
>> +        Check if the PCI hardware device driver is loaded with the appropriate
>> +        parameters (number of VFs), and if it's not, perform setup.
>> +
>> +        @return: True if the setup was completed successfully, False otherwise.
>> + """
>> + re_probe = False
>> + s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
>> + if s:
>> + re_probe = True
>> + elif not self.check_vfs_count():
>> + os.system("modprobe -r %s" % self.driver)
>> + re_probe = True
>> +
>> + # Re-probe driver with proper number of VFs
>> + if re_probe:
>> + cmd = "modprobe %s %s" % (self.driver, self.driver_option)
>> + s, o = commands.getstatusoutput(cmd)
>> + if s:
>> + return False
>> + if not self.check_vfs_count():
>> + return False
>> + return True
>> +
>> +
>> + def request_devs(self):
>> + """
>> + Implement setup process: unbind the PCI device and then bind it
>> + to the pci-stub driver.
>> +
>> + @return: a list of successfully requested devices' PCI IDs.
>> + """
>> + base_dir = "/sys/bus/pci"
>> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
>> +
>> + self.pci_ids = self.get_devs(self.devices_requested)
>> + logging.debug("The following pci_ids were found: %s" % self.pci_ids)
>> + requested_pci_ids = []
>> + self.dev_drivers = {}
>> +
>> + # Setup all devices specified for assignment to guest
>> + for pci_id in self.pci_ids:
>> + full_id = get_full_pci_id(pci_id)
>> + if not full_id:
>> + continue
>> + drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
>> +            dev_prev_driver = os.path.realpath(os.path.join(drv_path,
>> + os.readlink(drv_path)))
>> + self.dev_drivers[pci_id] = dev_prev_driver
>> +
>> +            # Check whether the device has already been bound to the stub driver
>> + if not self.is_binded_to_stub(full_id):
>> + logging.debug("Binding device %s to stub" % full_id)
>> + vendor_id = get_vendor_from_pci_id(pci_id)
>> + stub_new_id = os.path.join(stub_path, 'new_id')
>> + unbind_dev = os.path.join(drv_path, 'unbind')
>> + stub_bind = os.path.join(stub_path, 'bind')
>> +
>> + info_write_to_files = [(vendor_id, stub_new_id),
>> + (full_id, unbind_dev),
>> + (full_id, stub_bind)]
>> +
>> + for content, file in info_write_to_files:
>> + try:
>> + utils.open_write_close(content, file)
>> + except IOError:
>> + logging.debug("Failed to write %s to file %s" %
>> + (content, file))
>> + continue
>> +
>> + if not self.is_binded_to_stub(full_id):
>> + logging.error("Binding device %s to stub failed" %
>> + pci_id)
>> + continue
>> + else:
>> +                logging.debug("Device %s already bound to stub" % pci_id)
>> + requested_pci_ids.append(pci_id)
>> + self.pci_ids = requested_pci_ids
>> + return self.pci_ids
>> +
>> +
>> + def release_devs(self):
>> + """
>> + Release all PCI devices currently assigned to VMs back to the
>> + virtualization host.
>> + """
>> + try:
>> + for pci_id in self.dev_drivers:
>> + if not self._release_dev(pci_id):
>> + logging.error("Failed to release device %s to host" %
>> + pci_id)
>> + else:
>> + logging.info("Released device %s successfully" % pci_id)
>> + except:
>> + return
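The bind-to-stub sequence in request_devs() boils down to three sysfs writes: register the vendor:device pair with pci-stub via new_id, unbind the device from its current driver, then bind it to the stub. A minimal sketch of that sequence, with the sysfs paths passed in as parameters so it can be exercised against a scratch directory instead of the real /sys/bus/pci (the function name and the IDs used below are illustrative, not part of the patch):

```python
import os

def bind_to_stub(full_id, vendor_id, drv_path, stub_path):
    # Same write sequence as request_devs(): tell pci-stub about the
    # vendor:device pair, detach the device from its current driver,
    # then attach it to the stub.
    for content, path in [(vendor_id, os.path.join(stub_path, "new_id")),
                          (full_id, os.path.join(drv_path, "unbind")),
                          (full_id, os.path.join(stub_path, "bind"))]:
        with open(path, "w") as f:
            f.write(content)
```

The real code additionally checks is_binded_to_stub() afterwards, since a write to these sysfs files can fail without raising.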
>> diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
>> index 7229b79..78a16ac 100755
>> --- a/client/tests/kvm/kvm_vm.py
>> +++ b/client/tests/kvm/kvm_vm.py
>> @@ -119,6 +119,7 @@ class VM:
>> self.params = params
>> self.root_dir = root_dir
>> self.address_cache = address_cache
>> + self.pci_assignable = None
>>
>> # Find available monitor filename
>> while True:
>> @@ -304,6 +305,12 @@ class VM:
>> elif params.get("uuid"):
>> qemu_cmd += " -uuid %s" % params.get("uuid")
>>
>> + # If the PCI assignment step went OK, add each one of the PCI assigned
>> + # devices to the qemu command line.
>> + if self.pci_assignable:
>> + for pci_id in self.pa_pci_ids:
>> + qemu_cmd += " -pcidevice host=%s" % pci_id
>> +
>> return qemu_cmd
>>
>>
>> @@ -392,6 +399,50 @@ class VM:
>> self.uuid = f.read().strip()
>> f.close()
>>
>> + if not params.get("pci_assignable") == "no":
>> + pa_type = params.get("pci_assignable")
>> + pa_devices_requested = params.get("devices_requested")
>> +
>> + # Virtual Functions (VF) assignable devices
>> + if pa_type == "vf":
>> + pa_driver = params.get("driver")
>> + pa_driver_option = params.get("driver_option")
>> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
>> + driver=pa_driver,
>> + driver_option=pa_driver_option,
>> + devices_requested=pa_devices_requested)
>> + # Physical NIC (PF) assignable devices
>> + elif pa_type == "pf":
>> + pa_device_names = params.get("device_names")
>> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
>> + names=pa_device_names,
>> + devices_requested=pa_devices_requested)
>> + # Working with both VF and PF
>> + elif pa_type == "mixed":
>> + pa_device_names = params.get("device_names")
>> + pa_driver = params.get("driver")
>> + pa_driver_option = params.get("driver_option")
>> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
>> + driver=pa_driver,
>> + driver_option=pa_driver_option,
>> + names=pa_device_names,
>> + devices_requested=pa_devices_requested)
>> +
>> + self.pa_pci_ids = self.pci_assignable.request_devs()
>> +
>> + if self.pa_pci_ids:
>> +                logging.debug("Successfully assigned devices: %s" %
>> + self.pa_pci_ids)
>> + else:
>> +                logging.error("No PCI assignable devices were assigned "
>> +                              "and 'pci_assignable' is set to %s "
>> +                              "in your config file. Aborting VM creation." %
>> +                              pa_type)
>> + return False
>> +
>> + else:
>> + self.pci_assignable = None
>> +
>> # Make qemu command
>> qemu_command = self.make_qemu_command()
>>
>> @@ -537,6 +588,8 @@ class VM:
>> # Is it already dead?
>> if self.is_dead():
>> logging.debug("VM is already down")
>> + if self.pci_assignable:
>> + self.pci_assignable.release_devs()
>> return
>>
>> logging.debug("Destroying VM with PID %d..." %
>> @@ -557,6 +610,9 @@ class VM:
>> return
>> finally:
>> session.close()
>> + if self.pci_assignable:
>> + self.pci_assignable.release_devs()
>> +
>>
>> # Try to destroy with a monitor command
>> logging.debug("Trying to kill VM with monitor command...")
>> @@ -566,6 +622,8 @@ class VM:
>> # Wait for the VM to be really dead
>> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
>> logging.debug("VM is down")
>> + if self.pci_assignable:
>> + self.pci_assignable.release_devs()
>> return
>>
>> # If the VM isn't dead yet...
>> @@ -575,6 +633,8 @@ class VM:
>> # Wait for the VM to be really dead
>> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
>> logging.debug("VM is down")
>> + if self.pci_assignable:
>> + self.pci_assignable.release_devs()
>> return
>>
>> logging.error("Process %s is a zombie!" % self.process.get_pid())
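For reference, the command line handling added to make_qemu_command() above reduces to appending one -pcidevice flag per assigned host device; the IDs come from self.pa_pci_ids, filled in by request_devs(). A standalone sketch (the helper name is mine, not part of the patch):

```python
def append_pci_devices(qemu_cmd, pci_ids):
    # One '-pcidevice host=<id>' per assigned device, mirroring the
    # loop added to make_qemu_command().
    for pci_id in pci_ids:
        qemu_cmd += " -pcidevice host=%s" % pci_id
    return qemu_cmd
```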
>> diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
>> index b8f25f4..49b6d6b 100644
>> --- a/client/tests/kvm/tests_base.cfg.sample
>> +++ b/client/tests/kvm/tests_base.cfg.sample
>> @@ -889,7 +889,27 @@ variants:
>> extra_params += " -mem-path /mnt/kvm_hugepage"
>>
>>
>> +variants:
>> + - @no_pci_assignable:
>> + pci_assignable = no
>> + - pf_assignable:
>> + pci_assignable = pf
>> + device_names = eth1
>> + - vf_assignable:
>> + pci_assignable = vf
>> + # Driver (kernel module) that supports SR-IOV hardware.
>> + # As of today (30-11-2009), we have 2 drivers for this type of hardware:
>> + # Intel® 82576 Gigabit Ethernet Controller - igb
>> + # Neterion® X3100™ - vxge
>> + driver = igb
>> + # Driver option to specify the maximum number of virtual functions
>> +          #   (on vxge, for example, the option is max_config_dev)
>> + # the default below is for the igb driver
>> + driver_option = "max_vfs=7"
>> + # Number of devices that are going to be requested.
>> + devices_requested = 7
>> +
>> +
>> image_name.* ?<= images/
>> cdrom.* ?<= isos/
>> steps ?<= steps/
>> -
>> --
>> 1.6.5.2
--
Lucas
* [PATCH] KVM test: Add PCI device assignment support
@ 2009-11-30 21:08 Lucas Meneghel Rodrigues
2009-12-01 3:21 ` Yolkfull Chow
0 siblings, 1 reply; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-11-30 21:08 UTC (permalink / raw)
To: autotest; +Cc: kvm, mgoldish, Lucas Meneghel Rodrigues, Yolkfull Chow
Add support to PCI device assignment on the kvm test. It supports
both SR-IOV virtual functions and physical NIC card device
assignment.
Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
be shared amongst multiple virtual machines while retaining the
performance benefit of assigning a PCI device to a virtual machine.
A common example is where a single SR-IOV capable NIC - with perhaps
only a single physical network port - might be shared with multiple
virtual machines by assigning a virtual function to each VM.
SR-IOV support is implemented in the kernel. The core implementation is
contained in the PCI subsystem, but there must also be driver support
for both the Physical Function (PF) and Virtual Function (VF) devices.
With an SR-IOV capable device one can allocate VFs from a PF. The VFs
surface as PCI devices which are backed on the physical PCI device by
resources (queues, and register sets).
Device support:
In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
SR-IOV capable device supported. The igb driver has PF support and the
igbvf has VF support.
In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
the same vxge driver for the PF as well as the VFs.
In order to configure the test:
* For SR-IOV virtual functions passthrough, we could specify the
module parameter 'max_vfs' in config file.
* For physical NIC card pass through, we should specify the device
name(s).
3rd try: The patch was heavily modified from the first 2 attempts:
* Naming is consistent with "PCI assignment" instead of
"PCI passthrough", as it's a more correct term.
* No more device database file, as all information about devices
is stored in an attribute of the VM class (an instance of the
PciAssignable class), so we don't have to bother dumping this
info to a file.
* Code simplified to avoid duplication
As it's a fairly involved feature, the more reviews we get the better.
Signed-off-by: Yolkfull Chow <yzhou@redhat.com>
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_tests.cfg.sample | 20 +++-
client/tests/kvm/kvm_utils.py | 278 +++++++++++++++++++++++++++++++++
client/tests/kvm/kvm_vm.py | 59 +++++++
3 files changed, 356 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index feffb8d..be60399 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -844,13 +844,31 @@ variants:
only default
image_format = raw
-
variants:
- @smallpages:
- hugepages:
pre_command = "/usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage"
extra_params += " -mem-path /mnt/kvm_hugepage"
+variants:
+ - @no_pci_assignable:
+ pci_assignable = no
+ - pf_assignable:
+ pci_assignable = pf
+ device_names = eth1
+ - vf_assignable:
+ pci_assignable = vf
+ # Driver (kernel module) that supports SR-IOV hardware.
+ # As of today (30-11-2009), we have 2 drivers for this type of hardware:
+ # Intel® 82576 Gigabit Ethernet Controller - igb
+ # Neterion® X3100™ - vxge
+ driver = igb
+ # Driver option to specify the number of virtual functions
+          #   (on vxge, for example, the option is max_config_dev)
+ # the default below is for the igb driver
+ driver_option = max_vfs
+ # Number of devices that are going to be requested.
+ devices_requested = 7
variants:
- @basic:
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index bf25900..fc04745 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -874,3 +874,281 @@ def unmap_url_cache(cachedir, url, expected_hash, method="md5"):
file_path = utils.unmap_url(cachedir, src, cachedir)
return file_path
+
+
+def get_full_pci_id(pci_id):
+ """
+ Get full PCI ID of pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
+ status, full_id = commands.getstatusoutput(cmd)
+ if status != 0:
+ return None
+ return full_id
+
+
+def get_vendor_from_pci_id(pci_id):
+ """
+ Check out the device vendor ID according to pci_id.
+
+ @param pci_id: PCI ID of a device.
+ """
+ cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
+ return re.sub(":", " ", commands.getoutput(cmd))
+
+
+class PciAssignable(object):
+ """
+    Request PCI assignable devices on the host. It will check whether to
+    request PFs (Physical Functions) or VFs (Virtual Functions).
+ """
+ def __init__(self, type="nic_vf", driver=None, driver_option=None,
+ names=None, devices_requested=None):
+ """
+        Initialize parameter 'type', which can be:
+        nic_vf: Virtual Functions
+        nic_pf: Physical Function (actual hardware)
+        mixed: both VFs and PFs
+
+        When passing through physical NIC cards, we need to specify which
+        devices are to be assigned, e.g. 'eth1 eth2'.
+
+        When passing through Virtual Functions, we need to specify how many
+        VFs are going to be assigned, e.g. devices_requested = 8 and the
+        'max_vfs' driver option in the config file.
+
+ @param type: PCI device type.
+ @param driver: Kernel module for the PCI assignable device.
+ @param driver_option: Module option to specify the number of VFs needed.
+    @param names: Network interfaces corresponding to the physical NIC
+            cards, e.g. 'eth1 eth2 ...'
+ """
+ self.type = type
+ self.driver = driver
+ self.driver_option = "%s=%s" % (driver_option, devices_requested)
+ if names:
+ self.name_list = names.split()
+ if devices_requested:
+ self.devices_requested = int(devices_requested)
+
+
+ def _get_pf_pci_id(self, name, search_str):
+ """
+ Get the PF PCI ID according to name.
+
+ @param name: Name of the PCI device.
+ @param search_str: Search string to be used on lspci.
+ """
+ cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
+ s, pci_id = commands.getstatusoutput(cmd)
+ if not (s or "Cannot get driver information" in pci_id):
+ return pci_id[5:]
+ cmd = "lspci | awk '/%s/ {print $1}'" % search_str
+        pci_ids = commands.getoutput(cmd).splitlines()
+ nic_id = int(re.search('[0-9]+', name).group(0))
+ if (len(pci_ids) - 1) < nic_id:
+ return None
+ return pci_ids[nic_id]
+
+
+ def _release_dev(self, pci_id):
+ """
+ Release a single PCI device.
+
+ @param pci_id: PCI ID of a given PCI device.
+ """
+ base_dir = "/sys/bus/pci"
+ full_id = get_full_pci_id(pci_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+ if 'pci-stub' in os.readlink(drv_path):
+ cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
+ if os.system(cmd):
+ return False
+
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
+ if os.system(cmd):
+ return False
+
+ driver = self.dev_drivers[pci_id]
+ cmd = "echo '%s' > %s/bind" % (full_id, driver)
+ if os.system(cmd):
+ return False
+
+ return True
+
+
+ def get_vf_devs(self):
+ """
+ Catch all VFs PCI IDs.
+
+        @return: List with all PCI IDs for the Virtual Functions available
+ """
+ if not self.sr_iov_setup():
+ return []
+
+ cmd = "lspci | awk '/Virtual Function/ {print $1}'"
+ return commands.getoutput(cmd).split()
+
+
+ def get_pf_devs(self):
+ """
+ Catch all PFs PCI IDs.
+
+ @return: List with all PCI IDs for the physical hardware requested
+ """
+ pf_ids = []
+ for name in self.name_list:
+ pf_id = self._get_pf_pci_id(name, "Ethernet")
+ if not pf_id:
+ continue
+ pf_ids.append(pf_id)
+ return pf_ids
+
+
+ def get_devs(self, count):
+ """
+ Check out all devices' PCI IDs according to their name.
+
+ @param count: count number of PCI devices needed for pass through
+ @return: a list of all devices' PCI IDs
+ """
+ if self.type == "nic_vf":
+ vf_ids = self.get_vf_devs()
+ elif self.type == "nic_pf":
+ vf_ids = self.get_pf_devs()
+ elif self.type == "mixed":
+ vf_ids = self.get_vf_devs()
+ vf_ids.extend(self.get_pf_devs())
+ return vf_ids[0:count]
+
+
+ def get_vfs_count(self):
+ """
+ Get VFs count number according to lspci.
+ """
+ cmd = "lspci | grep 'Virtual Function' | wc -l"
+ # For each VF we'll see 2 prints of 'Virtual Function', so let's
+        # divide the result by 2
+ return int(commands.getoutput(cmd)) / 2
+
+
+ def check_vfs_count(self):
+ """
+        Check that the VFs count matches the 'devices_requested' parameter.
+        """
+        return (self.get_vfs_count() == self.devices_requested)
+
+
+ def is_binded_to_stub(self, full_id):
+ """
+        Verify whether the device with full_id is already bound to pci-stub.
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+ if os.path.exists(os.path.join(stub_path, full_id)):
+ return True
+ return False
+
+
+ def sr_iov_setup(self):
+ """
+ Ensure the PCI device is working in sr_iov mode.
+
+        Check if the PCI hardware device driver is loaded with the appropriate
+        parameters (number of VFs), and if it's not, perform setup.
+
+        @return: True if the setup was completed successfully, False otherwise.
+ """
+ re_probe = False
+ s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
+ if s:
+ re_probe = True
+ elif not self.check_vfs_count():
+ os.system("modprobe -r %s" % self.driver)
+ re_probe = True
+
+ # Re-probe driver with proper number of VFs
+ if re_probe:
+ cmd = "modprobe %s %s" % (self.driver, self.driver_option)
+ s, o = commands.getstatusoutput(cmd)
+ if s:
+ return False
+ if not self.check_vfs_count():
+ return False
+ return True
+
+
+ def request_devs(self):
+ """
+ Implement setup process: unbind the PCI device and then bind it
+ to the pci-stub driver.
+
+ @return: a list of successfully requested devices' PCI IDs.
+ """
+ base_dir = "/sys/bus/pci"
+ stub_path = os.path.join(base_dir, "drivers/pci-stub")
+
+ self.pci_ids = self.get_devs(self.devices_requested)
+ logging.debug("The following pci_ids were found: %s" % self.pci_ids)
+ requested_pci_ids = []
+ self.dev_drivers = {}
+
+ # Setup all devices specified for assignment to guest
+ for pci_id in self.pci_ids:
+ full_id = get_full_pci_id(pci_id)
+ if not full_id:
+ continue
+ drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
+            dev_prev_driver = os.path.realpath(os.path.join(drv_path,
+ os.readlink(drv_path)))
+ self.dev_drivers[pci_id] = dev_prev_driver
+
+            # Check whether the device has already been bound to the stub driver
+ if not self.is_binded_to_stub(full_id):
+ logging.debug("Binding device %s to stub" % full_id)
+ vendor_id = get_vendor_from_pci_id(pci_id)
+ stub_new_id = os.path.join(stub_path, 'new_id')
+ unbind_dev = os.path.join(drv_path, 'unbind')
+ stub_bind = os.path.join(stub_path, 'bind')
+
+ info_write_to_files = [(vendor_id, stub_new_id),
+ (full_id, unbind_dev),
+ (full_id, stub_bind)]
+
+ for content, file in info_write_to_files:
+ try:
+ utils.open_write_close(content, file)
+ except IOError:
+ logging.debug("Failed to write %s to file %s" %
+ (content, file))
+ continue
+
+ if not self.is_binded_to_stub(full_id):
+ logging.error("Binding device %s to stub failed" %
+ pci_id)
+ continue
+ else:
+                logging.debug("Device %s already bound to stub" % pci_id)
+ requested_pci_ids.append(pci_id)
+ self.pci_ids = requested_pci_ids
+ return self.pci_ids
+
+
+ def release_devs(self):
+ """
+ Release all PCI devices currently assigned to VMs back to the
+ virtualization host.
+ """
+ try:
+ for pci_id in self.dev_drivers:
+ if not self._release_dev(pci_id):
+ logging.error("Failed to release device %s to host" %
+ pci_id)
+ else:
+ logging.info("Released device %s successfully" % pci_id)
+ except:
+ return
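One subtlety worth keeping in mind is the division by two in get_vfs_count(): it assumes each VF appears twice in the lspci listing. A pure-Python restatement of that counting that can be checked against canned output (the sample lines in the test are invented):

```python
def count_vfs(lspci_output):
    # Each VF produces two 'Virtual Function' lines in the lspci
    # listing on the hardware this was developed against, hence the
    # halving; this is the same assumption get_vfs_count() makes.
    hits = [line for line in lspci_output.splitlines()
            if "Virtual Function" in line]
    return len(hits) // 2
```

If a future device shows each VF only once, this factor (and check_vfs_count) will need adjusting.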
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 100b567..045e3ed 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -304,6 +304,12 @@ class VM:
elif params.get("uuid"):
qemu_cmd += " -uuid %s" % params.get("uuid")
+ # If the PCI assignment step went OK, add each one of the PCI assigned
+ # devices to the qemu command line.
+ if self.pci_assignable:
+ for pci_id in self.pa_pci_ids:
+ qemu_cmd += " -pcidevice host=%s" % pci_id
+
return qemu_cmd
@@ -392,6 +398,50 @@ class VM:
self.uuid = f.read().strip()
f.close()
+ if not params.get("pci_assignable") == "no":
+ pa_type = params.get("pci_assignable")
+ pa_devices_requested = params.get("devices_requested")
+
+ # Virtual Functions (VF) assignable devices
+ if pa_type == "vf":
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ devices_requested=pa_devices_requested)
+ # Physical NIC (PF) assignable devices
+ elif pa_type == "pf":
+ pa_device_names = params.get("device_names")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+ # Working with both VF and PF
+ elif pa_type == "mixed":
+ pa_device_names = params.get("device_names")
+ pa_driver = params.get("driver")
+ pa_driver_option = params.get("driver_option")
+ self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
+ driver=pa_driver,
+ driver_option=pa_driver_option,
+ names=pa_device_names,
+ devices_requested=pa_devices_requested)
+
+ self.pa_pci_ids = self.pci_assignable.request_devs()
+
+ if self.pa_pci_ids:
+                logging.debug("Successfully assigned devices: %s" %
+ self.pa_pci_ids)
+ else:
+                logging.error("No PCI assignable devices were assigned "
+                              "and 'pci_assignable' is set to %s "
+                              "in your config file. Aborting VM creation." %
+                              pa_type)
+ return False
+
+ else:
+ self.pci_assignable = None
+
# Make qemu command
qemu_command = self.make_qemu_command()
@@ -537,6 +587,8 @@ class VM:
# Is it already dead?
if self.is_dead():
logging.debug("VM is already down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.debug("Destroying VM with PID %d..." %
@@ -557,6 +609,9 @@ class VM:
return
finally:
session.close()
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
+
# Try to destroy with a monitor command
logging.debug("Trying to kill VM with monitor command...")
@@ -566,6 +621,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
# If the VM isn't dead yet...
@@ -575,6 +632,8 @@ class VM:
# Wait for the VM to be really dead
if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
logging.debug("VM is down")
+ if self.pci_assignable:
+ self.pci_assignable.release_devs()
return
logging.error("Process %s is a zombie!" % self.process.get_pid())
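The pa_type dispatch above decides which pool of PCI IDs get_devs() later draws from. A pure-function sketch of that selection logic, using the PciAssignable type names (note the original get_devs() would hit a NameError on an unknown type, since vf_ids would be unbound; this sketch raises explicitly instead):

```python
def pick_devices(dev_type, vf_ids, pf_ids, count):
    # nic_vf -> VFs only, nic_pf -> physical NICs only, mixed -> both;
    # then truncate to the number of devices requested, as get_devs() does.
    if dev_type == "nic_vf":
        ids = list(vf_ids)
    elif dev_type == "nic_pf":
        ids = list(pf_ids)
    elif dev_type == "mixed":
        ids = list(vf_ids) + list(pf_ids)
    else:
        raise ValueError("unknown device type: %s" % dev_type)
    return ids[:count]
```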
--
1.6.5.2
* Re: [PATCH] KVM test: Add PCI device assignment support
2009-11-30 21:08 Lucas Meneghel Rodrigues
@ 2009-12-01 3:21 ` Yolkfull Chow
0 siblings, 0 replies; 7+ messages in thread
From: Yolkfull Chow @ 2009-12-01 3:21 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: autotest, kvm, mgoldish
On Mon, Nov 30, 2009 at 07:08:11PM -0200, Lucas Meneghel Rodrigues wrote:
> Add support to PCI device assignment on the kvm test. It supports
> both SR-IOV virtual functions and physical NIC card device
> assignment.
>
> Single Root I/O Virtualization (SR-IOV) allows a single PCI device to
> be shared amongst multiple virtual machines while retaining the
> performance benefit of assigning a PCI device to a virtual machine.
> A common example is where a single SR-IOV capable NIC - with perhaps
> only a single physical network port - might be shared with multiple
> virtual machines by assigning a virtual function to each VM.
>
> SR-IOV support is implemented in the kernel. The core implementation is
> contained in the PCI subsystem, but there must also be driver support
> for both the Physical Function (PF) and Virtual Function (VF) devices.
> With an SR-IOV capable device one can allocate VFs from a PF. The VFs
> surface as PCI devices which are backed on the physical PCI device by
> resources (queues, and register sets).
>
> Device support:
>
> In 2.6.30, the Intel® 82576 Gigabit Ethernet Controller is the only
> SR-IOV capable device supported. The igb driver has PF support and the
> igbvf has VF support.
>
> In 2.6.31 the Neterion® X3100™ is supported as well. This device uses
> the same vxge driver for the PF as well as the VFs.
>
> In order to configure the test:
>
> * For SR-IOV virtual functions passthrough, we could specify the
> module parameter 'max_vfs' in config file.
> * For physical NIC card pass through, we should specify the device
> name(s).
>
> 3rd try: The patch was heavily modified from the first 2 attempts:
>
> * Naming is consistent with "PCI assignment" instead of
> "PCI passthrough", as it's a more correct term.
> * No more device database file, as all information about devices
> is stored on an attribute of the VM class (an instance of the
> PciAssignable class), so we don't have to bother dumping this
> info to a file.
> * Code simplified to avoid duplication
>
> As it's a fairly involved feature, the more reviews we get the better.
Hi Lucas,
I have some ideas about the devices_requested parameter; please see my
comments below:
>
> Signed-off-by: Yolkfull Chow <yzhou@redhat.com>
> Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
> ---
> client/tests/kvm/kvm_tests.cfg.sample | 20 +++-
> client/tests/kvm/kvm_utils.py | 278 +++++++++++++++++++++++++++++++++
> client/tests/kvm/kvm_vm.py | 59 +++++++
> 3 files changed, 356 insertions(+), 1 deletions(-)
>
> diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
> index feffb8d..be60399 100644
> --- a/client/tests/kvm/kvm_tests.cfg.sample
> +++ b/client/tests/kvm/kvm_tests.cfg.sample
> @@ -844,13 +844,31 @@ variants:
> only default
> image_format = raw
>
> -
> variants:
> - @smallpages:
> - hugepages:
> pre_command = "/usr/bin/python scripts/hugepage.py /mnt/kvm_hugepage"
> extra_params += " -mem-path /mnt/kvm_hugepage"
>
> +variants:
> + - @no_pci_assignable:
> + pci_assignable = no
> + - pf_assignable:
> + pci_assignable = pf
> + device_names = eth1
> + - vf_assignable:
> + pci_assignable = vf
> + # Driver (kernel module) that supports SR-IOV hardware.
> + # As of today (30-11-2009), we have 2 drivers for this type of hardware:
> + # Intel® 82576 Gigabit Ethernet Controller - igb
> + # Neterion® X3100™ - vxge
> + driver = igb
> + # Driver option to specify the number of virtual functions
> + # (on vxge, for example, the option is max_config_dev);
> + # the default below is for the igb driver
> + driver_option = max_vfs
> + # Number of devices that are going to be requested.
> + devices_requested = 7
I think we'd better specify not only the driver option 'max_vfs' but
also devices_requested. Reasons:
1) The value of the driver option 'max_vfs' differs from devices_requested.
Typically, if we assign 7 to max_vfs, it will virtualize 14 (7*2) VFs.
2) Also, we can later write a case that boots 14 VMs and assigns one Virtual
Function to each of them. For that we need to modprobe with max_vfs=7 and
set devices_requested=1. It could serve as a boundary test; a bug already
exists in this area.
What do you think?
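To make the distinction concrete, here is a minimal sketch of the two
parameters (the dual-port factor of 2 and the PCI IDs below are my own
assumptions for illustration, based on the 82576 example above, not
values taken from the patch):

```python
# Hypothetical sketch: 'max_vfs' (module option) vs. 'devices_requested'
# (test parameter) are related but not equal, so they should stay separate.

def vfs_created(max_vfs, ports_per_nic=2):
    """On a dual-port NIC such as the 82576, max_vfs applies per port,
    so the host typically ends up seeing max_vfs * ports VFs."""
    return max_vfs * ports_per_nic

def pick_devices(available_vfs, devices_requested):
    """The test should only consume the first devices_requested VFs,
    regardless of how many the driver created."""
    return available_vfs[:devices_requested]

# modprobe igb max_vfs=7 -> 14 VFs visible on the host
vf_ids = ["0b:10.%d" % i for i in range(vfs_created(7))]
# A boundary test could still request just one of them per VM
one_vf = pick_devices(vf_ids, 1)
```

With that split, booting 14 VMs with devices_requested=1 each becomes a
straightforward configuration rather than a special case.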
>
> variants:
> - @basic:
> diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
> index bf25900..fc04745 100644
> --- a/client/tests/kvm/kvm_utils.py
> +++ b/client/tests/kvm/kvm_utils.py
> @@ -874,3 +874,281 @@ def unmap_url_cache(cachedir, url, expected_hash, method="md5"):
> file_path = utils.unmap_url(cachedir, src, cachedir)
>
> return file_path
> +
> +
> +def get_full_pci_id(pci_id):
> + """
> + Get full PCI ID of pci_id.
> +
> + @param pci_id: PCI ID of a device.
> + """
> + cmd = "lspci -D | awk '/%s/ {print $1}'" % pci_id
> + status, full_id = commands.getstatusoutput(cmd)
> + if status != 0:
> + return None
> + return full_id
> +
> +
> +def get_vendor_from_pci_id(pci_id):
> + """
> + Check out the device vendor ID according to pci_id.
> +
> + @param pci_id: PCI ID of a device.
> + """
> + cmd = "lspci -n | awk '/%s/ {print $3}'" % pci_id
> + return re.sub(":", " ", commands.getoutput(cmd))
> +
> +
> +class PciAssignable(object):
> + """
> + Request PCI assignable devices on host. It will check whether to request
> + PF (Physical Functions) or VF (Virtual Functions).
> + """
> + def __init__(self, type="nic_vf", driver=None, driver_option=None,
> + names=None, devices_requested=None):
> + """
> + Initialize parameter 'type' which could be:
> + nic_vf: Virtual Functions
> + nic_pf: Physical Function (actual hardware)
> + mixed: Both VFs and PFs
> +
> + When passing through physical NIC cards, we need to specify which
> + devices are to be assigned, e.g. 'eth1 eth2'.
> +
> + When passing through Virtual Functions, we need to specify how many
> + VFs are going to be assigned, e.g. devices_requested = 8 and max_vfs
> + in the config file.
> +
> + @param type: PCI device type.
> + @param driver: Kernel module for the PCI assignable device.
> + @param driver_option: Module option to specify the number of VFs needed.
> + @param names: Physical NIC cards correspondent network interfaces,
> + e.g.'eth1 eth2 ...'
> + """
> + self.type = type
> + self.driver = driver
> + self.driver_option = "%s=%s" % (driver_option, devices_requested)
> + if names:
> + self.name_list = names.split()
> + if devices_requested:
> + self.devices_requested = int(devices_requested)
> +
> +
> + def _get_pf_pci_id(self, name, search_str):
> + """
> + Get the PF PCI ID according to name.
> +
> + @param name: Name of the PCI device.
> + @param search_str: Search string to be used on lspci.
> + """
> + cmd = "ethtool -i %s | awk '/bus-info/ {print $2}'" % name
> + s, pci_id = commands.getstatusoutput(cmd)
> + if not (s or "Cannot get driver information" in pci_id):
> + return pci_id[5:]
> + cmd = "lspci | awk '/%s/ {print $1}'" % search_str
> + pci_ids = [id for id in commands.getoutput(cmd).splitlines()]
> + nic_id = int(re.search('[0-9]+', name).group(0))
> + if (len(pci_ids) - 1) < nic_id:
> + return None
> + return pci_ids[nic_id]
> +
> +
> + def _release_dev(self, pci_id):
> + """
> + Release a single PCI device.
> +
> + @param pci_id: PCI ID of a given PCI device.
> + """
> + base_dir = "/sys/bus/pci"
> + full_id = get_full_pci_id(pci_id)
> + vendor_id = get_vendor_from_pci_id(pci_id)
> + drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
> + if 'pci-stub' in os.readlink(drv_path):
> + cmd = "echo '%s' > %s/new_id" % (vendor_id, drv_path)
> + if os.system(cmd):
> + return False
> +
> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
> + cmd = "echo '%s' > %s/unbind" % (full_id, stub_path)
> + if os.system(cmd):
> + return False
> +
> + driver = self.dev_drivers[pci_id]
> + cmd = "echo '%s' > %s/bind" % (full_id, driver)
> + if os.system(cmd):
> + return False
> +
> + return True
> +
> +
> + def get_vf_devs(self):
> + """
> + Get the PCI IDs of all VFs.
> +
> + @return: List with the PCI IDs of all available Virtual Functions
> + """
> + if not self.sr_iov_setup():
> + return []
> +
> + cmd = "lspci | awk '/Virtual Function/ {print $1}'"
> + return commands.getoutput(cmd).split()
> +
> +
> + def get_pf_devs(self):
> + """
> + Get the PCI IDs of all PFs.
> +
> + @return: List with all PCI IDs for the physical hardware requested
> + """
> + pf_ids = []
> + for name in self.name_list:
> + pf_id = self._get_pf_pci_id(name, "Ethernet")
> + if not pf_id:
> + continue
> + pf_ids.append(pf_id)
> + return pf_ids
> +
> +
> + def get_devs(self, count):
> + """
> + Check out all devices' PCI IDs according to their name.
> +
> + @param count: Number of PCI devices needed for passthrough
> + @return: a list of all devices' PCI IDs
> + """
> + if self.type == "nic_vf":
> + vf_ids = self.get_vf_devs()
> + elif self.type == "nic_pf":
> + vf_ids = self.get_pf_devs()
> + elif self.type == "mixed":
> + vf_ids = self.get_vf_devs()
> + vf_ids.extend(self.get_pf_devs())
> + return vf_ids[0:count]
> +
> +
> + def get_vfs_count(self):
> + """
> + Get the number of VFs according to lspci.
> + """
> + cmd = "lspci | grep 'Virtual Function' | wc -l"
> + # For each VF we'll see 2 prints of 'Virtual Function', so let's
> + # divide the result by 2
> + return int(commands.getoutput(cmd)) / 2
> +
> +
> + def check_vfs_count(self):
> + """
> + Check whether the VF count matches the devices_requested parameter.
> + """
> + return (self.get_vfs_count() == self.devices_requested)
> +
> +
> + def is_binded_to_stub(self, full_id):
> + """
> + Verify whether the device with full_id is already bound to pci-stub.
> + """
> + base_dir = "/sys/bus/pci"
> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
> + if os.path.exists(os.path.join(stub_path, full_id)):
> + return True
> + return False
> +
> +
> + def sr_iov_setup(self):
> + """
> + Ensure the PCI device is working in sr_iov mode.
> +
> + Check if the PCI hardware device driver is loaded with the
> + appropriate parameters (number of VFs); if it is not, perform setup.
> +
> + @return: True, if the setup was completed successfully, False otherwise.
> + """
> + re_probe = False
> + s, o = commands.getstatusoutput('lsmod | grep %s' % self.driver)
> + if s:
> + re_probe = True
> + elif not self.check_vfs_count():
> + os.system("modprobe -r %s" % self.driver)
> + re_probe = True
> +
> + # Re-probe driver with proper number of VFs
> + if re_probe:
> + cmd = "modprobe %s %s" % (self.driver, self.driver_option)
> + s, o = commands.getstatusoutput(cmd)
> + if s:
> + return False
> + if not self.check_vfs_count():
> + return False
> + return True
> +
> +
> + def request_devs(self):
> + """
> + Implement setup process: unbind the PCI device and then bind it
> + to the pci-stub driver.
> +
> + @return: a list of successfully requested devices' PCI IDs.
> + """
> + base_dir = "/sys/bus/pci"
> + stub_path = os.path.join(base_dir, "drivers/pci-stub")
> +
> + self.pci_ids = self.get_devs(self.devices_requested)
> + logging.debug("The following pci_ids were found: %s" % self.pci_ids)
> + requested_pci_ids = []
> + self.dev_drivers = {}
> +
> + # Setup all devices specified for assignment to guest
> + for pci_id in self.pci_ids:
> + full_id = get_full_pci_id(pci_id)
> + if not full_id:
> + continue
> + drv_path = os.path.join(base_dir, "devices/%s/driver" % full_id)
> + dev_prev_driver = os.path.realpath(drv_path)
> + self.dev_drivers[pci_id] = dev_prev_driver
> +
> + # Check whether the device has already been bound to the stub driver
> + if not self.is_binded_to_stub(full_id):
> + logging.debug("Binding device %s to stub" % full_id)
> + vendor_id = get_vendor_from_pci_id(pci_id)
> + stub_new_id = os.path.join(stub_path, 'new_id')
> + unbind_dev = os.path.join(drv_path, 'unbind')
> + stub_bind = os.path.join(stub_path, 'bind')
> +
> + info_write_to_files = [(vendor_id, stub_new_id),
> + (full_id, unbind_dev),
> + (full_id, stub_bind)]
> +
> + for content, file in info_write_to_files:
> + try:
> + utils.open_write_close(content, file)
> + except IOError:
> + logging.debug("Failed to write %s to file %s" %
> + (content, file))
> + continue
> +
> + if not self.is_binded_to_stub(full_id):
> + logging.error("Binding device %s to stub failed" %
> + pci_id)
> + continue
> + else:
> + logging.debug("Device %s already bound to stub" % pci_id)
> + requested_pci_ids.append(pci_id)
> + self.pci_ids = requested_pci_ids
> + return self.pci_ids
> +
> +
> + def release_devs(self):
> + """
> + Release all PCI devices currently assigned to VMs back to the
> + virtualization host.
> + """
> + try:
> + for pci_id in self.dev_drivers:
> + if not self._release_dev(pci_id):
> + logging.error("Failed to release device %s to host" %
> + pci_id)
> + else:
> + logging.info("Released device %s successfully" % pci_id)
> + except:
> + return
> diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
> index 100b567..045e3ed 100755
> --- a/client/tests/kvm/kvm_vm.py
> +++ b/client/tests/kvm/kvm_vm.py
> @@ -304,6 +304,12 @@ class VM:
> elif params.get("uuid"):
> qemu_cmd += " -uuid %s" % params.get("uuid")
>
> + # If the PCI assignment step went OK, add each one of the PCI assigned
> + # devices to the qemu command line.
> + if self.pci_assignable:
> + for pci_id in self.pa_pci_ids:
> + qemu_cmd += " -pcidevice host=%s" % pci_id
> +
> return qemu_cmd
>
>
> @@ -392,6 +398,50 @@ class VM:
> self.uuid = f.read().strip()
> f.close()
>
> + if not params.get("pci_assignable") == "no":
> + pa_type = params.get("pci_assignable")
> + pa_devices_requested = params.get("devices_requested")
> +
> + # Virtual Functions (VF) assignable devices
> + if pa_type == "vf":
> + pa_driver = params.get("driver")
> + pa_driver_option = params.get("driver_option")
> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
> + driver=pa_driver,
> + driver_option=pa_driver_option,
> + devices_requested=pa_devices_requested)
> + # Physical NIC (PF) assignable devices
> + elif pa_type == "pf":
> + pa_device_names = params.get("device_names")
> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
> + names=pa_device_names,
> + devices_requested=pa_devices_requested)
> + # Working with both VF and PF
> + elif pa_type == "mixed":
> + pa_device_names = params.get("device_names")
> + pa_driver = params.get("driver")
> + pa_driver_option = params.get("driver_option")
> + self.pci_assignable = kvm_utils.PciAssignable(type=pa_type,
> + driver=pa_driver,
> + driver_option=pa_driver_option,
> + names=pa_device_names,
> + devices_requested=pa_devices_requested)
> +
> + self.pa_pci_ids = self.pci_assignable.request_devs()
> +
> + if self.pa_pci_ids:
> + logging.debug("Successfully assigned devices: %s" %
> + self.pa_pci_ids)
> + else:
> + logging.error("No PCI assignable devices were assigned "
> + "and 'pci_assignable' is set to '%s' "
> + "in your config file. Aborting VM creation." %
> + pa_type)
> + return False
> +
> + else:
> + self.pci_assignable = None
> +
> # Make qemu command
> qemu_command = self.make_qemu_command()
>
> @@ -537,6 +587,8 @@ class VM:
> # Is it already dead?
> if self.is_dead():
> logging.debug("VM is already down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> logging.debug("Destroying VM with PID %d..." %
> @@ -557,6 +609,9 @@ class VM:
> return
> finally:
> session.close()
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> +
>
> # Try to destroy with a monitor command
> logging.debug("Trying to kill VM with monitor command...")
> @@ -566,6 +621,8 @@ class VM:
> # Wait for the VM to be really dead
> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
> logging.debug("VM is down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> # If the VM isn't dead yet...
> @@ -575,6 +632,8 @@ class VM:
> # Wait for the VM to be really dead
> if kvm_utils.wait_for(self.is_dead, 5, 0.5, 0.5):
> logging.debug("VM is down")
> + if self.pci_assignable:
> + self.pci_assignable.release_devs()
> return
>
> logging.error("Process %s is a zombie!" % self.process.get_pid())
> --
> 1.6.5.2
end of thread, other threads:[~2010-01-18 16:49 UTC | newest]
Thread overview: 7+ messages
2009-12-27 23:55 [PATCH] KVM test: Add PCI device assignment support Lucas Meneghel Rodrigues
2010-01-12 10:17 ` Yolkfull Chow
-- strict thread matches above, loose matches on Subject: below --
2010-01-12 18:28 Lucas Meneghel Rodrigues
2010-01-13 2:19 ` Yolkfull Chow
2010-01-18 16:49 ` Lucas Meneghel Rodrigues
2009-11-30 21:08 Lucas Meneghel Rodrigues
2009-12-01 3:21 ` Yolkfull Chow